DETAILED DESCRIPTION

Various embodiments will be described hereinafter with reference to the accompanying drawings. In general, according to one embodiment, a memory system connectable to a host includes a nonvolatile memory including a plurality of blocks and a controller electrically connected to the nonvolatile memory and configured to manage a plurality of write destination blocks allocated from the blocks, and execute a first write operation involving transferring same data to the nonvolatile memory once or more. The controller receives, from the host, write commands each designating a location on a memory of the host where write data to be written exists, a length of the write data, and an identifier indicating a block where the write data is to be written. After receiving one or more write commands having a first identifier indicating a first write destination block in the write destination blocks, the controller acquires, from the host, write data having the same first size as a data write unit of the nonvolatile memory and obtained by dividing write data associated with one write command in the write commands having the first identifier into a plurality of write data or combining write data associated with two or more write commands in the write commands having the first identifier. The controller writes the acquired write data having the first size to the first write destination block by the first write operation.

First, a relation between a memory system according to the present embodiment and a host will be described with reference to FIG. 1. The memory system is a semiconductor storage device configured to write data to a nonvolatile memory and to read data from the nonvolatile memory. The memory system is realized as a flash storage device 3 based on NAND flash technology.

A host (host device) 2 is configured to control a plurality of flash storage devices 3. The host 2 is realized by an information processing apparatus configured to use a flash array including the flash storage devices 3 as a storage. The information processing apparatus may be a personal computer or a server computer. The flash storage device 3 may be used as one of a plurality of storage devices provided in a storage array. The storage array may be connected to the information processing apparatus such as the server computer via a cable or a network. The storage array includes a controller that controls a plurality of storages (for example, the flash storage devices 3) in the storage array. When the flash storage device 3 is applied to the storage array, the controller of the storage array may function as a host of the flash storage device 3. Hereinafter, the case where the information processing apparatus such as the server computer functions as the host 2 will be described by way of example.

The host (server) 2 and the flash storage devices 3 are interconnected via an interface 50 (internal interconnection). The interface 50 for the interconnection is not limited to a specific standard; PCI Express (PCIe) (registered trademark), NVM Express (NVMe) (registered trademark), Ethernet (registered trademark), NVMe over Fabrics (NVMeOF), and the like can be used as the interface 50. A typical example of the server computer functioning as the host 2 is a server computer (hereinafter referred to as the server) in a data center. In the case where the host 2 is realized by the server in the data center, the host (server) 2 may be connected to a plurality of end user terminals (clients) 61 via a network 60.
The host 2 can provide various services to these end user terminals 61. Examples of the services that can be provided by the host (server) 2 include (1) a platform as a service (PaaS) that provides a system running platform to each client (each end user terminal 61), (2) an infrastructure as a service (IaaS) that provides an infrastructure such as a virtual server to each client (each end user terminal 61), and the like. A plurality of virtual machines may be executed on a physical server functioning as the host (server) 2. Each of the virtual machines running on the host (server) 2 can function as a virtual server configured to provide various services to the client (end user terminal 61) corresponding to the virtual machine. In each virtual machine, an operating system and a user application, which are used by the corresponding end user terminal 61, are executed.

The operating system corresponding to each virtual machine includes an I/O service. The I/O service may be a block I/O service based on a logical block address (LBA) or a key-value store service. The I/O service may include an address translation table for managing the mapping between each of tags for identifying data to be accessed and each of physical addresses of the flash storage device 3. The tag may be a logical address such as the LBA or a key of a key-value store. In the operating system corresponding to each virtual machine, the I/O service issues an I/O command (a write command and a read command) in response to a write/read request from the user application. The I/O command is sent to the flash storage device 3 via a command queue.

The flash storage device 3 includes a nonvolatile memory such as a NAND flash memory. The flash storage device 3 manages a plurality of write destination blocks allocated from a plurality of blocks in the nonvolatile memory. The write destination block means a block where data is to be written. The write command sent from the host 2 to the flash storage device 3 designates a location on a memory of the host 2 where write data to be written exists, a length of the write data, and an identifier indicating the block where the write data is to be written. Therefore, the host 2 can designate a specific write destination block where data is to be written. As a result, for example, the host 2 can realize data placement in which data of a user application corresponding to a certain end user terminal 61 (client) is written to one or more specific write destination blocks and data of a user application corresponding to another end user terminal 61 (client) is written to one or more other specific write destination blocks.

The identifier indicating the block where the write data is to be written may be represented by a block address (block number) designating a specific write destination block. In the case where the flash storage device 3 includes a plurality of NAND flash memory chips, the block address may be represented by a combination of a block address and a chip number. In the case where the flash storage device 3 supports stream write, the identifier indicating the block where the write data is to be written may be an identifier (stream ID) of one stream in a plurality of streams. In the stream write, a plurality of write destination blocks are associated with a plurality of streams, respectively.
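As a rough illustration of the command format just described, the following Python sketch models only the three fields the embodiment requires of a write command. The field names and example values are hypothetical; no particular encoding or transport (NVMe or otherwise) is implied.

```python
from dataclasses import dataclass

# Hypothetical field names; the embodiment specifies only that a write
# command carries a host-memory location, a length, and a block identifier.
@dataclass
class WriteCommand:
    data_pointer: int   # location on the host memory where the write data exists
    length: int         # length of the write data in bytes
    block_id: int       # identifier of the block (a block address or a stream ID)

# Example: two commands targeting the same write destination block (id 7).
cmd_a = WriteCommand(data_pointer=0x1000, length=4096, block_id=7)
cmd_b = WriteCommand(data_pointer=0x9000, length=8192, block_id=7)
```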
In other words, when the flash storage device 3 receives a write command including a certain stream ID from the host 2, the flash storage device 3 writes data to a write destination block associated with a stream corresponding to the stream ID. When the flash storage device 3 receives a write command including another stream ID from the host 2, the flash storage device 3 writes data to another write destination block associated with another stream corresponding to the other stream ID. In the flash storage device 3, a management table for managing the mapping between each of stream IDs and each of block addresses may be used.

The flash storage device 3 can be realized as any storage device among the following type #1-storage device, type #2-storage device, and type #3-storage device.

The type #1-storage device is a type of storage device in which the host 2 designates both a block where data is to be written and a page where the data is to be written. A write command to be applied to the type #1-storage device includes a block address, a page address, a data pointer, and a length. The block address designates a block where write data received from the host 2 is to be written. The page address designates a page in the block where the write data is to be written. The data pointer indicates a location on a memory in the host 2 where the write data exists. The length indicates a length of the write data.

The type #2-storage device is a type of storage device in which the host 2 designates a block where data is to be written and the storage device determines a location (page) in the block where the data is to be written. A write command to be applied to the type #2-storage device includes a tag (for example, an LBA or key) for identifying the write data to be written, a block address, a data pointer, and a length. Further, the write command may include a QoS domain ID. The QoS domain ID designates one of a plurality of regions obtained by logically dividing the NAND flash memory. Each of the regions includes a plurality of blocks. The type #2-storage device can determine a page where data is to be written, in consideration of bad pages and page write order restrictions. That is, in the case where the flash storage device 3 is realized as the type #2-storage device, the flash storage device 3 hides page write order restrictions, bad pages, page sizes, and the like while causing the host 2 to handle the block. As a result, the host 2 can recognize a block boundary and can manage which user data exists in which block without being conscious of the page write order restrictions, the bad pages, and the page sizes.

The type #3-storage device is a type of storage device in which the host 2 designates a tag (for example, an LBA) for identifying data and the storage device determines both a block and a page where the data is to be written. A write command to be applied to the type #3-storage device includes a tag (for example, an LBA or key) for identifying the write data to be written, a stream ID, a data pointer, and a length. The stream ID is an identifier of a stream associated with the write data. In the case where the flash storage device 3 is realized as the type #3-storage device, the flash storage device 3 refers to a management table for managing the mapping between each of stream IDs and each of block addresses and determines a block where the write data is to be written.
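The three command layouts can be summarized as data structures. The sketch below, with hypothetical Python names, only restates which fields each type carries; it is not an implementation of any particular interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Type1WriteCommand:   # host designates both block and page
    block_address: int
    page_address: int
    data_pointer: int
    length: int

@dataclass
class Type2WriteCommand:   # host designates block; device determines page
    tag: int               # e.g., an LBA or a key
    block_address: int
    data_pointer: int
    length: int
    qos_domain_id: Optional[int] = None  # optional QoS domain designation

@dataclass
class Type3WriteCommand:   # device determines both block and page
    tag: int
    stream_id: int         # device maps the stream ID to a block address
    data_pointer: int
    length: int
```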
Further, the flash storage device 3 manages the mapping between each of tags (LBAs) and each of physical addresses of the NAND flash memory by using an address translation table called a logical-to-physical address translation table. The flash storage device 3 may be any one of the type #1-storage device, the type #2-storage device, and the type #3-storage device. In any case, the flash storage device 3 manages a plurality of write destination blocks allocated from a plurality of blocks included in the NAND flash memory and writes write data associated with a certain write command to the write destination block designated by the write command.

In the type #1-storage device, the page write order in the write destination block is designated by the host 2. Therefore, in the case where the flash storage device 3 is realized as the type #1-storage device, the flash storage device 3 writes data to each page in the write destination block in the order corresponding to the page address designated by each write command from the host 2.

In the type #2-storage device, the write destination block is designated by the block address included in the write command from the host 2, but the write destination page in the write destination block is determined by the flash storage device 3. Therefore, in the case where the flash storage device 3 is realized as the type #2-storage device, the flash storage device 3 determines the write destination page so that data is written in order from a first page to a final page of the write destination block designated by the block address included in the write command.

In the type #3-storage device, the flash storage device 3 selects the block associated with the stream ID included in the write command as the write destination block and determines the write destination page in the write destination block. Therefore, in the case where the flash storage device 3 is realized as the type #3-storage device, the flash storage device 3 determines the write destination page so that data is written in order from a first page to a final page of the write destination block, for example.

The write destination blocks managed by the flash storage device 3 can be used by a plurality of end users (clients) sharing the flash storage device 3, respectively. In this case, write destination blocks whose number is equal to or larger than the number of end users sharing the flash storage device 3 are opened simultaneously in the flash storage device 3.

FIG. 2 illustrates a configuration example of the flash storage device 3. The flash storage device 3 includes a controller 4 and a nonvolatile memory (NAND flash memory) 5. Further, the flash storage device 3 may include a random access memory, for example, a DRAM 6. The NAND flash memory 5 includes a memory cell array including a plurality of memory cells arranged in a matrix. The NAND flash memory 5 may be a NAND flash memory of a two-dimensional structure or a NAND flash memory of a three-dimensional structure. The memory cell array of the NAND flash memory 5 includes a plurality of blocks BLK0 to BLKm−1. Each of the blocks BLK0 to BLKm−1 includes a plurality of pages (in this case, pages P0 to Pn−1). The blocks BLK0 to BLKm−1 function as erase units. A block may also be referred to as an "erase block", a "physical block", or a "physical erase block". The pages P0 to Pn−1 are units of a data write operation and a data read operation.
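For the type #2- and type #3-storage devices, the device itself advances the write destination page from the first page to the final page of the block, as described above. A minimal sketch of such a page cursor, with hypothetical names, might look as follows.

```python
class WriteDestinationBlock:
    """Tracks the next write destination page of one open block (a sketch)."""

    def __init__(self, block_address: int, pages_per_block: int):
        self.block_address = block_address
        self.pages_per_block = pages_per_block
        self.next_page = 0  # pages are written in order, P0 .. Pn-1

    def advance(self) -> int:
        """Return the page address to program next, then move the cursor."""
        if self.next_page >= self.pages_per_block:
            raise RuntimeError("block full; allocate a new write destination block")
        page = self.next_page
        self.next_page += 1
        return page
```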
The controller 4 is electrically connected to the NAND flash memory 5, which is the nonvolatile memory, via a NAND interface 13 such as a Toggle NAND flash interface or an Open NAND Flash Interface (ONFI). The controller 4 operates as a memory controller configured to control the NAND flash memory 5. The controller 4 may be realized by a circuit such as a system-on-a-chip (SoC).

As illustrated in FIG. 3, the NAND flash memory 5 may include a plurality of NAND flash memory chips (NAND flash memory dies). Each of the NAND flash memory chips can operate independently. Therefore, the NAND flash memory chip functions as a unit in which a parallel operation is enabled. In FIG. 3, the case where 16 channels Ch.1 to Ch.16 are connected to the NAND interface 13 and two NAND flash memory chips are connected to each of the 16 channels Ch.1 to Ch.16 is exemplified. In this case, 16 NAND flash memory chips #1 to #16 connected to the channels Ch.1 to Ch.16 may be organized as a bank #0 and the remaining 16 NAND flash memory chips #17 to #32 connected to the channels Ch.1 to Ch.16 may be organized as a bank #1. The bank functions as a unit for operating a plurality of memory modules in parallel by bank interleaving. In the configuration example of FIG. 3, a maximum of 32 NAND flash memory chips can be operated in parallel by the 16 channels and the bank interleaving using the two banks.

An erase operation may be executed in a unit of one block (physical block) or in a unit of a parallel unit (super block) including a set of physical blocks that can operate in parallel. The configuration of one parallel unit, that is, one super block including a set of physical blocks, is not limited to a specific one; for example, one super block may include a total of 32 physical blocks selected one by one from the NAND flash memory chips #1 to #32. Each of the NAND flash memory chips #1 to #32 may have a multi-plane configuration. For example, in the case where each of the NAND flash memory chips #1 to #32 has a multi-plane configuration including two planes, one super block may include a total of 64 physical blocks selected one by one from 64 planes corresponding to the NAND flash memory chips #1 to #32. In FIG. 4, one super block (SB) including 32 physical blocks (in this case, a physical block BLK2 in the NAND flash memory chip #1, a physical block BLK3 in the NAND flash memory chip #2, a physical block BLK7 in the NAND flash memory chip #3, a physical block BLK4 in the NAND flash memory chip #4, a physical block BLK6 in the NAND flash memory chip #5, . . . , and a physical block BLK3 in the NAND flash memory chip #32) is exemplified. The write destination block may be one physical block or one super block. A configuration where one super block includes only one physical block may be used; in this case, one super block is equivalent to one physical block.

Next, a configuration of the controller 4 of FIG. 2 will be described. The controller 4 includes a host interface 11, a CPU 12, a NAND interface 13, a DRAM interface 14, a direct memory access controller (DMAC) 15, an ECC encoding/decoding unit 16, and the like. The host interface 11, the CPU 12, the NAND interface 13, the DRAM interface 14, the DMAC 15, and the ECC encoding/decoding unit 16 are interconnected via the bus 10. The host interface 11 is a host interface circuit configured to execute communication with the host 2. The host interface 11 may be, for example, a PCIe controller (NVMe controller).
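For the FIG. 3 organization described above (16 channels, two banks, 32 chips), the chip reachable over a given channel and bank can be numbered as in the following sketch; the helper name and the assertion are illustrative only.

```python
NUM_CHANNELS = 16
NUM_BANKS = 2

def chip_number(channel: int, bank: int) -> int:
    """Chip numbering of FIG. 3: chips #1-#16 form bank #0, chips #17-#32 form bank #1."""
    assert 1 <= channel <= NUM_CHANNELS and 0 <= bank < NUM_BANKS
    return bank * NUM_CHANNELS + channel

# All 32 chips can be driven in parallel: 16 channels x 2-way bank interleaving.
parallel_chips = [chip_number(ch, b)
                  for b in range(NUM_BANKS)
                  for ch in range(1, NUM_CHANNELS + 1)]
assert len(parallel_chips) == 32
```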
Alternatively, in a configuration in which the flash storage device 3 is connected to the host 2 via Ethernet (registered trademark), the host interface 11 may be an NVMe over Fabrics (NVMeOF) controller. The host interface 11 receives various commands from the host 2. These commands include a write command, a read command, and various other commands.

The CPU 12 is a processor configured to control the host interface 11, the NAND interface 13, and the DRAM interface 14. In response to power-on of the flash storage device 3, the CPU 12 loads a control program (firmware) from the NAND flash memory 5 or a ROM (not illustrated in the drawings) onto the DRAM 6 and executes various processing by executing the firmware. The firmware may instead be loaded onto an SRAM (not illustrated in the drawings) in the controller 4. The CPU 12 can execute command processing for processing the various commands from the host 2. An operation of the CPU 12 is controlled by the firmware executed by the CPU 12. A part or all of the command processing may be executed by dedicated hardware in the controller 4. The CPU 12 can function as a write control unit 21 and a read control unit 22. A part or all of each of the write control unit 21 and the read control unit 22 may also be realized by the dedicated hardware in the controller 4.

The write control unit 21 manages the write destination blocks allocated from the blocks of the NAND flash memory 5. In many modern NAND flash memories, complicated write operations are often executed to reduce program disturb. For this reason, in many modern NAND flash memories, even if data is written to a certain page in a block, the data written to this page cannot be read normally immediately after the write; the data becomes readable from this page only after writing of data to one or more pages subsequent to this page is completed. Further, in modern NAND flash memories, a multi-step write operation involving transferring the same write data to the NAND flash memory multiple times is also applied. An example of the multi-step write operation is a foggy-fine write operation. The multi-step write operation includes at least a first write operation step such as a foggy write operation and a second write operation step such as a fine write operation. The foggy write operation is a write operation for roughly setting a threshold distribution of each memory cell and the fine write operation is a write operation for adjusting the threshold distribution of each memory cell. Furthermore, an intermediate write operation may be executed between the foggy write operation and the fine write operation.

The write control unit 21 may write the write data to the write destination block by a write operation (multi-step write operation) involving transferring the same write data to the NAND flash memory 5 multiple times, like the foggy-fine write operation, or may write the write data to the write destination block by a write operation involving transferring the write data to the NAND flash memory 5 once, like a full-sequence write operation or other various write operations.

The write control unit 21 receives each write command from the host 2. Each write command designates a location on the memory of the host 2 where write data to be written exists, a length of the write data, and an identifier indicating a block where the write data is to be written. The length of the write data differs from one write command to another.
For example, a certain write command may request writing of large-sized write data of, for example, about 1 Mbyte, and another write command may request writing of small-sized write data of, for example, about 4 Kbytes. Therefore, if a method in which the flash storage device 3 simply transfers the write data of the size designated by each write command from the host 2 to an internal buffer of the flash storage device 3 is used, the internal buffer may be occupied for a long time by large-sized write data to be written to a specific write destination block, and a data write operation for each of the other write destination blocks may not be executed. As a result, it becomes difficult to use a plurality of write destination blocks simultaneously.

Therefore, the write control unit 21 acquires the write data from the host 2 in a unit of the same data size as the data write unit of the NAND flash memory 5, regardless of the size designated by each write command. The data write unit of the NAND flash memory 5 means a data transfer size for writing data to the NAND flash memory 5. A typical example of the data write unit of the NAND flash memory 5 is a page size (for example, 16 Kbytes). Alternatively, a data size corresponding to a plurality of pages (a size that is a multiple of the page size) may be used as the data write unit.

When a size of write data associated with a write command designating a certain write destination block is smaller than the data write unit of the NAND flash memory 5, the write control unit 21 waits for a next write command designating the write destination block. When a total size of write data associated with some write commands designating the write destination block becomes equal to or larger than the data write unit of the NAND flash memory 5, the write control unit 21 acquires, from the host 2, data of the same size as the data write unit of the NAND flash memory 5 that is obtained by combining the write data. For example, in the case where four write commands designating the same write destination block request writing of four pieces of write data of 4 Kbytes each, the write control unit 21 may acquire, from the host 2, write data of 16 Kbytes obtained by combining the four pieces of write data of 4 Kbytes with each other. In this case, the write control unit 21 may sequentially acquire the four pieces of write data of 4 Kbytes from the host 2 by four DMA transfers. The write control unit 21 transfers the acquired write data having the same size as the data write unit of the NAND flash memory 5 to the NAND flash memory 5 and writes the write data to the write destination block of the NAND flash memory 5.

On the other hand, when a size of write data associated with a write command designating a certain write destination block is larger than the data write unit of the NAND flash memory 5, the write control unit 21 obtains one or more pieces of write data each having the same size as the data write unit by dividing the write data into a plurality of write data (a plurality of data portions). The write control unit 21 then acquires each obtained piece of write data having the same size as the data write unit from the host 2. In this case, the write control unit 21 may acquire each obtained piece of write data from the host 2 by one DMA transfer. The write control unit 21 transfers the acquired write data having the same size as the data write unit of the NAND flash memory 5 to the NAND flash memory 5 and writes the write data to the write destination block of the NAND flash memory 5.
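The combining/dividing behavior just described can be sketched as a generator that consumes (data pointer, length) pairs for one write destination block and emits one batch of DMA transfer descriptors per data write unit. This is a simplified model assuming a 16-Kbyte page as the data write unit; the function name and tuple layout are hypothetical.

```python
PAGE_SIZE = 16 * 1024  # data write unit assumed to be one 16-Kbyte page

def page_sized_transfers(commands):
    """Slice the write data of queued commands (all designating one write
    destination block) into page-size DMA descriptors (source address, length).

    `commands` is a list of (data_pointer, length) tuples in arrival order.
    One list of descriptors is yielded per full page; a trailing partial page
    stays pending until further commands arrive.
    """
    descriptors, filled = [], 0
    for pointer, length in commands:
        offset = 0
        while offset < length:
            take = min(length - offset, PAGE_SIZE - filled)
            descriptors.append((pointer + offset, take))
            offset += take
            filled += take
            if filled == PAGE_SIZE:
                yield descriptors          # one data-write-unit's worth
                descriptors, filled = [], 0

# Four 4-Kbyte commands combine into exactly one 16-Kbyte page transfer:
four = [(0x1000, 4096), (0x9000, 4096), (0x5000, 4096), (0xD000, 4096)]
assert len(list(page_sized_transfers(four))) == 1
```

The same generator covers the dividing case: a single 1-Mbyte command simply yields 64 page-sized descriptors, one per 16-Kbyte boundary from the head of its data.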
As described above, after receiving one or more write commands having an identifier indicating the same write destination block, the write control unit 21 acquires, from the host 2, write data having the same size as the data write unit of the NAND flash memory 5, obtained by dividing the write data associated with one write command in the received write commands into a plurality of write data (a plurality of data portions) or combining the write data associated with two or more write commands in the received write commands. Here, the division/combination of the write data means an operation for, based on a data pointer and a length designated by each of the one or more write commands having an identifier indicating the same write destination block, (i) dividing a set of write data associated with the one or more write commands by boundaries having the same size as the data write unit of the NAND flash memory 5 from a head thereof and (ii) specifying a location in the host memory corresponding to each boundary.

Therefore, the write data can be acquired from the host 2 in a unit of the same data size as the data write unit of the NAND flash memory 5, regardless of the size of the write data designated by each write command, so that the acquired write data can be immediately transferred to the NAND flash memory 5. Therefore, even if a write command requesting writing of large-sized write data to a certain write destination block is received, it is possible to prevent stagnation of data write operations for other write destination blocks due to this. Further, a buffer-less configuration, in which an internal buffer does not exist in the flash storage device 3 or the capacity of the internal buffer is nearly zero, can be applied to the flash storage device 3. In the case where a plurality of write destination blocks belong to different NAND flash memory chips, respectively, the operation for transferring the write data to the NAND flash memory 5 means transferring the write data to the NAND flash memory chip including the write destination block where the write data is to be written.

The above processing for acquiring the write data from the host 2 in a unit of the same data size as the data write unit of the NAND flash memory 5 is executed in accordance with the progress of the write operation for each write destination block. That is, the write control unit 21 manages the progress of the write operation for each write destination block. For example, whenever data transfer to a next write destination page of a certain write destination block is enabled, the write control unit 21 advances a location of the write destination page in the write destination block, acquires write data to be written to the next write destination page from the host memory, and writes the acquired write data to the write destination block.

Furthermore, when all of the write operation (the write operation involving transferring the same data to the NAND flash memory 5 once or more) for the entire write data associated with one write command designating a certain write destination block is finished, the write control unit 21 returns a response indicating command completion of the write command to the host 2. For example, in the case where large-sized write data associated with one write command is divided into a plurality of write data portions, a response indicating command completion of the write command is returned to the host 2 when all data transfers and all write operations necessary for writing all of the write data portions are finished.
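Returning a command completion only after every portion of a divided write has been transferred and written can be modeled with a simple per-command counter, as in the following hypothetical sketch.

```python
class CompletionTracker:
    """Counts outstanding write portions per command ID (a sketch).

    A response indicating command completion is returned to the host only
    after every data transfer and write operation covering the command's
    write data has finished.
    """

    def __init__(self):
        self.pending = {}  # command ID -> number of unfinished portions

    def add_portions(self, command_id: int, count: int) -> None:
        self.pending[command_id] = self.pending.get(command_id, 0) + count

    def portion_done(self, command_id: int) -> bool:
        """Mark one portion finished; True when the whole command completes."""
        self.pending[command_id] -= 1
        if self.pending[command_id] == 0:
            del self.pending[command_id]
            return True  # now return the completion response to the host
        return False
```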
As a result, it is possible to correctly notify the host 2 of the command completion in a unit of the write command. Therefore, even when large-sized write data requested to be written by one write command is divided into a plurality of write data portions and processed, the host 2 can correctly receive only a response indicating command completion of the one write command from the flash storage device 3. The host 2 therefore needs to perform only simple processing for discarding, from the host memory, the write data corresponding to the write command for which the command completion has been notified.

Further, the write control unit 21 may return a response indicating command completion of the write command to the host 2 when all of a write operation (a write operation involving transferring the same data to the NAND flash memory 5 once or more) for the entire write data associated with one write command designating a certain write destination block is finished and the entire write data becomes readable from the NAND flash memory 5. For example, the case where data written to a certain page of a certain write destination block becomes readable after data is written to one or more subsequent pages is assumed. In this case, even when all data transfers and all write operations necessary for writing all of a plurality of write data portions obtained by dividing the large-sized write data associated with one write command are finished, the write control unit 21 does not yet return a response indicating command completion to the host 2. After data is written to the one or more subsequent pages, the write control unit 21 returns a response indicating the command completion of the write command to the host 2. As a result, by performing only the simple processing of discarding, from the host memory, the write data corresponding to the write command for which the command completion has been notified, the host 2 maintains the write data in the host memory until the write data of each write command becomes readable.

The read control unit 22 receives a read command from the host 2 and reads data designated by the received read command from the NAND flash memory 5 or an internal buffer 31. When the data (first data) designated by the read command is data for which all of the write operation (the write operation involving transferring the same data to the NAND flash memory 5 once or more) has not been finished, or data for which all of the write operation has been finished but which has not yet become readable from the NAND flash memory 5, the read control unit 22 determines whether the first data exists in the internal buffer 31. In the case where the first data does not exist in the internal buffer 31, the read control unit 22 acquires the first data from the host memory, stores the acquired first data in the internal buffer 31, and returns the acquired first data from the internal buffer 31 to the host 2. As a result, the host 2 does not need to perform complicated processing for managing whether data desired to be read is readable from the NAND flash memory 5; it performs only simple processing for sending a read command designating the data desired to be read to the flash storage device 3, thereby receiving the data desired to be read from the flash storage device 3.

The NAND interface 13 is a memory control circuit configured to control the NAND flash memory 5 under the control of the CPU 12. The DRAM interface 14 is a DRAM control circuit configured to control the DRAM 6 under the control of the CPU 12.
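The read path just described can be summarized in a few lines. The sketch below assumes hypothetical helper objects (`nand`, `internal_buffer`, `host_memory`) standing in for the NAND flash memory 5, the internal buffer 31, and the write data buffer on the host memory; it only mirrors the decision order of the text.

```python
def read_first_data(tag, nand, internal_buffer, host_memory):
    """Sketch of the read path; all three collaborators are hypothetical
    stand-ins (dict-like buffers, an object with is_readable()/read())."""
    if nand.is_readable(tag):
        return nand.read(tag)            # normal case: data already readable
    data = internal_buffer.get(tag)      # write unfinished or not yet readable
    if data is None:
        data = host_memory[tag]          # re-fetch from the host's write data buffer
        internal_buffer[tag] = data      # keep a copy in the internal buffer
    return data
```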
A part of the storage region of the DRAM 6 may be used as the internal buffer (shared cache) 31. The internal buffer (shared cache) 31 is shared by a plurality of write destination blocks and is used for temporarily storing write data associated with an arbitrary write command received from the host 2. As described above, the buffer-less configuration where the internal buffer (shared cache) 31 does not exist in the flash storage device 3 or the capacity of the internal buffer (shared cache) 31 is nearly zero may be applied to the flash storage device 3.

Further, another part of the storage region of the DRAM 6 may be used for storing a block management table 32 and a defect information management table 33. The block management table 32 includes a plurality of management tables corresponding to a plurality of blocks in the NAND flash memory 5. Each management table includes a plurality of pieces of valid/invalid management information corresponding to a plurality of data included in the block corresponding to the management table. Each piece of valid/invalid management information indicates whether the data corresponding to it is valid data or invalid data. The defect information management table 33 manages a list of defective blocks (bad blocks). The internal buffer (shared cache) 31, the block management table 32, and the defect information management table 33 may be stored in an SRAM (not illustrated in the drawings) in the controller 4.

The DMAC 15 executes data transfer between the host memory and the internal buffer (shared cache) 31 under the control of the CPU 12. When the write data is to be transferred from the host memory to the internal buffer (shared cache) 31, the CPU 12 designates, with respect to the DMAC 15, a transfer source address indicating a location on the host memory, a data size, and a transfer destination address indicating a location on the internal buffer (shared cache) 31.

When data is to be written to the NAND flash memory 5, the ECC encoding/decoding unit 16 encodes the data to be written (ECC encoding), thereby adding an error correction code (ECC) as a redundant code to the data. When data is read from the NAND flash memory 5, the ECC encoding/decoding unit 16 performs error correction of the data (ECC decoding) by using the ECC added to the read data.

FIG. 5 illustrates a relation between the write data buffer 51 and the flash translation unit 52 included in the host 2 and the write control unit 21, the DMAC 15, and the internal buffer (shared cache) 31 included in the flash storage device 3. The host 2 stores the write data in the write data buffer 51 on the host memory and issues a write command to the flash storage device 3. The write command may include a data pointer indicating a location on the write data buffer 51 where the write data exists, a tag (for example, an LBA) for identifying the write data, a length of the write data, and an identifier (a block address or a stream ID) indicating a block where the write data is to be written. In the flash storage device 3, under the control of the write control unit 21, the data transfer from the write data buffer 51 to the internal buffer (shared cache) 31 is executed by the DMAC 15 in accordance with the progress of the write operation of the write destination block designated by the identifier of the block. The data transfer is executed in a unit of the same data size as the data write unit of the NAND flash memory 5, as described above.
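The three parameters the CPU 12 sets on the DMAC 15 for one host-to-buffer transfer, as described above, can be captured in a small structure; the names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DmaDescriptor:
    """The three parameters set for one host-to-buffer DMA transfer
    (hypothetical names; the text names the fields, not a structure)."""
    src: int    # transfer source address on the host memory
    size: int   # data size in bytes
    dst: int    # transfer destination address on the internal buffer

# Example: move one 16-Kbyte page's worth of write data.
desc = DmaDescriptor(src=0x1000, size=16 * 1024, dst=0x0)
```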
Under the control of the write control unit 21, the write data to be written is transferred from the internal buffer (shared cache) 31 to the NAND flash memory chip including the write destination block, and a NAND command for a write instruction is sent from the write control unit 21 to the NAND flash memory chip.

In the case where the flash storage device 3 is realized as the type #2-storage device, the write control unit 21 also executes processing for allocating one of the free blocks as the write destination block to the host 2 in response to a block allocation request received from the host 2. The block allocation request may include a QoS domain ID. The write control unit 21 determines one of the free blocks belonging to the QoS domain ID as the write destination block and notifies the host 2 of the block address of the write destination block. As a result, the host 2 can issue a write command designating the block address, the data pointer, the tag (for example, the LBA), and the length. After the write data is written to the write destination block, the write control unit 21 notifies the host 2 of the block address indicating the write destination block where the write data has been written, the page address indicating the page in the write destination block where the write data has been written, and the tag (for example, the LBA) of the write data.

The flash translation unit 52 of the host 2 includes an LUT 404A serving as an address translation table for managing the mapping between each of tags (for example, the LBAs) and each of physical addresses (block addresses, page addresses, and the like) of the NAND flash memory 5. When the block address, the page address, and the tag (for example, the LBA) are notified from the flash storage device 3, the flash translation unit 52 updates the LUT 404A and maps the notified physical address (the block address and the page address) to the notified tag (for example, the LBA). By referring to the LUT 404A, the flash translation unit 52 can translate the tag (for example, the LBA) included in a read request into the physical address (the block address and the page address), thereby issuing a read command including the physical address to the flash storage device 3.

FIG. 6 illustrates I/O command processing executed by the flash storage device 3. As described above, in the present embodiment, the flash storage device 3 may be any one of the type #1-storage device, the type #2-storage device, and the type #3-storage device. In FIG. 6, however, the case where the flash storage device 3 is the type #1-storage device is exemplified. Each write command issued by the host 2 includes a block address, a page address, a data pointer, and a length. Each issued write command is input to an I/O command queue 42. Each read command issued by the host 2 also includes a block address, a page address, a data pointer, and a length. Each issued read command is also input to the I/O command queue 42.

When the host 2 desires to request the flash storage device 3 to write the write data, the host 2 first stores the write data in the write data buffer 51 on the host memory and issues the write command to the flash storage device 3. The write command includes a block address indicating a write destination block where the write data is to be written, a page address indicating a page in the write destination block where the write data is to be written, a data pointer indicating a location in the write data buffer 51 where the write data exists, and a length of the write data.
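Host-side handling of the (tag, block address, page address) notification and the subsequent read translation through the LUT 404A can be sketched as follows; the class and method names are hypothetical.

```python
class FlashTranslationUnit:
    """Host-side LUT sketch: tag (e.g., an LBA) -> (block address, page address)."""

    def __init__(self):
        self.lut = {}

    def on_write_notification(self, tag, block_address, page_address):
        """Called when the device reports where the write data was written."""
        self.lut[tag] = (block_address, page_address)

    def translate(self, tag):
        """Resolve a read request's tag into the physical address to read."""
        return self.lut[tag]

ftu = FlashTranslationUnit()
ftu.on_write_notification(tag=0x10, block_address=5, page_address=2)
assert ftu.translate(0x10) == (5, 2)  # issue a read command with this address
```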
The flash storage device 3 includes a program/read sequencer 41. The program/read sequencer 41 is realized by the write control unit 21 and the read control unit 22 described above. The program/read sequencer 41 can execute each command input to the I/O command queue 42 in arbitrary order.

After the program/read sequencer 41 acquires one or more write commands designating the same write destination block from the I/O command queue 42, the program/read sequencer 41 sends, to the internal buffer (shared cache) 31, a transfer request to acquire next write data (for example, write data corresponding to one page size) to be written to the write destination block from the internal buffer (shared cache) 31 or the write data buffer 51, in accordance with the progress of the write operation of the write destination block. The transfer request may include a data pointer and a length. The data pointer included in the transfer request is calculated by processing for dividing the write data associated with one write command or combining two or more pieces of write data associated with two or more write commands designating the same write destination block. That is, the program/read sequencer 41 divides a set of write data associated with one or more write commands having an identifier indicating the same write destination block by boundaries having the same size as the data write unit of the NAND flash memory 5 from a head thereof and specifies a location in the host memory corresponding to each boundary. As a result, the program/read sequencer 41 can acquire the write data from the host 2 in a unit of the same size as the data write unit. The data pointer included in the transfer request indicates a location on the write data buffer 51 where the write data corresponding to one page size exists. The write data corresponding to one page size may be a set of a plurality of pieces of small-sized write data associated with a plurality of write commands designating the write destination block or may be a part of large-sized write data associated with one write command designating the write destination block. Furthermore, the program/read sequencer 41 sends, to the internal buffer (shared cache) 31, the block address of the write destination block where the write data corresponding to one page size is to be written and the page address of the page to which the write data corresponding to one page size is to be written.

The controller 4 of the flash storage device 3 may include a cache controller controlling the internal buffer (shared cache) 31. In this case, the cache controller can operate the internal buffer (shared cache) 31 as if the internal buffer were control logic. A plurality of flash command queues 43 exist between the internal buffer (shared cache) 31 and a plurality of write destination blocks #0, #1, #2, . . . , and #n. These flash command queues 43 are associated with a plurality of NAND flash memory chips, respectively.

The internal buffer (shared cache) 31, that is, the cache controller determines whether the write data corresponding to one page size designated by the transfer request exists in the internal buffer (shared cache) 31. If the write data corresponding to one page size designated by the transfer request exists in the internal buffer (shared cache) 31, the internal buffer (shared cache) 31, that is, the cache controller transfers the write data corresponding to one page size to the NAND flash memory chip including the write destination block where the write data is to be written.
Further, the internal buffer (shared cache) 31, that is, the cache controller sends, to the NAND flash memory chip including the write destination block where the write data is to be written, the block address of the write destination block, the page address where the write data is to be written, and the NAND command (flash write command) for the write instruction via the flash command queue 43. The flash command queue 43 is provided for each NAND flash memory chip. For this reason, the internal buffer (shared cache) 31, that is, the cache controller inputs, to the flash command queue 43 corresponding to the NAND flash memory chip including the write destination block where the write data is to be written, the block address of the write destination block, the page address where the write data is to be written, and the NAND command (flash write command) for the write instruction.

If the transfer of the write data corresponding to one page size from the internal buffer (shared cache) 31 to the NAND flash memory chip is the final data transfer necessary for writing the write data to the NAND flash memory chip, the internal buffer (shared cache) 31, that is, the cache controller discards the write data from the internal buffer (shared cache) 31 and secures the region where the write data has been stored as a free region. In the case where the write data is written to the write destination block by a write operation (for example, the full-sequence write operation) involving transferring data to the NAND flash memory chip once, the first data transfer to the NAND flash memory chip is the final data transfer. On the other hand, in the case where the write data is written to the write destination block by a write operation (for example, the foggy-fine write operation) involving transferring data to the NAND flash memory chip multiple times, the data transfer to the NAND flash memory chip necessary for the final fine write is the final data transfer.

Next, the case where the write data corresponding to one page size designated by the transfer request does not exist in the internal buffer (shared cache) 31 will be described. If the write data corresponding to one page size designated by the transfer request does not exist in the internal buffer (shared cache) 31, the internal buffer (shared cache) 31, that is, the cache controller sends the transfer request (the data pointer and the length) to the DMAC 15. The DMAC 15 transfers the write data corresponding to one page size from the write data buffer 51 on the host memory to the internal buffer (shared cache) 31, based on the transfer request (the data pointer and the length). When the data transfer is finished, the DMAC 15 notifies the internal buffer (shared cache) 31, that is, the cache controller, of the transfer completion (Done), the data pointer, and the length.

If a free region exists in the internal buffer (shared cache) 31, the internal buffer (shared cache) 31, that is, the cache controller stores the write data acquired from the write data buffer 51 by the DMA transfer in the free region. If no free region exists in the internal buffer (shared cache) 31, the internal buffer (shared cache) 31, that is, the cache controller discards the oldest write data in the internal buffer (shared cache) 31 and secures the region where the oldest write data has been stored as a free region.
The internal buffer (shared cache) 31, that is, the cache controller then stores the write data acquired from the write data buffer 51 by the DMA transfer in the free region. In the case where the multi-step write operation such as the foggy-fine write operation is used, the cache controller discards the oldest write data among the write data in the internal buffer (shared cache) 31 for which the first write operation step, such as the foggy write operation, has finished.

A progress speed of the data write operation for a write destination block having a large data write amount tends to be faster than that for a write destination block having a small data write amount. Therefore, the write data to be written to the write destination block having the large data write amount is frequently transferred from the write data buffer 51 to the internal buffer (shared cache) 31. As a result, there is a high possibility that the oldest write data is write data for a write destination block having a relatively small amount of data written from the host 2. Therefore, by using a method of discarding the oldest write data among the write data in the internal buffer (shared cache) 31 for which the first write operation step such as the foggy write operation has finished, it is possible to efficiently reduce data traffic between the host 2 and the flash storage device 3. An algorithm for selecting the write data to be discarded among the write data in the internal buffer (shared cache) 31 for which the first write operation step such as the foggy write operation has finished is not limited to first-in first-out, which selects the oldest data; other algorithms such as LRU and random selection may be used.

The program/read sequencer 41 receives a status, that is, write completion (Done), a write failure (Error), a block address, and a page address from each NAND flash memory chip. Based on the status, the program/read sequencer 41 determines, for each write command, whether all of the write operation (the write operation for transferring the same data to the NAND flash memory chip once or more) for the entire write data associated with the write command has been finished. When all of the write operation for the entire write data associated with a certain write command is finished, the program/read sequencer 41 transmits a response (Done) indicating the command completion of the write command to the host 2. The response (Done) indicating the command completion includes a command ID for uniquely identifying the write command.

Next, processing of the read command will be described. The read command includes a block address indicating a block where data to be read is stored, a page address indicating a page where the data is stored, a data pointer indicating a location in the read data buffer 53 on the host memory to which the data is to be transferred, and a length of the data. The program/read sequencer 41 sends the block address and the page address designated by the read command to the internal buffer (shared cache) 31 and requests the internal buffer (shared cache) 31 to read the data designated by the read command. The internal buffer (shared cache) 31, that is, the cache controller sends, to the NAND flash memory chip, the block address, the page address, and the NAND command (flash read command) for the read instruction via the flash command queue 43. The data read from the NAND flash memory chip is transferred to the read data buffer 53 by the DMAC 15.
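The eviction behavior described above, where only entries whose first write step (for example, the foggy write) has finished are candidates and the oldest such entry goes first, can be sketched with an ordered map. The class below is a hypothetical model, not the device's actual cache controller.

```python
from collections import OrderedDict

class SharedCache:
    """Internal buffer sketch with the FIFO policy described above: only
    entries whose first write step has finished are eviction candidates,
    and the oldest such entry is discarded first."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> (data, first_step_done)

    def store(self, key, data):
        if len(self.entries) >= self.capacity:
            self._evict_oldest_first_step_done()
        self.entries[key] = (data, False)

    def mark_first_step_done(self, key):
        data, _ = self.entries[key]
        self.entries[key] = (data, True)

    def _evict_oldest_first_step_done(self):
        # OrderedDict preserves insertion order, so the first match is oldest.
        for key, (_, done) in self.entries.items():
            if done:
                del self.entries[key]  # free the region this entry occupied
                return
        raise RuntimeError("no evictable entry (text does not cover this case)")
```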
When the data designated by the read command is data for which the write operation has not been finished, or data for which all of the write operation has been finished but which has not yet become readable from the NAND flash memory 5, the internal buffer (shared cache) 31, that is, the cache controller may determine whether the data exists in the internal buffer (shared cache) 31. If the data exists in the internal buffer (shared cache) 31, the data is read from the internal buffer (shared cache) 31 and transferred to the read data buffer 53 by the DMAC 15. On the other hand, if the data does not exist in the internal buffer (shared cache) 31, the data is first transferred from the write data buffer 51 to the internal buffer (shared cache) 31 by the DMAC 15, and the data is then read from the internal buffer (shared cache) 31 and transferred to the read data buffer 53 by the DMAC 15.

FIG. 7 illustrates a multi-step write operation executed by the flash storage device 3. Here, a foggy-fine write operation executed across four word lines is exemplified, and the case where the NAND flash memory 5 is a QLC-flash storing 4-bit data per memory cell is assumed. The foggy-fine write operation for one specific write destination block (here, the write destination block BLK #1) in the NAND flash memory 5 is executed as follows.

(1) First, write data of four pages (P0 to P3) is transferred to the NAND flash memory 5 in a page unit and the foggy write operation for writing the write data of the four pages (P0 to P3) into a plurality of memory cells connected to a word line WL0 in the write destination block BLK #1 is executed.

(2) Next, write data of the next four pages (P4 to P7) is transferred to the NAND flash memory 5 in a page unit and the foggy write operation for writing the write data of the four pages (P4 to P7) into a plurality of memory cells connected to a word line WL1 in the write destination block BLK #1 is executed.

(3) Next, write data of the next four pages (P8 to P11) is transferred to the NAND flash memory 5 in a page unit and the foggy write operation for writing the write data of the four pages (P8 to P11) into a plurality of memory cells connected to a word line WL2 in the write destination block BLK #1 is executed.

(4) Next, write data of the next four pages (P12 to P15) is transferred to the NAND flash memory 5 in a page unit and the foggy write operation for writing the write data of the four pages (P12 to P15) into a plurality of memory cells connected to a word line WL3 in the write destination block BLK #1 is executed.

(5) When the foggy write operation for the memory cells connected to the word line WL3 is finished, the write target word line returns to the word line WL0 and the fine write operation for the memory cells connected to the word line WL0 can be executed. The same write data of four pages (P0 to P3) as the write data used in the foggy write operation for the word line WL0 is transferred again to the NAND flash memory 5 in a page unit and the fine write operation for writing the write data of the four pages (P0 to P3) into the memory cells connected to the word line WL0 in the write destination block BLK #1 is executed. As a result, the foggy-fine write operation for the pages P0 to P3 is finished.
(6) Next, write data of the next four pages (P16 to P19) is transferred to the NAND flash memory 5 in a page unit and the foggy write operation for writing the write data of the four pages (P16 to P19) into a plurality of memory cells connected to a word line WL4 in the write destination block BLK #1 is executed.

(7) When the foggy write operation for the memory cells connected to the word line WL4 is finished, the write target word line returns to the word line WL1 and the fine write operation for the memory cells connected to the word line WL1 can be executed. The same write data of four pages (P4 to P7) as the write data used in the foggy write operation for the word line WL1 is transferred again to the NAND flash memory 5 in a page unit and the fine write operation for writing the write data of the four pages (P4 to P7) into the memory cells connected to the word line WL1 in the write destination block BLK #1 is executed. As a result, the foggy-fine write operation for the pages P4 to P7 is finished.

(8) Next, write data of the next four pages (P20 to P23) is transferred to the NAND flash memory 5 in a page unit and the foggy write operation for writing the write data of the four pages (P20 to P23) into a plurality of memory cells connected to a word line WL5 in the write destination block BLK #1 is executed.

(9) When the foggy write operation for the memory cells connected to the word line WL5 is finished, the write target word line returns to the word line WL2 and the fine write operation for the memory cells connected to the word line WL2 can be executed. The same write data of four pages (P8 to P11) as the write data used in the foggy write operation for the word line WL2 is transferred again to the NAND flash memory 5 in a page unit and the fine write operation for writing the write data of the four pages (P8 to P11) into the memory cells connected to the word line WL2 in the write destination block BLK #1 is executed. As a result, the foggy-fine write operation for the pages P8 to P11 is finished.

FIG. 8 illustrates the order of writing data to the write destination block BLK #1. Here, similarly to FIG. 7, the case where the foggy-fine write operation is executed across four word lines is assumed. Data d0, data d1, data d2, data d3, data d4, data d5, data d6, data d7, . . . , data d252, data d253, data d254, and data d255 illustrated in a left portion of FIG. 8 indicate a plurality of pieces of write data corresponding to a plurality of write commands designating the write destination block BLK #1. Here, for the sake of simplification of illustration, the case where all the write data have the same size is assumed. A right portion of FIG. 8 illustrates the order of writing the data to the write destination block BLK #1.
The write operation is performed in the following order: writing data d0 to a plurality of memory cells connected to the word line WL0 (foggy write), writing data d1 to a plurality of memory cells connected to the word line WL1 (foggy write), writing data d2 to a plurality of memory cells connected to the word line WL2 (foggy write), writing data d3 to a plurality of memory cells connected to the word line WL3 (foggy write), writing data d0 to the plurality of memory cells connected to the word line WL0 (fine write), writing data d4 to a plurality of memory cells connected to the word line WL4 (foggy write), writing data d1 to the plurality of memory cells connected to the word line WL1 (fine write), writing data d5 to a plurality of memory cells connected to the word line WL5 (foggy write), writing data d2 to the plurality of memory cells connected to the word line WL2 (fine write), and so on.

FIG. 9 illustrates an operation for transferring write data from the host 2 to the flash storage device 3 in a unit of the same size as the data write unit of the NAND flash memory 5. Data d1, data d2, data d3, data d4, data d5, data d6, data d7, data d8, data d9, and data d10 illustrated in a left portion of FIG. 9 indicate ten pieces of write data corresponding to ten write commands designating the write destination block BLK #1. The length (size) of the write data is different for each write command. In FIG. 9, the case where each of the data d1, the data d2, the data d3, and the data d4 has a size of 4 Kbytes, the data d5 has a size of 8 Kbytes, the data d6 has a size of 40 Kbytes, the data d7 has a size of 16 Kbytes, each of the data d8 and the data d9 has a size of 8 Kbytes, and the data d10 has a size of 1 Mbyte is assumed.

Since each write command received from the host 2 includes a data pointer, a length, and a block identifier (for example, a block address), the controller 4 of the flash storage device 3 can classify the write commands received from the host 2 into a plurality of groups corresponding to a plurality of write destination blocks. The data d1 to the data d10 correspond to ten write commands classified into a group corresponding to the write destination block BLK #1. These ten write commands are write commands including a block identifier (for example, a block address) indicating the write destination block BLK #1. The controller 4 of the flash storage device 3 manages a location on the write data buffer 51 where each of the data d1 to the data d10 exists and a length of each of the data d1 to the data d10, based on the data pointer and the length in each of the write commands designating the write destination block BLK #1. The controller 4 then acquires, from the host 2, write data having the same size as the data write unit of the NAND flash memory 5, which is obtained by dividing large-sized write data associated with one write command into a plurality of write data (a plurality of data portions) or combining two or more pieces of small-sized write data associated with two or more write commands.

In FIG. 9, the controller 4 first acquires write data of 16 Kbytes obtained by combining the data d1, the data d2, the data d3, and the data d4, each having a size of 4 Kbytes, from the write data buffer 51 of the host 2.
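Returning to the FIG. 8 order described above (foggy writes running four word lines ahead of the corresponding fine writes), the sequence can be generated mechanically. The following sketch, with hypothetical names, reproduces WL0 foggy, WL1 foggy, WL2 foggy, WL3 foggy, WL0 fine, WL4 foggy, WL1 fine, and so on.

```python
def foggy_fine_order(num_word_lines: int, stagger: int = 4):
    """Yield (word_line, step) pairs in the FIG. 8 order: the fine write of
    word line N becomes possible once the foggy write of word line
    N + (stagger - 1) has finished (a four-word-line stagger here)."""
    for wl in range(num_word_lines):
        yield wl, "foggy"
        if wl >= stagger - 1:
            yield wl - (stagger - 1), "fine"
    # finish the fine writes of the last few word lines
    for wl in range(num_word_lines - (stagger - 1), num_word_lines):
        yield wl, "fine"

# First nine operations match the order described in the text:
ops = list(foggy_fine_order(6))
assert ops[:9] == [(0, "foggy"), (1, "foggy"), (2, "foggy"), (3, "foggy"),
                   (0, "fine"), (4, "foggy"), (1, "fine"),
                   (5, "foggy"), (2, "fine")]
```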
In this case, although not limited thereto, the controller4may transfer write data of 16 Kbytes from the write data buffer51of the host2to the internal buffer31by four DMA transfers. In the first DMA transfer, a transfer source address designating a head location of the data d1and a data length=4 KB may be set to the DMAC15. The transfer source address designating the head location of the data d1is represented by the data pointer in the write command corresponding to the data d1. In the second DMA transfer, a transfer source address designating a head location of the data d2and a data length=4 KB may be set to the DMAC15. The transfer source address designating the head location of the data d2is represented by the data pointer in the write command corresponding to the data d2. In the third DMA transfer, a transfer source address designating a head location of the data d3and a data length=4 KB may be set to the DMAC15. The transfer source address designating the head location of the data d3is represented by the data pointer in the write command corresponding to the data d3. In the fourth DMA transfer, a transfer source address designating a head location of the data d4and a data length=4 KB may be set to the DMAC15. The transfer source address designating the head location of the data d4is represented by the data pointer in the write command corresponding to the data d4. In addition, the controller4transfers the write data (d1, d2, d3, and d4) of 16 KBytes acquired by the DMA transfer as data to be written to the page P0of the write destination block BLK #1 to the NAND flash memory5. The controller4changes a next write destination page of the write destination block BLK #1 to the page P1and acquires write data of 16 Kbytes obtained by combining the data d5having a size of 8 Kbytes and head data d6-1of 8 Kbytes in the data d6from the write data buffer51of the host2. In this case, although not limited thereto, the controller4may transfer write data of 16 Kbytes from the write data buffer51of the host2to the internal buffer31by two DMA transfers. In the first DMA transfer, a transfer source address designating a head location of the data d5and a data length=8 KB may be set to the DMAC15. The transfer source address designating the head location of the data d5is represented by the data pointer in the write command corresponding to the data d5. In the second DMA transfer, a transfer source address designating a head location of the data d6-1and a data length=8 KB may be set to the DMAC15. The transfer source address designating the head location of the data d6-1is represented by the data pointer in the write command corresponding to the data d6. In addition, the controller4transfers the write data (d5and d6-1) of 16 Kbytes as data to be written to the page P1of the write destination block BLK #1 to the NAND flash memory5. The controller4changes a next write destination page of the write destination block BLK #1 to the page P2and acquires first 16 Kbyte data d6-2among the remaining 32 Kbyte data of the data d6from the write data buffer51of the host2. In this case, although not limited thereto, the controller4may transfer write data of 16 Kbytes from the write data buffer51of the host2to the internal buffer31by one DMA transfer. In the DMA transfer, a transfer source address designating a head location of the data d6-2and a data length=16 KB may be set to the DMAC15. 
The transfer source address designating the head location of the data d6-2can be obtained by adding an offset corresponding to 8 KB to a value of the data pointer in the write command corresponding to the data d6. In addition, the controller4transfers the write data (d6-2) of 16 Kbytes as data to be written to the page P2of the write destination block BLK #1 to the NAND flash memory5. The controller4changes a next write destination page of the write destination block BLK #1 to the page P3and acquires the remaining 16 Kbyte data d6-3of the data d6from the write data buffer51of the host2. In this case, although not limited thereto, the controller4may transfer write data of 16 Kbytes from the write data buffer51of the host2to the internal buffer31by one DMA transfer. In the DMA transfer, a transfer source address designating a head location of the data d6-3and a data length=16 KB may be set to the DMAC15. The transfer source address designating the head location of the data d6-3can be obtained by adding an offset corresponding to 24 KB to a value of the data pointer in the write command corresponding to the data d6. In addition, the controller4transfers the write data (d6-3) of 16 Kbytes as data to be written to the page P3of the write destination block BLK #1 to the NAND flash memory5. In addition, the controller4writes data (P0to P3) of four pages to a plurality of memory cells connected to the word line WL0of the write destination block BLK #1 by the foggy write operation. The controller4changes a next write destination page of the write destination block BLK #1 to the page P4and acquires the data d7having a size of 16 Kbytes from the write data buffer51of the host2. In this case, although not limited thereto, the controller4may transfer write data of 16 Kbytes from the write data buffer51of the host2to the internal buffer31by one DMA transfer. In the DMA transfer, a transfer source address designating a head location of the data d7and a data length=16 KB may be set to the DMAC15. The transfer source address designating the head location of the data d7is represented by the data pointer in the write command corresponding to the data d7. In addition, the controller4transfers the write data (d7) of 16 Kbytes as data to be written to the page P4of the write destination block BLK #1 to the NAND flash memory5. The controller4changes a next write destination page of the write destination block BLK #1 to the page P5and acquires write data of 16 Kbytes obtained by combining the data d8having a size of 8 Kbytes and the data d9having a size of 8 Kbytes from the write data buffer51of the host2. In this case, although not limited thereto, the controller4may transfer write data of 16 Kbytes from the write data buffer51of the host2to the internal buffer31by two DMA transfers. In the first DMA transfer, a transfer source address designating a head location of the data d8and a data length=8 KB may be set to the DMAC15. The transfer source address designating the head location of the data d8is represented by the data pointer in the write command corresponding to the data d8. In the second DMA transfer, a transfer source address designating a head location of the data d9and a data length=8 KB may be set to the DMAC15. The transfer source address designating the head location of the data d9is represented by the data pointer in the write command corresponding to the data d9. 
In addition, the controller4transfers the write data (d8and d9) of 16 Kbytes as data to be written to the page P5of the write destination block BLK #1 to the NAND flash memory5. The controller4changes a next write destination page of the write destination block BLK #1 to the page P6and acquires head data d10-1of 16 Kbytes in the data d10from the write data buffer51of the host2. In this case, although not limited thereto, the controller4may transfer write data of 16 Kbytes from the write data buffer51of the host2to the internal buffer31by one DMA transfer. In the DMA transfer, a transfer source address designating a head location of the data d10-1and a data length=16 KB may be set to the DMAC15. The transfer source address designating the head location of the data d10-1is represented by the data pointer in the write command corresponding to the data d10. In addition, the controller4transfers the write data (d10-1) of 16 Kbytes as data to be written to the page P6of the write destination block BLK #1 to the NAND flash memory5. The controller4changes a next write destination page of the write destination block BLK #1 to the page P7and acquires next 16 Kbyte data d10-2of the data d10from the write data buffer51of the host2. In this case, although not limited thereto, the controller4may transfer write data of 16 Kbytes from the write data buffer51of the host2to the internal buffer31by one DMA transfer. In the DMA transfer, a transfer source address designating a head location of the data d10-2and a data length=16 KB may be set to the DMAC15. The transfer source address designating the head location of the data d10-2can be obtained by adding an offset corresponding to 16 KB to a value of the data pointer in the write command corresponding to the data d10. In addition, the controller4transfers the write data (d10-2) of 16 Kbytes as data to be written to the page P7of the write destination block BLK #1 to the NAND flash memory5. In addition, the controller4writes data (P4to P7) of four pages to a plurality of memory cells connected to the word line WL1of the write destination block BLK #1 by the foggy write operation. As described above, in accordance with the progress of the write operation of the write destination block BLK #1, the controller4acquires data of 16 Kbytes to be transferred to the write destination page of the write destination block BLK #1 from the host2. In addition, when the foggy write operation for the plurality of memory cells connected to the word line WL3is finished, the fine write operation for the plurality of memory cells connected to the word line WL0can be executed. The controller4changes a next write destination page of the write destination block BLK #1 to the page P0. In the same procedure as the above, the controller4transfers the write data (P0to P3) again to the NAND flash memory5in a page unit and writes the write data (P0to P3) of the four pages to the plurality of memory cells connected to the word line WL0of the write destination block BLK #1 by the fine write operation. 
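The FIG.9 walk-through above amounts to packing variable-length write data into fixed 16-Kbyte transfer units, where each DMA transfer is described by a source address (the command's data pointer plus the offset already consumed from that command) and a length. A minimal Python sketch follows; the buffer addresses in the usage example are hypothetical, and a trailing partial unit is held back, matching the behavior of waiting for subsequent write commands.

```python
PAGE_SIZE = 16 * 1024  # data transfer unit assumed from the description (16 Kbytes)

def plan_dma_transfers(write_commands, page_size=PAGE_SIZE):
    """Split/combine host write data into page_size units (FIG.9).

    write_commands: (data_pointer, length) pairs taken from write commands
    designating the same write destination block. Returns one list of DMA
    descriptors (source_address, length) per page-sized unit."""
    pages, current, filled = [], [], 0
    for pointer, length in write_commands:
        offset = 0
        while offset < length:
            take = min(page_size - filled, length - offset)
            current.append((pointer + offset, take))  # one DMA transfer
            offset += take
            filled += take
            if filled == page_size:
                pages.append(current)
                current, filled = [], 0
    return pages  # a trailing partial unit, if any, is intentionally not emitted

# d1..d4 are 4 Kbytes each, d5 is 8 Kbytes, d6 is 40 Kbytes (sizes from FIG.9)
cmds = [(0x1000, 4096), (0x9000, 4096), (0x11000, 4096), (0x19000, 4096),
        (0x21000, 8192), (0x31000, 40960)]
for page_no, descriptors in enumerate(plan_dma_transfers(cmds)):
    print("page", page_no, descriptors)  # P0: four transfers; P1: two; P2, P3: one
```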
As a result, for the first six write commands, that is, the write command corresponding to the data d1, the write command corresponding to the data d2, the write command corresponding to the data d3, the write command corresponding to the data d4, the write command corresponding to the data d5, and the write command corresponding to the data d6, all of the foggy-fine write operation for the entire write data associated with each write command is finished and each of the data d1to d6becomes readable from the NAND flash memory5. Therefore, the controller4returns six command completion responses corresponding to the first six write commands to the host2. InFIG.9, the operation for transferring the write data associated with each write command designating the write destination block BLK #1 from the host2to the flash storage device3in a unit of 16 Kbytes in accordance with the progress of the write operation of the write destination block BLK #1 has been described. However, the same operation as the operation described inFIG.9is executed for each of the other write destination blocks BLK #. A flowchart ofFIG.10illustrates a procedure of data write processing executed by the flash storage device3. The controller4of the flash storage device3receives each write command including a data pointer, a length, and a block identifier (for example, a block address) from the host2(step S11). Next, the controller4divides large-sized write data corresponding to one write command designating a specific write destination block into two or more data portions or combines two or more write data corresponding to two or more write commands designating the specific write destination block, thereby transferring the data from the host2to the flash storage device3in a unit of the same size as the write unit (data transfer size) of the NAND flash memory5(step S12). In step S12, as described inFIG.9, for example, one 16-Kbyte unit of data obtained by combining several small write data portions, or one of several 16-Kbyte units obtained by dividing large write data, is transferred from the host2to the flash storage device3. In the case where the flash storage device3has a configuration including the internal buffer31, each write data of 16 Kbytes transferred from the host2to the flash storage device3is stored in the internal buffer31. In addition, in step S12, in order to combine some write data portions having small sizes, when a size of write data associated with a preceding write command having an identifier designating a certain write destination block is smaller than the write unit (for example, 16 Kbytes), the controller4waits for reception of a subsequent write command having the identifier designating the write destination block. The controller4transfers the data of 16 Kbytes transferred from the host2to the NAND flash memory5and writes the data of 16 Kbytes to the specific write destination block (step S13). Then, the controller4determines whether all of a write operation (a write operation involving transferring the same data to the NAND flash memory5once or more) for the entire write data associated with one write command designating the certain write destination block has been finished and the entire write data has become readable from the NAND flash memory5(step S14).
When all of the write operation for the entire write data associated with one write command designating the certain write destination block is finished and the entire write data is readable from the NAND flash memory5, the controller4returns a response indicating the command completion of the write command to the host2(step S15). In the case of using the write operation involving transferring the same data to the NAND flash memory5multiple times like the foggy-fine write operation, when all of the write operation (multi-step write operation) for the entire write data associated with one write command designating the certain write destination block is finished, the controller4may return a response indicating the command completion of the write command to the host2. The reason is that, in the foggy-fine write operation, when the fine write operation of certain data is finished, the data can be correctly read from the NAND flash memory5. Further, the following type of NAND flash memory may be used, in which, even if the fine write operation of data for a certain page is finished, the data cannot be read until write of data for one or more subsequent pages is finished. In this case, when all of a write operation (multi-step write operation) for the entire write data associated with one write command designating the certain write destination block is finished and the entire write data becomes readable from the NAND flash memory5by writing of data to one or more subsequent pages, a response indicating the command completion of the write command may be returned to the host2. As described above, in the present embodiment, when the write data associated with the certain write command is transferred from the host2to the flash storage device3, a response indicating the command completion of the write command is not returned to the host2and when all of the write operation necessary for writing the entire write data associated with the certain write command is finished or all of the write operation of the entire write data is finished and the entire write data becomes readable from the NAND flash memory5, a response indicating the command completion of the write command is returned to the host2. As a result, the host2can maintain the write data of each write command in the write data buffer51until the write data of each write command becomes readable from the flash storage device3, by performing only simple processing for discarding the write data corresponding to the write command of which the command completion has been notified, from the write data buffer51. A flowchart ofFIG.11illustrates a procedure of write data discard processing executed by the host2. The host2determines whether a response indicating the command completion of the write command has been received from the flash storage device3(step S21). When the response indicating the command completion of the certain write command has been received from the flash storage device3(YES in step S21), the host2discards the write data associated with the write command from the write data buffer51(step S22). FIG.12illustrates dummy data write processing executed by the flash storage device3, when a next write command designating a certain write destination block is not received for a threshold period from reception of a latest write command designating the certain write destination block.
Data d1, data d2, data d3, and data d4illustrated in a left portion ofFIG.12indicate four write data corresponding to four write commands designating the write destination block BLK #1. InFIG.12, the case where each of the data d1, the data d2, the data d3, and the data d4has a size of 4 Kbytes is assumed. (1) The controller4acquires write data of 16 Kbytes obtained by combining the data d1, the data d2, the data d3, and the data d4, from the write data buffer51of the host2. In addition, the controller4transfers the write data of 16 Kbytes as data to be written to the page P0of the write destination block BLK #1 to the NAND flash memory5. When a subsequent write command designating the write destination block BLK #1 is not received for the threshold period from reception of the latest write command designating the write destination block BLK #1, that is, the write command that has requested writing of the data d4, in order to enable a response indicating the command completion of the latest write command to be returned to the host2within a predetermined time, the controller4writes dummy data to one or more pages in the write destination block BLK #1 and advances a location of a write destination page in the write destination block BLK #1 where next write data is to be written. For example, the controller4transfers dummy data of three pages corresponding to the pages P1to P3to the NAND flash memory5in a page unit and writes data (P0to P3) of the four pages to a plurality of memory cells connected to the word line WL0of the write destination block BLK #1 by the foggy write operation. (2) Next, the controller4transfers dummy data of four pages corresponding to the pages P4to P7to the NAND flash memory5in a page unit and writes data (P4to P7) of the four pages to a plurality of memory cells connected to the word line WL1of the write destination block BLK #1 by the foggy write operation. (3) Next, the controller4transfers dummy data of four pages corresponding to the pages P8to P11to the NAND flash memory5in a page unit and writes data (P8to P11) of the four pages to a plurality of memory cells connected to the word line WL2of the write destination block BLK #1 by the foggy write operation. (4) Next, the controller4transfers dummy data of four pages corresponding to the pages P12to P15to the NAND flash memory5in a page unit and writes data (P12to P15) of the four pages to a plurality of memory cells connected to the word line WL3of the write destination block BLK #1 by the foggy write operation. (5) Next, the controller4transfers the write data of 16 Kbytes obtained by combining the data d1, the data d2, the data d3, and the data d4from the write data buffer51or the internal buffer31to the NAND flash memory5and transfers the same dummy data (P1to P3) of three pages as the dummy data (P1to P3) of the three pages used in the foggy write operation of the word line WL0to the NAND flash memory5in a page unit. In addition, the controller4writes the data (P0to P3) of the four pages to the plurality of memory cells connected to the word line WL0of the write destination block BLK #1 by the fine write operation. As a result, all of the multi-step write operation of the data d1, the data d2, the data d3, and the data d4is finished and the data d1, the data d2, the data d3, and the data d4become readable from the NAND flash memory5.
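A minimal Python sketch of this dummy-data padding follows. It assumes, for illustration only, a 16-Kbyte page, four pages per word line, a four-word-line foggy/fine window, and an all-zero dummy pattern; the description does not specify the dummy contents.

```python
PAGE = 16 * 1024    # page size assumed for illustration
PAGES_PER_WL = 4    # QLC: four pages per word line
WL_WINDOW = 4       # fine write of WL0 unlocks after the foggy write of WL3

def pad_block_for_completion(buffered_pages):
    """Given the pages already buffered for a write destination block (here,
    one page built from d1..d4), return the page images to program so the
    fine write of word line 0 can complete (FIG.12 steps (1) to (5))."""
    dummy = b"\x00" * PAGE
    pages = list(buffered_pages)
    # pad to the word-line boundary, then fill WL1..WL3 entirely with dummy data
    while len(pages) < PAGES_PER_WL * WL_WINDOW:
        pages.append(dummy)
    foggy_word_lines = [pages[i:i + PAGES_PER_WL]
                        for i in range(0, len(pages), PAGES_PER_WL)]
    fine_wl0 = pages[:PAGES_PER_WL]  # P0..P3 re-transferred for the fine write
    return foggy_word_lines, fine_wl0

foggy, fine_wl0 = pad_block_for_completion([b"d1d2d3d4".ljust(PAGE, b"\x00")])
print(len(foggy), "foggy word lines;", len(fine_wl0), "pages re-sent for fine write")
```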
The controller4returns a response indicating the command completion of the first write command having requested writing of the data d1, a response indicating the command completion of the second write command having requested writing of the data d2, a response indicating the command completion of the third write command having requested writing of the data d3, and a response indicating the command completion of the fourth write command having requested writing of the data d4to the host2. In the present embodiment, the write data is transferred from the host2to the flash storage device3in a unit of the same data size as the data write unit of the NAND flash memory5. When all of the write operation of the entire write data of the certain write command is finished or when all of the write operation of the entire write data is finished and the entire write data is readable, a response indicating the command completion of the write command is returned to the host2. For this reason, for example, when a subsequent write command designating the certain write destination block is not issued from the host2for a while after the write command requesting writing of the small write data to the certain write destination block is issued from the host2to the flash storage device3, a timeout error of the write command may occur. In the present embodiment, when a next write command having the certain block identifier is not received for the threshold period after the latest write command having the certain block identifier is received from the host2, the controller4writes the dummy data to the next one or more unwritten pages in the write destination block corresponding to the block identifier. Therefore, the write operation of the write destination block can be advanced as necessary, so that it is possible to prevent occurrence of the timeout error of the write command. A flowchart ofFIG.13illustrates a procedure of dummy data write processing executed by the flash storage device3. Here, the case where data is written to the write destination block by the multi-step write operation such as the foggy-fine write operation is assumed. The controller4of the flash storage device3writes the write data associated with the latest write command designating the certain write destination block to the write destination block by the first write operation step such as the foggy write operation. When the next write command designating the write destination block is not received for the threshold period (Th) from the reception of the latest write command (YES in step S31), the controller4writes the dummy data to one or more pages subsequent to the page in the write destination block where the write data associated with the latest write command has been written, thereby advancing a location of a write destination page in the write destination block where next write data is to be written (step S32). When the fine write operation (second write operation step) of the write data associated with the latest write command can be executed by advancing the location of the write destination page by writing the dummy data to the write destination block, the controller4transfers the write data associated with the latest write command again from the write data buffer51or the internal buffer (shared cache)31to the NAND flash memory5and executes the fine write operation of the write data (step S33).
When the fine write operation of the write data associated with the latest write command is finished, that is, all of the multi-step write operation of the entire write data is finished, the controller4returns a response indicating the command completion of the latest write command to the host2(step S34). As described above, in the case of writing the write data to the write destination block by the multi-step write operation, in order to enable the second write operation step of the write data associated with the latest write command to be executed, the controller4writes the dummy data to one or more pages in this write destination block and advances a location of the write destination page in the write destination block where the next write data is to be written. FIG.14illustrates a data transfer operation executed by the controller4using the internal buffer (shared cache)31. The internal buffer (shared cache)31is shared by a plurality of write destination blocks BLK #1, BLK #2, . . . , and BLK #n. The controller4of the flash storage device3executes the following processing for each of the write destination blocks BLK #1, BLK #2, and BLK #n. The write destination block BLK #1 will be described below by way of example. After receiving one or more write commands designating the write destination block BLK #1, the controller4acquires, from the write data buffer51, write data having the same size as the write unit of the NAND flash memory5, which is obtained by dividing the write data associated with one write command designating the write destination block BLK #1 into a plurality of write data or combining the write data associated with the two or more write commands designating the write destination block BLK #1. In addition, the controller4stores a plurality of write data, each of which is obtained from the write data buffer51and has the same size as the write unit of the NAND flash memory5, in the internal buffer (shared cache)31. The write data buffer51does not necessarily include one continuous region on the host memory. As illustrated inFIG.14, the write data buffer51may be realized by a plurality of write data buffers51-1,51-2, . . . , and51-n. The controller4acquires the write data (first write data) to be subsequently written to the write destination block BLK #1 from the internal buffer (shared cache)31, transfers the first write data to the NAND flash memory5, and writes the first write data to the write destination block BLK #1 by the first write operation step such as the foggy write operation. In order to efficiently store the write data from the host2in the internal buffer (shared cache)31, when there is no free region for storing write data acquired from the host2in the internal buffer (shared cache)31, the controller4discards the write data (write data of a foggy state) in the internal buffer (shared cache)31in which the first write operation step such as the foggy write operation is finished and secures a free region in the internal buffer (shared cache)31. 
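The shared-cache behavior described forFIG.14can be sketched as follows in Python. The class and callback names are illustrative assumptions; the policy shown, namely discarding a foggy-state entry to free space, re-fetching from the host write data buffer on a miss, and dropping an entry after its final transfer, follows the description.

```python
from collections import OrderedDict

class SharedCache:
    """Minimal sketch of the shared internal buffer of FIG.14. Entries hold
    page-sized write data; foggy-state data may be discarded to free space
    because it can be re-acquired from the host write data buffer."""

    def __init__(self, capacity, host_fetch):
        self.capacity = capacity        # number of page-sized regions
        self.entries = OrderedDict()    # key -> (data, state), oldest first
        self.host_fetch = host_fetch    # callback: re-acquire data from host

    def _make_room(self):
        # discard the oldest entry whose foggy write has already finished
        for key, (_, state) in self.entries.items():
            if state == "foggy_done":
                del self.entries[key]
                return
        raise RuntimeError("no discardable foggy-state data")

    def put(self, key, data):
        if len(self.entries) >= self.capacity:
            self._make_room()
        self.entries[key] = (data, "buffered")

    def mark_foggy_done(self, key):
        data, _ = self.entries[key]
        self.entries[key] = (data, "foggy_done")

    def get_for_fine(self, key):
        # hit: use the cached copy; miss: DMA the data again from the host
        if key not in self.entries:
            self.put(key, self.host_fetch(key))
        data, _ = self.entries.pop(key)  # discard after the final transfer
        return data
```

In this sketch, get_for_fine realizes both the hit case and the miss case described above, and the pop call corresponds to securing a free region once the final data transfer for the fine write operation has been performed.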
For example, when a new write command designating an arbitrary write destination block is received from the host2in a state in which there is no free region in the internal buffer (shared cache)31, the controller4may discard the write data (write data of a foggy state) in the internal buffer (shared cache)31in which the first write operation step such as the foggy write operation is finished and secure a free region capable of storing write data corresponding to the new write command in the internal buffer (shared cache)31. For example, when a new write command is received from the host2in a state in which the entire internal buffer (shared cache)31is filled with a large amount of write data of a foggy state, the controller4may select specific write data to be discarded from the write data of the foggy state and discard the selected write data. As a result, it is possible to efficiently share the internal buffer (shared cache)31having the limited capacity between a plurality of write destination blocks. In the case where the first write data does not exist in the internal buffer (shared cache)31at a point of time when the second write operation step such as the fine write operation of the first write data is to be executed, the controller4acquires the first write data again from the write data buffer51of the host2by sending a request (transfer request: DMA transfer request) for acquiring the first write data to the host2. The acquired first write data may be stored in the internal buffer (shared cache)31. In addition, the controller4transfers the acquired first write data to the NAND flash memory5and writes the first write data to the write destination block BLK #1 by the second write operation step such as the fine write operation. In the case where the first write data exists in the internal buffer (shared cache)31at a point of time when the second write operation step such as the fine write operation of the first write data is to be executed, the controller4acquires the first write data from the internal buffer (shared cache)31, transfers the acquired first write data to the NAND flash memory5, and writes the first write data to the write destination block BLK #1 by the second write operation step such as the fine write operation. After performing the final data transfer (here, data transfer for the fine write operation) of the first write data to the NAND flash memory5, the controller4discards the first write data from the internal buffer (shared cache)31, thereby securing a free region in the internal buffer (shared cache)31. Alternatively, the controller4may discard the first write data from the internal buffer (shared cache)31when the fine write operation of the first write data is finished. When all of the write operation for the entire write data associated with a certain write command is finished or when the fine write operation of the entire write data is finished and the entire write data becomes readable from the NAND flash memory5, the controller4returns a response indicating the command completion of the write command to the host2. Although the internal buffer (shared cache)31has a limited capacity, if the number of write destination blocks is a certain number or less, the probability (hit rate) that the first write data exists in the internal buffer (shared cache)31at a point of time when the second write operation step for the first write data is to be executed is relatively high. 
Therefore, it is possible to execute the multi-step write operation such as the foggy-fine write operation without transferring the same write data from the host2to the flash storage device3multiple times. As a result, since data traffic between the host2and the flash storage device3can be reduced, I/O performance of the flash storage device3can be improved as compared with the case where the same write data is transferred from the host2to the flash storage device3multiple times each time data is written. The number of write destination blocks may be the same as the number of clients using the host2. In this case, data corresponding to a certain client is written to a write destination block corresponding to the client and data corresponding to another client is written to another write destination block. Therefore, when the number of clients using the host2increases, the hit ratio of the internal buffer (shared cache)31decreases. However, when the first write data does not exist in the internal buffer (shared cache)31(miss), the controller4acquires the first write data from the host2. Therefore, even when the number of clients increases, it is possible to normally execute the multi-step write operation such as the foggy-fine write operation. Therefore, the flash storage device3can flexibly cope with an increase in the number of clients sharing the flash storage device3(that is, an increase in the number of write destination blocks that can be simultaneously used) and the data traffic between the host2and the flash storage device3can be reduced. Here, the write processing for writing data to the write destination block BLK #1 has been described. However, the same write processing is executed for each of the other write destination blocks. FIG.15illustrates the write processing executed by the controller4using the internal buffer (shared cache)31and processing for discarding the write data in the internal buffer (shared cache)31. InFIG.15, for the sake of simplification of illustration, the case where the internal buffer (shared cache)31includes regions101to109is exemplified. Further, inFIG.15, the NAND flash memory5is realized as a QLC-flash and processing for discarding the write data in a unit of a data size of four pages is exemplified. However, the present embodiment is not limited thereto. For example, processing for transferring the write data from the write data buffer51to the internal buffer (shared cache)31in a unit of a data size of one page may be executed and processing for discarding the write data in a unit of a data size of one page may be executed. Further, inFIG.15, the case where the foggy-fine write operation is executed across three word lines WL is assumed. Each of write data D1and D2respectively stored in the regions101and102of the internal buffer (shared cache)31is associated with one or more write commands designating the write destination block BLK #11. Each of the write data D1and D2may have, for example, a size of four pages. The controller4writes the write data D1of four pages to pages P0to P3(a plurality of memory cells connected to the word line WL0) of the write destination block BLK #11 by the foggy write operation (1), and writes the write data D2of four pages to pages P4to P7(a plurality of memory cells connected to the word line WL1) of the write destination block BLK #11 by the foggy write operation (2). 
Each of write data D11, D12, and D13respectively stored in the regions103,104, and105of the internal buffer (shared cache)31is associated with one or more write commands designating a write destination block BLK #101. Each of the write data D11, D12, and D13may have, for example, a size of four pages. The controller4writes the write data D11of four pages to the pages P0to P3(a plurality of memory cells connected to the word line WL0) of the write destination block BLK #101 by the foggy write operation (3), writes the write data D12of four pages to the pages P4to P7(a plurality of memory cells connected to the word line WL1) of the write destination block BLK #101 by the foggy write operation (4), and writes the write data D13of four pages to pages P8to P11(a plurality of memory cells connected to the word line WL2) of the write destination block BLK #101 by the foggy write operation (5). After the foggy write operation of the write data D13of four pages with respect to the word line WL2of the write destination block BLK #101 is finished, the controller4writes the write data D11of four pages to the pages P0to P3(a plurality of memory cells connected to the word line WL0) of the write destination block BLK #101 by the fine write operation (6). When the fine write operation of the write data D11is finished or when the transfer (final transfer) for the fine write operation of the write data D11to the NAND flash memory chip including the write destination block BLK #101 is finished, a state of the write data D11changes from a foggy state to a fine state. Further, the controller4discards the write data D11(write data of the fine state) in which the fine write operation has been finished from the internal buffer (shared cache)31to set the region103to a free region (7). FIG.16illustrates processing for discarding the write data in the internal buffer (shared cache)31, which is executed by the controller4when there is no free region in the internal buffer (shared cache)31. An upper portion ofFIG.16illustrates a state in which the entire internal buffer (shared cache)31is filled with write data (D21to D23, D31to D33, and D41to D43) of the foggy state in which the foggy write operation (first write operation step) is finished and there is no free region in the internal buffer (shared cache)31. In this state, when it is necessary to transfer the write data from the write data buffer51to the internal buffer (shared cache)31, for example, when a new write command is received from the host2, as illustrated in a middle portion ofFIG.16, the controller4selects the oldest write data (here, the write data D11) from the write data (write data of the foggy state) in which the foggy write operation (first write operation step) is finished as write data to be discarded and discards the oldest write data (here, the write data D11) from the internal buffer (shared cache)31. In addition, as illustrated in a lower portion ofFIG.16, the controller4of the flash storage device3stores new write data (here, write data D51) received from the write data buffer51in the region101that has become a free region by discarding the write data D11. Instead of discarding the oldest write data among the write data (write data of the foggy state) in which the foggy write operation has been finished, the write data having the smallest number of remaining data transfers to the NAND flash memory5may be discarded among the write data (write data of the foggy state) in which the foggy write operation has been finished. 
In this case, for example, in the case where the multi-step write operation involving transferring the same data to the NAND flash memory5three times is used, data which has already been transferred twice to the NAND flash memory5is selected as data to be discarded in preference to data which has been transferred once to the NAND flash memory5. A flowchart ofFIG.17illustrates a procedure of data write processing executed by the controller4using the internal buffer (shared cache)31. The controller4receives one or more write commands each including a data pointer, a length of write data, and an identifier (for example, a block address) designating any one of a plurality of write destination blocks from the host2(step S101). After receiving one or more write commands having identifiers indicating the same write destination block, the controller4transfers write data having the same size as the write unit of the NAND flash memory5, which is obtained by dividing the write data associated with one write command in the write commands into a plurality of write data or combining the write data associated with two or more write commands having the identifiers indicating the same write destination blocks, from the write data buffer51to the internal buffer (shared cache)31(step S102). The controller4acquires the write data to be subsequently written to the write destination block from the internal buffer (shared cache)31, transfers the write data to the NAND flash memory5, and writes the write data to the write destination block by the foggy write operation (steps S103and S104). When the NAND flash memory5is realized as a QLC-flash, in step S103, the write data of four pages is transferred to the NAND flash memory5in a page unit and in step S104, the write data of the four pages is written to a plurality of memory cells connected to one word line to be written in the write destination block by the foggy write operation. The transfer of the write data from the write data buffer51to the internal buffer (shared cache)31is executed in accordance with the progress of the write operation of each write destination block. For example, when an operation of transferring the write data to be written to a certain page of a certain write destination block to the NAND flash memory chip is finished, write data to be written to a next page of the write destination block may be transferred from the write data buffer51to the internal buffer (shared cache)31. Alternatively, when the operation of transferring the write data to be written to the certain page of the certain write destination block to the NAND flash memory chip including the write destination block is finished and the operation of writing the write data to the write destination block is finished, the write data to be written to the next page of the write destination block may be transferred from the write data buffer51to the internal buffer (shared cache)31. At a point of time when the fine write operation of the write data in which the foggy write operation has been finished is to be started, the controller4determines whether the write data exists in the internal buffer (shared cache)31. If the write data exists in the internal buffer (shared cache)31(YES in step S106), the controller4acquires the write data from the internal buffer (shared cache)31, transfers the write data to the NAND flash memory5, and writes the write data to the write destination block by the fine write operation (steps S107, S108, and S109). 
As a result, the write data becomes readable from the NAND flash memory5. The controller4determines whether the foggy-fine write operation of the entire write data has been finished and the entire write data has become readable from the NAND flash memory5, for each write command. Then, the controller4returns, to the host2, a response indicating the command completion of the write command corresponding to the write data in which the foggy-fine write operation has been finished and which has become readable from the NAND flash memory5(step S110). If the fine write operation of the entire write data associated with the certain write command has been finished by the processing of step S109, a response indicating the command completion of the write command may be returned to the host2in step S110. If the write data does not exist in the internal buffer (shared cache)31(NO in step S106), the controller4acquires the write data from the write data buffer51on the host memory. A flowchart ofFIG.18illustrates a procedure of data read processing executed by the controller4. As described above, when the data designated by the read command received from the host2is data in which all of the write operation (write operation for transferring the same data to the NAND flash memory5once or more) has not been finished or data in which all of the write operation has been finished, but which has not yet become readable from the NAND flash memory5, the controller4determines whether the data exists in the internal buffer (shared cache)31. When the data does not exist in the internal buffer (shared cache)31, the controller4acquires the data from the write data buffer51, stores the data in the internal buffer (shared cache)31, and returns the data from the internal buffer (shared cache)31to the host2. Specifically, the following data read processing is executed. When the controller4receives the read command from the host2(YES in step S121), the controller4determines whether the data designated by the read command is data in which all of the write operation is finished and which is readable from the NAND flash memory5(step S122). When the data designated by the read command is readable from the NAND flash memory5(YES in step S122), the controller4reads the data from the NAND flash memory5and returns the read data to the host2(step S126). In step S126, the controller4transfers the read data to a location in the read data buffer53designated by the data pointer included in the read command. When the data designated by the read command is not readable from the NAND flash memory5(NO in step S122), the controller4determines whether the data exists in the internal buffer (shared cache)31(step S123). When the data designated by the read command exists in the internal buffer (shared cache)31(YES in step S123), the controller4reads the data from the internal buffer (shared cache)31and returns the read data to the host2(step S124). In step S124, the controller4transfers the read data to a location in the read data buffer53designated by the data pointer included in the read command. When the data does not exist in the internal buffer (shared cache)31(NO in step S123), the controller4acquires the data from the write data buffer51and stores the data in the internal buffer (shared cache)31(step S125). In step S125, the data is transferred from the write data buffer51to a free region of the internal buffer (shared cache)31by the DMAC15. 
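TheFIG.18dispatch can be sketched as follows in Python; nand, cache, and host_buffer are stand-in dictionaries used only for illustration, and the processing for securing a free region when the cache is full, described next, is omitted.

```python
def handle_read(key, nand, cache, host_buffer):
    """Sketch of the FIG.18 read path for one read command."""
    if key in nand:                  # step S122: write finished and readable
        return nand[key]             # step S126: read from the NAND flash memory
    if key in cache:                 # step S123: present in the shared cache
        return cache[key]            # step S124: return from the shared cache
    cache[key] = host_buffer[key]    # step S125: refill the cache from the host
    return cache[key]                # step S124

# data still being written is served from the cache or the host write buffer
nand, cache, host = {}, {}, {"lba0": b"pending-write-data"}
print(handle_read("lba0", nand, cache, host))
```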
When there is no free region of the internal buffer (shared cache)31, processing for securing the free region of the internal buffer (shared cache)31is executed. In addition, the controller4reads the data from the internal buffer (shared cache)31and returns the read data to the host2(step S124). In step S124, the controller4transfers the read data to a location in the read data buffer53designated by the data pointer included in the read command. FIG.19illustrates a data write operation and a data read operation to be applied to the flash storage device3realized as the type #2-storage device. In the data write operation, the host2designates a write destination block and the flash storage device3determines a write destination page. In addition, in the data read operation, the host2designates a block address and a page address. The host2includes a storage management unit404for managing the flash storage device3. The storage management unit404sends a block allocation command and a write command to the flash storage device3. The controller4of the flash storage device3includes a block allocation unit701and a page allocation unit702. The block allocation unit701and the page allocation unit702may be included in the write control unit21described inFIG.2. The data write operation is executed in the following procedure. (1) When the storage management unit404of the host2needs to write data (write data) to the flash storage device3, the storage management unit404may request the flash storage device3to allocate an available free block as a write destination block. When the block allocation unit701receives the request (block allocation command), the block allocation unit701allocates one free block of a free block group as the write destination block to the host2and notifies the host2of a block address (BLK #) of the allocated write destination block. (2) The storage management unit404of the host2sends a write command including the block address designating the allocated write destination block, a tag for identifying the write data, a data length of the write data, and a data pointer to the flash storage device3. In addition, the storage management unit404stores the write data in the write data buffer51. (3) When the page allocation unit702receives the write command, the page allocation unit702determines a page address indicating a write destination page in a block (write destination block) having the block address designated by the write command. The controller4transfers the write data from the write data buffer51to the internal buffer (shared cache)31in a unit of a page size and writes the write data to the determined write destination page in the write destination block. (4) The controller4may notify the host2of the page address indicating the write destination page as a response indicating the command completion of the write command. Alternatively, the controller4may notify the host2of a set of the tag included in the write command, the block address included in the write command, and the determined page address as the response indicating the command completion of the write command. In the host2, the LUT404A is updated so that a physical address (the block address and the page address) indicating a physical storage location where the write data has been written is mapped to the tag of the write data. The data read operation is executed in the following procedure. 
(1)′ When the host2needs to read data from the flash storage device3, the host2refers to the LUT404A to acquire from the LUT404A the physical address (the block address and the page address) corresponding to the tag of the data to be read. (2)′ The host2sends a read command designating the acquired block address and page address to the flash storage device3. When the controller4of the flash storage device3receives the read command from the host2, the controller4reads the data from the physical storage location of the read target in the block to be read, based on the block address and the page address. FIG.20illustrates a block allocation command applied to the flash storage device3realized as the type #2-storage device. The block allocation command is a command (block allocation request) that requests the flash storage device3to allocate a write destination block (free block). The host2requests the flash storage device3to allocate the write destination block by transmitting the block allocation command to the flash storage device3, thereby obtaining the block address (block address of the allocated write destination block). FIG.21illustrates a response to the block allocation command. When the block allocation command is received from the host2, the controller4of the flash storage device3selects the free block to be allocated to the host2from a free block list, allocates the selected free block as the write destination block, and returns a response including the block address of the write destination block to the host2. FIG.22illustrates a write command applied to the flash storage device3realized as the type #2-storage device. The write command is a command for requesting the flash storage device3to write data. The write command may include a command ID, a block address, a tag, a length, and the like. The command ID is an ID (command code) indicating that a command is a write command and the write command includes the command ID for the write command. The block address is a physical address designating a write destination block where data is to be written. The tag is an identifier for identifying write data to be written. As described above, the tag may be a logical address such as the LBA and may be a key of a key-value store. When the tag is the logical address such as the LBA, the logical address (start LBA) included in the write command indicates a logical location (first logical location) in a logical address space where the write data is to be written. The length indicates a length of the write data to be written. The write command further includes a data pointer indicating a location in the write data buffer51in which the write data has been stored. When the write command is received from the host2, the controller4determines a write destination location (write destination page) in the write destination block having the block address designated by the write command. The write destination page is determined in consideration of page write order restrictions, bad pages, and the like. In addition, the controller4writes the write data associated with the write command to the write destination location (write destination page) in the write destination block. FIG.23illustrates a response to the write command ofFIG.22. The response includes the page address and the length. The page address is a physical address indicating the physical storage location in the write destination block where the data has been written. 
The physical address may be represented by an offset in the block (that is, a set of a page address and an offset in a page). The length indicates a length of the written data. Alternatively, the response may further include a tag and a block address, in addition to the page address (offset in the block) and the length. The tag is the tag included in the write command ofFIG.22. The block address is the block address included in the write command ofFIG.22. FIG.24illustrates a read command applied to the flash storage device3realized as the type #2-storage device. The read command is a command for requesting the flash storage device3to read data. The read command includes a command ID, a tag, a block address, a page address, and a length. The command ID is an ID (command code) indicating that a command is a read command and the read command includes the command ID for the read command. The block address designates a block where data to be read is stored. The page address designates a page where the data to be read is stored. The page address may be represented by an offset in the block (that is, a set of a page address and an offset in a page), which indicates a physical storage location in the block where the data to be read is stored. The length indicates a length of the data to be read. Further, the read command includes a data pointer indicating a location in the read data buffer53to which the data designated by the read command is to be transferred. As described above, according to the present embodiment, after receiving one or more write commands having the first identifiers indicating the same write destination block, the controller4acquires, from the host2, write data having the same size as the data write unit of the NAND flash memory5, which is obtained by dividing the write data associated with one write command in the received write command into a plurality of write data (a plurality of data portions) or combining the write data associated with the two or more write commands in the received write command. In addition, the controller4writes the acquired write data to the write destination block designated by the first identifier, by the first write operation involving transferring the same data once or more. Therefore, the write data can be acquired from the host2in a unit of the same data size as the data write unit of the NAND flash memory5, regardless of the size of the write data designated by each write command, and the write data can be transferred to the NAND flash memory5. Therefore, even if a write command requesting writing of large-sized write data to a certain write destination block is received, it is possible to prevent stagnation of a data write operation for other write destination block due to this. Therefore, it is possible to efficiently process each of the plurality of write commands respectively designating the plurality of write destination blocks. As a result, even if the internal buffer31with the large capacity is not provided on the device side or the buffer-less configuration where the capacity of the internal buffer31is nearly zero is used, the plurality of write destination blocks can be simultaneously used. 
That is, as described above, by diverting the host memory (write data buffer51) on the device side, it is possible to flexibly cope with an increase in the number of write destination blocks, that is, an increase in the number of clients sharing the flash storage device3, without providing the internal buffer31with the large capacity on the device side, and with the limited resources of the flash storage device3, it is possible to maximize the upper limit of the number of write destination blocks that can be simultaneously used. In addition, when all of the write operation (write operation for transferring the same data to the NAND flash memory5once or more) for the entire write data associated with one write command designating the certain write destination block is finished or when all of the write operation (write operation involving transferring the same data to the NAND flash memory5once or more) for the entire write data associated with one write command designating the certain write destination block is finished and the entire write data becomes readable from the NAND flash memory5, the controller4returns a response indicating the command completion of the write command to the host2. As a result, the host2performs only simple processing for discarding the write data corresponding to the write command of which the command completion has been given in notification, from the write data buffer51, thereby maintaining the write data in the write data buffer51until the write data of each write command becomes readable. Further, at a point of time when the second write operation step such as the fine write operation is to be executed, only when there is no data to be written in the internal buffer31, the controller4transfers the write data again from the write data buffer51of the host2to the internal buffer31. Therefore, it becomes unnecessary to transfer the same data from the write data buffer51to the internal buffer31multiple times every time the data is written. As described above, by diverting the write data buffer51of the host2on the device side, it is possible to flexibly cope with an increase in the number of write destination blocks, that is, an increase in the number of clients sharing the flash storage device3, without providing the internal buffer31with the large capacity on the device side for the multi-step write operation, and it is possible to reduce data traffic between the host2and the flash storage device3. Further, in the configuration of the present embodiment, instead of returning the response of the command completion to the host2when the write data having the size designated by each write command is transferred from the host2to the flash storage device3and the transfer of the write data is finished, the write data is transferred from the host2to the flash storage device3in a unit of the same size as the write unit of the flash and the response of the command completion is returned to the host2for each write command. Therefore, it is possible to flexibly cope with an increase in the number of write destination blocks, that is, an increase in the number of clients sharing the flash storage device3while using a standard such as NVMe. The write data buffer51can be realized as a region accessible from each virtual machine executed on the host2. In the case where the host2and the flash storage device3are connected via a network such as Ethernet, the DMA transfer between the write data buffer51and the internal buffer31may be executed by remote DMA transfer. 
Further, in the present embodiment, the NAND flash memory is exemplified as the nonvolatile memory. However, the functions of the present embodiment can be applied to a variety of other nonvolatile memories such as a magnetoresistive random access memory (MRAM), a phase change random access memory (PRAM), a resistive random access memory (ReRAM), and a ferroelectric random access memory (FeRAM). While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. | 116,625 |
11861219 | DETAILED DESCRIPTION In a computer system with accessible storage or memory, various operating systems (OS) issue write requests at LBA sector sizes of 5xxB (such as 512B, 520B, and 528B) or 4xxxB (such as 4096B and 4160B) (where B represents a byte). LBA sector sizes are not expected to grow in the foreseeable future, but may change depending on specifications of the file systems (FS). In some examples, an SSD uses a 4 KibiByte (KiB) Indirection Unit (IU) size to map the LBAs to physical addresses of the media (e.g., NAND flash). An L2P table that stores the NAND physical block addresses with a 4 KiB (4096 bytes) indirection unit (IU) granularity and 4-bytes per entry consumes 1 MebiByte (MiB) of volatile memory space per GibiByte (GiB) of SSD logical memory capacity. The L2P table therefore uses 1 GiB of volatile memory for a 1 TebiByte (TiB) SSD (where the SSD has 1 TiB logical capacity, while its physical capacity may be larger), and 16 GiB of volatile memory for a 16 TiB SSD (an SSD with 16 TiB of NAND for storing user data). The IU size is expected to grow to 16 KiB and even to 64 KiB as the SSD capacity grows. As IU size grows, so does the requirement for available volatile memory (or other media used to store the L2P table), but available memory space for the table may be limited. Unaligned host writes occur when an LBA range of a write request is misaligned with the IU starting and ending address boundary. In other words, an unaligned or misaligned write request occurs when a starting and ending LBA range of a write request does not correspond with the respective starting and ending LBAs of an IU. By contrast, an aligned write request occurs when a starting and ending LBA range of a write request corresponds with the respective starting and ending LBA range of an IU. Write amplification refers to an amount of data written to storage media divided by an amount of data associated with a write request. Misaligned writes can introduce an SSD write amplification (WA) above one (1). As the IU size continues to increase, write amplification from misaligned write requests will further increase. For example, one 512B host write to an SSD with 4 KiB IU will be amplified 8 times in addition to a performance penalty (e.g., NAND page read time) associated with a data read, data modify, data write (Read-Modify-Write) operation. For example, if an IU size is 4 KiB and a host writes just to LBA0 (single sector), the write is misaligned. A read of IU0 (LBA0 to LBA7) takes place, data of LBA0 is updated with data from host, then LBA0 to LBA7 are written back to media with an updated LBA0. This example has an amplification of 8 as a write of one LBA corresponds to 8 LBAs being written to storage. As another example, if an IU size is 4 KiB and a host writes just to LBA1-LBA8, the write is misaligned. A read of IU0 (LBA0 to LBA7) takes place, data of LBA1-LBA7 are updated with data from host, then LBA0 to LBA7 are written back to media with updated LBA1-LBA7. A second read of IU1 (LBA8 to LBA15) takes place, data of LBA8 is updated, then LBA8 to LBA15 are written to the storage with updated LBA8. Note that in some cases, IU0 and IU1 can be read, LBA1-LBA8 updated, and updated IU0 and IU1 written to the media. This example has an amplification of 2 as a write of 8 LBAs corresponds to 16 LBAs being written to storage. Another example provides for a data write of LBA0-LBA7, which is aligned with IU0.
Data of LBA0-LBA7 are written to the media directly without Read-Modify-Write. This example has an amplification of 1 as a write of 8 LBAs corresponds to 8 LBAs being written to storage. Various embodiments provide for transfer of the unaligned portion of a host write to a buffer to attempt to reduce SSD write amplification (WA) and improve performance. For example, various embodiments use a table or array to identify if a retrievable segment (e.g., IU) stored in a storage has been copied into a buffer. A buffer can be used to store retrievable segments, and misaligned content of data writes overwrites stored retrievable segments in the buffer. A user need not map LBAs to the buffer manually (though the user can do so); instead, a controller can detect the unaligned portions of host writes and map them to the buffer automatically. Content in the buffer can be kept in the buffer without being backed-up or persisted to the storage until occurrence of a triggering event such as power loss or low space in the buffer. Without loss of generality, an Integrated Memory Buffer (IMB) can be used as an example of a buffer. However, various embodiments can provide other forms of a buffer, e.g., persistent memory regions (PMR), non-volatile dual in-line memory module (NVDIMM), persistent memory (e.g., Intel® Optane®), and so forth. Furthermore, some implementations may implement the buffer inside the SSD and/or in a caching-controller (in hardware and/or in host software). Various embodiments can provide SSD level write amplification reduction by up to 2-10 times for workloads that feature many misaligned writes. Host level changes to logical block size need not be made (but can be), and compatibility with existing or future host systems can be achieved, without modification, while attempting to reduce write amplification. Performance benefits can be achieved without any host level changes, including changes to the device driver, file system, and host applications. FIG.1depicts an example of a system. In some examples, host system100can include or access processors102and memory104to execute applications, an operating system, file system, or virtualized execution environments. An operating system can be for example: Microsoft® Windows® operating system, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel, MacOS®, or Android®. A virtualized execution environment can include at least a virtual machine or a container. A virtual machine (VM) can be software that runs an operating system and one or more applications. A VM can be defined by a specification, configuration files, a virtual disk file, a non-volatile random access memory (NVRAM) setting file, and a log file, and is backed by the physical resources of a host computing platform. A VM can be an operating system (OS) or application environment that is installed on software, which imitates dedicated hardware. The end user has the same experience on a virtual machine as they would have on dedicated hardware. Specialized software, called a hypervisor, emulates the PC client or server's CPU, memory, hard disk, network and other hardware resources completely, enabling virtual machines to share the resources. The hypervisor can emulate multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run Linux and Windows Server operating systems on the same underlying physical host.
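The alignment arithmetic in the LBA examples above is mechanical enough to sketch. The following is an illustrative sketch rather than the patent's code; it assumes a 4 KiB IU holding eight 512 B LBAs and reproduces the amplification figures quoted for the three examples.

LBAS_PER_IU = 8  # 4 KiB IU / 512 B LBA, per the examples above

def split_write(start_lba, n_lbas):
    # Return (unaligned, aligned) lists of half-open LBA ranges.
    end = start_lba + n_lbas
    first_full = -(-start_lba // LBAS_PER_IU) * LBAS_PER_IU  # round up
    last_full = (end // LBAS_PER_IU) * LBAS_PER_IU           # round down
    if first_full >= last_full:
        return [(start_lba, end)], []   # no IU is fully covered
    unaligned = []
    if start_lba < first_full:
        unaligned.append((start_lba, first_full))
    if last_full < end:
        unaligned.append((last_full, end))
    return unaligned, [(first_full, last_full)]

def write_amplification(start_lba, n_lbas):
    # Without a buffer, every touched IU is rewritten in full (RMW).
    first_iu = start_lba // LBAS_PER_IU
    last_iu = (start_lba + n_lbas - 1) // LBAS_PER_IU
    return (last_iu - first_iu + 1) * LBAS_PER_IU / n_lbas

assert write_amplification(0, 1) == 8   # LBA0 only
assert write_amplification(1, 8) == 2   # LBA1-LBA8
assert write_amplification(0, 8) == 1   # LBA0-LBA7, aligned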
A container can be a software package of applications, configurations and dependencies so the applications run reliably when moved from one computing environment to another. Containers can share an operating system installed on the server platform and run as isolated processes. A container can be a software package that contains everything the software needs to run such as system tools, libraries, and settings. Containers are not installed like traditional software programs, which allows them to be isolated from the other software and the operating system itself. The isolated nature of containers provides several benefits. First, the software in a container will run the same in different environments. For example, a container that includes PHP and MySQL can run identically on both a Linux computer and a Windows machine. Second, containers provide added security since the software will not affect the host operating system. While an installed application may alter system settings and modify resources, such as the Windows registry, a container can only modify settings within the container. In some examples, processors102can include any central processing unit (CPU), graphics processing unit (GPU), field programmable gate array (FPGA), or application specific integrated circuit (ASIC). Memory104can be any type of cache or volatile or non-volatile memory. Interface108can manage communications using connection120with storage150and other nodes (not depicted). Connection120can provide communications compatible or compliant with one or more of: Ethernet (IEEE 802.3), remote direct memory access (RDMA), InfiniBand, Internet Wide Area RDMA Protocol (iWARP), quick UDP Internet Connections (QUIC), RDMA over Converged Ethernet (RoCE), Peripheral Component Interconnect (PCIe), Intel QuickPath Interconnect (QPI), Intel Ultra Path Interconnect (UPI), Intel On-Chip System Fabric (IOSF), Omnipath, Compute Express Link (CXL), HyperTransport, high-speed fabric, NVLink, Advanced Microcontroller Bus Architecture (AMBA) interconnect, OpenCAPI, Gen-Z, Cache Coherent Interconnect for Accelerators (CCIX), 3GPP Long Term Evolution (LTE) (4G), 3GPP 5G, and variations thereof. Data can be copied or stored to virtualized storage nodes using a protocol such as NVMe over Fabrics (NVMe-oF) or NVMe. Storage system150can use a storage controller152to control accesses (e.g., read or write) of media168. For example, transport layer processor154can encode or decode communications with host system100such as read or write requests received or transmitted via connection120. Content access manager156can determine whether a write request is to access aligned and/or misaligned retrievable regions in media168. For example, a write request that corresponds to a request to write content to media168can trigger access of a table162to determine if a starting address of the content to be written to media168indicates that a retrievable region encompassing the starting address is stored in buffer164allocated in memory160or in media168. For example, table162can be stored in memory160or in a static random access memory (SRAM) available to storage controller152. Content subject to a write request can include aligned and/or misaligned portions. For portions of the content that are aligned with a retrievable region from media168(or in buffer164), a determination is made if the retrievable region is stored in buffer164and if so, such portions overwrite the buffered content.
But if the retrievable region is not stored in buffer164, such portions are written directly to media168. In some examples, a retrievable region is an IU of size 4 KiB, but any size can be used. For portions of the content that are misaligned with retrievable content from media168(or in buffer164), retrievable regions of media168that encompass the misaligned content are copied to buffer164, the retrievable regions are modified to include the misaligned content, and table162is updated to indicate that buffer164includes the retrievable regions. For example, an entry in table162can include a token value set to indicate that a retrievable region having a particular index is stored in buffer164and that the entry includes an offset to a location in memory160of a starting address of the retrievable region. However, if the token value is not present in the entry, the entry refers to an address reference in media168. Memory160can be any or a combination of: cache, volatile memory, or non-volatile memory. For example, if storage150is configured for 4 KiB IU size and receives a host write for LBA5-15, LBA5-7 are misaligned with IU0 (which encompasses LBA0-7) but LBA8-15 are aligned with IU1 (which encompasses LBA8-15). In this example, both IU0 and IU1 were saved in media168but not in buffer164. For LBA5-7, controller152reads LBA0-7 from media168to buffer164by using an address of IU0 from translation table162, then overwrites LBA5-7 in buffer164by using data from host system100. Controller152updates an address of IU0 in the translation table162to identify an offset into buffer164. For LBA8-15, controller152writes the host data to media168; IU1 is now stored at a new starting address, so controller152updates table162with the address of IU1 in media168. Back-up handler166can manage copying of content from buffer164to media168. For example, if a fullness level of buffer164meets or exceeds a threshold level, back-up handler166can identify content in buffer164to evict or copy to media168in order to make more space available in buffer164and also update table162to identify that content as stored in media168. Various techniques can be used to select content to evict, such as least recently used (LRU) or least recently accessed. In some cases, if power is lost, back-up handler166can flush content of buffer164to media168and update table162to identify that content as stored in media168. For example, a capacitor or other back-up power supply can be used in the event of power loss to ensure power is available to perform a back-up. Asynchronous DRAM Refresh (ADR) can be used to copy content of volatile memory into media168upon a power loss. Other conditions can trigger flushing of content from buffer164to media168, such as a timed event. Media168can be any type of volatile or non-volatile memory and can include multiple tiers of memory or storage if their write granularity is greater than the host LBA sector size. For example, memory or storage can include one or more of: persistent memory (e.g., Intel Optane® or Samsung Z-NAND), storage (e.g., NAND or 3D NAND), byte-addressable non-volatile memory, or 2-level memory (2LM). As used herein, any reference to storage or memory can refer to any type or configuration or combination of volatile and non-volatile memory. Various embodiments can use system main memory with at least two levels of memory (“2LM”) that includes cached subsets of system disk level storage (in addition to, for example, run-time data).
This main memory includes a first level (alternatively referred to herein as “near memory”) including smaller faster memory made of, for example, DRAM or other volatile memory; and a second level (alternatively referred to herein as “far memory”) which includes larger and slower (with respect to the near memory) volatile memory (e.g., DRAM) or nonvolatile memory storage (e.g., flash memory or byte addressable non-volatile memory (e.g., Intel Optane® or Samsung Z-NAND)). The far memory is presented as “main memory” to the host operating system (OS), while the near memory is a cache for the far memory that is transparent to the OS, so that the embodiments described below appear the same as prior art main memory solutions. The management of the two-level memory may be done by a combination of logic and modules executed via the host central processing unit (CPU). Near memory may be coupled to the host system CPU via high bandwidth, low latency means for efficient processing. Far memory may be coupled to the CPU via low bandwidth, high latency means (as compared to that of the near memory). FIG.2depicts an example format of a table. The table can be used to identify whether a retrieval segment (e.g., one or more IUs) is stored in a buffer or not stored in the buffer (e.g., stored in the storage). For example, the table can be stored in volatile memory (e.g., DRAM or SRAM) and accessible to a storage controller. The table can be an array of entries. For example, an entry can have the following format: [NVM buffer token (e.g., 0xFF), NVM Buffer Offset] or [Physical address in storage]. In one example, an IU index is associated with a physical address. When an IU includes 8 logical blocks starting at logical block 0, the IU index can be determined by dividing the beginning logical block address (LBA) of a retrieval segment by 8. For example, a beginning LBA of LBA0 corresponds to an index of 0; a beginning LBA of LBA8 corresponds to an index of 1; a beginning LBA of LBA16 corresponds to an index of 2, and so forth. In this example, a physical address entry is associated with each IU index. But if content corresponding to an IU index is stored in the buffer, a code can be used in the physical address entry to identify that the IU is stored in the buffer. For example, if a first byte of a physical address entry is hexadecimal FF, the corresponding IU is stored in the buffer beginning at the offset carried in the bytes after the hexadecimal FF. Other codes of shorter or longer size can be used. Accordingly, retrieval of an IU can include retrieval of content starting at an offset and spanning 8 LBAs from the offset. An LBA can be 512 bytes in size, for example, but other sizes can be used depending on the file system used. However, if a first byte of a physical address entry is not hexadecimal FF (or other code used to indicate storage in the buffer), the corresponding IU is stored in storage at a physical address identified by the entire physical address entry. One possible encoding of such entries is sketched below. FIG.3depicts an example process to manage storage of content to attempt to reduce write amplification. The process can be performed by a storage controller or other device or software. At302, a write request is received. The write request can be provided by a host that is locally attached or remotely connected to a storage device. The write request can be received by a storage controller.
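As a hedged illustration of the FIG.2 entry format referenced above, the sketch below assumes a 4-byte entry whose most significant byte holds the 0xFF buffer token; the exact field widths are assumptions, not specified values.

BUF_TOKEN = 0xFF

def encode_buffer_entry(buf_offset):
    # IU lives in the buffer: token in the first byte, offset in the rest.
    return (BUF_TOKEN << 24) | (buf_offset & 0x00FFFFFF)

def encode_media_entry(phys_addr):
    # IU lives in storage: the entire entry is the physical address.
    assert (phys_addr >> 24) != BUF_TOKEN  # token value is reserved
    return phys_addr

def lookup(table, lba):
    iu_index = lba // 8  # 8 LBAs per IU, matching the indexing example
    entry = table[iu_index]
    if (entry >> 24) == BUF_TOKEN:
        return ("buffer", entry & 0x00FFFFFF)  # offset into the buffer
    return ("media", entry)                    # physical address in storage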
At304, content to be written that is associated with the write request is divided into an unaligned portion and an aligned portion. A portion can be considered “aligned” if an entirety of the portion overwrites a single retrieval segment stored in the storage. A retrieval segment can be an IU range, from the starting to the ending boundary of an IU, in storage or memory. For example, a retrieval segment can be one or more IU segments that are stored at logical block address intervals of 8 blocks, 16 blocks, or other integer multiples of 8 blocks, or other numbers of blocks. If the write request has any aligned portion, then the process continues to320. If the write request has any unaligned portion, the process continues to310. Note that sequences beginning with310and320can execute in parallel. At310, a determination is made if a retrieval segment corresponding to the unaligned portion is stored in the buffer. For example, a table (e.g., L2P table) can be inspected to determine if a retrieval segment associated with a logical block address provided with the write request is present in the buffer. If the retrieval segment is not stored in the buffer, the process continues to312. If the retrieval segment is stored in the buffer, the process continues to315. At312, one or more retrieval segments associated with the write request that are stored in the storage are copied to the buffer. For example, retrieval segments of one or more IUs can be read from storage and copied to the buffer. The process continues to314, where content associated with the write request is written to the buffer. For retrieval segment(s) copied from the storage to the buffer in312, part or an entirety of the retrieval segment(s) is overwritten by unaligned portion(s) associated with the write request. Subsequently, retrieval segments stored in the buffer can be copied to storage for consistency, persistence, or back-up. The process continues to316, where the table is updated to indicate that retrieval segment(s) from the storage has/have been written to the buffer and the buffer stores one or more retrieval segments. For example, logical or physical addresses associated with content from the storage copied to the buffer are identified in the table as present in the buffer. The table can identify logical block addresses and physical block addresses of content in the buffer. A token in the first byte of a table entry can be used to identify that retrieval segment(s) are present in the buffer. At315, content associated with the write request is written in the buffer. For retrieval segment(s) in the buffer, part or an entirety of the retrieval segment(s) is overwritten by unaligned portion(s) associated with the write request. Subsequently, retrieval segments stored in the buffer can be copied to storage for consistency, persistence, or back-up. For example, if a write request corresponds to LBA8 alone and a retrieval segment including LBA8 is LBA8 to LBA15, the write request is unaligned. A retrieval segment of LBA8 to LBA15 is retrieved from storage and stored in the buffer, and the content associated with the write request and corresponding to LBA8 overwrites the content in the buffer. For an aligned portion associated with the write request, at320, a determination is made if a retrieval segment corresponding to the aligned portion is stored in the buffer.
For example, a table (e.g., L2P table) can be inspected to determine if a retrieval segment associated with a logical block address provided with the write request is identified by the table as present in the buffer. If the retrieval segment is not stored in the buffer, the process continues to322. If the retrieval segment is stored in the buffer, the process continues to330. At322, the aligned portion is written to storage. For example, a controller can issue a write operation to copy the aligned portion to an IU range in storage but not store the aligned portion in the buffer. At324, the table identifying content of the buffer and the storage can be updated to identify content written to the storage. For example, an entry in the table can identify that a range of logical block addresses or retrieval segments has corresponding content stored in the storage and not in the buffer. A code in an entry can indicate that the retrieval segments are not stored in the buffer but in the storage (or memory). However, in some examples, the controller can also write the aligned portion to the buffer and also update the table that identifies starting addresses (e.g., logical or physical) of content that are stored in the buffer. At330, the aligned portion is written to the buffer. For example, if the aligned portion corresponds to a retrieval segment that is already stored in the buffer, the controller can copy the data to the buffer directly. The aligned portion may not also be written to the storage at this juncture but can be written to the storage later as part of a back-up (e.g., power loss back-up operation). The table may not be updated as the retrieval segment is already stored in the buffer and identified by the table as being stored in the buffer due to a previous writing of content of that retrieval segment into the buffer. FIG.4shows an example configuration of references to portions of a buffer. In this example, a buffer is configured as a circular buffer with pointers at IU granularity, but pointers could refer to multiple-IU or sub-IU starting points. In this example, IUs are referenced in a linked list and there are three pointers. A first pointer can refer to a least recently used entry and a second pointer can refer to a next free entry in the buffer. If a new write request is received, an available IU entry among the available space is selected as the next free entry using the second pointer. FIG.5depicts an example process that can be used to manage memory allocated to a buffer. For example, the buffer can be used to store one or more retrieval segments from a storage. A retrieval segment can be one or more IUs in size, where an IU is 4096 bytes, or other sizes. At502, a determination is made if a condition is met to clean-up a buffer. For example, a condition could be that there is insufficient free space in a buffer associated with the storage. For example, a threshold of 128 KiB can be used as a threshold level, and if the free space is less than the threshold, the buffer can be considered to have insufficient free space. If there is insufficient free space, the process continues to504. If there is sufficient free space,502can repeat. Another condition could be a timer expiring, whereby garbage collection is performed to clean-up the buffer and free space. If the timer expires, then the process continues to504and the timer is reset. If the timer has not expired,502can repeat. Yet another condition could be power loss being detected. If power loss is detected, the process continues to504.
For example,502can be performed after receipt of a write request or completion of a write request. A storage controller can check the available buffer space after a host write completes and, if the buffer does not have enough free space, the controller can set a real-time operating system (RTOS) event (e.g., EVENT_NVM_BUFFER_FLUSH) to wake-up a task (e.g., Flush_NVM_BUFFER_Task) to flush selected content from the buffer to local or remote storage media to make more space available for use in the buffer. At504, data can be assembled to be written to storage. For example, one or more least recently used (LRU) IUs can be selected for flushing to the storage device. An LRU IU can be an IU that has been accessed the least number of times over a time interval. In other words, an LRU IU can be data that has been partially overwritten or overwritten in-whole the least number of times over a time interval. The IU(s) that have been partially overwritten or overwritten in-whole the least number of times over a time interval can be selected for flushing. For example, in some examples, enough IUs are selected in order to increase free space in the buffer above the threshold level. However, a second threshold can be used to de-select or exclude certain IUs from selection for flushing, where the second threshold is higher than the threshold and indicates most recently used IUs (e.g., partially or fully overwritten). Any IU that meets the criteria of the second threshold is not flushed. Flushing can include updating a table to indicate that the flushed IUs are not stored in the buffer and allowing the memory locations in the buffer to be overwritten. At506, assembled data can be dispatched to storage. For example, the assembled data determined in504can be dispatched by a storage controller for writing to storage. At508, a table that indicates content stored in the buffer and the storage can be updated. The table can be updated so that the one or more entries that correspond to the flushed data are updated to identify that the flushed data is stored in the storage. A subsequent write request that would partially overwrite any but not all of the flushed data could cause the controller to copy the data from the storage into the buffer and a corresponding table update. A subsequent write request that would overwrite an entire flushed IU could cause its data to be written directly to the storage and the corresponding table updated to identify a new starting storage location of the IU. After508, a central processing unit (CPU) or other processor can be released to start a task other than buffer flush or the same task. A sketch of this flush flow is given below. In some examples, if the buffer is in a separate device or housing from a solid state drive (SSD) and a disk-caching controller is handling the writes, the disk-caching controller may write-back cache the mis-aligned portions of the incoming writes and send aligned write-requests to the SSD. The controller may not cache other sections of the write (but can do so too) but will write-back cache the misaligned sections. FIG.6depicts a system. The system can use embodiments described herein to attempt to reduce write amplification by using a buffer to store data misaligned from data write requests. System600includes processor610, which provides processing, operation management, and execution of instructions for system600.
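Before continuing with the system description, the flush flow of FIG.5 referenced above might be sketched as follows. The threshold values and the buffer/table helper names are assumptions for illustration only, not interfaces defined by the embodiment.

FREE_SPACE_THRESHOLD = 128 * 1024  # flush when free space dips below this

def flush_task(buffer, table, media, keep_threshold):
    # Evict least recently used IUs until free space recovers (504-508).
    while buffer.free_space() < FREE_SPACE_THRESHOLD:
        iu = buffer.least_recently_used()
        if iu.overwrite_count >= keep_threshold:
            break  # remaining IUs are too recently used to flush
        media_addr = media.write(iu.data)  # dispatch assembled data (506)
        table[iu.index] = media_addr       # entry now points to storage (508)
        buffer.release(iu)                 # location may be overwritten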
Processor610can include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware to provide processing for system600, or a combination of processors. Processor610controls the overall operation of system600, and can be or include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices. In one example, system600includes interface612coupled to processor610, which can represent a higher speed interface or a high throughput interface for system components that need higher bandwidth connections, such as memory subsystem620or graphics interface components640, or accelerators642. Interface612represents an interface circuit, which can be a standalone component or integrated onto a processor die. Where present, graphics interface640interfaces to graphics components for providing a visual display to a user of system600. In one example, graphics interface640can drive a high definition (HD) display that provides an output to a user. High definition can refer to a display having a pixel density of approximately 100 PPI (pixels per inch) or greater and can include formats such as full HD (e.g., 1080p), retina displays, 4K (ultra-high definition or UHD), or others. In one example, the display can include a touchscreen display. In one example, graphics interface640generates a display based on data stored in memory630or based on operations executed by processor610or both. Accelerators642can be a fixed function offload engine that can be accessed or used by processor610. For example, an accelerator among accelerators642can provide compression (DC) capability, cryptography services such as public key encryption (PKE), cipher, hash/authentication capabilities, decryption, or other capabilities or services. In some embodiments, in addition or alternatively, an accelerator among accelerators642provides field select controller capabilities as described herein. In some cases, accelerators642can be integrated into a CPU socket (e.g., a connector to a motherboard or circuit board that includes a CPU and provides an electrical interface with the CPU). For example, accelerators642can include a single or multi-core processor, graphics processing unit, logical execution units, single or multi-level cache, functional units usable to independently execute programs or threads, application specific integrated circuits (ASICs), neural network processors (NNPs), programmable control logic, and programmable processing elements such as field programmable gate arrays (FPGAs). Accelerators642can provide multiple neural networks, CPUs, processor cores, general purpose graphics processing units, or graphics processing units that can be made available for use by artificial intelligence (AI) or machine learning (ML) models. For example, the AI model can use or include any or a combination of: a reinforcement learning scheme, Q-learning scheme, deep-Q learning, or Asynchronous Advantage Actor-Critic (A3C), convolutional neural network, recurrent convolutional neural network, or other AI or ML model.
Multiple neural networks, processor cores, or graphics processing units can be made available for use by AI or ML models. Memory subsystem620represents the main memory of system600and provides storage for code to be executed by processor610, or data values to be used in executing a routine. Memory subsystem620can include one or more memory devices630such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM) such as DRAM, or other memory devices, or a combination of such devices. Memory630stores and hosts, among other things, operating system (OS)632to provide a software platform for execution of instructions in system600. Additionally, applications634can execute on the software platform of OS632from memory630. Applications634represent programs that have their own operational logic to perform execution of one or more functions. Processes636represent agents or routines that provide auxiliary functions to OS632or one or more applications634or a combination. OS632, applications634, and processes636provide software logic to provide functions for system600. In one example, memory subsystem620includes memory controller622, which is a memory controller to generate and issue commands to memory630. It will be understood that memory controller622could be a physical part of processor610or a physical part of interface612. For example, memory controller622can be an integrated memory controller, integrated onto a circuit with processor610. While not specifically illustrated, it will be understood that system600can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, interface buses, or others. Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components. Buses can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry or a combination. Buses can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a Hyper Transport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (Firewire). In one example, system600includes interface614, which can be coupled to interface612. In one example, interface614represents an interface circuit, which can include standalone components and integrated circuitry. In one example, multiple user interface components or peripheral components, or both, couple to interface614. Network interface650provides system600the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks. Network interface650can include an Ethernet adapter, wireless interconnection components, cellular network interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. Network interface650can transmit data to a device that is in the same data center or rack or a remote device, which can include sending data stored in memory. Network interface650can receive data from a remote device, which can include storing received data into memory. Various embodiments can be used in connection with network interface650, processor610, and memory subsystem620. In one example, system600includes one or more input/output (I/O) interface(s)660. 
I/O interface660can include one or more interface components through which a user interacts with system600(e.g., audio, alphanumeric, tactile/touch, or other interfacing). Peripheral interface670can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system600. A dependent connection is one where system600provides the software platform or hardware platform or both on which operation executes, and with which a user interacts. In one example, system600includes storage subsystem680to store data in a nonvolatile manner. In one example, in certain system implementations, at least certain components of storage680can overlap with components of memory subsystem620. Storage subsystem680includes storage device(s)684, which can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, or optical based disks, or a combination. Storage684holds code or instructions and data686in a persistent state (e.g., the value is retained despite interruption of power to system600). Storage684can be generically considered to be a “memory,” although memory630is typically the executing or operating memory to provide instructions to processor610. Whereas storage684is nonvolatile, memory630can include volatile memory (e.g., the value or state of the data is indeterminate if power is interrupted to system600). In one example, storage subsystem680includes controller682to interface with storage684. In one example, controller682is a physical part of interface614or processor610, or can include circuits or logic in both processor610and interface614. A volatile memory is memory whose state (and therefore the data stored in it) is indeterminate if power is interrupted to the device. Dynamic volatile memory requires refreshing the data stored in the device to maintain state. One example of dynamic volatile memory includes DRAM (Dynamic Random Access Memory), or some variant such as Synchronous DRAM (SDRAM). A memory subsystem as described herein may be compatible with a number of memory technologies, such as DDR3 (Double Data Rate version 3, original release by JEDEC (Joint Electronic Device Engineering Council) on Jun. 27, 2007), DDR4 (DDR version 4, initial specification published in September 2012 by JEDEC), DDR4E (DDR version 4), LPDDR3 (Low Power DDR version 3, JESD209-3B, August 2013 by JEDEC), LPDDR4 (LPDDR version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide Input/Output version 2, JESD229-2, originally published by JEDEC in August 2014), HBM (High Bandwidth Memory, JESD235, originally published by JEDEC in October 2013), LPDDR5 (currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), or others or combinations of memory technologies, and technologies based on derivatives or extensions of such specifications. The JEDEC standards are available at www.jedec.org. A non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device. In one embodiment, the NVM device can comprise a block addressable memory device, such as NAND technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell (“SLC”), Multi-Level Cell (“MLC”), Quad-Level Cell (“QLC”), Tri-Level Cell (“TLC”), or some other NAND).
An NVM device can also comprise a byte-addressable write-in-place three dimensional cross point memory device, or other byte addressable write-in-place NVM device (also referred to as persistent memory), such as single or multi-level Phase Change Memory (PCM) or phase change memory with a switch (PCMS), NVM devices that use chalcogenide phase change material (for example, chalcogenide glass), resistive memory including metal oxide base, oxygen vacancy base and Conductive Bridge Random Access Memory (CB-RAM), nanowire memory, ferroelectric random access memory (FeRAM, FRAM), magneto resistive random access memory (MRAM) that incorporates memristor technology, spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. A power source (not depicted) provides power to the components of system600. More specifically, the power source typically interfaces to one or multiple power supplies in system600to provide power to the components of system600. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be a renewable energy (e.g., solar power) power source. In one example, the power source includes a DC power source, such as an external AC to DC converter. In one example, the power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, the power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source. In an example, system600can be implemented using interconnected compute sleds of processors, memories, storages, network interfaces, and other components. High speed interconnects can be used such as PCIe, Ethernet, or optical interconnects (or a combination thereof). Embodiments herein may be implemented in various types of computing and networking equipment, such as switches, routers, racks, and blade servers such as those employed in a data center and/or server farm environment. The servers used in data centers and server farms comprise arrayed server configurations such as rack-based servers or blade servers. These servers are interconnected in communication via various network provisions, such as partitioning sets of servers into Local Area Networks (LANs) with appropriate switching and routing facilities between the LANs to form a private Intranet. For example, cloud hosting facilities may typically employ large data centers with a multitude of servers. A blade comprises a separate computing platform that is configured to perform server-type functions, that is, a “server on a card.” Accordingly, each blade includes components common to conventional servers, including a main printed circuit board (main board) providing internal wiring (e.g., buses) for coupling appropriate integrated circuits (ICs) and other components mounted to the board. FIG.7depicts an environment700that includes multiple computing racks702, each including a Top of Rack (ToR) switch704, a pod manager706, and a plurality of pooled system drawers. Various embodiments can be used in a switch. Generally, the pooled system drawers may include pooled compute drawers and pooled storage drawers.
Optionally, the pooled system drawers may also include pooled memory drawers and pooled Input/Output (I/O) drawers. In the illustrated embodiment the pooled system drawers include an Intel® XEON® pooled compute drawer708, an Intel® ATOM™ pooled compute drawer710, a pooled storage drawer712, a pooled memory drawer714, and a pooled I/O drawer716. Each of the pooled system drawers is connected to ToR switch704via a high-speed link718, such as a 40 Gigabit/second (Gb/s) or 100 Gb/s Ethernet link or a 100+Gb/s Silicon Photonics (SiPh) optical link. In one embodiment, high-speed link718comprises an 800 Gb/s SiPh optical link. Multiple of the computing racks702may be interconnected via their ToR switches704(e.g., to a pod-level switch or data center switch), as illustrated by connections to a network720. In some embodiments, groups of computing racks702are managed as separate pods via pod manager(s)706. In one embodiment, a single pod manager is used to manage all of the racks in the pod. Alternatively, distributed pod managers may be used for pod management operations. Environment700further includes a management interface722that is used to manage various aspects of the environment. This includes managing rack configuration, with corresponding parameters stored as rack configuration data724. Environment700can be used for computing racks. Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as “module,” “logic,” “circuit,” or “circuitry.” A processor can be one or more combination of a hardware state machine, digital control logic, central processing unit, or any hardware, firmware and/or software elements. Some examples may be implemented using or as an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth.
In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, API, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a machine, computing device or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language. One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which when read by a machine, computing device or system causes the machine, computing device or system to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores,” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. The appearances of the phrase “one example” or “an example” are not necessarily all referring to the same example or embodiment. Any aspect described herein can be combined with any other aspect or similar aspect described herein, regardless of whether the aspects are described with respect to the same figure or element. Division, omission or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments. Some examples may be described using the expressions “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms “connected” and/or “coupled” may indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The terms “first,” “second,” and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. The term “asserted” used herein with reference to a signal denotes a state of the signal, in which the signal is active, and which can be achieved by applying any logic level either logic 0 or logic 1 to the signal.
The terms “follow” or “after” can refer to immediately following or following after some other event or events. Other sequences of operations may also be performed according to alternative embodiments. Furthermore, additional operations may be added or removed depending on the particular applications. Any combination of changes can be used and one of ordinary skill in the art with the benefit of this disclosure would understand the many variations, modifications, and alternative embodiments thereof. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Additionally, conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, should also be understood to mean X, Y, Z, or any combination thereof, including “X, Y, and/or Z.” Illustrative examples of the devices, systems, and methods disclosed herein are provided below. An embodiment of the devices, systems, and methods may include any one or more, and any combination of, the examples described below. Example 1 includes an apparatus that includes: a memory and a controller to receive a write request, wherein: based on the write request having associated content that does not encompass an entire retrievable region, configure a buffer in the memory to store a retrievable region and the associated content and based on the write request having associated content that encompasses an entire retrievable region and the retrievable region is not stored in the buffer, provide content associated with the write request to a storage media and not to the buffer. Example 2 includes any example, wherein the controller is to: based on the write request having associated content that encompasses an entire retrievable region and the retrievable region is stored in the buffer, store the associated content in the buffer. Example 3 includes any example, wherein the controller is to: based on the write request having associated content that encompasses an entire retrievable region and also includes but does not encompass a second entire retrievable region: based on the associated content that encompasses an entire retrievable region not being stored in the buffer, store a part of the associated content that encompasses an entire retrievable region in the storage media and store a part of the associated content that does not encompass the second entire retrievable region into the buffer. Example 4 includes any example, wherein the controller is to: update a table to identify an address in the storage media associated with content written to the storage media. Example 5 includes any example, wherein the controller is to: access a table to determine if a retrievable region is stored in the buffer. Example 6 includes any example, wherein the table comprises at least one entry and when an entry is to indicate that a retrievable region is stored in the buffer, the entry includes a token of a particular value.
Example 7 includes any example, wherein when an entry refers to a retrievable region that is stored in the buffer, the entry includes an offset into a memory that stores the buffer to indicate a starting storage location of the retrievable region. Example 8 includes any example, wherein when an entry is to indicate that a retrievable region is not stored in the buffer but stored in the storage media, the entry does not include a token of a particular value and includes a starting storage location of content in the storage media. Example 9 includes any example, wherein the retrievable region comprises at least one Indirection Unit (IU). Example 10 includes any example, wherein use of the buffer is to reduce write amplification, wherein write amplification comprises an amount of content written to storage media divided by an amount of content associated with the write request. Example 11 includes any example, wherein the controller is to: based on a condition, flush content from the buffer to the storage media and update a table to indicate content is stored in the storage media. Example 12 includes any example, wherein the condition comprises one or more of: expiration of a timer or fullness of the buffer meeting or exceeding a threshold. Example 13 includes any example, and further includes: the storage media coupled to the controller and one or more of: a network interface, a fabric interface, a power supply, or a display. Example 14 includes a computer-readable medium comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: receive a write request, wherein the write request comprises content and a starting address to store the content; determine if the content encompasses an entirety of a retrievable region associated with the starting address; based on the content not encompassing an entire retrievable region, configure a buffer to store the entire retrievable region; and based on the content encompassing an entire retrievable region and the retrievable region not stored in the buffer, provide content associated with the write request to a storage media and not to the buffer. Example 15 includes any example and including instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: based on the content encompassing an entire retrievable region stored in the buffer, store the content in the buffer. Example 16 includes any example and including instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: based on the content encompassing an entire retrievable region and also includes but does not encompass a second entire retrievable region: store a part of the content that encompasses an entire retrievable region in the storage media based on an entry indicating that the content, that encompasses an entire retrievable region, is not stored in the buffer and store a part of the content that does not encompass a second entire retrievable region into the buffer. Example 17 includes any example and including instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: update an entry to identify an address in the storage media associated with the content written to the storage media. 
Example 18 includes any example and including instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: access an entry to determine if a retrievable region is stored in the buffer. Example 19 includes any example and including instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: modify an entry to (1) indicate that a retrievable region is stored in the buffer by inclusion of a token of a particular value and (2) include an offset into a memory that stores the buffer to indicate a starting storage location of content. Example 20 includes any example, wherein when an entry is to indicate that a retrievable region is not stored in the buffer and stored in the storage media, the entry does not include a token of a particular value and includes a starting storage location of content in the storage media. | 56,194 |
11861220 | The accompanying drawings illustrate various examples. The skilled person will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the drawings represent one example of the boundaries. It may be that in some examples, one element may be designed as multiple elements or that multiple elements may be designed as one element. Common reference numerals are used throughout the figures, where appropriate, to indicate similar features. DETAILED DESCRIPTION The following description is presented by way of example to enable a person skilled in the art to make and use the invention. The present invention is not limited to the embodiments described herein and various modifications to the disclosed embodiments will be apparent to those skilled in the art. Embodiments will now be described by way of example only. As described above, a processing system (e.g. a system comprising a CPU or GPU and memory) may comprise multiple banks within a memory. The instructions that are executed (e.g. read or write instructions) do not, typically, refer to any specific bank but just refer to a register number, e.g. read r0, where r0refers to register0. In known processing systems, an address generation unit maps the register number to a bank within the memory based on a defined formula (or relationship), such as:
(bank number)=(register number) mod (number of banks)   (equation 1)
and address decode logic within each bank maps the register number to an actual memory location (or memory address) within the specified bank (as given by the formula above) based on the register number (which may be treated as an offset, where for example, offset=(register number) divided by (number of banks)) and a base pointer. As described above, if multiple attempts are made to access the same bank of memory at the same time (e.g. because, by using the formula above, the same bank is indicated), a clash occurs and all but one of the multiple access attempts are stalled. As well as providing multiple banks within a single memory to reduce the number of clashes, multiple memories may be provided, each memory having multiple banks, or the banks within a memory may be divided into two or more logically independent memories (e.g. eight logical memories) by providing a corresponding number of ports, each port providing access to a separate, non-overlapping subset of the banks (e.g. one port for banks0-3and the other for banks4-7). This increases the number of simultaneous accesses that can occur without a clash. For the purposes of the following description, a logical memory refers to an area of memory with a dedicated access port whereas a bank within a logical memory shares the access port with the other banks in that logical memory. As described above, in any cycle, data can be read from each of the banks in a logical memory via the access port and the access port has sufficient width to support this. Various methods and apparatus for memory allocation are described herein. A first method relates to the mapping of registers to memories, where, as described above, a memory may be a separate physical or logical memory with a dedicated port. By mapping (or allocating) registers to memories such that accesses (e.g. reads from or writes to the memory) are more evenly spread (or such that the probability is that accesses are more evenly spread) between those memories, the probability of clashes occurring is reduced and hence the performance impact of clashes is also reduced.
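To make the conventional scheme concrete, the following is a minimal Python sketch of equation (1) and the address decode step described above; the function and variable names are illustrative assumptions, not identifiers from this description.

```python
NUM_BANKS = 4

def conventional_map(register_number, base_pointer):
    bank = register_number % NUM_BANKS     # equation (1)
    offset = register_number // NUM_BANKS  # offset within the selected bank
    return bank, base_pointer + offset     # (bank, memory address in that bank)

# Under this scheme, r0 of every task always lands in bank 0:
print(conventional_map(0, 0))  # (0, 0)
print(conventional_map(5, 0))  # (1, 1)
```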
Additional methods described herein relate to the mapping of registers to banks within a memory (or within a plurality of physically or logically separate memories). These methods of mapping (or allocating) registers to banks may be used in combination with the method of mapping (or allocating) registers to memories or may be used independently of that first method described herein. By mapping (or allocating) registers to banks such that accesses are more evenly spread (or such that the probability is that accesses are more evenly spread) between banks within a memory, the probability of clashes occurring is reduced and hence the performance impact of clashes is also reduced. A system may be designed to balance performance (using multiple banks and/or memories and the methods described herein) against any costs of providing additional banks and/or memories (e.g. in terms of size of hardware). The term ‘task’ is used herein to refer to a group of data-items and the work that is to be performed upon those data-items. For example, in a Single Instruction Multiple Data (SIMD) processing system a task may comprise or be associated with a program or reference to a program (e.g. the same sequence of ALU instructions or reference thereto) in addition to a set of data that is to be processed according to the program, where this set of data may comprise one or more data elements (or data-items, e.g. a plurality of pixels or vertices). The term ‘program instance’ is used herein to refer to individual instances that take a path through the code. A program instance therefore refers to a single data-item and a reference (e.g. pointer) to a program which will be executed on the data-item. A task therefore could be considered to comprise a plurality of program instances (e.g. up to 32 program instances), though in practice only a single instance of the common program (or reference) is required per task. There is therefore a hierarchy of terminology, with tasks comprising a plurality of program instances. A program typically performs operations on values stored in registers and each program instance requires its own copy of each register value. There may be many registers used (or referenced) by each program and many tasks running concurrently on a processing system and hence the methods described herein may be used to provide a way of flexibly allocating a relatively large number of registers. A first method of memory allocation can be described with reference toFIG.1. This method relates to the mapping (or allocation) of registers to memories, rather than individual banks within a memory, where, as described above, a memory may be a separate physical or logical memory with a dedicated port. When a task is created (block102), the task may comprise multiple instances, e.g. the same instruction may be applied to multiple separate data items and each combination of the instruction and a different data item comprises a separate instance. For example, a single instruction may be executed on 16, 32 or 64 data points (e.g. 16, 32 or 64 pixels, samples, primitives, vertices, etc.) and hence there are 16, 32 or 64 instances of the task. These instances of a task are then packed (or grouped) into groups, which may be referred to as quads in examples where each group can accommodate four instances (block104). 
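As an illustration of the packing step (block104), the hedged sketch below groups instance indices into quads; the helper name and the use of plain lists are assumptions for demonstration only.

```python
def pack_into_groups(num_instances, group_size=4):
    """Pack instance indices into groups ('quads' when group_size is 4)."""
    instances = list(range(num_instances))
    return [instances[i:i + group_size]
            for i in range(0, num_instances, group_size)]

# 10 instances -> two full quads and one partially filled final quad:
print(pack_into_groups(10))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```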
The use of quads may be particularly suited to pixel processing activities as the processing operates on 2×2 fragments; however in other examples, there may be a different number of instances per group and in various examples the number of registers in a line in a bank of memory may correspond to the number of instances in each group. In various examples there may be 8 groups (or quads) per task; however the number of groups (and hence instances) may be much larger, e.g. 32 groups per task and/or 128 instances per task. Depending upon the number of instances of a particular task, the final group may be filled completely or may be only partially filled and if different tasks comprise different numbers of instances, different tasks may fill a different number of groups. Additionally, some of the other groups may also be partially filled, e.g. where certain criteria exist that control the way instances are packed into groups. Having packed all the instances of a task into groups (in block104), the registers referenced (e.g. used or required) by each program instance in a group are mapped to a logical memory (block106) e.g. the registers referenced by the program for each instance in the group are mapped to a separate memory or to a separate group of banks within a memory that has a dedicated port. In various examples, registers referenced by each group of instances of the same task are mapped (in block106) to a different logical memory; however in many examples there are many more groups of instances from the same task than logical memories, such that registers from multiple groups of instances of the same task are mapped to the same logical memory. In various examples, there may be restrictions based on the execution pipelines used, such that instances of a task that are processed by a particular execution pipeline can only have their registers mapped to a pre-defined subset of the logical memories and there may be different pre-defined subsets for different execution pipelines. The mapping of registers for groups to logical memories (in block106) is based on a pre-defined allocation scheme and a value of a counter, which may be referred to as the group counter. The pre-defined allocation scheme (as used in block106) may map (or allocate) registers referenced by groups to memories using a pre-defined sequence of memories (e.g. memory0, memory1, memory2, . . . ) and the allocation may start at a position in the sequence that is determined based on the counter value. In this scheme, if the counter value is zero, the registers referenced by groups are mapped to memories starting at the beginning of the pre-defined sequence (e.g. registers for group0are mapped to memory0, registers for group1are mapped to memory1, etc.) and if the counter value is non-zero, the registers for groups are mapped to memories starting at an offset position in the pre-defined sequence where the offset is equal to (or otherwise derived from) the counter value. For example, if the counter value is one, the mapping starts at the second memory in the pre-defined sequence (e.g. registers for group0are mapped to memory1, registers for group1are mapped to memory2, etc.). Where the counter value is non-zero, the pre-defined sequence of memories may be considered to wrap around, such that if there are n memories denoted memory0to memory (n−1), after mapping registers for a group to memory (n−1), the registers for the next group are mapped to memory0. 
As shown inFIG.1, for each task that is created (in block102), the counter is adjusted (block108) and this adjusting operation (in block108) may occur after the mapping of registers to memories (in block106), as shown inFIG.1, or at any other stage in the method ofFIG.1(e.g. after block102or after block104). The adjustment (in block108) takes the counter value for the previous task and updates the counter value, for example by incrementing its value. In one example, the counter is incremented (in block108) by one for each task that is created (in block102) and in another example the counter is incremented (in block108) by the number of groups, G, that are formed for the particular task (in block104), where G is an integer and G≥1. Using this latter technique, if the memories are labelled numerically, if the registers for the last group in a task are mapped to memory A (where A is an integer), then the registers for the first group formed from instances of the next task may be mapped to memory (A+1). In other examples, the counter may be adjusted (e.g. incremented) by other amounts (in block108), e.g. by a random amount between 0 and n, where n is the number of logical memories. In various examples, the value of the counter may be capped based on the number of memories, e.g. capped at n−1, where n is the number of memories, and then may wrap back to a value of 0. By changing the way that registers used (i.e. referenced) by groups are mapped to memories so that the mapping is not the same for all tasks (even though the underlying pre-defined allocation scheme remains the same, to reduce complexity), the distribution of memory accesses is more evenly spread across the different logical memories and this reduces the probability of clashes and hence the performance impact of clashes is also reduced. In particular, if this method is not used, the registers for the first group of each task will always be mapped to the first memory and given that all tasks will comprise at least one group of instances (and this group is likely to be always fully populated with instances), this memory to which registers for the first group are mapped is likely to get the highest number of accesses of all the memories and hence is more susceptible to clashes. By offsetting, as described above based on the counter value, the pressure on the memories will be more evenly spread, resulting in memories being accessed more uniformly, fewer clashes and hence fewer stalls. The different operations withinFIG.1may be performed by different parts of a processing system and an example processing system200is shown inFIG.2. The processing system200comprises a processor202(e.g. a CPU or GPU) and a plurality of logical memories204. In various examples, the number of logical memories204in the processing system200may be equal to the maximum number of groups per task, to enable a one-to-one relationship between groups (and hence registers for those groups) and memories within each task. The task is created (in block102) by a task creation module206and the instances are created and then packed (or grouped) into groups (in block104) by a scheduler208. The instances are then executed by the execution modules (or pipelines)210. 
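The following sketch combines the mapping of block106with the counter update of block108, assuming the pre-defined sequence memory0, memory1, . . . and the update policy that increments the counter by the number of groups G; all names are illustrative, and a real implementation would sit in the task creation module, scheduler and address generation unit as described above.

```python
NUM_MEMORIES = 4  # n logical memories, each with a dedicated port
group_counter = 0

def map_groups_to_memories(num_groups):
    """Map the groups of a new task to logical memories (block 106) and
    advance the group counter by the number of groups G (block 108)."""
    global group_counter
    mapping = [(g + group_counter) % NUM_MEMORIES for g in range(num_groups)]
    group_counter = (group_counter + num_groups) % NUM_MEMORIES
    return mapping

print(map_groups_to_memories(3))  # task 1: [0, 1, 2]
print(map_groups_to_memories(3))  # task 2: [3, 0, 1] - first group no longer in memory 0
```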
The updating of the group counter214(in block108) may be performed by the task creation module206or the scheduler208and the mapping of registers referenced by groups to logical memories (in block106) based on a pre-defined allocation scheme and the value of the group counter214may be implemented by an address generation unit212. A second method of memory allocation can be described with reference toFIG.3. This method relates to the mapping (or allocation) of registers to banks within a single logical memory, rather than allocation of different logical memories, and this second method (as shown inFIG.3) may be used in combination with, or independently of, the first method described above (as shown inFIG.1). As noted above, separate logical memories have separate access ports, whereas banks within a logical memory share the same access port. When a task is created (block102), a bank counter value is allocated to the task (block304) and this bank counter value that is allocated may be the current value of a counter which may be referred to as the 'bank counter' and which is different to the counter referred to inFIG.1(which is the group counter214inFIG.2). The allocated bank counter value (from block304) is then used to map registers referenced by the task to banks within a single logical memory. The mapping is based on the number of banks in the logical memory and the allocated bank counter value (block308). In various examples, the mapping (in block308) may use a formula that is a modified version of equation (1) above and is given by:
(bank number)=((register number)+(allocated bank counter value)) mod (number of banks)   (equation 2)
In this equation, the register number and allocated bank counter value are summed prior to the modulus operation and the result is the bank number to which the register is mapped; however, the same result may be achieved in different ways. In other examples, the allocated bank counter value may be used by the address generation unit to determine an additional offset (the bank offset) that is applied when determining the actual memory address for a register based on a base pointer for the task and the register number (specified as a register offset):
(memory address)=(base pointer for task)+(register offset)+(bank offset)   (equation 3)
The base pointer for a task is determined at task creation based on the memory requirements of the previously created task. In further examples, the allocated bank counter value may be used by the task creation module to update the base pointer for the task and that updated base pointer is then used by the address generation unit when determining the actual memory address for a register based on a base pointer for the task and the register number:
(memory address)=(updated base pointer for task)+(register offset)   (equation 4)
Irrespective of which of equations (2)-(4) is used, the same mapping of registers to banks is achieved. The mapping scheme (as used in block308) maps registers to banks in such a way that register0for different tasks will not always be in the same bank (bank0) as would be the case if equation (1) was used. This has the effect of spreading the distribution of memory accesses more uniformly between banks, which reduces the probability of clashes and hence the performance impact of clashes (i.e. the performance impact of stalls) is also reduced.
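A minimal sketch of equation (2), under assumed names, shows how a per-task bank counter value shifts r0of successive tasks into different banks:

```python
NUM_BANKS = 4

def map_register_to_bank(register_number, allocated_bank_counter_value):
    return (register_number + allocated_bank_counter_value) % NUM_BANKS  # equation (2)

# With the bank counter incremented by one per task, r0 of successive tasks
# is spread across banks rather than always landing in bank 0:
print(map_register_to_bank(0, 0))  # task 1, r0 -> bank 0
print(map_register_to_bank(0, 1))  # task 2, r0 -> bank 1
print(map_register_to_bank(4, 1))  # task 2, r4 -> bank 1 as well (rotated with r0)
```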
As shown inFIG.3, for each task that is created (in block102), the bank counter (from the previous task) is adjusted (block306) and this may occur after the allocation of a bank counter value to the newly created task (in block304), as shown inFIG.3, or at any other stage in the method ofFIG.3. In one example, the bank counter is incremented (in block306) by one for each task that is created (in block102) and in another example the counter is incremented (in block306) by other amounts, e.g. by a fixed or variable amount between 0 and b−1, where b is the number of banks. In various examples, the value of the bank counter may be capped based on the number of banks, e.g. capped at b−1, where b is the number of banks, and then may wrap back to a value of 0; although as a modulus operation is performed (in equation 2), the bank counter value is effectively capped at b−1. FIG.4is a schematic diagram showing an example of the mapping between registers and banks for two different tasks (task1and task2) in an example where the bank counter is incremented by one for each task. In this example, through the use of any of equations (2)-(4), register0(r0) for the two tasks is allocated to different banks (bank0for task1and bank1for task2). By changing the way that registers are mapped to banks so that the mapping is not the same for all tasks, the distribution of memory accesses tends to be more evenly spread across the different banks within a logical memory and this reduces the probability of clashes and hence the performance impact of clashes is also reduced. In particular, if this method is not used, the first register (register0) for each task will always be mapped to the first bank and this may result in the first bank getting the highest number of accesses of all the banks and hence be more susceptible to clashes. By offsetting, as described above based on the bank counter value, the pressure on the banks will be more evenly spread, resulting in banks being accessed more uniformly, fewer clashes and fewer stalls. As noted above, the different operations withinFIG.3may be performed by different parts of the processing system200shown inFIG.2. As described above, the task is created (in block102) by a task creation module206and the task creation module206additionally allocates the bank counter value to the task (in block304) and updates the bank counter216(in block306). The mapping of registers to banks205is performed by the address generation unit212(in block308); however, as described above the address generation unit212may use the allocated bank counter value itself as part of the mapping (with the bank counter value being communicated by the task creation module206to the address generation unit212) or the address generation unit212may receive an updated base pointer for the task from the task creation module206, where the updated base pointer has been calculated based on the allocated bank counter value. As shown inFIG.2, each bank205may comprise address decode logic207that is arranged to identify a line within a bank that a particular register is mapped to, based on the address of the register as determined by the address generation unit212. Since each bank205has its own address decode logic207, data can be read from a different line of each bank at the same time. A third method of memory allocation can be described with reference toFIG.5.
This method, like the second method, relates to the mapping (or allocation) of registers to banks within a single logical memory, rather than mapping registers to different memories, and this may be used in combination with, or independently of, the first method described above (as shown inFIG.1). Unlike the second method, the third method involves the determination of a ‘dominant bank’ for a particular task, where different tasks may, as a consequence of the programs they refer to, have different dominant banks, but tasks that use the same program will have the same dominant bank. The term ‘dominant bank’ is used herein to refer to the bank that is statistically most likely to receive the highest number of accesses when the task is executed. It is not, however, guaranteed to identify the bank with the highest number of accesses, because the determination of a dominant bank may, in various examples, not take into consideration one or more run-time factors, such as the number of times a loop is executed, although depending upon the complexity some run-time factors (e.g. the execution of some loops) may be included when determining the dominant bank of a task. The determination of the dominant bank for any program (and hence any task that uses the program) may be made at compile time by a compiler (e.g. when compiling the program), e.g. using equation (1) above, and the dominant bank information may be communicated to the processing system (e.g. in the form of meta data associated with the program). The dominant bank for any program is therefore fixed and is defined with respect to the program (and irrespective of any offset that may be applied using the mapping scheme described below). As shown inFIG.5, when creating a task (block102) or at a separate time, meta data for the task (e.g. meta data for the program used by the task) is received and this meta data is used to identify the dominant bank associated with the task (block503). As in the second method, a bank counter value is allocated to the task (block704) and this bank counter value that is allocated may be the current value of the bank counter216(which, as noted above, is different to the group counter214referred to inFIG.1). The allocated bank counter value (from block704) is then used to map registers used by a task to banks based on the number of banks in the logical memory, the allocated bank counter value and the dominant bank (block308). 
In various examples, the mapping (in block308) may use a formula that is a modified version of equation (2) above and is given by:
(bank number)=((register number)+(bank difference)) mod (number of banks)   (equation 5)
where
(bank difference)=(allocated bank counter value)−(dominant bank)   (equation 6)
In other examples, the bank difference may be used by the address generation unit to determine an additional offset (the bank difference offset) that is applied when determining the actual memory address for a register based on a base pointer for the task and the register number (specified as a register offset):
(memory address)=(base pointer for task)+(register offset)+(bank difference offset)   (equation 7)
or the bank difference may be used by the task creation module to update the base pointer for the task and that updated base pointer is then used by the address generation unit when determining the actual memory address for a register based on a base pointer for the task and the register number:
(memory address)=(updated base pointer for task)+(register offset)   (equation 8)
The mapping scheme (as used in block308) maps registers to banks in such a way that whilst the dominant banks for different tasks may clash, the banks that are expected to be most frequently accessed, after all offsets have been applied, do not clash. This has the effect of spreading the distribution of memory accesses more uniformly between banks, which reduces the probability of clashes and hence the performance impact of clashes (i.e. the performance impact of stalls) is also reduced. As shown inFIG.5, for each task that is created (in block102), the bank counter is incremented (block306) and this may occur after the allocation of a bank counter value to the newly created task (in block704) or at any other stage in the method ofFIG.5. In one example, the bank counter is incremented by one for each task that is created (in block102) and in another example the counter is incremented by other amounts, e.g. by a fixed or variable amount between 0 and b−1, where b is the number of banks. In various examples, the value of the bank counter may be capped based on the number of banks, e.g. capped at b−1, where b is the number of banks, and then may wrap back to a value of 0; although as a modulus operation is performed (in equation 5), the bank counter value is effectively capped at b−1. FIG.6is a schematic diagram showing an example of the mapping between registers and banks for different tasks (tasks1-4) in an example where the bank counter is incremented by one. In this example, through the use of any of equations (5)-(8), the dominant banks are allocated in a round robin manner (i.e. the dominant bank advances by one position for each task, as indicated by the vertical arrows above banks0-3). For task1(T1), the allocated bank counter value (BC) is one and the dominant bank (DB) is zero and hence r0is rotated by the bank difference (i.e. using equations (5) and (6) it is rotated by one to bank1) which means that the bank that is expected to be most frequently accessed, after all offsets have been applied, is bank1. For task2(T2), the allocated bank counter value (BC) is two and the dominant bank (DB) is one and hence r0is rotated by the bank difference (i.e. by one to bank1) which means that the bank that is expected to be most frequently accessed, after all offsets have been applied, is bank2.
For task3(T3), the allocated bank counter value (BC) is three and the dominant bank (DB) is three and hence (as the bank difference is zero) r0is not rotated (i.e. it is in bank0) which means that the bank that is expected to be most frequently accessed, after all offsets have been applied, is bank3. For task4(T4), which uses eight registers, the allocated bank counter value (BC) is zero (e.g. where, because there are four memory banks, the previous bank counter value of three has been incremented by one using modulo 4 arithmetic) and the dominant bank (DB) is three (e.g. because the most frequently referenced register has been determined to be R7, which is in bank3) and hence r0and r4are rotated by 1 (i.e. to bank1) since −3 mod 4=1, which means that the bank that is expected to be most frequently accessed, after all offsets have been applied, is bank0. By changing the way that registers are mapped to banks so that the mapping is not the same for all tasks and the banks that are expected to be most frequently accessed, after all offsets have been applied, are spaced apart, the distribution of memory accesses is more evenly spread across the different banks within a logical memory and this reduces the probability of clashes and hence the performance impact of clashes is also reduced. As noted above, the different operations withinFIG.5may be performed by different parts of the processing system200shown inFIG.2. As described above, the task is created (in block102) by a task creation module206and the task creation module206additionally identifies the dominant bank based on meta data provided by the compiler (in block503), allocates the bank counter value to the task (in block704) and updates the bank counter216(in block306). The mapping of registers to banks205is performed by the address generation unit212(in block308); however, as described above the address generation unit212may use the bank difference itself as part of the mapping (with the bank difference being communicated by the task creation module206to the address generation unit212) or the address generation unit212may receive an updated base pointer for the task from the task creation module206, where the updated base pointer has been calculated based on the bank difference. In a variation on the method ofFIG.5, a dominant bank mask may be maintained by the task creation module206and the bank counter may only be incremented (in block306) in the event that there is a clash of the dominant bank of a newly created task with banks that are expected to be most frequently accessed, after all offsets have been applied, for earlier tasks, as determined using the dominant bank mask. This can be described with reference toFIGS.7and8. As shown inFIG.7, when creating a task (block102) or at a separate time, meta data for the task is received and this meta data is used to identify the dominant bank for the task (block503). As in the method ofFIG.5, a bank counter value is allocated to the task (block704); however, this bank counter value that is allocated is determined based on a dominant bank mask rather than the current value of a counter (as is the case in the method ofFIG.5). The dominant bank mask may comprise one bit per bank and all bits may initially be set to a default value (e.g. zero) to indicate that no dominant bank of a task has yet been allocated to that bank and then may be set to another value (e.g. one) once a dominant bank of a task has been allocated to that bank.
As there will often be more tasks than banks, once all the bits in the dominant bank mask are set to the second value (e.g. one), the entire mask may be reset (e.g. to zero) and the method may then continue. To determine the bank counter value (in block704), the dominant bank of the task is compared to the dominant bank mask. If the dominant bank of the task clashes with a bank that is expected to be most frequently accessed by an earlier task, after all offsets have been applied, which may be the dominant bank of an earlier task or an offset dominant bank (e.g. as determined by whether the bank that corresponds to the dominant bank of the newly created task already has its bit set to indicate a prior allocation in the dominant bank mask), then the bank counter value is set to a value that offsets the dominant bank of the task so that the clash is avoided and the dominant bank mask is updated (block706) to reflect the offset dominant bank allocation. If, however, the dominant bank does not clash with a bank that is expected to be most frequently accessed by an earlier task, after all offsets have been applied (e.g. as determined by whether the bank that corresponds to the dominant bank of the newly created task already has its bit set to indicate a prior allocation in the dominant bank mask), then the bank counter value is set to zero and the dominant bank mask is updated (block706) to reflect the new (non-offset) dominant bank allocation. This is shown in the example ofFIG.8in which there is no clash between task1(DB=0) and task2(DB=3), as indicated by the dominant bank mask1000, and hence the bank counter value for those tasks is set to zero (BC=0) and the dominant bank mask is updated to1001. However, there is a clash between the dominant bank of task3, bank zero (DB=0) and the previously allocated dominant banks, as indicated by the dominant bank mask1001, and hence the bank counter value is set to a non-zero value (e.g. one) in order to shift the bank that is expected to be most frequently accessed by the task after all offsets have been applied (e.g. from bank0to bank1) and avoid the clash. The dominant bank mask is then updated to reflect the new allocation, e.g. from1001to1101. Having allocated the bank counter value (in block704), this is then used to map registers to banks based on the number of banks in the logical memory and the allocated bank counter value (block308). In various examples, the mapping (in block308) may use the same equations as the second method (described above), i.e.:
(bank number)=((register number)+(allocated bank counter value)) mod (number of banks)   (equation 2)
or
(memory address)=(base pointer for task)+(register offset)+(bank offset)   (equation 3)
or
(memory address)=(updated base pointer for task)+(register offset)   (equation 4)
As noted above, the different operations withinFIG.7may be performed by different parts of the processing system200shown inFIG.2. As described above, the task is created (in block102) by a task creation module206and the task creation module206additionally maintains the dominant bank mask (as used in block704and updated in block706), allocates the bank counter value to the task (in block704) and updates the bank counter216.
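The sketch below illustrates both the third method and its variation, under assumed names: map_register_to_bank applies equations (5) and (6), while allocate_bank_counter models the dominant bank mask of FIGS.7and8(the returned counter value would then be used with equations (2)-(4) as described above). Python's modulus returns a non-negative result, so a negative bank difference wraps as in the task4example (−3 mod 4=1).

```python
NUM_BANKS = 4

def map_register_to_bank(register_number, bank_counter_value, dominant_bank):
    bank_difference = bank_counter_value - dominant_bank       # equation (6)
    return (register_number + bank_difference) % NUM_BANKS     # equation (5)

# FIG. 6 check: (BC, DB) pairs for tasks 1-4; r0 lands in banks 1, 1, 0, 1.
for task, (bc, db) in enumerate([(1, 0), (2, 1), (3, 3), (0, 3)], start=1):
    print(f"task {task}: r0 -> bank {map_register_to_bank(0, bc, db)}")

mask = [False] * NUM_BANKS  # dominant bank mask, one bit per bank

def allocate_bank_counter(dominant_bank):
    """FIG. 7 variation: return 0 if the dominant bank is free, otherwise an
    offset to the first free bank; update (or reset) the mask (block 706)."""
    global mask
    if all(mask):
        mask = [False] * NUM_BANKS  # all banks taken: reset and continue
    for offset in range(NUM_BANKS):
        shifted = (dominant_bank + offset) % NUM_BANKS
        if not mask[shifted]:
            mask[shifted] = True
            return offset

# FIG. 8 check: dominant banks 0, 3, 0 give counter values 0, 0, 1 and leave
# the mask as 1000, then 1001, then 1101.
for db in (0, 3, 0):
    print(f"DB {db}: bank counter value {allocate_bank_counter(db)}")
```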
The mapping of registers to banks205is performed by the address generation unit212(in block308); however, as described above the address generation unit212may use the allocated bank counter value itself as part of the mapping (with the bank counter value being communicated by the task creation module206to the address generation unit212) or the address generation unit212may receive an updated base pointer for the task from the task creation module206, where the updated base pointer has been calculated based on the allocated bank counter value. As detailed above, the method ofFIG.1may be used independently of, or in combination with, the method ofFIG.3,5or7. Whilst all methods are described with reference to the processing system200ofFIG.2, it will be appreciated that, depending upon the method used, either or both of the group counter214and the bank counter216may be omitted, i.e. the group counter214is used by the method ofFIG.1and the bank counter is used by the methods ofFIGS.3,5and7. It will also be appreciated that the methods ofFIGS.3,5and7require only a single memory204that comprises multiple banks205(although there may be multiple memories204within the system200) and the method ofFIG.1requires multiple logically separate memories204which may, or may not, comprise multiple banks205. FIG.9shows a computer system in which the graphics processing systems described herein may be implemented. The computer system comprises a CPU902, a GPU904, a memory906and other devices914, such as a display916, speakers918and a camera920. The CPU902and/or GPU904may operate as processing system200as shown inFIG.2and described above and the memory906may comprise a plurality of logical memories204and/or a plurality of memory banks205. The components of the computer system can communicate with each other via a communications bus922. The systems ofFIGS.2and9are shown as comprising a number of functional blocks. This is schematic only and is not intended to define a strict division between different logic elements of such entities. Each functional block may be provided in any suitable manner. It is to be understood that intermediate values described herein as being formed by a processing system need not be physically generated by the processing system at any point and may merely represent logical values which conveniently describe the processing performed by the processing system between its input and output. The processing systems described herein may be embodied in hardware on an integrated circuit. The processing systems described herein may be configured to perform any of the methods described herein. Generally, any of the functions, methods, techniques or components described above can be implemented in software, firmware, hardware (e.g., fixed logic circuitry), or any combination thereof. The terms "module," "functionality," "component", "element", "unit", "block" and "logic" may be used herein to generally represent software, firmware, hardware, or any combination thereof. In the case of a software implementation, the module, functionality, component, element, unit, block or logic represents program code that performs the specified tasks when executed on a processor. The algorithms and methods described herein could be performed by one or more processors executing code that causes the processor(s) to perform the algorithms/methods.
Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions or other data and that can be accessed by a machine. The terms computer program code and computer readable instructions as used herein refer to any kind of executable code for processors, including code expressed in a machine language, an interpreted language or a scripting language. Executable code includes binary code, machine code, bytecode, code defining an integrated circuit (such as a hardware description language or netlist), and code expressed in a programming language such as C, Java™ or OpenCL™. Executable code may be, for example, any kind of software, firmware, script, module or library which, when suitably executed, processed, interpreted, compiled or executed at a virtual machine or other software environment, causes a processor of the computer system at which the executable code is supported to perform the tasks specified by the code. A processor, computer, or computer system may be any kind of device, machine or dedicated circuit, or collection or portion thereof, with processing capability such that it can execute instructions. A processor may be any kind of general purpose or dedicated processor, such as a CPU, GPU, System-on-chip, state machine, media processor, an application-specific integrated circuit (ASIC), a programmable logic array, a field-programmable gate array (FPGA), physics processing units (PPUs), radio processing units (RPUs), digital signal processors (DSPs), general purpose processors (e.g. a general purpose GPU), microprocessors, any processing unit which is designed to accelerate tasks outside of a CPU, etc. A computer or computer system may comprise one or more processors. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term 'computer' includes set top boxes, media players, digital radios, PCs, servers, mobile telephones, personal digital assistants and many other devices. It is also intended to encompass software which defines a configuration of hardware as described herein, such as HDL (hardware description language) software, as is used for designing integrated circuits, or for configuring programmable chips, to carry out desired functions. That is, there may be provided a computer readable storage medium having encoded thereon computer readable program code in the form of an integrated circuit definition dataset that when processed (i.e. run) in an integrated circuit manufacturing system configures the system to manufacture a processor configured to perform any of the methods described herein, or to manufacture a processing system comprising any apparatus described herein. An integrated circuit definition dataset may be, for example, an integrated circuit description. Therefore, there may be provided a method of manufacturing, at an integrated circuit manufacturing system, a processor as described herein. Furthermore, there may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, causes the method of manufacturing a processor to be performed.
An integrated circuit definition dataset may be in the form of computer code, for example as a netlist, code for configuring a programmable chip, as a hardware description language defining an integrated circuit at any level, including as register transfer level (RTL) code, as high-level circuit representations such as Verilog® or VHDL, and as low-level circuit representations such as OASIS® and GDSII. Higher level representations which logically define an integrated circuit (such as RTL) may be processed at a computer system configured for generating a manufacturing definition of an integrated circuit in the context of a software environment comprising definitions of circuit elements and rules for combining those elements in order to generate the manufacturing definition of an integrated circuit so defined by the representation. As is typically the case with software executing at a computer system so as to define a machine, one or more intermediate user steps (e.g. providing commands, variables etc.) may be required in order for a computer system configured for generating a manufacturing definition of an integrated circuit to execute code defining an integrated circuit so as to generate the manufacturing definition of that integrated circuit. An example of processing an integrated circuit definition dataset at an integrated circuit manufacturing system so as to configure the system to manufacture a processor will now be described with respect toFIG.10. FIG.10shows an example of an integrated circuit (IC) manufacturing system1002which is configured to manufacture a processor as described in any of the examples herein. In particular, the IC manufacturing system1002comprises a layout processing system1004and an integrated circuit generation system1006. The IC manufacturing system1002is configured to receive an IC definition dataset (e.g. defining a processor as described in any of the examples herein), process the IC definition dataset, and generate an IC according to the IC definition dataset (e.g. which embodies a processor as described in any of the examples herein). The processing of the IC definition dataset configures the IC manufacturing system1002to manufacture an integrated circuit embodying a processor as described in any of the examples herein. The layout processing system1004is configured to receive and process the IC definition dataset to determine a circuit layout. Methods of determining a circuit layout from an IC definition dataset are known in the art, and for example may involve synthesising RTL code to determine a gate level representation of a circuit to be generated, e.g. in terms of logical components (e.g. NAND, NOR, AND, OR, MUX and FLIP-FLOP components). A circuit layout can be determined from the gate level representation of the circuit by determining positional information for the logical components. This may be done automatically or with user involvement in order to optimise the circuit layout. When the layout processing system1004has determined the circuit layout it may output a circuit layout definition to the IC generation system1006. A circuit layout definition may be, for example, a circuit layout description. The IC generation system1006generates an IC according to the circuit layout definition, as is known in the art. 
For example, the IC generation system1006may implement a semiconductor device fabrication process to generate the IC, which may involve a multiple-step sequence of photolithographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of semiconducting material. The circuit layout definition may be in the form of a mask which can be used in a lithographic process for generating an IC according to the circuit definition. Alternatively, the circuit layout definition provided to the IC generation system1006may be in the form of computer-readable code which the IC generation system1006can use to form a suitable mask for use in generating an IC. The different processes performed by the IC manufacturing system1002may be implemented all in one location, e.g. by one party. Alternatively, the IC manufacturing system1002may be a distributed system such that some of the processes may be performed at different locations, and may be performed by different parties. For example, some of the stages of: (i) synthesising RTL code representing the IC definition dataset to form a gate level representation of a circuit to be generated, (ii) generating a circuit layout based on the gate level representation, (iii) forming a mask in accordance with the circuit layout, and (iv) fabricating an integrated circuit using the mask, may be performed in different locations and/or by different parties. In other examples, processing of the integrated circuit definition dataset at an integrated circuit manufacturing system may configure the system to manufacture a processor without the IC definition dataset being processed so as to determine a circuit layout. For instance, an integrated circuit definition dataset may define the configuration of a reconfigurable processor, such as an FPGA, and the processing of that dataset may configure an IC manufacturing system to generate a reconfigurable processor having that defined configuration (e.g. by loading configuration data to the FPGA). In some embodiments, an integrated circuit manufacturing definition dataset, when processed in an integrated circuit manufacturing system, may cause an integrated circuit manufacturing system to generate a device as described herein. For example, the configuration of an integrated circuit manufacturing system in the manner described above with respect toFIG.10by an integrated circuit manufacturing definition dataset may cause a device as described herein to be manufactured. In some examples, an integrated circuit definition dataset could include software which runs on hardware defined at the dataset or in combination with hardware defined at the dataset. In the example shown inFIG.10, the IC generation system may further be configured by an integrated circuit definition dataset to, on manufacturing an integrated circuit, load firmware onto that integrated circuit in accordance with program code defined at the integrated circuit definition dataset or otherwise provide program code with the integrated circuit for use with the integrated circuit. Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program.
Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques known to those skilled in the art, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like. The methods described herein may be performed by a computer configured with software in machine readable form stored on a tangible storage medium, e.g. in the form of a computer program comprising computer readable program code for configuring a computer to perform the constituent portions of described methods or in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable storage medium. Examples of tangible (or non-transitory) storage media include disks, thumb drives, memory cards, etc. and do not include propagated signals. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously. The hardware components described herein may be generated by a non-transitory computer readable storage medium having encoded thereon computer readable program code. Memories storing machine executable data for use in implementing disclosed aspects can be non-transitory media. Non-transitory media can be volatile or non-volatile. Examples of volatile non-transitory media include semiconductor-based memory, such as SRAM or DRAM. Examples of technologies that can be used to implement non-volatile memory include optical and magnetic memory technologies, flash memory, phase change memory and resistive RAM. A particular reference to "logic" refers to structure that performs a function or functions. An example of logic includes circuitry that is arranged to perform those function(s). For example, such circuitry may include transistors and/or other hardware elements available in a manufacturing process. Such transistors and/or other elements may be used to form circuitry or structures that implement and/or contain memory, such as registers, flip flops, or latches, logical operators, such as Boolean operations, mathematical operators, such as adders, multipliers, or shifters, and interconnect, by way of example. Such elements may be provided as custom circuits or standard cell libraries, macros, or at other levels of abstraction. Such elements may be interconnected in a specific arrangement. Logic may include circuitry that is fixed function and circuitry that can be programmed to perform a function or functions; such programming may be provided from a firmware or software update or control mechanism. Logic identified to perform one function may also include logic that implements a constituent function or sub-process. In an example, hardware logic has circuitry that implements a fixed function operation, or operations, state machine or process. The implementation of concepts set forth in this application in devices, apparatus, modules, and/or systems (as well as in methods implemented herein) may give rise to performance improvements when compared with known implementations.
The performance improvements may include one or more of increased computational performance, reduced latency, increased throughput, and/or reduced power consumption. During manufacture of such devices, apparatus, modules, and systems (e.g. in integrated circuits), performance improvements can be traded off against the physical implementation, thereby improving the method of manufacture. For example, a performance improvement may be traded against layout area, thereby matching the performance of a known implementation but using less silicon. This may be done, for example, by reusing functional blocks in a serialised fashion or sharing functional blocks between elements of the devices, apparatus, modules and/or systems. Conversely, concepts set forth in this application that give rise to improvements in the physical implementation of the devices, apparatus, modules, and systems (such as reduced silicon area) may be traded for improved performance. This may be done, for example, by manufacturing multiple instances of a module within a predefined area budget. Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person. It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. Any reference to 'an' item refers to one or more of those items. The term 'comprising' is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and an apparatus may contain additional blocks or elements and a method may contain additional operations or elements. Furthermore, the blocks, elements and operations are themselves not impliedly closed. The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. The arrows between boxes in the figures show one example sequence of method steps but are not intended to exclude other sequences or the performance of multiple steps in parallel. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought. Where elements of the figures are shown connected by arrows, it will be appreciated that these arrows show just one example flow of communications (including data and control messages) between elements. The flow between elements may be in either direction or in both directions. The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.
11861221 | DESCRIPTION OF EMBODIMENTS Example methods, apparatus, and products for providing scalable and reliable container-based storage services in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning withFIG.1A.FIG.1Aillustrates an example system for data storage, in accordance with some implementations. System100(also referred to as "storage system" herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system100may include the same, more, or fewer elements configured in the same or different manner in other implementations. System100includes a number of computing devices164A-B. Computing devices (also referred to as "client devices" herein) may be embodied as, for example, a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices164A-B may be coupled for data communications to one or more storage arrays102A-B through a storage area network ('SAN')158or a local area network ('LAN')160. The SAN158may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN158may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface ('SAS'), or the like. Data communications protocols for use with SAN158may include Advanced Technology Attachment ('ATA'), Fibre Channel Protocol, Small Computer System Interface ('SCSI'), Internet Small Computer System Interface ('iSCSI'), HyperSCSI, Non-Volatile Memory Express ('NVMe') over Fabrics, or the like. It may be noted that SAN158is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices164A-B and storage arrays102A-B. The LAN160may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN160may include Ethernet (802.3), wireless (802.11), or the like. Data communication protocols for use in LAN160may include Transmission Control Protocol ('TCP'), User Datagram Protocol ('UDP'), Internet Protocol ('IP'), HyperText Transfer Protocol ('HTTP'), Wireless Access Protocol ('WAP'), Handheld Device Transport Protocol ('HDTP'), Session Initiation Protocol ('SIP'), Real Time Protocol ('RTP'), or the like. Storage arrays102A-B may provide persistent data storage for the computing devices164A-B. Storage array102A may be contained in a chassis (not shown), and storage array102B may be contained in another chassis (not shown), in implementations. Storage arrays102A and102B may include one or more storage array controllers110A-D (also referred to as "controller" herein). A storage array controller110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers110A-D may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices164A-B to storage array102A-B, erasing data from storage array102A-B, retrieving data from storage array102A-B and providing data to computing devices164A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives ('RAID') or RAID-like data redundancy operations, compressing data, encrypting data, and so forth.
Storage array controller110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array ('FPGA'), a Programmable Logic Chip ('PLC'), an Application Specific Integrated Circuit ('ASIC'), System-on-Chip ('SOC'), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters. Storage array controller110A-D may include, for example, a data communications adapter configured to support communications via the SAN158or LAN160. In some implementations, storage array controller110A-D may be independently coupled to the LAN160. In implementations, storage array controller110A-D may include an I/O controller or the like that couples the storage array controller110A-D for data communications, through a midplane (not shown), to a persistent storage resource170A-B (also referred to as a "storage resource" herein). The persistent storage resource170A-B may include any number of storage drives171A-F (also referred to as "storage devices" herein) and any number of non-volatile Random Access Memory ('NVRAM') devices (not shown). In some implementations, the NVRAM devices of a persistent storage resource170A-B may be configured to receive, from the storage array controller110A-D, data to be stored in the storage drives171A-F. In some examples, the data may originate from computing devices164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive171A-F. In implementations, the storage array controller110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller110A-D writes data directly to the storage drives171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as "non-volatile" because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like. In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives171A-F. In implementations, storage drive171A-F may refer to any device configured to record data persistently, where "persistently" or "persistent" refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive171A-F may correspond to non-disk storage media. For example, the storage drive171A-F may be one or more solid-state drives ('SSDs'), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive171A-F may include mechanical or spinning hard disks, such as hard-disk drives ('HDD'). In some implementations, the storage array controllers110A-D may be configured for offloading device management responsibilities from storage drive171A-F in storage array102A-B. For example, storage array controllers110A-D may manage control information that may describe the state of one or more memory blocks in the storage drives171A-F.
The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller110A-D, the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives171A-F may be stored in one or more particular memory blocks of the storage drives171A-F that are selected by the storage array controller110A-D. The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers110A-D in conjunction with storage drives171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers110A-D may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive171A-F. In implementations, storage array controllers110A-D may offload device management responsibilities from storage drives171A-F of storage array102A-B by retrieving, from the storage drives171A-F, control information describing the state of one or more memory blocks in the storage drives171A-F. Retrieving the control information from the storage drives171A-F may be carried out, for example, by the storage array controller110A-D querying the storage drives171A-F for the location of control information for a particular storage drive171A-F. The storage drives171A-F may be configured to execute instructions that enable the storage drive171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive171A-F and may cause the storage drive171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives171A-F. The storage drives171A-F may respond by sending a response message to the storage array controller110A-D that includes the location of control information for the storage drive171A-F. Responsive to receiving the response message, storage array controllers110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives171A-F. In other implementations, the storage array controllers110A-D may further offload device management responsibilities from storage drives171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive171A-F (e.g., the controller (not shown) associated with a particular storage drive171A-F). 
A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive171A-F, ensuring that data is written to memory blocks within the storage drive171A-F in such a way that adequate wear leveling is achieved, and so forth. In implementations, storage array102A-B may implement two or more storage array controllers110A-D. For example, storage array102A may include storage array controller110A and storage array controller110B. At a given instance, a single storage array controller110A-D (e.g., storage array controller110A) of a storage system100may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers110A-D (e.g., storage array controller110B) may be designated with secondary status (also referred to as “secondary controller” herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource170A-B (e.g., writing data to persistent storage resource170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller. For instance, the secondary controller may not have permission to alter data in persistent storage resource170A-B when the primary controller has that right. The status of storage array controllers110A-D may change. For example, storage array controller110A may be designated with secondary status, and storage array controller110B may be designated with primary status. In some implementations, a primary controller, such as storage array controller110A, may serve as the primary controller for one or more storage arrays102A-B, and a secondary controller, such as storage array controller110B, may serve as the secondary controller for the one or more storage arrays102A-B. For example, storage array controller110A may be the primary controller for storage array102A and storage array102B, and storage array controller110B may be the secondary controller for storage arrays102A and102B. In some implementations, storage array controllers110C and110D (also referred to as “storage processing modules”) may have neither primary nor secondary status. Storage array controllers110C and110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers110A and110B, respectively) and storage array102B. For example, storage array controller110A of storage array102A may send a write request, via SAN158, to storage array102B. The write request may be received by both storage array controllers110C and110D of storage array102B. Storage array controllers110C and110D facilitate the communication, e.g., send the write request to the appropriate storage drive171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers. In implementations, storage array controllers110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array102A-B. The storage array controllers110A-D may be coupled to the midplane via one or more data communication links and the midplane may be coupled to the storage drives171A-F and the NVRAM devices via one or more data communications links.
The data communications links described herein are collectively illustrated by data communications links108A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example. FIG.1Billustrates an example system for data storage, in accordance with some implementations. Storage array controller101illustrated inFIG.1Bmay be similar to the storage array controllers110A-D described with respect toFIG.1A. In one example, storage array controller101may be similar to storage array controller110A or storage array controller110B. Storage array controller101includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller101may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements ofFIG.1Amay be included below to help illustrate features of storage array controller101. Storage array controller101may include one or more processing devices104and random access memory (‘RAM’)111. Processing device104(or controller101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device104(or controller101) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device104(or controller101) may also be one or more special-purpose processing devices such as an application specific integrated circuit (‘ASIC’), a field programmable gate array (‘FPGA’), a digital signal processor (‘DSP’), network processor, or the like. The processing device104may be connected to the RAM111via a data communications link106, which may be embodied as a high speed memory bus such as a Double-Data Rate4(‘DDR4’) bus. Stored in RAM111is an operating system112. In some implementations, instructions113are stored in RAM111. Instructions113may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives. In implementations, storage array controller101includes one or more host bus adapters103A-C that are coupled to the processing device104via a data communications link105A-C. In implementations, host bus adapters103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other networks and storage arrays. In some examples, host bus adapters103A-C may be Fibre Channel adapters that enable the storage array controller101to connect to a SAN, Ethernet adapters that enable the storage array controller101to connect to a LAN, or the like. Host bus adapters103A-C may be coupled to the processing device104via a data communications link105A-C such as, for example, a PCIe bus. In implementations, storage array controller101may include a host bus adapter114that is coupled to an expander115. The expander115may be used to attach a host system to a larger number of storage drives.
The expander115may, for example, be a SAS expander utilized to enable the host bus adapter114to attach to storage drives in an implementation where the host bus adapter114is embodied as a SAS controller. In implementations, storage array controller101may include a switch116coupled to the processing device104via a data communications link109. The switch116may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch116may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link109) and presents multiple PCIe connection points to the midplane. In implementations, storage array controller101includes a data communications link107for coupling the storage array controller101to other storage array controllers. In some examples, data communications link107may be a QuickPath Interconnect (QPI) interconnect. A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed. To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes. For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives. The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system. Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data, where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process of writing the first data to new locations within other allocation units, erasing the second data, and marking the allocation units as available for use for subsequent data. Thus, the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives.
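To make the reclamation just described concrete, here is a minimal Python sketch, assuming an invented AllocationUnit structure and a caller-supplied set of live keys; it copies retained data forward, models the erase, and marks the source unit available again. It is an illustration of the idea, not the actual implementation.

```python
# Illustrative sketch only: OS-level reclamation in a direct-mapped
# flash system. AllocationUnit, live_keys, and reclaim are assumed
# names invented for this example.

class AllocationUnit:
    def __init__(self):
        self.records = []        # (key, payload) pairs stored in the unit
        self.available = True    # free for subsequent data?

def reclaim(unit, live_keys, free_units):
    """Copy still-needed data forward, then erase and free the unit."""
    survivor = next(u for u in free_units if u.available)
    survivor.records = [(k, v) for (k, v) in unit.records if k in live_keys]
    survivor.available = False
    unit.records = []            # models erasing the unit's erase block(s)
    unit.available = True        # unit may now hold subsequent data
    return survivor

src, spare = AllocationUnit(), AllocationUnit()
src.records, src.available = [("a", b"keep"), ("b", b"stale")], False
reclaim(src, live_keys={"a"}, free_units=[spare])
assert spare.records == [("a", b"keep")] and src.available
```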
Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system as unnecessary or redundant write operations are not being performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system. In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive. A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage, where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service, including storage allocation and garbage collection. FIG.1Cillustrates a third example system117for data storage in accordance with some implementations. System117(also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system117may include the same, more, or fewer elements configured in the same or different manner in other implementations. In one embodiment, system117includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device118with separately addressable fast write storage. System117may include a storage device controller119. In one embodiment, storage device controller119A-D may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system117includes flash memory devices (e.g., including flash memory devices120a-n), operatively coupled to various channels of the storage device controller119. Flash memory devices120a-nmay be presented to the controller119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller119A-D to program and retrieve various aspects of the Flash. In one embodiment, storage device controller119A-D may perform operations on flash memory devices120a-nincluding storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc. In one embodiment, system117may include RAM121to store separately addressable fast-write data. In one embodiment, RAM121may be one or more separate discrete devices. In another embodiment, RAM121may be integrated into storage device controller119A-D or multiple storage device controllers. The RAM121may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller119. In one embodiment, system117may include a stored energy device122, such as a rechargeable battery or a capacitor.
Stored energy device122may store energy sufficient to power the storage device controller119, some amount of the RAM (e.g., RAM121), and some amount of Flash memory (e.g., Flash memory120a-120n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller119A-D may write the contents of RAM to Flash Memory if the storage device controller detects loss of external power. In one embodiment, system117includes two data communications links123a,123b. In one embodiment, data communications links123a,123bmay be PCI interfaces. In another embodiment, data communications links123a,123bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Data communications links123a,123bmay be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller119A-D from other components in the storage system117. It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience. System117may also include an external power source (not shown), which may be provided over one or both data communications links123a,123b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM121. The storage device controller119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM121. On power failure, the storage device controller119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory120a-n) for long-term persistent storage. In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices120a-n, where that presentation allows a storage system including a storage device118(e.g., storage system117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc. In one embodiment, the stored energy device122may be sufficient to ensure completion of in-progress operations to the Flash memory devices120a-120n. The stored energy device122may power storage device controller119A-D and associated Flash memory devices (e.g.,120a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device122may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices120a-nand/or the storage device controller119. 
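A minimal sketch of the power-fail path described above, assuming invented names (FastWriteDevice, on_power_loss) rather than the actual firmware interface: writes land in RAM first and are destaged to Flash when external power is lost.

```python
# Illustrative sketch: fast-write RAM destaged to Flash on power loss.
# The class and method names are assumptions for this example.

class FastWriteDevice:
    def __init__(self):
        self.ram = {}       # addressable fast-write contents (RAM121-like)
        self.flash = {}     # long-term persistent storage (Flash-like)

    def store(self, addr, data):
        self.ram[addr] = data        # fast path: writes land in RAM first

    def on_power_loss(self):
        """Runs on stored energy: persist RAM contents, then clear RAM."""
        self.flash.update(self.ram)
        self.ram.clear()

dev = FastWriteDevice()
dev.store(0x10, b"journal entry")
dev.on_power_loss()
assert dev.flash[0x10] == b"journal entry"
```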
Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein. Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device122to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy. FIG.1Dillustrates a fourth example system124for data storage in accordance with some implementations. In one embodiment, system124includes storage controllers125a,125b. In one embodiment, storage controllers125a,125bare operatively coupled to Dual PCI storage devices119a,119band119c,119d, respectively. Storage controllers125a,125bmay be operatively coupled (e.g., via a storage network130) to some number of host computers127a-n. In one embodiment, two storage controllers (e.g.,125aand125b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers125a,125bmay provide services through some number of network interfaces (e.g.,126a-d) to host computers127a-noutside of the storage system124. Storage controllers125a,125bmay provide integrated services or an application entirely within the storage system124, forming a converged storage and compute system. The storage controllers125a,125bmay utilize the fast write memory within or across storage devices119a-dto journal in-progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system124. In one embodiment, controllers125a,125boperate as PCI masters to one or the other PCI buses128a,128b. In another embodiment,128aand128bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers125a,125bas multi-masters for both PCI buses128a,128b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller119amay be operable under direction from a storage controller125ato synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM121ofFIG.1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to improve safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g.,128a,128b) from the storage controllers125a,125b. In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc.
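The capacity-derating rule at the start of this passage reduces to a small calculation. The sketch below assumes an invented per-byte flush energy (ENERGY_PER_BYTE_J) and measurement value; it is illustrative only.

```python
# Illustrative sketch: derating the advertised fast-write capacity as
# stored energy degrades. The constant and inputs are invented numbers.

ENERGY_PER_BYTE_J = 2e-9   # assumed energy needed to flush one byte

def safe_fastwrite_capacity(measured_energy_j: float,
                            nominal_capacity_bytes: int) -> int:
    """Advertise only as much fast-write space as the remaining stored
    energy could flush to Flash after a power loss."""
    flushable = int(measured_energy_j / ENERGY_PER_BYTE_J)
    return min(nominal_capacity_bytes, flushable)

# An aging capacitor bank measured at 1.0 J covers ~500 MB of a
# nominal 8 GiB fast-write region:
print(safe_fastwrite_capacity(1.0, 8 * 2**30))
```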
In one embodiment, under direction from a storage controller125a,125b, a storage device controller119a,119bmay be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM121ofFIG.1C) without involvement of the storage controllers125a,125b. This operation may be used to mirror data stored in one controller125ato another controller125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface129a,129bto the PCI bus128a,128b. A storage device controller119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device118. For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly. In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself. Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices. In one embodiment, the storage controllers125a,125bmay initiate the use of erase blocks within and across storage devices (e.g.,118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers125a,125bmay initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance. In one embodiment, the storage system124may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination. 
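A minimal sketch of the recover-from-failure idea behind these schemes, using a single XOR parity shard across equal-sized shards; production systems use stronger codes such as Reed-Solomon, so this is a simplified stand-in rather than the scheme itself.

```python
# Illustrative sketch: single-parity erasure coding across devices.
# XOR parity shows only the recover-one-failure idea.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(shards):
    """Return the data shards plus one parity shard (equal lengths)."""
    parity = shards[0]
    for s in shards[1:]:
        parity = xor_bytes(parity, s)
    return shards + [parity]

def recover(encoded, lost_index):
    """Rebuild a single lost shard by XOR-ing the survivors."""
    survivors = [s for i, s in enumerate(encoded) if i != lost_index]
    rebuilt = survivors[0]
    for s in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, s)
    return rebuilt

stripe = encode([b"abcd", b"efgh", b"ijkl"])
assert recover(stripe, 1) == b"efgh"   # survives one device failure
```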
The embodiments depicted with reference toFIGS.2A-Gillustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory. Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture, described in more detail below, allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server. The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments. In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus, which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus; however, other technologies, such as PCIe, InfiniBand, and others, are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node.
In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades, and each blade has a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments. Each storage node may be one or more storage servers, and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units; however, this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface. The non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2 and 32 terabytes (‘TB’) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’) that substitutes for DRAM and enables a reduced power hold-up apparatus. One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage. The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below. FIG.2Ais a perspective view of a storage cluster161, with multiple storage nodes150and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments. A network attached storage, storage area network, or a storage cluster, or other storage memory, could include one or more storage clusters161, each having one or more storage nodes150, in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby.
The storage cluster161is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory. The storage cluster161has a chassis138having multiple slots142. It should be appreciated that chassis138may be referred to as a housing, enclosure, or rack unit. In one embodiment, the chassis138has fourteen slots142, although other numbers of slots are readily devised. For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots. Each slot142can accommodate one storage node150in some embodiments. Chassis138includes flaps148that can be utilized to mount the chassis138on a rack. Fans144provide air circulation for cooling of the storage nodes150and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components. A switch fabric146couples storage nodes150within chassis138together and to a network for communication to the memory. In the embodiment depicted herein, the slots142to the left of the switch fabric146and fans144are shown occupied by storage nodes150, while the slots142to the right of the switch fabric146and fans144are empty and available for insertion of a storage node150for illustrative purposes. This configuration is one example, and one or more storage nodes150could occupy the slots142in various further arrangements. The storage node arrangements need not be sequential or adjacent in some embodiments. Storage nodes150are hot pluggable, meaning that a storage node150can be inserted into a slot142in the chassis138, or removed from a slot142, without stopping or powering down the system. Upon insertion or removal of storage node150from slot142, the system automatically reconfigures in order to recognize and adapt to the change. Reconfiguration, in some embodiments, includes restoring redundancy and/or rebalancing data or load. Each storage node150can have multiple components. In the embodiment shown here, the storage node150includes a printed circuit board159populated by a CPU156, i.e., processor, a memory154coupled to the CPU156, and a non-volatile solid state storage152coupled to the CPU156, although other mountings and/or components could be used in further embodiments. The memory154has instructions which are executed by the CPU156and/or data operated on by the CPU156. As further explained below, the non-volatile solid state storage152includes flash or, in further embodiments, other types of solid-state memory. Referring toFIG.2A, storage cluster161is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes150can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments. Plug-in storage nodes150, whether installed in a chassis as delivered or later added, can have different sizes. For example, in one embodiment a storage node150can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc. In further embodiments, a storage node150could have any multiple of other storage amounts or capacities. Storage capacity of each storage node150is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage units152or storage nodes150within the chassis.
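The widest-safe-stripe decision in the preceding sentence can be written as a small calculation; the one-shard-per-node rule and the failure-tolerance parameter below are assumptions made for illustration.

```python
# Illustrative sketch: self-configuring the stripe as wide as possible
# while surviving a configured number of unit/node losses.

def stripe_width(node_count: int, tolerated_failures: int) -> int:
    """Widest stripe: one shard per broadcast-visible node."""
    if node_count <= tolerated_failures:
        raise ValueError("not enough nodes for the redundancy requirement")
    return node_count

def data_shards(node_count: int, tolerated_failures: int) -> int:
    """Shards carrying data rather than parity at that width."""
    return stripe_width(node_count, tolerated_failures) - tolerated_failures

print(data_shards(8, 2))   # -> 6 data shards plus 2 parity shards
```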
FIG.2Bis a block diagram showing a communications interconnect173and power distribution bus172coupling multiple storage nodes150. Referring back toFIG.2A, the communications interconnect173can be included in or implemented with the switch fabric146in some embodiments. Where multiple storage clusters161occupy a rack, the communications interconnect173can be included in or implemented with a top of rack switch, in some embodiments. As illustrated inFIG.2B, storage cluster161is enclosed within a single chassis138. External port176is coupled to storage nodes150through communications interconnect173, while external port174is coupled directly to a storage node. External power port178is coupled to power distribution bus172. Storage nodes150may include varying amounts and differing capacities of non-volatile solid state storage152as described with reference toFIG.2A. In addition, one or more storage nodes150may be a compute only storage node as illustrated inFIG.2B. Authorities168are implemented on the non-volatile solid state storages152, for example as lists or other data structures stored in memory. In some embodiments the authorities are stored within the non-volatile solid state storage152and supported by software executing on a controller or other processor of the non-volatile solid state storage152. In a further embodiment, authorities168are implemented on the storage nodes150, for example as lists or other data structures stored in the memory154and supported by software executing on the CPU156of the storage node150. Authorities168control how and where data is stored in the non-volatile solid state storages152in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes150have which portions of the data. Each authority168may be assigned to a non-volatile solid state storage152. Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes150, or by the non-volatile solid state storage152, in various embodiments. Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities168. Authorities168have a relationship to storage nodes150and non-volatile solid state storage152in some embodiments. Each authority168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage152. In some embodiments the authorities168for all of such ranges are distributed over the non-volatile solid state storages152of a storage cluster. Each storage node150has a network port that provides access to the non-volatile solid state storage(s)152of that storage node150. Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities168thus establishes an indirection to data. 
Indirection may be referred to as the ability to reference data indirectly, in this case via an authority168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage152and a local identifier into the set of non-volatile solid state storage152that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage152are applied to locating data for writing to or reading from the non-volatile solid state storage152(in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage152, which may include or be different from the non-volatile solid state storage152having the authority168for a particular data segment. If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority168for that data segment should be consulted, at that non-volatile solid state storage152or storage node150having that authority168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage152having the authority168for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number, to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage152having that authority168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage152for an authority in the presence of a set of non-volatile solid state storage152that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage152that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority168may be consulted if a specific authority168is unavailable in some embodiments. With reference toFIGS.2A and2B, two of the many tasks of the CPU156on a storage node150are to break up write data, and reassemble read data. When the system has determined that data is to be written, the authority168for that data is located as above. When the segment ID for data is already determined, the request to write is forwarded to the non-volatile solid state storage152currently determined to be the host of the authority168determined from the segment.
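Before the write path continues below, here is a sketch of the two-stage lookup just described; the fixed authority count, the hash-plus-mask stage, and the explicit map with recorded fallback peers are illustrative assumptions, not the patented scheme.

```python
# Illustrative sketch of the two-stage lookup: entity ID -> authority ID
# (hash plus bit mask), then authority ID -> storage unit (explicit map
# with recorded fallback peers). All names and sizes are invented.

import hashlib

NUM_AUTHORITIES = 128   # assumed fixed, power of two for the mask

def entity_to_authority(entity_id: str) -> int:
    """Stage 1: hash the entity identifier and mask to an authority ID."""
    digest = hashlib.sha256(entity_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") & (NUM_AUTHORITIES - 1)

def authority_to_storage(authority_id: int, authority_map: dict,
                         reachable: set) -> str:
    """Stage 2: explicit map to a unit, falling back to recorded peers
    when the assigned unit is unreachable."""
    for unit in authority_map[authority_id]:
        if unit in reachable:
            return unit
    raise RuntimeError(f"no reachable owner for authority {authority_id}")

amap = {a: [f"nvss-{a % 4}", f"nvss-{(a + 1) % 4}"]
        for a in range(NUM_AUTHORITIES)}
aid = entity_to_authority("inode:42")
print(authority_to_storage(aid, amap, reachable={"nvss-0", "nvss-1",
                                                 "nvss-2", "nvss-3"}))
```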
The host CPU156of the storage node150, on which the non-volatile solid state storage152and corresponding authority168reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority168for the segment ID containing the data is located as described above. The host CPU156of the storage node150on which the non-volatile solid state storage152and corresponding authority168reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU156of storage node150then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage152. In some embodiments, the segment host requests the data be sent to storage node150by requesting pages from storage and then sending the data to the storage node making the original request. In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and the authority, in turn, contains entities. A segment is a logical container of data in accordance with some embodiments. A segment is an address space between medium address space and physical flash locations; data segment numbers are in this address space. Segments may also contain meta-data, which enables data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage152coupled to the host CPUs156(SeeFIGS.2E and2G) in accordance with an erasure coding scheme. Usage of the term segments refers to the container and its place in the address space of segments in some embodiments.
Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments. A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128-bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit152may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage152is able to allocate addresses without synchronization with other non-volatile solid state storage152. Data and metadata are stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (‘LDPC’) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout. In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo-randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change, so any subjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes.
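As a sketch in the spirit of the pseudorandom schemes named above, rendezvous (highest-random-weight) hashing lets every node independently compute the same placement from the same inputs; CRUSH itself is considerably more elaborate, so treat this as a simplified stand-in rather than the patented scheme.

```python
# Illustrative sketch: deterministic, pseudorandom placement of an
# authority onto candidate owner nodes via rendezvous hashing.

import hashlib

def _weight(authority_id: int, node: str) -> int:
    h = hashlib.sha256(f"{authority_id}:{node}".encode()).digest()
    return int.from_bytes(h[:8], "big")

def place_authority(authority_id: int, nodes: list, copies: int = 2):
    """Deterministically rank nodes; owner first, then recording peers."""
    ranked = sorted(nodes, key=lambda n: _weight(authority_id, n),
                    reverse=True)
    return ranked[:copies]

# Every node holding this function and the same reachable-node list
# arrives at the same owner list:
print(place_authority(7, ["node-a", "node-b", "node-c", "node-d"]))
```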
In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to arrive at the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines. Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss. In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or Fibre Channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the Internet. Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine).
In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments. As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later in NAND, while background rebalancing operations are persisted directly to NAND. Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, the manufacturer, the hardware supply chain, and ongoing quality control monitoring infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades. In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments. FIG.2Cis a multiple level block diagram, showing contents of a storage node150and contents of a non-volatile solid state storage152of the storage node150. Data is communicated to and from the storage node150by a network interface controller (‘NIC’)202in some embodiments. Each storage node150has a CPU156, and one or more non-volatile solid state storage152, as discussed above. Moving down one level inFIG.2C, each non-volatile solid state storage152has a relatively fast non-volatile solid state memory, such as nonvolatile random access memory (‘NVRAM’)204, and flash memory206. In some embodiments, NVRAM204may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that supports being written far more often than it is read. Moving down another level inFIG.2C, the NVRAM204is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM)216, backed up by energy reserve218.
Energy reserve218provides sufficient electrical power to keep the DRAM216powered long enough for contents to be transferred to the flash memory206in the event of power failure. In some embodiments, energy reserve218is a capacitor, super-capacitor, battery, or other device, that supplies sufficient energy to enable the transfer of the contents of DRAM216to a stable storage medium in the case of power loss. The flash memory206is implemented as multiple flash dies222, which may be referred to as packages of flash dies222or an array of flash dies222. It should be appreciated that the flash dies222could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e., multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc.

In the embodiment shown, the non-volatile solid state storage152has a controller212or other processor, and an input output (I/O) port210coupled to the controller212. I/O port210is coupled to the CPU156and/or the network interface controller202of the flash storage node150. Flash input output (I/O) port220is coupled to the flash dies222, and a direct memory access unit (DMA)214is coupled to the controller212, the DRAM216and the flash dies222. In the embodiment shown, the I/O port210, controller212, DMA unit214and flash I/O port220are implemented on a programmable logic device (‘PLD’)208, e.g., a field programmable gate array (FPGA). In this embodiment, each flash die222has pages, organized as sixteen kB (kilobyte) pages224, and a register226through which data can be written to or read from the flash die222. In further embodiments, other types of solid-state memory are used in place of, or in addition to, the flash memory illustrated within flash die222.

Storage clusters161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes150are part of a collection that creates the storage cluster161. Each storage node150owns a slice of data and the computing required to provide the data. Multiple storage nodes150cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc.

The storage units152described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node150is shifted into a storage unit152, transforming the storage unit152into a combination of storage unit152and storage node150. Placing computing (relative to storage data) into the storage unit152places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices.
In a storage cluster161, as described herein, multiple controllers in multiple storage units152and/or storage nodes150cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on).

FIG.2Dshows a storage server environment, which uses embodiments of the storage nodes150and storage units152ofFIGS.2A-C. In this version, each storage unit152has a processor such as controller212(seeFIG.2C), an FPGA (field programmable gate array), flash memory206, and NVRAM204(which is super-capacitor backed DRAM216, seeFIGS.2B and2C) on a PCIe (peripheral component interconnect express) board in a chassis138(seeFIG.2A). The storage unit152may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two storage units152may fail and the device will continue with no data loss.

The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM204is a contiguous block of reserved memory in the storage unit152DRAM216, and is backed by NAND flash. NVRAM204is logically divided into multiple memory regions, written in duplicate as spools (e.g., a spool region). Space within the NVRAM204spools is managed by each authority168independently. Each device provides an amount of storage space to each authority168. That authority168further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a storage unit152fails, onboard super-capacitors provide a short duration of power hold up. During this holdup interval, the contents of the NVRAM204are flushed to flash memory206. On the next power-on, the contents of the NVRAM204are recovered from the flash memory206.

As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities168. This distribution of logical control is shown inFIG.2Das a host controller242, mid-tier controller244and storage unit controller(s)246. Management of the control plane and the storage plane is treated independently, although parts may be physically co-located on the same blade. Each authority168effectively serves as an independent controller. Each authority168provides its own data and metadata structures, its own background workers, and maintains its own lifecycle.

FIG.2Eis a blade252hardware block diagram, showing a control plane254, compute and storage planes256,258, and authorities168interacting with underlying physical resources, using embodiments of the storage nodes150and storage units152ofFIGS.2A-Cin the storage server environment ofFIG.2D. The control plane254is partitioned into a number of authorities168which can use the compute resources in the compute plane256to run on any of the blades252. The storage plane258is partitioned into a set of devices, each of which provides access to flash206and NVRAM204resources. In one embodiment, the compute plane256may perform the operations of a storage array controller, as described herein, on one or more devices of the storage plane258(e.g., a storage array). In the compute and storage planes256,258ofFIG.2E, the authorities168interact with the underlying physical resources (i.e., devices). From the point of view of an authority168, its resources are striped over all of the physical devices.
From the point of view of a device, it provides resources to all authorities168, irrespective of where the authorities happen to run. Each authority168has been allocated one or more partitions260of storage memory in the storage units152, e.g., partitions260in flash memory206and NVRAM204. Each authority168uses those allocated partitions260that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority168could have a larger number of partitions260or larger sized partitions260in one or more storage units152than one or more other authorities168.

FIG.2Fdepicts elasticity software layers in blades252of a storage cluster, in accordance with some embodiments. In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module270runs the three identical layers of processes depicted inFIG.2F. Storage managers274execute read and write requests from other blades252for data and metadata stored in local storage unit152NVRAM204and flash206. Authorities168fulfill client requests by issuing the necessary reads and writes to the blades252on whose storage units152the corresponding data or metadata resides. Endpoints272parse client connection requests received from switch fabric146supervisory software, relay the client connection requests to the authorities168responsible for fulfillment, and relay the authorities'168responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking.

Still referring toFIG.2F, authorities168running in the compute modules270of a blade252perform the internal operations required to fulfill client requests. One feature of elasticity is that authorities168are stateless, i.e., they cache active data and metadata in their own blades'252DRAMs for fast access, but the authorities store every update in their NVRAM204partitions on three separate blades252until the update has been written to flash206. All the storage system writes to NVRAM204are in triplicate to partitions on three separate blades252in some embodiments (a sketch of this rule appears after this passage). With triple-mirrored NVRAM204and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades252with no loss of data, metadata, or access to either.

Because authorities168are stateless, they can migrate between blades252. Each authority168has a unique identifier. NVRAM204and flash206partitions are associated with authorities'168identifiers, not with the blades252on which they are running in some embodiments. Thus, when an authority168migrates, the authority168continues to manage the same storage partitions from its new location. When a new blade252is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's252storage for use by the system's authorities168, migrating selected authorities168to the new blade252, starting endpoints272on the new blade252and including them in the switch fabric's146client connection distribution algorithm.
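The following minimal sketch illustrates the triple-mirroring rule noted above: an update is acknowledged only after it has been staged in NVRAM partitions on three separate blades. The blade interface and method names are hypothetical, not drawn from the embodiments.

```python
def write_update(update: bytes, partition_id: int, blades: list) -> bool:
    """Stage an update in NVRAM on three distinct blades before acknowledging."""
    targets = blades[:3]                 # three separate blades (hypothetical objects)
    if len(targets) < 3:
        return False                     # cannot satisfy the triplication rule
    acks = sum(1 for blade in targets
               if blade.nvram_write(partition_id, update))
    return acks == 3                     # acknowledge the client only on full triplication
```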
From their new locations, migrated authorities168persist the contents of their NVRAM204partitions on flash206, process read and write requests from other authorities168, and fulfill the client requests that endpoints272direct to them. Similarly, if a blade252fails or is removed, the system redistributes its authorities168among the system's remaining blades252. The redistributed authorities168continue to perform their original functions from their new locations.

FIG.2Gdepicts authorities168and storage resources in blades252of a storage cluster, in accordance with some embodiments. Each authority168is exclusively responsible for a partition of the flash206and NVRAM204on each blade252. The authority168manages the content and integrity of its partitions independently of other authorities168. Authorities168compress incoming data and preserve it temporarily in their NVRAM204partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash206partitions. As the authorities168write data to flash206, storage managers274perform the necessary flash translation to optimize write performance and maximize media longevity. In the background, authorities168“garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities'168partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions.

The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database-based system that provides authentication, directory, policy, and other services in a WINDOWS™ environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory. In some embodiments, a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network. The Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism.

AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent). A RESTful API (application programming interface) breaks down a transaction to create a series of small modules. Each module addresses a particular underlying part of the transaction. The control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’). The ACL is a list of permissions attached to an object, and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects.
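As a purely illustrative sketch of how such an ACL check can work, the following maps each object to the operations permitted per principal; the entry format, names, and operations are assumptions rather than the embodiments' format.

```python
# object -> {principal: set of allowed operations} (hypothetical entries)
ACL = {
    "/bucket/report.csv": {"alice": {"read", "write"}, "audit-svc": {"read"}},
}

def is_allowed(obj: str, principal: str, op: str) -> bool:
    """Grant access only if the object's ACL lists the operation for the principal."""
    return op in ACL.get(obj, {}).get(principal, set())

assert is_allowed("/bucket/report.csv", "audit-svc", "read")
assert not is_allowed("/bucket/report.csv", "audit-svc", "write")
```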
The systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router.

The software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments. The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key management policies, such as encryption key rotation. In addition, the system may support dynamic root passwords or some variation of dynamically changing passwords.

FIG.3Asets forth a diagram of a storage system306that is coupled for data communications with a cloud services provider302in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Amay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2G. In some embodiments, the storage system306depicted inFIG.3Amay be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments.

In the example depicted inFIG.3A, the storage system306is coupled to the cloud services provider302via a data communications link304. The data communications link304may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or local area network (‘LAN’), or as some other mechanism capable of transporting digital information between the storage system306and the cloud services provider302. Such a data communications link304may be fully wired, fully wireless, or some aggregation of wired and wireless data communications pathways.
In such an example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using one or more data communications protocols. For example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using the handheld device transfer protocol (‘HDTP’), hypertext transfer protocol (‘HTTP’), internet protocol (‘IP’), real-time transfer protocol (‘RTP’), transmission control protocol (‘TCP’), user datagram protocol (‘UDP’), wireless application protocol (‘WAP’), or other protocol. The cloud services provider302depicted inFIG.3Amay be embodied, for example, as a system and computing environment that provides services to users of the cloud services provider302through the sharing of computing resources via the data communications link304. The cloud services provider302may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on. The shared pool of configurable resources may be rapidly provisioned and released to a user of the cloud services provider302with minimal management effort. Generally, the user of the cloud services provider302is unaware of the exact computing resources utilized by the cloud services provider302to provide the services. Although in many cases such a cloud services provider302may be accessible via the Internet, readers of skill in the art will recognize that any system that abstracts the use of shared resources to provide services to a user through any data communications link may be considered a cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be configured to provide a variety of services to the storage system306and users of the storage system306through the implementation of various service models. For example, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the implementation of an infrastructure as a service (‘IaaS’) service model where the cloud services provider302offers computing infrastructure such as virtual machines and other resources as a service to subscribers. In addition, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the implementation of a platform as a service (‘PaaS’) service model where the cloud services provider302offers a development environment to application developers. Such a development environment may include, for example, an operating system, programming-language execution environment, database, web server, or other components that may be utilized by application developers to develop and run software solutions on a cloud platform. Furthermore, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the implementation of a software as a service (‘SaaS’) service model where the cloud services provider302offers application software, databases, as well as the platforms that are used to run the applications to the storage system306and users of the storage system306, providing the storage system306and users of the storage system306with on-demand software and eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application. 
The cloud services provider302may be further configured to provide services to the storage system306and users of the storage system306through the implementation of an authentication as a service (‘AaaS’) service model where the cloud services provider302offers authentication services that can be used to secure access to applications, data sources, or other resources. The cloud services provider302may also be configured to provide services to the storage system306and users of the storage system306through the implementation of a storage as a service model where the cloud services provider302offers access to its storage infrastructure for use by the storage system306and users of the storage system306. Readers will appreciate that the cloud services provider302may be configured to provide additional services to the storage system306and users of the storage system306through the implementation of additional service models, as the service models described above are included only for explanatory purposes and in no way represent a limitation of the services that may be offered by the cloud services provider302or a limitation as to the service models that may be implemented by the cloud services provider302.

In the example depicted inFIG.3A, the cloud services provider302may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud. In an embodiment in which the cloud services provider302is embodied as a private cloud, the cloud services provider302may be dedicated to providing services to a single organization rather than providing services to multiple organizations. In an embodiment where the cloud services provider302is embodied as a public cloud, the cloud services provider302may provide services to multiple organizations. Public cloud and private cloud deployment models may differ and may come with various advantages and disadvantages. For example, because a public cloud deployment involves the sharing of a computing infrastructure across different organizations, such a deployment may not be ideal for organizations with security concerns, mission-critical workloads, uptime requirements, and so on. While a private cloud deployment can address some of these issues, a private cloud deployment may require on-premises staff to manage the private cloud. In still alternative embodiments, the cloud services provider302may be embodied as a mix of private and public cloud services in a hybrid cloud deployment.

Although not explicitly depicted inFIG.3A, readers will appreciate that additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system306and users of the storage system306. For example, the storage system306may be coupled to (or even include) a cloud storage gateway. Such a cloud storage gateway may be embodied, for example, as a hardware-based or software-based appliance that is located on premises with the storage system306. Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage system306and remote, cloud-based storage that is utilized by the storage system306. Through the use of a cloud storage gateway, organizations may move primary iSCSI or NAS storage to the cloud services provider302, thereby enabling the organization to save space on their on-premises storage systems.
Such a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate commands into REST-space protocols that facilitate communications with the cloud services provider302.

In order to enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider302. In order to successfully migrate data, applications, or other elements to the cloud services provider's302environment, middleware such as a cloud migration tool may be utilized to bridge gaps between the cloud services provider's302environment and an organization's environment. Such cloud migration tools may also be configured to address potentially high network costs and long transfer times associated with migrating large volumes of data to the cloud services provider302, as well as addressing security concerns associated with migrating sensitive data to the cloud services provider302over data communications networks.

In order to further enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow. Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components. The cloud orchestrator can simplify the inter-component communication and connections to ensure that links are correctly configured and maintained.

In the example depicted inFIG.3A, and as described briefly above, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the usage of a SaaS service model where the cloud services provider302offers application software, databases, as well as the platforms that are used to run the applications to the storage system306and users of the storage system306, providing the storage system306and users of the storage system306with on-demand software and eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application. Such applications may take many forms in accordance with various embodiments of the present disclosure. For example, the cloud services provider302may be configured to provide access to data analytics applications to the storage system306and users of the storage system306. Such data analytics applications may be configured, for example, to receive telemetry data phoned home by the storage system306. Such telemetry data may describe various operating characteristics of the storage system306and may be analyzed, for example, to determine the health of the storage system306, to identify workloads that are executing on the storage system306, to predict when the storage system306will run out of various resources, to recommend configuration changes, hardware or software upgrades, workflow migrations, or other actions that may improve the operation of the storage system306.
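As one purely illustrative example of such telemetry analysis, the following sketch linearly extrapolates capacity usage to estimate when a system will run out of space; the sample data and function are assumptions, not the actual analytics of any provider.

```python
def days_until_full(samples: list, capacity_bytes: float) -> float:
    """samples: (day, bytes_used) telemetry points, oldest first."""
    (d0, u0), (d1, u1) = samples[0], samples[-1]
    rate = (u1 - u0) / (d1 - d0)          # observed growth in bytes per day
    if rate <= 0:
        return float("inf")               # usage is flat or shrinking
    return (capacity_bytes - u1) / rate

# 40 TB used on day 0, 52 TB on day 30, 100 TB capacity -> roughly 120 days left.
print(days_until_full([(0, 40e12), (30, 52e12)], capacity_bytes=100e12))
```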
The cloud services provider302may also be configured to provide access to virtualized computing environments to the storage system306and users of the storage system306. Such virtualized computing environments may be embodied, for example, as a virtual machine or other virtualized computer hardware platforms, virtual storage devices, virtualized computer network resources, and so on. Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others.

For further explanation,FIG.3Bsets forth a diagram of a storage system306in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Bmay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2Gas the storage system may include many of the components described above.

The storage system306depicted inFIG.3Bmay include storage resources308, which may be embodied in many forms. For example, in some embodiments the storage resources308can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate. In some embodiments, the storage resources308may include 3D crosspoint non-volatile memory in which bit storage is based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. In some embodiments, the storage resources308may include flash memory, including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, and others. In some embodiments, the storage resources308may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM, in which data is stored through the use of magnetic storage elements. In some embodiments, the example storage resources308may include non-volatile phase-change memory (‘PCM’) that may have the ability to hold multiple bits in a single cell as cells can achieve a number of distinct intermediary states. In some embodiments, the storage resources308may include quantum memory that allows for the storage and retrieval of photonic quantum information. In some embodiments, the example storage resources308may include resistive random-access memory (‘ReRAM’) in which data is stored by changing the resistance across a dielectric solid-state material. In some embodiments, the storage resources308may include storage class memory (‘SCM’) in which solid-state nonvolatile memory may be manufactured at a high density using some combination of sub-lithographic patterning techniques, multiple bits per cell, multiple layers of devices, and so on. Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others.

The storage resources308depicted inFIG.3Bmay be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others. The storage resources308depicted inFIG.3Bmay include various forms of storage-class memory (‘SCM’).
SCM may effectively treat fast, non-volatile memory (e.g., NAND flash) as an extension of DRAM such that an entire dataset may be treated as an in-memory dataset that resides entirely in DRAM. SCM may include non-volatile media such as, for example, NAND flash. Such NAND flash may be accessed utilizing NVMe that can use the PCIe bus as its transport, providing for relatively low access latencies compared to older protocols. In fact, the network protocols used for SSDs in all-flash arrays can include NVMe over Ethernet (RoCE, NVMe TCP), NVMe over Fibre Channel (NVMe FC), NVMe over InfiniBand, iWARP, and others that make it possible to treat fast, non-volatile memory as an extension of DRAM. In view of the fact that DRAM is often byte-addressable while fast, non-volatile memory such as NAND flash is block-addressable, a controller software/hardware stack may be needed to convert the block data to the bytes that are stored in the media. Examples of media and software that may be used as SCM can include, for example, 3D XPoint, Intel Memory Drive Technology, Samsung's Z-SSD, and others.

The example storage system306depicted inFIG.3Bmay implement a variety of storage architectures. For example, storage systems in accordance with some embodiments of the present disclosure may utilize block storage where data is stored in blocks, and each block essentially acts as an individual hard drive. Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects. Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level). Storage systems in accordance with some embodiments of the present disclosure may utilize file storage in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format.

The example storage system306depicted inFIG.3Bmay be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof. In a scale-up model, additional storage may be added by adding additional storage devices. In a scale-out model, however, additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on.

The storage system306depicted inFIG.3Balso includes communications resources310that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306. The communications resources310may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system. For example, the communications resources310can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over FC networks. The communications resources310can also include FC over ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks.
The communications resources310can also include InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters. The communications resources310can also include NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI express (‘PCIe’) bus may be accessed. The communications resources310can also include mechanisms for accessing storage resources308within the storage system306utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources308within the storage system306to host bus adapters within the storage system306, internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources308within the storage system306, and other communications resources that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306.

The storage system306depicted inFIG.3Balso includes processing resources312that may be useful in executing computer program instructions and performing other computational tasks within the storage system306. The processing resources312may include one or more application-specific integrated circuits (‘ASICs’) that are customized for some particular purpose as well as one or more central processing units (‘CPUs’). The processing resources312may also include one or more digital signal processors (‘DSPs’), one or more field-programmable gate arrays (‘FPGAs’), one or more systems on a chip (‘SoCs’), or other form of processing resources312. The storage system306may utilize the processing resources312to perform a variety of tasks including, but not limited to, supporting the execution of software resources314that will be described in greater detail below.

The storage system306depicted inFIG.3Balso includes software resources314that, when executed by processing resources312within the storage system306, may perform various tasks. The software resources314may include, for example, one or more modules of computer program instructions that when executed by processing resources312within the storage system306are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems. Readers will appreciate that such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways.
Such data protection techniques can include, for example, data archiving techniques that cause data that is no longer actively used to be moved to a separate storage device or separate storage system for long-term retention, data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe with the storage system, data replication techniques through which data stored in the storage system is replicated to another storage system such that the data may be accessible via multiple storage systems, data snapshotting techniques through which the state of data within the storage system is captured at various points in time, data and database cloning techniques through which duplicate copies of data and databases may be created, and other data protection techniques. Through the use of such data protection techniques, business continuity and disaster recovery objectives may be met as a failure of the storage system may not result in the loss of data stored in the storage system.

The software resources314may also include software that is useful in implementing software-defined storage (‘SDS’). In such an example, the software resources314may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware. Such software resources314may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware.

The software resources314may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage resources308in the storage system306. For example, the software resources314may include software modules that carry out various data reduction techniques such as, for example, data compression, data deduplication (a sketch of which follows this passage), and others. The software resources314may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resource308, software modules that perform data migration operations to migrate data within a storage system, as well as software modules that perform other functions. Such software resources314may be embodied as one or more software containers or in many other ways.

Readers will appreciate that the presence of such software resources314may provide for an improved user experience of the storage system306, an expansion of functionality supported by the storage system306, and many other benefits. Consider the specific example of the software resources314carrying out data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe. In such an example, the systems described herein may more reliably (and with less burden placed on the user) perform backup operations relative to interactive backup management systems that require high degrees of user interactivity, offer less robust automation and feature sets, and so on. The storage systems described above may carry out intelligent data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe.
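The following is a minimal sketch of the content-addressed deduplication mentioned above: identical chunks are stored once and subsequently referenced by their hash. The chunk granularity and store layout are assumptions for illustration only, not the embodiments' implementation.

```python
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}   # digest -> stored chunk bytes
        self.refs = {}     # digest -> reference count

    def put(self, chunk: bytes) -> str:
        """Store a chunk once; duplicates only increment a reference count."""
        key = hashlib.sha256(chunk).hexdigest()
        if key not in self.chunks:
            self.chunks[key] = chunk
        self.refs[key] = self.refs.get(key, 0) + 1
        return key                       # caller keeps this handle to the chunk

store = DedupStore()
a = store.put(b"same data")
b = store.put(b"same data")              # no second copy is stored
assert a == b and len(store.chunks) == 1
```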
For example, the storage systems described above may be configured to examine each backup to avoid restoring the storage system to an undesirable state. Consider an example in which malware infects the storage system. In such an example, the storage system may include software resources314that can scan each backup to identify backups that were captured before the malware infected the storage system and those backups that were captured after the malware infected the storage system. In such an example, the storage system may restore itself from a backup that does not include the malware, or at least not restore the portions of a backup that contained the malware. In such an example, the storage system may include software resources314that can scan each backup to identify the presence of malware (or a virus, or some other undesirable element), for example, by identifying write operations that were serviced by the storage system and originated from a network subnet that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and originated from a user that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and examining the content of the write operation against fingerprints of the malware, and in many other ways.

Readers will further appreciate that the backups (often in the form of one or more snapshots) may also be utilized to perform rapid recovery of the storage system. Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources314within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system. In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time (a sketch of such an inference follows this passage).

Readers will appreciate that the various components depicted inFIG.3Bmay be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system306while also reducing various costs associated with the establishment and operation of the storage system306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways.

Readers will appreciate that the storage system306depicted inFIG.3Bmay be useful for supporting various types of software applications.
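As a purely illustrative sketch of the activity fingerprint described above, the following flags a possible ransomware event when no reads or writes have arrived for a predetermined quiet period; the threshold and interface are assumptions, not the embodiments' detection logic.

```python
import time

class ActivityMonitor:
    def __init__(self, quiet_threshold_s: float = 600.0):
        self.quiet_threshold_s = quiet_threshold_s   # hypothetical tuning value
        self.last_io = time.monotonic()

    def record_io(self) -> None:
        """Call on every serviced read or write."""
        self.last_io = time.monotonic()

    def ransomware_suspected(self) -> bool:
        """True if the system has been suspiciously idle for too long."""
        return time.monotonic() - self.last_io > self.quiet_threshold_s
```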
For example, the storage system306may be useful in supporting artificial intelligence (‘AI’) applications, database applications, DevOps projects, electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems (‘PACS’) applications, software development applications, virtual reality applications, augmented reality applications, and many other types of applications by providing storage resources to such applications.

The storage systems described above may operate to support a wide variety of applications. In view of the fact that the storage systems include compute resources, storage resources, and a wide variety of other resources, the storage systems may be well suited to support applications that are resource intensive such as, for example, AI applications. Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. Examples of such AI applications can include IBM Watson, Microsoft Oxford, Google DeepMind, Baidu Minwa, and others.

The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation. Reinforcement learning may be employed to find the best possible behavior or path that a particular software application or machine should take in a specific situation. Reinforcement learning differs from other areas of machine learning (e.g., supervised learning, unsupervised learning) in that correct input/output pairs need not be presented for reinforcement learning and sub-optimal actions need not be explicitly corrected.

In addition to the resources already described, the storage systems described above may also include graphics processing units (‘GPUs’), occasionally referred to as visual processing units (‘VPUs’). Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Such GPUs may be included within any of the computing devices that are part of the storage systems described above, including as one of many individually scalable components of a storage system, where other examples of individually scalable components of such storage system can include storage components, memory components, compute components (e.g., CPUs, FPGAs, ASICs), networking components, software components, and others. In addition to GPUs, the storage systems described above may also include neural network processors (‘NNPs’) for use in various aspects of neural network processing. Such NNPs may be used in place of (or in addition to) GPUs and may also be independently scalable.
As described above, the storage systems described herein may be configured to support artificial intelligence applications, machine learning applications, big data analytics applications, and many other types of applications. The rapid growth in these sorts of applications is being driven by three technologies: deep learning (DL), GPU processors, and Big Data. Deep learning is a computing model that makes use of massively parallel neural networks inspired by the human brain. Instead of experts handcrafting software, a deep learning model writes its own software by learning from lots of examples. A GPU is a modern processor with thousands of cores, well-suited to run algorithms that loosely represent the parallel nature of the human brain.

Advances in deep neural networks have ignited a new wave of algorithms and tools for data scientists to tap into their data with artificial intelligence (AI). With improved algorithms, larger data sets, and various frameworks (including open-source software libraries for machine learning across a range of tasks), data scientists are tackling new use cases like autonomous driving vehicles, natural language processing and understanding, computer vision, machine reasoning, strong AI, and many others. Applications of such techniques may include: machine and vehicular object detection, identification and avoidance; visual recognition, classification and tagging; algorithmic financial trading strategy performance management; simultaneous localization and mapping; predictive maintenance of high-value machinery; prevention against cyber security threats; expertise automation; image recognition and classification; question answering; robotics; text analytics (extraction, classification) and text generation and translation; and many others.

Applications of AI techniques have materialized in a wide array of products including, for example, Amazon Echo's speech recognition technology that allows users to talk to their machines, Google Translate™ which allows for machine-based language translation, Spotify's Discover Weekly that provides recommendations on new songs and artists that a user may like based on the user's usage and traffic analysis, Quill's text generation offering that takes structured data and turns it into narrative stories, Chatbots that provide real-time, contextually specific answers to questions in a dialog format, and many others.

Furthermore, AI may impact a wide variety of industries and sectors. For example, AI solutions may be used in healthcare to take clinical notes, patient files, research data, and other inputs to generate potential treatment options for doctors to explore. Likewise, AI solutions may be used by retailers to personalize consumer recommendations based on a person's digital footprint of behaviors, profile data, or other data.

Training deep neural networks, however, requires both high quality input data and large amounts of computation. GPUs are massively parallel processors capable of operating on large amounts of data simultaneously. When combined into a multi-GPU cluster, a high throughput pipeline may be required to feed input data from storage to the compute engines. Deep learning is more than just constructing and training models. There also exists an entire data pipeline that must be designed for the scale, iteration, and experimentation necessary for a data science team to succeed. Data is the heart of modern AI and deep learning algorithms.
Before training can begin, one problem that must be addressed revolves around collecting the labeled data that is crucial for training an accurate AI model. A full scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high quality data points directly translates to more accurate models and better insights. Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data in a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating, including using a holdback portion of the data that was not used in training in order to evaluate model accuracy on the holdout data (a sketch of this split follows this passage). This lifecycle may apply for any type of parallelized machine learning, not just neural networks or deep learning. For example, standard machine learning frameworks may rely on CPUs instead of GPUs but the data ingest and training workflows may be the same.

Readers will appreciate that a single shared storage data hub creates a coordination point throughout the lifecycle without the need for extra data copies among the ingest, preprocessing, and training stages. Rarely is the ingested data used for only one purpose, and shared storage gives the flexibility to train multiple different models or apply traditional analytics to the data.

Readers will appreciate that each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems). Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns: from small, metadata-heavy files to large files, from random to sequential access patterns, and from low to high concurrency. The storage systems described above may serve as an ideal AI data hub as the systems may service unstructured workloads. In the first stage, data is ideally ingested and stored on to the same data hub that following stages will use, in order to avoid excess data copying. The next two steps can be done on a standard compute server that optionally includes a GPU, and then in the fourth and last stage, full training production jobs are run on powerful GPU-accelerated servers. Often, there is a production pipeline alongside an experimental pipeline operating on the same dataset. Further, the GPU-accelerated servers can be used independently for different models or joined together to train on one larger model, even spanning multiple systems for distributed training. If the shared storage tier is slow, then data must be copied to local storage for each phase, resulting in wasted time staging data onto different servers. The ideal data hub for the AI training pipeline delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently.
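As a purely illustrative sketch of the evaluation step in the pipeline above, the following carves off a holdback portion of the samples that training never sees and estimates accuracy on it; the split fraction and model interface are assumptions.

```python
import random

def split_holdout(samples: list, holdout_frac: float = 0.2, seed: int = 0):
    """Shuffle once, then reserve a holdout set that training never sees."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]      # (training set, holdout set)

def holdout_accuracy(model, holdout: list) -> float:
    """Fraction of holdout samples the trained model labels correctly."""
    correct = sum(1 for features, label in holdout if model(features) == label)
    return correct / len(holdout)
```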
A data scientist works to improve the usefulness of the trained model through a wide variety of approaches: more data, better data, smarter training, and deeper models. In many cases, there will be teams of data scientists sharing the same datasets and working in parallel to produce new and improved training models. Often, there is a team of data scientists working within these phases concurrently on the same shared datasets. Multiple, concurrent workloads of data processing, experimentation, and full-scale training layer the demands of multiple access patterns on the storage tier. In other words, storage cannot just satisfy large file reads, but must contend with a mix of large and small file reads and writes. Finally, with multiple data scientists exploring datasets and models, it may be critical to store data in its native format to provide flexibility for each user to transform, clean, and use the data in a unique way. The storage systems described above may provide a natural shared storage home for the dataset, with data protection redundancy (e.g., by using RAID6) and the performance necessary to be a common access point for multiple developers and multiple experiments. Using the storage systems described above may avoid the need to carefully copy subsets of the data for local work, saving both engineering time and GPU-accelerated server time. These copies become a constant and growing tax as the raw data set and desired transformations constantly update and change.

Readers will appreciate that a fundamental reason why deep learning has seen a surge in success is the continued improvement of models with larger data set sizes. In contrast, classical machine learning algorithms, like logistic regression, stop improving in accuracy at smaller data set sizes. As such, the separation of compute resources and storage resources may also allow independent scaling of each tier, avoiding many of the complexities inherent in managing both together. As the data set size grows or new data sets are considered, a scale out storage system must be able to expand easily. Similarly, if more concurrent training is required, additional GPUs or other compute resources can be added without concern for their internal storage.

Furthermore, the storage systems described above may make building, operating, and growing an AI system easier due to the random read bandwidth provided by the storage systems, the ability of the storage systems to randomly read small files (50 KB) at high rates (meaning that no extra effort is required to aggregate individual data points to make larger, storage-friendly files), the ability of the storage systems to scale capacity and performance as either the dataset grows or the throughput requirements grow, the ability of the storage systems to support files or objects, the ability of the storage systems to tune performance for large or small files (i.e., no need for the user to provision filesystems), the ability of the storage systems to support non-disruptive upgrades of hardware and software even during production model training, and for many other reasons.

Small file performance of the storage tier may be critical as many types of inputs, including text, audio, or images will be natively stored as small files. If the storage tier does not handle small files well, an extra step will be required to pre-process and group samples into larger files. Storage built on top of spinning disks that relies on SSDs as a caching tier may fall short of the performance needed.
Because training with random input batches results in more accurate models, the entire data set must be accessible with full performance. SSD caches only provide high performance for a small subset of the data and will be ineffective at hiding the latency of spinning drives. Although the preceding paragraphs discuss deep learning applications, readers will appreciate that the storage systems described herein may also be part of a distributed deep learning (‘DDL’) platform to support the execution of DDL algorithms. Distributed deep learning can be used to significantly accelerate deep learning with distributed computing on GPUs (or other forms of accelerators or computer program instruction executors), such that parallelism can be achieved. In addition, the output of training machine learning and deep learning models, such as a fully trained machine learning model, may be used for a variety of purposes and in conjunction with other tools. For example, trained machine learning models may be used in conjunction with tools like Core ML to integrate a broad variety of machine learning model types into an application. In fact, trained models may be run through Core ML converter tools and inserted into a custom application that can be deployed on compatible devices. The storage systems described above may also be paired with other technologies such as TensorFlow, an open-source software library for dataflow programming across a range of tasks that may be used for machine learning applications such as neural networks, to facilitate the development of such machine learning models, applications, and so on.

Readers will further appreciate that the systems described above may be deployed in a variety of ways to support the democratization of AI, as AI becomes more available for mass consumption. The democratization of AI may include, for example, the ability to offer AI as a Platform-as-a-Service, the growth of artificial general intelligence offerings, the proliferation of autonomous level 4 and autonomous level 5 vehicles, the availability of autonomous mobile robots, the development of conversational AI platforms, and many others. For example, the systems described above may be deployed in cloud environments, edge environments, or other environments that are useful in supporting the democratization of AI. As part of the democratization of AI, a movement may occur from narrow AI that consists of highly scoped machine learning solutions that target a particular task to artificial general intelligence where the use of machine learning is expanded to handle a broad range of use cases that could essentially perform any intelligent task that a human could perform and could learn dynamically, much like a human. The storage systems described above may also be used in a neuromorphic computing environment. Neuromorphic computing is a form of computing that mimics brain cells. To support neuromorphic computing, an architecture of interconnected “neurons” replaces traditional computing models with low-powered signals that go directly between neurons for more efficient computation. Neuromorphic computing may make use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system, as well as analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems for perception, motor control, or multisensory integration.
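As one concrete, hedged illustration of the TensorFlow pairing mentioned above, the following fragment defines and trains a small neural network using the standard tf.keras API on synthetic in-memory data; the model shape, the random data, and the training settings are arbitrary choices for illustration, not recommendations from any system described herein.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for data served from a shared storage data hub.
x = np.random.rand(256, 8).astype("float32")
y = (x.sum(axis=1) > 4.0).astype("int32")

# A tiny two-layer classifier; the shape is built on first use.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```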
Readers will appreciate that the storage systems described above may be configured to support the storage or use of (among other types of data) blockchains. Such blockchains may be embodied as a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block in a blockchain may contain a hash pointer as a link to a previous block, a timestamp, transaction data, and so on. Blockchains may be designed to be resistant to modification of the data and can serve as an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. This makes blockchains potentially suitable for the recording of events, medical records, and other records management activities, such as identity management, transaction processing, and others. In addition to supporting the storage and use of blockchain technologies, the storage systems described above may also support the storage and use of derivative items such as, for example, open source blockchains and related tools that are part of the IBM™ Hyperledger project, permissioned blockchains in which a certain number of trusted parties are allowed to access the blockchain, blockchain products that enable developers to build their own distributed ledger projects, and others. Readers will appreciate that blockchain technologies may impact a wide variety of industries and sectors. For example, blockchain technologies may be used in real estate transactions as blockchain-based contracts whose use can eliminate the need for third parties and enable self-executing actions when conditions are met. Likewise, universal health records can be created by aggregating and placing a person's health history onto a blockchain ledger for any healthcare provider, or permissioned health care providers, to access and update.

Readers will appreciate that the usage of blockchains is not limited to financial transactions, contracts, and the like. In fact, blockchains may be leveraged to enable the decentralized aggregation, ordering, timestamping and archiving of any type of information, including structured data, correspondence, documentation, or other data. Through the usage of blockchains, participants can provably and permanently agree on exactly what data was entered, when and by whom, without relying on a trusted intermediary. For example, SAP's recently launched blockchain platform, which supports MultiChain and Hyperledger Fabric, targets a broad range of supply chain and other non-financial applications. One way to use a blockchain for recording data is to embed each piece of data directly inside a transaction. Every blockchain transaction may be digitally signed by one or more parties, replicated to a plurality of nodes, ordered and timestamped by the chain's consensus algorithm, and stored permanently in a tamper-proof way. Any data within the transaction will therefore be stored identically but independently by every node, along with a proof of who wrote it and when. The chain's users are able to retrieve this information at any future time. This type of storage may be referred to as on-chain storage. On-chain storage may not be particularly practical, however, when attempting to store a very large dataset. As such, in accordance with embodiments of the present disclosure, blockchains and the storage systems described herein may be leveraged to support on-chain storage of data as well as off-chain storage of data.
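Before turning to off-chain storage, the on-chain record structure described above may be sketched as follows; SHA-256 is assumed as the linking hash, and the field layout is illustrative rather than that of any particular ledger.

```python
import hashlib
import json
import time

def make_block(prev_hash, transactions):
    # Each block links to its predecessor through a hash pointer,
    # carries a timestamp, and records the transaction data.
    block = {
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "transactions": transactions,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(prev_hash="0" * 64, transactions=["genesis"])
block1 = make_block(prev_hash=genesis["hash"], transactions=["a->b: 5"])

# Tampering with an earlier block would break every later hash pointer,
# which is what makes the list resistant to modification.
assert block1["prev_hash"] == genesis["hash"]
```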
Off-chain storage of data can be implemented in a variety of ways and can occur when the data itself is not stored within the blockchain. For example, in one embodiment, a hash function may be utilized and the data itself may be fed into the hash function to generate a hash value. In such an example, the hashes of large pieces of data may be embedded within transactions, instead of the data itself. Each hash may serve as a commitment to its input data, with the data itself being stored outside of the blockchain. Readers will appreciate that any blockchain participant that needs an off-chain piece of data cannot reproduce the data from its hash, but if the data can be retrieved in some other way, then the on-chain hash serves to confirm who created it and when. Just like regular on-chain data, the hash may be embedded inside a digitally signed transaction, which was included in the chain by consensus.

Readers will appreciate that, in other embodiments, alternatives to blockchains may be used to facilitate the decentralized storage of information. For example, one alternative to a blockchain that may be used is a blockweave. While conventional blockchains store every transaction to achieve validation, a blockweave permits secure decentralization without the usage of the entire chain, thereby enabling low-cost on-chain storage of data. Such blockweaves may utilize a consensus mechanism that is based on proof of access (PoA) and proof of work (PoW). While typical PoW systems only depend on the previous block in order to generate each successive block, the PoA algorithm may incorporate data from a randomly chosen previous block. Combined with the blockweave data structure, miners do not need to store all blocks (forming a blockchain), but rather can store any previous blocks, forming a weave of blocks (a blockweave). This enables increased scalability and speed, and it reduces the cost of data storage, in part because miners need not store all blocks. It also results in a substantial reduction in the amount of electricity consumed during the mining process because, as the network expands, a blockweave demands less and less hashing power for consensus as data is added to the system. Furthermore, blockweaves may be deployed on a decentralized storage network in which incentives are created to encourage rapid data sharing. Such decentralized storage networks may also make use of blockshadowing techniques, where nodes only send a minimal block “shadow” to other nodes that allows peers to reconstruct a full block, instead of transmitting the full block itself.

The storage systems described above may, either alone or in combination with other computing devices, be used to support in-memory computing applications. In-memory computing involves the storage of information in RAM that is distributed across a cluster of computers. In-memory computing helps business customers, including retailers, banks and utilities, to quickly detect patterns, analyze massive data volumes on the fly, and perform their operations quickly. Readers will appreciate that the storage systems described above, especially those that are configurable with customizable amounts of processing resources, storage resources, and memory resources (e.g., those systems that include blades containing configurable amounts of each type of resource), may be configured in a way so as to provide an infrastructure that can support in-memory computing.
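As a brief aside, the hash-based off-chain commitment scheme described above may be sketched as follows; SHA-256 is again assumed, and the transaction dictionary is a hypothetical stand-in for a digitally signed, consensus-ordered ledger entry.

```python
import hashlib

def commit(data: bytes) -> str:
    # Only this digest is embedded in the (signed, replicated) transaction;
    # the data itself is stored off-chain, e.g., on a storage system.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, on_chain_hash: str) -> bool:
    # A participant that retrieves the data by some other means can use
    # the on-chain hash to confirm it is the committed content.
    return hashlib.sha256(data).hexdigest() == on_chain_hash

payload = b"large dataset stored off-chain"
transaction = {"data_hash": commit(payload)}   # hypothetical ledger entry
assert verify(payload, transaction["data_hash"])
```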
With respect to in-memory computing, the storage systems described above may likewise include component parts (e.g., NVDIMMs, 3D crosspoint storage that provide fast random access memory that is persistent) that can provide for an improved in-memory computing environment as compared to in-memory computing environments that rely on RAM distributed across dedicated servers. In some embodiments, the storage systems described above may be configured to operate as a hybrid in-memory computing environment that includes a universal interface to all storage media (e.g., RAM, flash storage, 3D crosspoint storage). In such embodiments, users may have no knowledge regarding the details of where their data is stored but they can still use the same full, unified API to address data. In such embodiments, the storage system may (in the background) move data to the fastest layer available, including intelligently placing the data in dependence upon various characteristics of the data or in dependence upon some other heuristic. In such an example, the storage systems may even make use of existing products such as Apache Ignite and GridGain to move data between the various storage layers, or the storage systems may make use of custom software to move data between the various storage layers. The storage systems described herein may implement various optimizations to improve the performance of in-memory computing such as, for example, having computations occur as close to the data as possible.

Readers will further appreciate that in some embodiments, the storage systems described above may be paired with other resources to support the applications described above. For example, one infrastructure could include primary compute in the form of servers and workstations that specialize in using general-purpose computing on graphics processing units (‘GPGPU’) to accelerate deep learning applications, interconnected into a computation engine to train parameters for deep neural networks. Each system may have Ethernet external connectivity, InfiniBand external connectivity, some other form of external connectivity, or some combination thereof. In such an example, the GPUs can be grouped for a single large training or used independently to train multiple models. The infrastructure could also include a storage system such as those described above to provide, for example, a scale-out all-flash file or object store through which data can be accessed via high-performance protocols such as NFS, S3, and so on. The infrastructure can also include, for example, redundant top-of-rack Ethernet switches connected to storage and compute via ports in MLAG port channels for redundancy. The infrastructure could also include additional compute in the form of whitebox servers, optionally with GPUs, for data ingestion, pre-processing, and model debugging. Readers will appreciate that additional infrastructures are also possible. Readers will appreciate that the systems described above may be better suited for the applications described above relative to other systems that may include, for example, a distributed direct-attached storage (DDAS) solution deployed in server nodes. Such DDAS solutions may be built for handling large, less sequential accesses but may be less able to handle small, random accesses.
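The background data-placement behavior described above in connection with the hybrid in-memory computing environment may be sketched as follows; the tier names, capacities, and access-count heuristic are hypothetical, chosen only to illustrate moving the hottest data to the fastest available layer.

```python
# Tiers ordered fastest-first; capacities (object counts) are illustrative.
TIERS = [("RAM", 2), ("flash", 4), ("3D-crosspoint", 8)]

def place(objects_by_heat):
    """Assign the hottest objects to the fastest tier with free capacity.

    `objects_by_heat` maps object id -> access count; the access count is a
    simple stand-in for the data characteristics a real system might weigh.
    """
    placement = {}
    ranked = sorted(objects_by_heat, key=objects_by_heat.get, reverse=True)
    for name, capacity in TIERS:
        for obj in ranked:
            used = sum(1 for t in placement.values() if t == name)
            if obj not in placement and used < capacity:
                placement[obj] = name
    return placement

heat = {"a": 90, "b": 75, "c": 12, "d": 3, "e": 1}
print(place(heat))   # {'a': 'RAM', 'b': 'RAM', 'c': 'flash', ...}
```

In a real system the same placement decision would run continuously in the background, behind the unified API, so applications never address a tier directly.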
Readers will further appreciate that the storage systems described above may be utilized to provide a platform for the applications described above that is preferable to the utilization of cloud-based resources as the storage systems may be included in an on-site or in-house infrastructure that is more secure, more locally and internally managed, more robust in feature sets and performance, or otherwise preferable to the utilization of cloud-based resources as part of a platform to support the applications described above. For example, services built on platforms such as IBM's Watson may require a business enterprise to distribute individual user information, such as financial transaction information or identifiable patient records, to other institutions. As such, cloud-based offerings of AI as a service may be less desirable than internally managed and offered AI as a service that is supported by storage systems such as the storage systems described above, for a wide array of technical reasons as well as for various business reasons.

Readers will appreciate that the storage systems described above, either alone or in coordination with other computing machinery, may be configured to support other AI-related tools. For example, the storage systems may make use of tools like ONNX or other open neural network exchange formats that make it easier to transfer models written in different AI frameworks. Likewise, the storage systems may be configured to support tools like Amazon's Gluon that allow developers to prototype, build, and train deep learning models. In fact, the storage systems described above may be part of a larger platform, such as IBM™ Cloud Private for Data, that includes integrated data science, data engineering and application building services. Such platforms may seamlessly collect, organize, secure, and analyze data across an enterprise, as well as simplify hybrid data management, unified data governance and integration, data science and business analytics with a single solution.

Readers will further appreciate that the storage systems described above may also be deployed as an edge solution. Such an edge solution may be in place to optimize cloud computing systems by performing data processing at the edge of the network, near the source of the data. Edge computing can push applications, data and computing power (i.e., services) away from centralized points to the logical extremes of a network. Through the use of edge solutions such as the storage systems described above, computational tasks may be performed using the compute resources provided by such storage systems, data may be stored using the storage resources of the storage system, and cloud-based services may be accessed through the use of various resources of the storage system (including networking resources). By performing computational tasks on the edge solution, storing data on the edge solution, and generally making use of the edge solution, the consumption of expensive cloud-based resources may be avoided and, in fact, performance improvements may be experienced relative to a heavier reliance on cloud-based resources. While many tasks may benefit from the utilization of an edge solution, some particular uses may be especially suited for deployment in such an environment. For example, devices like drones, autonomous cars, robots, and others may require extremely rapid processing, so fast, in fact, that sending data up to a cloud environment and back to receive data processing support may simply be too slow.
Likewise, machines like locomotives and gas turbines that generate large amounts of information through the use of a wide array of data-generating sensors may benefit from the rapid data processing capabilities of an edge solution. As an additional example, some IoT devices such as connected video cameras may not be well-suited for the utilization of cloud-based resources as it may be impractical to send the data to the cloud, not only from a privacy, security, or financial perspective, but simply because of the sheer volume of data that is involved. As such, many tasks that rely on data processing, storage, or communications may be better served by platforms that include edge solutions such as the storage systems described above.

Consider a specific example of inventory management in a warehouse, distribution center, or similar location. A large inventory, warehousing, shipping, order-fulfillment, manufacturing or other operation has a large amount of inventory on inventory shelves, and high resolution digital cameras that produce a firehose of large data. All of this data may be taken into an image processing system, which may reduce the amount of data to a firehose of small data. All of the small data may be stored on-premises in storage. The on-premises storage, at the edge of the facility, may be coupled to the cloud, for external reports, real-time control and cloud storage. Inventory management may be performed with the results of the image processing, so that inventory can be tracked on the shelves and restocked, moved, shipped, modified with new products, or discontinued/obsolescent products deleted, etc. The above scenario is a prime candidate for an embodiment of the configurable processing and storage systems described above. A combination of compute-only blades and offload blades suited for the image processing, perhaps with deep learning on offload-FPGA or offload-custom blade(s), could take in the firehose of large data from all of the digital cameras, and produce the firehose of small data. All of the small data could then be stored by storage nodes, operating with storage units in whichever combination of types of storage blades best handles the data flow. This is an example of storage and function acceleration and integration. Depending on external communication needs with the cloud, and external processing in the cloud, and depending on reliability of network connections and cloud resources, the system could be sized for storage and compute management with bursty workloads and variable connectivity reliability. Also, depending on other inventory management aspects, the system could be configured for scheduling and resource management in a hybrid edge/cloud environment.

The storage systems described above may, alone or in combination with other computing resources, serve as a network edge platform that combines compute resources, storage resources, networking resources, cloud technologies and network virtualization technologies, and so on. As part of the network, the edge may take on characteristics similar to other network facilities, from the customer premise and backhaul aggregation facilities to Points of Presence (PoPs) and regional data centers. Readers will appreciate that network workloads, such as Virtual Network Functions (VNFs) and others, will reside on the network edge platform.
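The large-data-to-small-data reduction in the inventory example above may be sketched in a few lines; the detect_items function is a hypothetical placeholder for the image processing and deep learning stage that would run on offload-FPGA or offload-custom blades.

```python
def detect_items(image: bytes):
    # Hypothetical stand-in for the deep-learning image processing that
    # would run on offload blades; returns compact inventory records.
    return [{"shelf": image[0] % 8, "sku": image[1], "count": image[2]}]

def edge_reduce(camera_frames):
    """Reduce a firehose of large image data to a firehose of small records.

    Only the compact detection records are stored on-premises and, as
    needed, forwarded to the cloud for reports and real-time control.
    """
    records = []
    for frame in camera_frames:
        records.extend(detect_items(frame))
    return records

frames = [bytes([3, 17, 5]) * 1000, bytes([6, 42, 2]) * 1000]  # "large data"
small = edge_reduce(frames)
print(small)   # a few bytes of inventory state per frame ("small data")
```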
Enabled by a combination of containers and virtual machines, the network edge platform may rely on controllers and schedulers that are no longer geographically co-located with the data processing resources. The functions, as microservices, may split into control planes, user and data planes, or even state machines, allowing for independent optimization and scaling techniques to be applied. Such user and data planes may be enabled through increased use of accelerators, both those residing in server platforms, such as FPGAs and Smart NICs, and SDN-enabled merchant silicon and programmable ASICs.

The storage systems described above may also be optimized for use in big data analytics. Big data analytics may be generally described as the process of examining large and varied data sets to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful information that can help organizations make more-informed business decisions. Big data analytics applications enable data scientists, predictive modelers, statisticians and other analytics professionals to analyze growing volumes of structured transaction data, plus other forms of data that are often left untapped by conventional business intelligence (BI) and analytics programs. As part of that process, semi-structured and unstructured data such as, for example, internet clickstream data, web server logs, social media content, text from customer emails and survey responses, mobile-phone call-detail records, IoT sensor data, and other data may be converted to a structured form. Big data analytics is a form of advanced analytics, which involves complex applications with elements such as predictive models, statistical algorithms and what-if analyses powered by high-performance analytics systems.

The storage systems described above may also support (including implementing as a system interface) applications that perform tasks in response to human speech. For example, the storage systems may support the execution of intelligent personal assistant applications such as, for example, Amazon's Alexa, Apple Siri, Google Voice, Samsung Bixby, Microsoft Cortana, and others. While the examples described in the previous sentence make use of voice as input, the storage systems described above may also support chatbots, talkbots, chatterbots, or artificial conversational entities or other applications that are configured to conduct a conversation via auditory or textual methods. Likewise, the storage system may actually execute such an application to enable a user such as a system administrator to interact with the storage system via speech. Such applications are generally capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real-time information, such as news, although in embodiments in accordance with the present disclosure, such applications may be utilized as interfaces to various system management operations.

The storage systems described above may also implement AI platforms for delivering on the vision of self-driving storage. Such AI platforms may be configured to deliver global predictive intelligence by collecting and analyzing large amounts of storage system telemetry data points to enable effortless management, analytics and support.
In fact, such storage systems may be capable of predicting both capacity and performance, as well as generating intelligent advice on workload deployment, interaction and optimization. Such AI platforms may be configured to scan all incoming storage system telemetry data against a library of issue fingerprints to predict and resolve incidents in real-time, before they impact customer environments, and to capture hundreds of variables related to performance that are used to forecast performance load.

The storage systems described above may support the serialized or simultaneous execution of artificial intelligence applications, machine learning applications, data analytics applications, data transformations, and other tasks that collectively may form an AI ladder. Such an AI ladder may effectively be formed by combining such elements to form a complete data science pipeline, where dependencies exist between elements of the AI ladder. For example, AI may require that some form of machine learning has taken place, machine learning may require that some form of analytics has taken place, analytics may require that some form of data and information architecting has taken place, and so on. As such, each element may be viewed as a rung in an AI ladder that collectively can form a complete and sophisticated AI solution.

The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver an AI everywhere experience where AI permeates wide and expansive aspects of business and life. For example, AI may play an important role in the delivery of deep learning solutions, deep reinforcement learning solutions, artificial general intelligence solutions, autonomous vehicles, cognitive computing solutions, commercial UAVs or drones, conversational user interfaces, enterprise taxonomies, ontology management solutions, machine learning solutions, smart dust, smart robots, smart workplaces, and many others. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver a wide range of transparently immersive experiences where technology can introduce transparency between people, businesses, and things. Such transparently immersive experiences may be delivered as augmented reality technologies, connected homes, virtual reality technologies, brain-computer interfaces, human augmentation technologies, nanotube electronics, volumetric displays, 4D printing technologies, or others. The storage systems described above may also, either alone or in combination with other computing environments, be used to support a wide variety of digital platforms. Such digital platforms can include, for example, 5G wireless systems and platforms, digital twin platforms, edge computing platforms, IoT platforms, quantum computing platforms, serverless PaaS, software-defined security, neuromorphic computing platforms, and so on. Readers will appreciate that some transparently immersive experiences may involve the use of digital twins of various “things” such as people, places, processes, systems, and so on. Such digital twins and other immersive technologies can alter the way that humans interact with technology, as conversational platforms, augmented reality, virtual reality and mixed reality provide a more natural and immersive interaction with the digital world.
In fact, digital twins may be linked with the real world, perhaps even in real-time, to understand the state of a thing or system, respond to changes, and so on. Because digital twins consolidate massive amounts of information on individual assets and groups of assets (even possibly providing control of those assets), digital twins may communicate with each other to form digital factory models of multiple linked digital twins.

The storage systems described above may also be part of a multi-cloud environment in which multiple cloud computing and storage services are deployed in a single heterogeneous architecture. In order to facilitate the operation of such a multi-cloud environment, DevOps tools may be deployed to enable orchestration across clouds. Likewise, continuous development and continuous integration tools may be deployed to standardize processes around continuous integration and delivery, new feature rollout and provisioning cloud workloads. By standardizing these processes, a multi-cloud strategy may be implemented that enables the utilization of the best provider for each workload. Furthermore, application monitoring and visibility tools may be deployed to move application workloads around different clouds, identify performance issues, and perform other tasks. In addition, security and compliance tools may be deployed to ensure compliance with security requirements, government regulations, and so on. Such a multi-cloud environment may also include tools for application delivery and smart workload management to ensure efficient application delivery and help direct workloads across the distributed and heterogeneous infrastructure, as well as tools that ease the deployment and maintenance of packaged and custom applications in the cloud and enable portability amongst clouds. The multi-cloud environment may similarly include tools for data portability.

The storage systems described above may be used as a part of a platform to enable the use of crypto-anchors that may be used to authenticate a product's origins and contents to ensure that it matches a blockchain record associated with the product. Such crypto-anchors may take many forms including, for example, as edible ink, as a mobile sensor, as a microchip, and others. Similarly, as part of a suite of tools to secure data stored on the storage system, the storage systems described above may implement various encryption technologies and schemes, including lattice cryptography. Lattice cryptography can involve constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Unlike public-key schemes such as the RSA, Diffie-Hellman or Elliptic-Curve cryptosystems, which are easily attacked by a quantum computer, some lattice-based constructions appear to be resistant to attack by both classical and quantum computers.

A quantum computer is a device that performs quantum computing. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement. Quantum computers differ from traditional computers that are based on transistors, as such traditional computers require that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). In contrast to traditional computers, quantum computers use quantum bits, which can be in superpositions of states.
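The superposition property may be stated compactly in standard bra-ket notation (a textbook convention, not notation specific to any system described herein):

```latex
% A single qubit is a normalized superposition of the two basis states.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
% An n-qubit register is a superposition over all 2^n basis states:
\[
  \lvert \Psi \rangle = \sum_{k=0}^{2^{n}-1} c_{k} \lvert k \rangle,
  \qquad \sum_{k} \lvert c_{k} \rvert^{2} = 1 ,
\]
% so describing the state requires up to 2^n complex amplitudes, whereas a
% traditional n-bit register is in exactly one of the 2^n states at a time.
```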
A quantum computer maintains a sequence of qubits, where a single qubit can represent a one, a zero, or any quantum superposition of those two qubit states. A pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. A quantum computer with n qubits can generally be in an arbitrary superposition of up to 2^n different states simultaneously, whereas a traditional computer can only be in one of these states at any one time. A quantum Turing machine is a theoretical model of such a computer.

The storage systems described above may also be paired with FPGA-accelerated servers as part of a larger AI or ML infrastructure. Such FPGA-accelerated servers may reside near (e.g., in the same data center as) the storage systems described above or even be incorporated into an appliance that includes one or more storage systems, one or more FPGA-accelerated servers, networking infrastructure that supports communications between the one or more storage systems and the one or more FPGA-accelerated servers, as well as other hardware and software components. Alternatively, FPGA-accelerated servers may reside within a cloud computing environment that may be used to perform compute-related tasks for AI and ML jobs. Any of the embodiments described above may be used to collectively serve as an FPGA-based AI or ML platform. Readers will appreciate that, in some embodiments of the FPGA-based AI or ML platform, the FPGAs that are contained within the FPGA-accelerated servers may be reconfigured for different types of ML models (e.g., LSTMs, CNNs, GRUs). The ability to reconfigure the FPGAs that are contained within the FPGA-accelerated servers may enable the acceleration of a ML or AI application based on the most optimal numerical precision and memory model being used. Readers will appreciate that by treating the collection of FPGA-accelerated servers as a pool of FPGAs, any CPU in the data center may utilize the pool of FPGAs as a shared hardware microservice, rather than limiting a server to dedicated accelerators plugged into it.

The FPGA-accelerated servers and the GPU-accelerated servers described above may implement a model of computing where, rather than keeping a small amount of data in a CPU and running a long stream of instructions over it as occurred in more traditional computing models, the machine learning model and parameters are pinned into the high-bandwidth on-chip memory with lots of data streaming through the high-bandwidth on-chip memory. FPGAs may even be more efficient than GPUs for this computing model, as the FPGAs can be programmed with only the instructions needed to run this kind of computing model.

The storage systems described above may be configured to provide parallel storage, for example, through the use of a parallel file system such as BeeGFS. Such parallel file systems may include a distributed metadata architecture. For example, the parallel file system may include a plurality of metadata servers across which metadata is distributed, as well as components that include services for clients and storage servers. Through the use of a parallel file system, file contents may be distributed over a plurality of storage servers using striping and metadata may be distributed over a plurality of metadata servers on a directory level, with each server storing a part of the complete file system tree.
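The striping scheme just described may be sketched as follows; the fixed chunk size, the round-robin layout, and the server names are illustrative assumptions rather than parameters of BeeGFS or any other particular parallel file system.

```python
def stripe(data: bytes, servers, chunk_size=4):
    # Round-robin fixed-size chunks of the file across the storage servers.
    layout = {s: [] for s in servers}
    for i in range(0, len(data), chunk_size):
        server = servers[(i // chunk_size) % len(servers)]
        layout[server].append(data[i:i + chunk_size])
    return layout

def reassemble(layout, servers, length, chunk_size=4):
    # Read the chunks back in stripe order to reconstruct the file.
    out, indices = b"", {s: 0 for s in servers}
    for i in range(0, length, chunk_size):
        server = servers[(i // chunk_size) % len(servers)]
        out += layout[server][indices[server]]
        indices[server] += 1
    return out

servers = ["ss1", "ss2", "ss3"]
data = b"file contents distributed by striping"
layout = stripe(data, servers)
assert reassemble(layout, servers, len(data)) == data
```

Because successive chunks land on different servers, large sequential reads can draw bandwidth from every storage server at once, which is the point of striping.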
Readers will appreciate that in some embodiments, the storage servers and metadata servers may run in userspace on top of an existing local file system. Furthermore, dedicated hardware is not required for the client services, the metadata servers, or the storage servers, as the metadata servers, storage servers, and even the client services may be run on the same machines.

Readers will appreciate that, in part due to the emergence of many of the technologies discussed above including mobile devices, cloud services, social networks, big data analytics, and so on, an information technology platform may be needed to integrate all of these technologies and drive new business opportunities by quickly delivering revenue-generating products, services, and experiences, rather than merely providing the technology to automate internal business processes. Information technology organizations may need to balance resources and investments needed to keep core legacy systems up and running while also integrating technologies to build an information technology platform that can provide the speed and flexibility in areas such as, for example, exploiting big data, managing unstructured data, and working with cloud applications and services. One possible embodiment of such an information technology platform is a composable infrastructure that includes fluid resource pools, such as many of the systems described above, that can meet the changing needs of applications by allowing for the composition and recomposition of blocks of disaggregated compute, storage, and fabric infrastructure. Such a composable infrastructure can also include a single management interface to eliminate complexity and a unified API to discover, search, inventory, configure, provision, update, and diagnose the composable infrastructure.

The systems described above can support the execution of a wide array of software applications. Such software applications can be deployed in a variety of ways, including container-based deployment models. Containerized applications may be managed using a variety of tools. For example, containerized applications may be managed using Docker Swarm, a clustering and scheduling tool for Docker containers that enables IT administrators and developers to establish and manage a cluster of Docker nodes as a single virtual system. Likewise, containerized applications may be managed through the use of Kubernetes, a container-orchestration system for automating deployment, scaling and management of containerized applications. Kubernetes may execute on top of operating systems such as, for example, Red Hat Enterprise Linux, Ubuntu Server, SUSE Linux Enterprise Servers, and others. In such examples, a master node may assign tasks to worker/minion nodes. Kubernetes can include a set of components (e.g., kubelet, kube-proxy, cAdvisor) that manage individual nodes as well as a set of components (e.g., etcd, API server, Scheduler, Control Manager) that form a control plane. Various controllers (e.g., Replication Controller, DaemonSet Controller) can drive the state of a Kubernetes cluster by managing a set of pods that includes one or more containers that are deployed on a single node. Containerized applications may be used to facilitate a serverless, cloud native computing deployment and management model for software applications.
In support of a serverless, cloud native computing deployment and management model for software applications, containers may be used as part of an event handling mechanism (e.g., AWS Lambdas) such that various events cause a containerized application to be spun up to operate as an event handler.

The systems described above may be deployed in a variety of ways, including being deployed in ways that support fifth generation (‘5G’) networks. 5G networks may support substantially faster data communications than previous generations of mobile communications networks and, as a consequence, may lead to the disaggregation of data and computing resources as modern massive data centers may become less prominent and may be replaced, for example, by more-local, micro data centers that are close to the mobile-network towers. The systems described above may be included in such local, micro data centers and may be part of or paired to multi-access edge computing (‘MEC’) systems. Such MEC systems may enable cloud computing capabilities and an IT service environment at the edge of the cellular network. By running applications and performing related processing tasks closer to the cellular customer, network congestion may be reduced and applications may perform better. MEC technology is designed to be implemented at the cellular base stations or other edge nodes, and enables flexible and rapid deployment of new applications and services for customers. MEC may also allow cellular operators to open their radio access network (‘RAN’) to authorized third parties, such as application developers and content providers. Furthermore, edge computing and micro data centers may substantially reduce the cost of smartphones that work with the 5G network because customers may not need devices with such intensive processing power and the expensive requisite components. Readers will appreciate that 5G networks may generate more data than previous network generations, especially in view of the fact that the high network bandwidth offered by 5G networks may cause the 5G networks to handle amounts and types of data (e.g., sensor data from self-driving cars, data generated by AR/VR technologies) that were not as feasible for previous generation networks. In such examples, the scalability offered by the systems described above may be very valuable as the amount of data increases, adoption of emerging technologies increases, and so on.

For further explanation,FIG.3Cillustrates an exemplary computing device350that may be specifically configured to perform one or more of the processes described herein. As shown inFIG.3C, computing device350may include a communication interface352, a processor354, a storage device356, and an input/output (“I/O”) module358communicatively connected one to another via a communication infrastructure360. While an exemplary computing device350is shown inFIG.3C, the components illustrated inFIG.3Care not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device350shown inFIG.3Cwill now be described in additional detail. Communication interface352may be configured to communicate with one or more computing devices. Examples of communication interface352include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface.
Processor354generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor354may perform operations by executing computer-executable instructions362(e.g., an application, software, code, and/or other executable data instance) stored in storage device356. Storage device356may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device. For example, storage device356may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein. Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device356. For example, data representative of computer-executable instructions362configured to direct processor354to perform any of the operations described herein may be stored within storage device356. In some examples, data may be arranged in one or more databases residing within storage device356. I/O module358may include one or more I/O modules configured to receive user input and provide user output. I/O module358may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module358may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons. I/O module358may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O module358is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. In some examples, any of the systems, computing devices, and/or other components described herein may be implemented by computing device350. For further explanation,FIG.3Dsets forth a block diagram illustrating a plurality of storage systems (311-402,311-404,311-406) that support a pod according to some embodiments of the present disclosure. Although depicted in less detail, the storage systems (311-402,311-404,311-406) depicted inFIG.3Dmay be similar to the storage systems described above with reference toFIGS.1A-1D,FIGS.2A-2G,FIGS.3A-3B, or any combination thereof. In fact, the storage systems (311-402,311-404,311-406) depicted inFIG.3Dmay include the same, fewer, or additional components as the storage systems described above. In the example depicted inFIG.3D, each of the storage systems (311-402,311-404,311-406) is depicted as having at least one computer processor (311-408,311-410,311-412), computer memory (311-414,311-416,311-418), and computer storage (311-420,311-422,311-424). 
Although in some embodiments the computer memory (311-414,311-416,311-418) and the computer storage (311-420,311-422,311-424) may be part of the same hardware devices, in other embodiments the computer memory (311-414,311-416,311-418) and the computer storage (311-420,311-422,311-424) may be part of different hardware devices. The distinction between the computer memory (311-414,311-416,311-418) and the computer storage (311-420,311-422,311-424) in this particular example may be that the computer memory (311-414,311-416,311-418) is physically proximate to the computer processors (311-408,311-410,311-412) and may store computer program instructions that are executed by the computer processors (311-408,311-410,311-412), while the computer storage (311-420,311-422,311-424) is embodied as non-volatile storage for storing user data, metadata describing the user data, and so on. Referring to the example above inFIG.1A, for example, the computer processors (311-408,311-410,311-412) and computer memory (311-414,311-416,311-418) for a particular storage system (311-402,311-404,311-406) may reside within one or more of the controllers (110A-110D) while the attached storage devices (171A-171F) may serve as the computer storage (311-420,311-422,311-424) within a particular storage system (311-402,311-404,311-406).

In the example depicted inFIG.3D, the depicted storage systems (311-402,311-404,311-406) may attach to one or more pods (311-430,311-432) according to some embodiments of the present disclosure. Each of the pods (311-430,311-432) depicted inFIG.3Dcan include a dataset (311-426,311-428). For example, a first pod (311-430) that three storage systems (311-402,311-404,311-406) have attached to includes a first dataset (311-426) while a second pod (311-432) that two storage systems (311-404,311-406) have attached to includes a second dataset (311-428). In such an example, when a particular storage system attaches to a pod, the pod's dataset is copied to the particular storage system and then kept up to date as the dataset is modified. Storage systems can be removed from a pod, resulting in the dataset being no longer kept up to date on the removed storage system. In the example depicted inFIG.3D, any storage system which is active for a pod (it is an up-to-date, operating, non-faulted member of a non-faulted pod) can receive and process requests to modify or read the pod's dataset.

In the example depicted inFIG.3D, each pod (311-430,311-432) may also include a set of managed objects and management operations, as well as a set of access operations to modify or read the dataset (311-426,311-428) that is associated with the particular pod (311-430,311-432). In such an example, the management operations may modify or query managed objects equivalently through any of the storage systems. Likewise, access operations to read or modify the dataset may operate equivalently through any of the storage systems. In such an example, while each storage system stores a separate copy of the dataset as a proper subset of the datasets stored and advertised for use by the storage system, the operations to modify managed objects or the dataset that are performed and completed through any one storage system are reflected in subsequent management operations to query the pod or in subsequent access operations to read the dataset. Readers will appreciate that pods may implement more capabilities than just a clustered synchronously replicated dataset.
For example, pods can be used to implement tenants, whereby datasets are in some way securely isolated from each other. Pods can also be used to implement virtual arrays or virtual storage systems where each pod is presented as a unique storage entity on a network (e.g., a Storage Area Network, or Internet Protocol network) with separate addresses. In the case of a multi-storage-system pod implementing a virtual storage system, all physical storage systems associated with the pod may present themselves as in some way the same storage system (e.g., as if the multiple physical storage systems were no different than multiple network ports into a single storage system). Readers will appreciate that pods may also be units of administration, representing a collection of volumes, file systems, object/analytic stores, snapshots, and other administrative entities, where making administrative changes (e.g., name changes, property changes, managing exports or permissions for some part of the pod's dataset) on any one storage system is automatically reflected to all active storage systems associated with the pod. In addition, pods could also be units of data collection and data analysis, where performance and capacity metrics are presented in ways that aggregate across all active storage systems for the pod, or that call out data collection and analysis separately for each pod, or perhaps presenting each attached storage system's contribution to the incoming content and performance for each pod.

One model for pod membership may be defined as a list of storage systems, and a subset of that list where storage systems are considered to be in-sync for the pod. A storage system may be considered to be in-sync for a pod if it is at least within a recovery of having identical idle content for the last written copy of the dataset associated with the pod. Idle content is the content after any in-progress modifications have completed with no processing of new modifications. Sometimes this is referred to as “crash recoverable” consistency. Recovery of a pod carries out the process of reconciling differences in applying concurrent updates to in-sync storage systems in the pod. Recovery can resolve any inconsistencies between storage systems in the completion of concurrent modifications that had been requested to various members of the pod but that were not signaled to any requestor as having completed successfully. Storage systems that are listed as pod members but that are not listed as in-sync for the pod can be described as “detached” from the pod. Storage systems that are listed as pod members, are in-sync for the pod, and are currently available for actively serving data for the pod are “online” for the pod.

Each storage system member of a pod may have its own copy of the membership, including which storage systems it last knew were in-sync, and which storage systems it last knew comprised the entire set of pod members. To be online for a pod, a storage system must consider itself to be in-sync for the pod and must be communicating with all other storage systems it considers to be in-sync for the pod. If a storage system can't be certain that it is in-sync and communicating with all other storage systems that are in-sync, then it must stop processing new incoming requests for the pod (or must complete them with an error or exception) until it can be certain that it is in-sync and communicating with all other storage systems that are in-sync.
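The rule just stated may be sketched as a simple predicate; the Membership record and the reachable set below are hypothetical stand-ins for whatever locally stored membership list and liveness information an implementation actually keeps.

```python
from dataclasses import dataclass, field

@dataclass
class Membership:
    members: set = field(default_factory=set)   # all known pod members
    in_sync: set = field(default_factory=set)   # members believed in-sync

def can_stay_online(self_id, membership, reachable):
    # A system is online for the pod only if it considers itself in-sync
    # AND can communicate with every other system it considers in-sync.
    if self_id not in membership.in_sync:
        return False
    return all(peer == self_id or peer in reachable
               for peer in membership.in_sync)

m = Membership(members={"A", "B", "C"}, in_sync={"A", "B", "C"})
assert can_stay_online("A", m, reachable={"B", "C"})
# Lost contact with C: A must wait, go offline, or safely detach C before
# processing new requests for the pod.
assert not can_stay_online("A", m, reachable={"B"})
```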
A first storage system may conclude that a second paired storage system should be detached, which will allow the first storage system to continue since it is now in-sync with all storage systems now in the list. But, the second storage system must be prevented from concluding, alternatively, that the first storage system should be detached, with the second storage system continuing operation. This would result in a “split brain” condition that can lead to irreconcilable datasets, dataset corruption, or application corruption, among other dangers. The situation of needing to determine how to proceed when not communicating with paired storage systems can arise while a storage system is running normally and then notices lost communications, while it is currently recovering from some previous fault, while it is rebooting or resuming from a temporary power loss or recovered communication outage, while it is switching operations from one set of storage system controllers to another set for whatever reason, or during or after any combination of these or other kinds of events. In fact, any time a storage system that is associated with a pod can't communicate with all known non-detached members, the storage system can either wait briefly until communications can be established, go offline and continue waiting, or it can determine through some means that it is safe to detach the non-communicating storage system without risk of incurring a split brain due to the non-communicating storage system concluding the alternative view, and then continue. If a safe detach can happen quickly enough, the storage system can remain online for the pod with little more than a short delay and with no resulting application outages for applications that can issue requests to the remaining online storage systems.

One example of this situation is when a storage system may know that it is out-of-date. That can happen, for example, when a first storage system is first added to a pod that is already associated with one or more storage systems, or when a first storage system reconnects to another storage system and finds that the other storage system had already marked the first storage system as detached. In this case, this first storage system will simply wait until it connects to some other set of storage systems that are in-sync for the pod.

This model demands some degree of consideration for how storage systems are added to or removed from pods or from the in-sync pod members list. Since each storage system will have its own copy of the list, and since two independent storage systems can't update their local copy at exactly the same time, and since the local copy is all that is available on a reboot or in various fault scenarios, care must be taken to ensure that transient inconsistencies don't cause problems. For example, if one storage system is in-sync for a pod and a second storage system is added, then if the second storage system is updated to list both storage systems as in-sync first, then if there is a fault and a restart of both storage systems, the second might start up and wait to connect to the first storage system while the first might be unaware that it should or could wait for the second storage system. If the second storage system then responds to an inability to connect with the first storage system by going through a process to detach it, then it might succeed in completing a process that the first storage system is unaware of, resulting in a split brain.
As such, it may be necessary to ensure that storage systems won't disagree inappropriately on whether they might opt to go through a detach process if they aren't communicating. One way to ensure that storage systems won't disagree inappropriately on whether they might opt to go through a detach process if they aren't communicating is to ensure that when adding a new storage system to the in-sync member list for a pod, the new storage system first stores that it is a detached member (and perhaps that it is being added as an in-sync member). Then, the existing in-sync storage systems can locally store that the new storage system is an in-sync pod member before the new storage system locally stores that same fact. If there is a set of reboots or network outages prior to the new storage system storing its in-sync status, then the original storage systems may detach the new storage system due to non-communication, but the new storage system will wait. A reverse version of this change might be needed for removing a communicating storage system from a pod: first the storage system being removed stores that it is no longer in-sync, then the storage systems that will remain store that the storage system being removed is no longer in-sync, then all storage systems delete the storage system being removed from their pod membership lists. Depending on the implementation, an intermediate persisted detached state may not be necessary. Whether or not care is required in local copies of membership lists may depend on the model storage systems use for monitoring each other or for validating their membership. If a consensus model is used for both, or if an external system (or an external distributed or clustered system) is used to store and validate pod membership, then inconsistencies in locally stored membership lists may not matter.

When communications fail or one or several storage systems in a pod fail, or when a storage system starts up (or fails over to a secondary controller) and can't communicate with paired storage systems for a pod, and it is time for one or more storage systems to decide to detach one or more paired storage systems, some algorithm or mechanism must be employed to decide that it is safe to do so and to follow through on the detach. One means of resolving detaches is to use a majority (or quorum) model for membership. With three storage systems, as long as two are communicating, they can agree to detach a third storage system that isn't communicating, but that third storage system cannot by itself choose to detach either of the other two. Confusion can arise when storage system communication is inconsistent. For example, storage system A might be communicating with storage system B but not C, while storage system B might be communicating with both A and C. So, A and B could detach C, or B and C could detach A, but more communication between pod members may be needed to figure this out.

Care needs to be taken in a quorum membership model when adding and removing storage systems. For example, if a fourth storage system is added, then a “majority” of storage systems is at that point three. The transition from three storage systems (with two required for majority) to a pod including a fourth storage system (with three required for majority) may require something similar to the model described previously for carefully adding a storage system to the in-sync list.
For example, in the quorum model just described, the fourth storage system might start in an attaching state, not yet fully attached, in which it would never instigate a vote over quorum. Once in that state, the original three pod members could each be updated to be aware of the fourth member and of the new requirement for a three-storage-system majority to detach a fourth. Removing a storage system from a pod might similarly move that storage system to a locally stored “detaching” state before updating other pod members. A variant scheme for this is to use a distributed consensus mechanism such as Paxos or Raft to implement any membership changes or to process detach requests. Another means of managing membership transitions is to use an external system that is outside of the storage systems themselves to handle pod membership. In order to become online for a pod, a storage system must first contact the external pod membership system to verify that it is in-sync for the pod. Any storage system that is online for a pod should then remain in communication with the pod membership system and should wait or go offline if it loses communication. An external pod membership manager could be implemented as a highly available cluster using various cluster tools, such as Oracle RAC, Linux HA, VERITAS Cluster Server, IBM's HACMP, or others. An external pod membership manager could also use distributed configuration tools such as etcd or ZooKeeper, or a reliable distributed database such as Amazon's DynamoDB.

In the example depicted in FIG. 3D, the depicted storage systems (311-402,311-404,311-406) may receive a request to read a portion of the dataset (311-426,311-428) and process the request to read the portion of the dataset locally according to some embodiments of the present disclosure. Readers will appreciate that although requests to modify (e.g., a write operation) the dataset (311-426,311-428) require coordination between the storage systems (311-402,311-404,311-406) in a pod, as the dataset (311-426,311-428) should be consistent across all storage systems (311-402,311-404,311-406) in a pod, responding to a request to read a portion of the dataset (311-426,311-428) does not require similar coordination between the storage systems (311-402,311-404,311-406). As such, a particular storage system that receives a read request may service the read request locally by reading a portion of the dataset (311-426,311-428) that is stored within the storage system's storage devices, with no synchronous communication with other storage systems in the pod. Read requests received by one storage system for a replicated dataset in a replicated cluster are expected to avoid any communication in the vast majority of cases, at least when received by a storage system that is running within a cluster that is also running nominally. Such reads should normally be processed simply by reading from the local copy of a clustered dataset with no further interaction required with other storage systems in the cluster. Readers will appreciate that the storage systems may take steps to ensure read consistency such that a read request will return the same result regardless of which storage system processes the read request. For example, the resulting clustered dataset content for any set of updates received by any set of storage systems in the cluster should be consistent across the cluster, at least at any time updates are idle (all previous modifying operations have been indicated as complete and no new update requests have been received and processed in any way).
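As a rough illustration of the local read path just described, the sketch below (with invented names, and eliding all fault handling) serves reads from the local copy with no peer messages, while writes fan out to every in-sync member.

```python
class PodMember:
    def __init__(self, name, pod_peers):
        self.name = name
        self.pod_peers = pod_peers   # other in-sync members of the pod
        self.local_copy = {}         # block address -> data

    def read(self, address):
        # No messages to peers: the local copy answers reads as long as this
        # member is in-sync and online for the pod.
        return self.local_copy.get(address)

    def write(self, address, data, replicate):
        # Writes require coordination so that all copies stay identical.
        for peer in self.pod_peers:
            replicate(peer, address, data)
        self.local_copy[address] = data


member = PodMember('A', pod_peers=['B', 'C'])
sent = []
member.write(0, b'data', replicate=lambda peer, addr, d: sent.append(peer))
assert member.read(0) == b'data'     # served locally, no peer traffic
assert sent == ['B', 'C']            # the write, by contrast, fanned out
```

Reads served this way must still honor the consistency guarantees discussed next.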
More specifically, the instances of a clustered dataset across a set of storage systems can differ only as a result of updates that have not yet completed. This means, for example, that any two write requests which overlap in their volume block range, or any combination of a write request and an overlapping snapshot, compare-and-write, or virtual block range copy, must yield a consistent result on all copies of the dataset. Two operations should not yield a result as if they happened in one order on one storage system and a different order on another storage system in the replicated cluster. Furthermore, read requests can be made time order consistent. For example, suppose one read request is received by the replicated cluster and completed, and that read is then followed by another read request to an overlapping address range, where one or both reads overlap in time and volume address range with a modification request received by the replicated cluster (whether the reads and the modification are received by the same storage system or by different storage systems in the replicated cluster). In that case, if the first read reflects the result of the update, then the second read should also reflect the result of that update, rather than possibly returning data that preceded the update. If the first read does not reflect the update, then the second read can either reflect the update or not. This ensures that between two read requests “time” for a data segment cannot roll backward.

In the example depicted in FIG. 3D, the depicted storage systems (311-402,311-404,311-406) may also detect a disruption in data communications with one or more of the other storage systems and determine whether the particular storage system should remain in the pod. A disruption in data communications with one or more of the other storage systems may occur for a variety of reasons. For example, a disruption in data communications with one or more of the other storage systems may occur because one of the storage systems has failed, because a network interconnect has failed, or for some other reason. An important aspect of synchronously replicated clustering is ensuring that any fault handling doesn't result in unrecoverable inconsistencies, or any inconsistency in responses. For example, if a network fails between two storage systems, at most one of the storage systems can continue processing newly incoming I/O requests for a pod. And, if one storage system continues processing, the other storage system can't process any new requests to completion, including read requests. In the example depicted in FIG. 3D, the depicted storage systems (311-402,311-404,311-406) may also determine whether the particular storage system should remain in the pod in response to detecting a disruption in data communications with one or more of the other storage systems. As mentioned above, to be ‘online’ as part of a pod, a storage system must consider itself to be in-sync for the pod and must be communicating with all other storage systems it considers to be in-sync for the pod. If a storage system can't be certain that it is in-sync and communicating with all other storage systems that are in-sync, then it may stop processing new incoming requests to access the dataset (311-426,311-428).
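The “time cannot roll backward” property for reads might be pictured with the toy sketch below; in a real cluster the highest exposed version would be a distributed property rather than a single in-memory object, so this is only an assumption-laden illustration.

```python
class Replica:
    def __init__(self):
        self.version = 0             # committed update version for one range
        self.data = b'old'

    def apply_update(self, version, data):
        if version > self.version:
            self.version, self.data = version, data


class ReadGate:
    def __init__(self):
        self.exposed = 0             # highest version any read has returned

    def read(self, replica):
        if replica.version < self.exposed:
            # Returning this copy would roll time backward for the reader;
            # a real system would wait or redirect instead of failing.
            raise RuntimeError('replica lags a previously exposed update')
        self.exposed = replica.version
        return replica.data


r1, r2 = Replica(), Replica()
gate = ReadGate()
r1.apply_update(1, b'new')           # committed on r1, still in flight on r2
assert gate.read(r1) == b'new'       # the first read exposes the update
try:
    gate.read(r2)                    # a later overlapping read must not
except RuntimeError:                 # return the pre-update data
    pass
```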
The storage system may determine whether the particular storage system should remain online as part of the pod, for example, by determining whether it can communicate with all other storage systems it considers to be in-sync for the pod (e.g., via one or more test messages), by determining whether all other storage systems it considers to be in-sync for the pod also consider the storage system to be attached to the pod, through a combination of both steps (where the particular storage system must confirm both that it can communicate with all other storage systems it considers to be in-sync for the pod and that those storage systems also consider it to be attached to the pod), or through some other mechanism.

In the example depicted in FIG. 3D, the depicted storage systems (311-402,311-404,311-406) may also keep the dataset on the particular storage system accessible for management and dataset operations in response to determining that the particular storage system should remain in the pod. The storage system may keep the dataset (311-426,311-428) on the particular storage system accessible for management and dataset operations, for example, by accepting requests to access the version of the dataset (311-426,311-428) that is stored on the storage system and processing such requests, by accepting and processing management operations associated with the dataset (311-426,311-428) that are issued by a host or authorized administrator, by accepting and processing management operations associated with the dataset (311-426,311-428) that are issued by one of the other storage systems, or in some other way.

In the example depicted in FIG. 3D, the depicted storage systems (311-402,311-404,311-406) may, however, make the dataset on the particular storage system inaccessible for management and dataset operations in response to determining that the particular storage system should not remain in the pod. The storage system may make the dataset (311-426,311-428) on the particular storage system inaccessible for management and dataset operations, for example, by rejecting requests to access the version of the dataset (311-426,311-428) that is stored on the storage system, by rejecting management operations associated with the dataset (311-426,311-428) that are issued by a host or other authorized administrator, by rejecting management operations associated with the dataset (311-426,311-428) that are issued by one of the other storage systems in the pod, or in some other way.

In the example depicted in FIG. 3D, the depicted storage systems (311-402,311-404,311-406) may also detect that the disruption in data communications with one or more of the other storage systems has been repaired and make the dataset on the particular storage system accessible for management and dataset operations. The storage system may detect that the disruption in data communications with one or more of the other storage systems has been repaired, for example, by receiving a message from the one or more of the other storage systems. In response to detecting that the disruption in data communications with one or more of the other storage systems has been repaired, the storage system may make the dataset (311-426,311-428) on the particular storage system accessible for management and dataset operations once the previously detached storage system has been resynchronized with the storage systems that remained attached to the pod.
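A condensed sketch of these availability decisions follows; the helper names and the shape of the checks are assumptions for illustration only.

```python
def should_remain_online(self_name, in_sync_peers, can_reach, peer_considers_attached):
    # Online requires communicating with every in-sync peer AND each such
    # peer still considering this system attached to the pod.
    return all(can_reach(p) and peer_considers_attached(p, self_name)
               for p in in_sync_peers)


class Member:
    def __init__(self, name, in_sync_peers):
        self.name, self.in_sync_peers = name, in_sync_peers
        self.accessible = True       # dataset open for management and I/O

    def on_communication_check(self, can_reach, peer_considers_attached):
        if not should_remain_online(self.name, self.in_sync_peers,
                                    can_reach, peer_considers_attached):
            self.accessible = False  # reject dataset and management operations

    def on_disruption_repaired(self, resynchronize):
        resynchronize(self)          # catch up on missed updates first
        self.accessible = True       # then resume serving the pod


m = Member('A', ['B'])
m.on_communication_check(can_reach=lambda p: False,
                         peer_considers_attached=lambda p, s: True)
assert not m.accessible
m.on_disruption_repaired(resynchronize=lambda member: None)
assert m.accessible
```

Note that resynchronization runs before the dataset is made accessible again, matching the requirement above that a previously detached system catch up before serving the pod.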
In the example depicted in FIG. 3D, the depicted storage systems (311-402,311-404,311-406) may also go offline from the pod such that the particular storage system no longer allows management and dataset operations. The depicted storage systems (311-402,311-404,311-406) may go offline from the pod such that the particular storage system no longer allows management and dataset operations for a variety of reasons. For example, the depicted storage systems (311-402,311-404,311-406) may go offline from the pod due to some fault with the storage system itself, because an update or some other maintenance is occurring on the storage system, due to communications faults, or for many other reasons. In such an example, the depicted storage systems (311-402,311-404,311-406) may subsequently update the dataset on the particular storage system to include all updates to the dataset since the particular storage system went offline and go back online with the pod such that the particular storage system allows management and dataset operations, as will be described in greater detail in the resynchronization sections included below.

In the example depicted in FIG. 3D, the depicted storage systems (311-402,311-404,311-406) may also identify a target storage system for asynchronously receiving the dataset, where the target storage system is not one of the plurality of storage systems across which the dataset is synchronously replicated. Such a target storage system may represent, for example, a backup storage system, some storage system that makes use of the synchronously replicated dataset, and so on. In fact, synchronous replication can be leveraged to distribute copies of a dataset closer to some rack of servers, for better local read performance. One such case is smaller top-of-rack storage systems symmetrically replicated to larger storage systems that are centrally located in the data center or campus and where those larger storage systems are more carefully managed for reliability or are connected to external networks for asynchronous replication or backup services. In the example depicted in FIG. 3D, the depicted storage systems (311-402,311-404,311-406) may also identify a portion of the dataset that is not being asynchronously replicated to the target storage system by any of the other storage systems and asynchronously replicate, to the target storage system, the portion of the dataset that is not being asynchronously replicated to the target storage system by any of the other storage systems, wherein the two or more storage systems collectively replicate the entire dataset to the target storage system. In such a way, the work associated with asynchronously replicating a particular dataset may be split amongst the members of a pod, such that each storage system in a pod is only responsible for asynchronously replicating a subset of a dataset to the target storage system, as pictured in the sketch below.

In the example depicted in FIG. 3D, the depicted storage systems (311-402,311-404,311-406) may also detach from the pod, such that the particular storage system that detaches from the pod is no longer included in the set of storage systems across which the dataset is synchronously replicated. For example, if storage system (311-404) in FIG. 3D detached from the pod (311-430) illustrated in FIG. 3D, the pod (311-430) would only include storage systems (311-402,311-406) as the storage systems across which the dataset (311-426) that is included in the pod (311-430) would be synchronously replicated.
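One way to picture the division of asynchronous replication work is the sketch below, which assigns each dataset segment to exactly one pod member by hashing; the hash-based ownership rule and all names are assumptions for illustration, not the mechanism of any particular embodiment.

```python
import hashlib

def owner(segment_id, members):
    # Deterministically assign each dataset segment to one pod member.
    digest = hashlib.sha256(segment_id.encode()).digest()
    return members[digest[0] % len(members)]

def segments_to_replicate(self_name, members, all_segments):
    return [s for s in all_segments if owner(s, members) == self_name]

members = ['sys-A', 'sys-B', 'sys-C']
segments = ['vol1/seg%d' % i for i in range(12)]
shares = {m: segments_to_replicate(m, members, segments) for m in members}
# Every segment is owned by exactly one member, so the pod collectively
# sends the whole dataset to the target exactly once.
assert sorted(sum(shares.values(), [])) == sorted(segments)
```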
In the detach example above, detaching the storage system from the pod could also include removing the dataset from the particular storage system that detached from the pod. Continuing with the example where the storage system (311-404) in FIG. 3D detached from the pod (311-430) illustrated in FIG. 3D, the dataset (311-426) that is included in the pod (311-430) could be deleted or otherwise removed from the storage system (311-404).

Readers will appreciate that there are a number of unique administrative capabilities enabled by the pod model that can further be supported. Also, the pod model itself introduces some issues that can be addressed by an implementation. For example, when a storage system is offline for a pod, but is otherwise running, such as because an interconnect failed and another storage system for the pod won out in mediation, there may still be a desire or need to access the offline pod's dataset on the offline storage system. One solution may be simply to enable the pod in some detached mode and allow the dataset to be accessed. However, that solution can be dangerous, and it can cause the pod's metadata and data to be much more difficult to reconcile when the storage systems do regain communication. Furthermore, there could still be a separate path for hosts to access the offline storage system as well as the still online storage systems. In that case, a host might issue I/O to both storage systems even though they are no longer being kept in sync, because the host sees target ports reporting volumes with the same identifiers and the host I/O drivers presume they see additional paths to the same volume. This can result in fairly damaging data corruption, as reads and writes issued to both storage systems are no longer consistent even though the host presumes they are. As a variant of this case, in a clustered application, such as a shared storage clustered database, the clustered application running on one host might be reading or writing to one storage system and the same clustered application running on another host might be reading or writing to the “detached” storage system, yet the two instances of the clustered application are communicating with each other on the presumption that the dataset they each see is entirely consistent for completed writes. Since they aren't consistent, that presumption is violated and the application's dataset (e.g., the database) can quickly end up being corrupted.

One way to solve both of these problems is to allow for an offline pod, or perhaps a snapshot of an offline pod, to be copied to a new pod with new volumes that have sufficiently new identities that host I/O drivers and clustered applications won't confuse the copied volumes as being the same as the still online volumes on another storage system. Since each pod maintains a complete copy of the dataset, which is crash consistent but perhaps slightly different from the copy of the pod dataset on another storage system, and since each pod has an independent copy of all data and metadata needed to operate on the pod content, it is a straightforward problem to make a virtual copy of some or all volumes or snapshots in the pod to new volumes in a new pod. In a logical extent graph implementation, for example, all that is needed is to define new volumes in a new pod which reference logical extent graphs from the copied pod associated with the pod's volumes or snapshots, with the logical extent graphs being marked as copy-on-write.
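A minimal sketch of such a virtual copy appears below, assuming a logical-extent-graph-like structure: the copied volumes share extents marked copy-on-write but receive fresh identities so host I/O drivers cannot mistake them for the originals. All structure and names here are hypothetical.

```python
import uuid

class Volume:
    def __init__(self, name, extents):
        self.name = name                 # administrative name may repeat
        self.identity = uuid.uuid4()     # stand-in for a logical unit identifier
        self.extents = extents           # reference to a shared extent graph
        self.copy_on_write = False

def copy_pod_volumes(volumes):
    copies = []
    for vol in volumes:
        vol.copy_on_write = True              # freeze the shared extents
        clone = Volume(vol.name, vol.extents) # same extents, fresh identity
        clone.copy_on_write = True
        copies.append(clone)
    return copies

original = [Volume('db-data', extents=object())]
copied = copy_pod_volumes(original)
assert copied[0].identity != original[0].identity  # hosts see a new volume
assert copied[0].extents is original[0].extents    # yet no data was copied
```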
The new volumes should be treated as new volumes, similarly to how volume snapshots copied to a new volume might be implemented. Volumes may have the same administrative name, though within a new pod namespace. But, they should have different underlying identifiers, and differing logical unit identifiers, from the original volumes. In some cases it may be possible to use virtual network isolation techniques (for example, by creating a virtual LAN in the case of IP networks or a virtual SAN in the case of Fibre Channel networks) in such a way that isolation of volumes presented to some interfaces can be assured to be inaccessible from host network interfaces or host SCSI initiator ports that might also see the original volumes. In such cases, it may be safe to provide the copies of volumes with the same SCSI or other storage identifiers as the original volumes. This could be used, for example, in cases where the applications expect to see a particular set of storage identifiers in order to function without an undue burden in reconfiguration.

Some of the techniques described herein could also be used outside of an active fault context to test readiness for handling faults. Readiness testing (sometimes referred to as “fire drills”) is commonly required for disaster recovery configurations, where frequent and repeated testing is considered a necessity to ensure that most or all aspects of a disaster recovery plan are correct and account for any recent changes to applications, datasets, or changes in equipment. Readiness testing should be non-disruptive to current production operations, including replication. In many cases the real operations can't actually be invoked on the active configuration, but a good way to get close is to use storage operations to make copies of production datasets, and then perhaps couple that with the use of virtual networking, to create an isolated environment containing all data that is believed necessary for the important applications that must be brought up successfully in cases of disasters. Making such a copy of a synchronously replicated (or even an asynchronously replicated) dataset available within a site (or collection of sites) that is expected to perform a disaster recovery readiness test procedure, and then starting the important applications on that dataset to ensure that it can start up and function, is a great tool, since it helps ensure that no important parts of the application datasets were left out of the disaster recovery plan. If necessary, and practical, this could be coupled with virtual isolated networks, coupled perhaps with an isolated collection of physical or virtual machines, to get as close as possible to a real world disaster recovery takeover scenario. Virtually copying a pod (or set of pods) to another pod as a point-in-time image of the pod datasets immediately creates an isolated dataset that contains all the copied elements and that can then be operated on essentially identically to the original pods, as well as allowing isolation to a single site (or a few sites) separately from the original pod. Further, these are fast operations and they can be torn down and repeated easily, allowing testing to be repeated as often as desired. Some enhancements could be made to get further toward perfect disaster recovery testing.
For example, in conjunction with isolated networks, SCSI logical unit identities or other types of identities could be copied into the target pod so that the test servers, virtual machines, and applications see the same identities. Further, the administrative environment of the servers could be configured to respond to requests and operations from a particular set of virtual networks using the original pod name, so scripts don't require use of test-variants with alternate “test” versions of object names. A further enhancement can be used in cases where the host-side server infrastructure that will take over in the case of a disaster takeover can be used during a test. This includes cases where a disaster recovery data center is completely stocked with alternative server infrastructure that won't generally be used until directed to do so by a disaster. It also includes cases where that infrastructure might be used for non-critical operations (for example, running analytics on production data, or simply supporting application development or other functions which may be important but can be halted if needed for more critical functions). Specifically, host definitions and configurations and the server infrastructure that will use them can be set up as they will be for an actual disaster recovery takeover event and tested as part of disaster recovery takeover testing, with the tested volumes being connected to these host definitions from the virtual pod copy used to provide a snapshot of the dataset. From the standpoint of the storage systems involved, then, these host definitions and configurations used for testing, and the volume-to-host connection configurations used during testing, can be reused when an actual disaster takeover event is triggered, greatly minimizing the configuration differences between the test configuration and the real configuration that will be used in case of a disaster recovery takeover.

In some cases it may make sense to move volumes out of a first pod and into a new second pod including just those volumes. The pod membership and high availability and recovery characteristics can then be adjusted separately, and administration of the two resulting pod datasets can then be isolated from each other. An operation that can be done in one direction should also be possible in the other direction. At some point, it may make sense to take two pods and merge them into one so that the volumes in each of the original two pods will now track each other for storage system membership and high availability and recovery characteristics and events. Both operations can be accomplished safely and with reasonably minimal or no disruption to running applications by relying on the characteristics suggested for changing mediation or quorum properties for a pod which were discussed in an earlier section. With mediation, for example, a mediator for a pod can be changed using a sequence in which each storage system in a pod is first changed to depend on both a first mediator and a second mediator, and each is then changed to depend only on the second mediator. If a fault occurs in the middle of the sequence, some storage systems may depend on both the first mediator and the second mediator, but in no case will recovery and fault handling result in some storage systems depending only on the first mediator and other storage systems depending only on the second mediator.
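The two-phase mediator change can be sketched as follows (names assumed): phase one makes every in-sync system depend on both mediators, and only after all systems have persisted that state does phase two drop the first.

```python
def change_mediator(systems, first, second):
    # Phase 1: every in-sync system must now win against BOTH mediators.
    for s in systems:
        s['mediators'] = {first, second}
    # Phase 2: only after every system persisted phase 1, drop the first.
    for s in systems:
        s['mediators'] = {second}

pod = [{'name': 'A', 'mediators': {'m1'}},
       {'name': 'B', 'mediators': {'m1'}}]
change_mediator(pod, 'm1', 'm2')
assert all(s['mediators'] == {'m2'} for s in pod)
```

A fault between the phases can leave some systems depending on both mediators, which is safe; it can never leave some depending only on the first and others only on the second.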
Quorum can be handled similarly by temporarily depending on winning against both a first quorum model and a second quorum model in order to proceed to recovery. This may result in a very short time period where availability of the pod in the face of faults depends on additional resources, thus reducing potential availability, but this time period is very short and the reduction in availability is often very small. With mediation, if the change in mediator parameters is nothing more than a change in the key used for mediation, and the mediation service used is the same, then the potential reduction in availability is even less, since availability now depends on two calls to the same service versus one call to that service, rather than on separate calls to two separate services. Readers will note that changing the quorum model may be quite complex. An additional step may be necessary where storage systems will participate in the second quorum model but won't depend on winning in that second quorum model, which is then followed by the step of also depending on the second quorum model. This may be necessary to account for the fact that if only one system has processed the change to depend on the quorum model, then it will never win quorum since there will never be a majority.

With this model in place for changing the high availability parameters (mediation relationship, quorum model, takeover preferences), we can create a safe procedure for these operations to split a pod into two or to join two pods into one. This may require adding one other capability: linking a second pod to a first pod for high availability such that, if two pods include compatible high availability parameters, the second pod linked to the first pod can depend on the first pod for determining and instigating detach-related processing and operations, offline and in-sync states, and recovery and resynchronization actions. To split a pod into two, which is an operation to move some volumes into a newly created pod, a distributed operation may be formed that can be described as: form a second pod into which we will move a set of volumes which were previously in a first pod, copy the high availability parameters from the first pod into the second pod to ensure they are compatible for linking, and link the second pod to the first pod for high availability. This operation may be encoded as messages and should be implemented by each storage system in the pod in such a way that the storage system ensures that the operation happens completely on that storage system or does not happen at all if processing is interrupted by a fault. Once all in-sync storage systems for the two pods have processed this operation, the storage systems can then process a subsequent operation which changes the second pod so that it is no longer linked to the first pod. As with other changes to high availability characteristics for a pod, this involves first having each in-sync storage system change to rely on both the previous model (that model being that high availability is linked to the first pod) and the new model (that model being its own now independent high availability).
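The staged split operation might look like the following sketch, where each step must be applied all-or-nothing on each in-sync system; the data layout and function names are assumptions for illustration.

```python
def split_pod(systems, first_pod, volumes_to_move, second_pod):
    # Step 1, on every in-sync system: create the second pod holding the
    # moved volumes, copy the HA parameters, and link it to the first pod.
    for s in systems:
        s['pods'][second_pod] = {
            'volumes': list(volumes_to_move),
            'ha': dict(s['pods'][first_pod]['ha']),
            'linked_to': first_pod,
        }
        s['pods'][first_pod]['volumes'] = [
            v for v in s['pods'][first_pod]['volumes']
            if v not in volumes_to_move]
    # Step 2, only after all in-sync systems processed step 1: unlink the
    # second pod so it relies on its own high availability parameters.
    for s in systems:
        s['pods'][second_pod]['linked_to'] = None

systems = [{'name': n, 'pods': {'pod1': {'volumes': ['v1', 'v2', 'v3'],
                                         'ha': {'mediator': 'm1'}}}}
           for n in ('A', 'B')]
split_pod(systems, 'pod1', ['v3'], 'pod2')
assert systems[0]['pods']['pod1']['volumes'] == ['v1', 'v2']
assert all(s['pods']['pod2']['linked_to'] is None for s in systems)
```

The unlink step at the end is where the two-phase reliance on both the linked and the newly independent high availability models, described next, comes into play.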
In the case of mediation or quorum, this means that storage systems which processed this change will first depend on mediation or quorum being achieved as appropriate for the first pod and will additionally depend on a new separate mediation (for example, a new mediation key) or quorum being achieved for the second pod before the second pod can proceed following a fault that required mediation or testing for quorum. As with the previous description of changing quorum models, an intermediate step may set storage systems to participate in quorum for the second pod before the step where storage systems participate in and depend on quorum for the second pod. Once all in-sync storage systems have processed the change to depend on the new parameters for mediation or quorum for both the first pod and the second pod, the split is complete.

Joining a second pod into a first pod operates essentially in reverse. First, the second pod must be adjusted to be compatible with the first pod, by having an identical list of storage systems and by having a compatible high availability model. This may involve some set of steps such as those described elsewhere in the present disclosure to add or remove storage systems or to change mediator and quorum models. Depending on implementation, it may be necessary only to reach an identical list of storage systems. Joining proceeds by processing an operation on each in-sync storage system to link the second pod to the first pod for high availability. Each storage system which processes that operation will then depend on the first pod for high availability and also on the second pod for high availability. Once all in-sync storage systems for the second pod have processed that operation, the storage systems will then each process a subsequent operation to eliminate the link between the second pod and the first pod, migrate the volumes from the second pod into the first pod, and delete the second pod. Host or application dataset access can be preserved throughout these operations, as long as the implementation allows proper direction of host or application dataset modification or read operations to the volume by identity and as long as the identity is preserved as appropriate to the storage protocol or storage model (for example, as long as logical unit identifiers for volumes and use of target ports for accessing volumes are preserved in the case of SCSI).

Migrating a volume between pods may present issues. If the pods have an identical set of in-sync membership storage systems, then it may be straightforward: temporarily suspend operations on the volumes being migrated, switch control over operations on those volumes to the controlling software and structures for the new pod, and then resume operations. This allows for a seamless migration, with continuous uptime for applications apart from the very brief operation suspension, provided networks and ports migrate properly between pods. Depending on the implementation, suspending operations may not even be necessary, or may be so internal to the system that the suspension of operations has no impact. Copying volumes between pods with different in-sync membership sets is more of a problem. If the target pod for the copy has a subset of in-sync members from the source pod, this isn't much of a problem: a member storage system can be dropped safely enough without having to do more work.
But, if the target pod adds in-sync member storage systems to the volume over the source pod, then the added storage systems must be synchronized to include the volume's content before they can be used. Until synchronized, this leaves the copied volumes distinctly different from the already synchronized volumes, in that fault handling differs and request handling from the not yet synced member storage systems either won't work or must be forwarded or won't be as fast because reads will have to traverse an interconnect. Also, the internal implementation will have to handle some volumes being in sync and ready for fault handling and others not being in sync.

There are other problems relating to reliability of the operation in the face of faults. Coordinating a migration of volumes between multi-storage-system pods is a distributed operation. If pods are the unit of fault handling and recovery, and if mediation or quorum or whatever other means are used to avoid split-brain situations, then switching volumes from one pod, with one particular set of state, configurations, and relationships for fault handling, recovery, mediation, and quorum, to another pod means that the storage systems in a pod have to be careful about coordinating changes related to that handling for any volumes. Operations can't be atomically distributed between storage systems, but must be staged in some way. Mediation and quorum models essentially provide pods with the tools for implementing distributed transactional atomicity, but this may not extend to inter-pod operations without adding to the implementation. Consider even a simple migration of a volume from a first pod to a second pod, even for two pods that share the same first and second storage systems. At some point the storage systems will coordinate to define that the volume is now in the second pod and is no longer in the first pod. If there is no inherent mechanism for transactional atomicity across the storage systems for the two pods, then a naive implementation could leave the volume in the first pod on the first storage system and in the second pod on the second storage system at the time of a network fault that results in fault handling to detach storage systems from the two pods. If pods separately determine which storage system succeeds in detaching the other, then the result could be that the same storage system detaches the other storage system for both pods, in which case the result of the volume migration recovery should be consistent, or it could result in a different storage system detaching the other for the two pods. If the first storage system detaches the second storage system for the first pod and the second storage system detaches the first storage system for the second pod, then recovery might result in the volume being recovered to the first pod on the first storage system and into the second pod on the second storage system, with the volume then running and exported to hosts and storage applications on both storage systems. If instead the second storage system detaches the first storage system for the first pod and the first storage system detaches the second storage system for the second pod, then recovery might result in the volume being discarded from the second pod by the first storage system and the volume being discarded from the first pod by the second storage system, resulting in the volume disappearing entirely. If the pods a volume is being migrated between are on differing sets of storage systems, then things can get even more complicated.
A solution to these problems may be to use an intermediate pod along with the techniques described previously for splitting and joining pods. This intermediate pod may never be presented as a visible managed object associated with the storage systems. In this model, volumes to be moved from a first pod to a second pod are first split from the first pod into a new intermediate pod using the split operation described previously. The storage system members for the intermediate pod can then be adjusted to match the membership of the second pod by adding or removing storage systems from the pod as necessary. Subsequently, the intermediate pod can be joined with the second pod.

For further explanation, FIG. 3E sets forth a flow chart illustrating an example method for servicing I/O operations directed to a dataset (311-42) that is synchronized across a plurality of storage systems (311-38,311-40) according to some embodiments of the present disclosure. Although depicted in less detail, the storage systems (311-38,311-40) depicted in FIG. 3E may be similar to the storage systems described above with reference to FIGS. 1A-1D, FIGS. 2A-2G, FIGS. 3A-3B, or any combination thereof. In fact, the storage systems depicted in FIG. 3E may include the same, fewer, or additional components as the storage systems described above. The dataset (311-42) depicted in FIG. 3E may be embodied, for example, as the contents of a particular volume, as the contents of a particular share of a volume, or as any other collection of one or more data elements. The dataset (311-42) may be synchronized across a plurality of storage systems (311-38,311-40) such that each storage system (311-38,311-40) retains a local copy of the dataset (311-42). In the examples described herein, such a dataset (311-42) is synchronously replicated across the storage systems (311-38,311-40) in such a way that the dataset (311-42) can be accessed through any of the storage systems (311-38,311-40) with performance characteristics such that any one storage system in the cluster doesn't operate substantially more optimally than any other storage system in the cluster, at least as long as the cluster and the particular storage system being accessed are running nominally. In such systems, modifications to the dataset (311-42) should be made to the copy of the dataset that resides on each storage system (311-38,311-40) in such a way that accessing the dataset (311-42) on any storage system (311-38,311-40) will yield consistent results. For example, a write request issued to the dataset must be serviced on all storage systems (311-38,311-40) or on none of the storage systems (311-38,311-40) that were running nominally at the beginning of the write and that remained running nominally through completion of the write. Likewise, some groups of operations (e.g., two write operations that are directed to the same location within the dataset) must be executed in the same order, or other steps must be taken as described in greater detail below, on all storage systems (311-38,311-40) such that the dataset is ultimately identical on all storage systems (311-38,311-40). Modifications to the dataset (311-42) need not be made at the exact same time, but some actions (e.g., issuing an acknowledgement of a write request directed to the dataset, enabling read access to a location within the dataset that is targeted by a write request that has not yet been completed on both storage systems) may be delayed until the copy of the dataset on each storage system (311-38,311-40) has been modified.
In the example method depicted in FIG. 3E, the designation of one storage system (311-40) as the ‘leader’ and another storage system (311-38) as the ‘follower’ may refer to the respective relationships of each storage system for the purposes of synchronously replicating a particular dataset across the storage systems. In such an example, and as will be described in greater detail below, the leader storage system (311-40) may be responsible for performing some processing of an incoming I/O operation and passing such information along to the follower storage system (311-38), or for performing other tasks that are not required of the follower storage system (311-38). The leader storage system (311-40) may be responsible for performing tasks that are not required of the follower storage system (311-38) for all incoming I/O operations or, alternatively, the leader-follower relationship may be specific to only a subset of the I/O operations that are received by either storage system. For example, the leader-follower relationship may be specific to I/O operations that are directed towards a first volume, a first group of volumes, a first group of logical addresses, a first group of physical addresses, or some other logical or physical delineator. In such a way, a first storage system may serve as the leader storage system for I/O operations directed to a first set of volumes (or other delineator) while a second storage system may serve as the leader storage system for I/O operations directed to a second set of volumes (or other delineator). The example method depicted in FIG. 3E depicts an embodiment where synchronizing a plurality of storage systems (311-38,311-40) occurs in response to the receipt of a request (311-04) to modify a dataset (311-42) by the leader storage system (311-40), although synchronizing a plurality of storage systems (311-38,311-40) may also be carried out in response to the receipt of a request (311-04) to modify a dataset (311-42) by the follower storage system (311-38), as will be described in greater detail below.

The example method depicted in FIG. 3E includes receiving (311-06), by a leader storage system (311-40), a request (311-04) to modify the dataset (311-42). The request (311-04) to modify the dataset (311-42) may be embodied, for example, as a request to write data to a location within the storage system (311-40) that contains data that is included in the dataset (311-42), as a request to write data to a volume that contains data that is included in the dataset (311-42), as a request to take a snapshot of the dataset (311-42), as a virtual range copy, as an UNMAP operation that essentially represents a deletion of some portion of the data in the dataset (311-42), as a modifying transformation of the dataset (311-42) (rather than a change to a portion of data within the dataset), or as some other operation that results in a change to some portion of the data that is included in the dataset (311-42). In the example method depicted in FIG. 3E, the request (311-04) to modify the dataset (311-42) is issued by a host (311-02) that may be embodied, for example, as an application that is executing on a virtual machine, as an application that is executing on a computing device that is connected to the storage system (311-40), or as some other entity configured to access the storage system (311-40). The example method depicted in FIG. 3E also includes generating (311-08), by the leader storage system (311-40), information (311-10) describing the modification to the dataset (311-42).
The leader storage system (311-40) may generate (311-08) the information (311-10) describing the modification to the dataset (311-42), for example, by determining ordering relative to any other operations that are in progress, by determining the proper outcome of overlapping modifications (e.g., the appropriate outcome of two requests to modify the same storage location), by calculating any distributed state changes such as to common elements of metadata across all members of the pod (e.g., all storage systems across which the dataset is synchronously replicated), and so on. The information (311-10) describing the modification to the dataset (311-42) may be embodied, for example, as system-level information that is used to describe an I/O operation that is to be performed by a storage system. The leader storage system (311-40) may generate (311-08) the information (311-10) describing the modification to the dataset (311-42) by processing the request (311-04) to modify the dataset (311-42) just enough to figure out what should happen in order to service the request (311-04) to modify the dataset (311-42). For example, the leader storage system (311-40) may determine whether some ordering of the execution of the request (311-04) to modify the dataset (311-42) relative to other requests to modify the dataset (311-42) is required, or whether some other steps must be taken, as described in greater detail below, to produce an equivalent result on each storage system (311-38,311-40).

Consider an example in which the request (311-04) to modify the dataset (311-42) is embodied as a request to copy blocks from a first address range in the dataset (311-42) to a second address range in the dataset (311-42). In such an example, assume that three other write operations (write A, write B, write C) are directed to the first address range in the dataset (311-42). In such an example, if the leader storage system (311-40) services write A and write B (but not write C) prior to copying the blocks from the first address range in the dataset (311-42) to the second address range in the dataset (311-42), the follower storage system (311-38) must also service write A and write B (but not write C) prior to copying the blocks from the first address range in the dataset (311-42) to the second address range in the dataset (311-42) in order to yield consistent results. As such, when the leader storage system (311-40) generates (311-08) the information (311-10) describing the modification to the dataset (311-42) in this example, the leader storage system (311-40) could generate information (e.g., sequence numbers for write A and write B) that identifies other operations that must be completed before the follower storage system (311-38) can process the request (311-04) to modify the dataset (311-42). Consider an additional example in which two requests (e.g., write A and write B) are directed to overlapping portions of the dataset (311-42). In such an example, if the leader storage system (311-40) services write A and subsequently services write B, while the follower storage system (311-38) services write B and subsequently services write A, the dataset (311-42) would not be consistent across both storage systems (311-38,311-40).
As such, when the leader storage system (311-40) generates (311-08) the information (311-10) describing the modification to the dataset (311-42), in this example, the leader storage system (311-40) could generate information (e.g., sequence numbers for write A and write B) that identifies the order in which the requests should be executed. Alternatively, rather than generating information (311-10) describing the modification to the dataset (311-42) which requires intermediate behavior from each storage system (311-38,311-40), the leader storage system (311-40) may generate (311-08) information (311-10) describing the modification to the dataset (311-42) that includes information that identifies the proper outcome of the two requests. For example, if write B logically follows write A (and overlaps with write A), the end result must be that the dataset (311-42) includes the parts of write B that overlap with write A, rather than including the parts of write A that overlap with write B. Such an outcome could be facilitated by merging a result in memory and writing the result of such a merge to the dataset (311-42), rather than strictly requiring that a particular storage system (311-38,311-40) execute write A and then subsequently execute write B. Readers will appreciate that more subtle cases relate to snapshots and virtual address range copies.

Readers will further appreciate that correct results for any operation must be committed to the point of being recoverable before the operation can be acknowledged. But, multiple operations can be committed together, or operations can be partially committed if recovery would ensure correctness. For example, a snapshot could locally commit with a recorded dependency on an expected write of A and B, but A or B might not have themselves committed. The snapshot cannot be acknowledged, and recovery might end up backing out the snapshot if the missing I/O cannot be recovered from another array. Also, if write B overlaps with write A, then the leader may “order” B to be after A, but A could actually be discarded and the operation to write A would then simply wait for B. Writes A, B, C, and D, coupled with a snapshot between A, B and C, D, could commit and/or acknowledge some or all parts together, as long as recovery cannot result in a snapshot inconsistency across arrays and as long as acknowledgement does not complete a later operation before an earlier operation has been persisted to the point that it is guaranteed to be recoverable.

The example method depicted in FIG. 3E also includes sending (311-12), from the leader storage system (311-40) to a follower storage system (311-38), information (311-10) describing the modification to the dataset (311-42). Sending (311-12) information (311-10) describing the modification to the dataset (311-42) from the leader storage system (311-40) to a follower storage system (311-38) may be carried out, for example, by the leader storage system (311-40) sending one or more messages to the follower storage system (311-38). The leader storage system (311-40) may also send, in the same messages or in one or more different messages, I/O payload (311-14) for the request (311-04) to modify the dataset (311-42). The I/O payload (311-14) may be embodied, for example, as data that is to be written to storage within the follower storage system (311-38) when the request (311-04) to modify the dataset (311-42) is embodied as a request to write data to the dataset (311-42).
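As one concrete (and assumption-laden) picture of the information the leader might send, the sketch below assigns each write a sequence number and lists the sequence numbers of in-flight overlapping writes that the follower must apply first; the message fields are invented for illustration, not an actual wire format.

```python
import itertools

class Leader:
    def __init__(self):
        self._seq = itertools.count(1)
        self._in_flight = {}        # (volume, offset, length) -> sequence no.

    def describe_modification(self, volume, offset, length, payload):
        seq = next(self._seq)
        overlaps = sorted(s for (v, o, l), s in self._in_flight.items()
                          if v == volume and o < offset + length and offset < o + l)
        self._in_flight[(volume, offset, length)] = seq
        return {                    # the message sent to the follower
            'seq': seq,
            'depends_on': overlaps, # follower must apply these writes first
            'volume': volume, 'offset': offset, 'length': length,
            'payload': payload,
        }

leader = Leader()
m1 = leader.describe_modification('vol1', 0, 8, b'AAAAAAAA')
m2 = leader.describe_modification('vol1', 4, 8, b'BBBBBBBB')  # overlaps m1
assert m2['depends_on'] == [m1['seq']]   # both systems resolve the overlap
```

As noted above, the I/O payload accompanies this description in the same or separate messages.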
Because the request (311-04) to modify the dataset (311-42) was received (311-06) by the leader storage system (311-40), the follower storage system (311-38) has not yet received the I/O payload (311-14) associated with the request (311-04) to modify the dataset (311-42). In the example method depicted in FIG. 3E, the information (311-10) describing the modification to the dataset (311-42) and the I/O payload (311-14) that is associated with the request (311-04) to modify the dataset (311-42) may be sent (311-12) from the leader storage system (311-40) to the follower storage system (311-38) via one or more data communications networks that couple the leader storage system (311-40) to the follower storage system (311-38), via one or more dedicated data communications links (e.g., a first link for sending I/O payload and a second link for sending information describing modifications to datasets) that couple the leader storage system (311-40) to the follower storage system (311-38), or via some other mechanism.

The example method depicted in FIG. 3E also includes receiving (311-16), by the follower storage system (311-38), the information (311-10) describing the modification to the dataset (311-42). The follower storage system (311-38) may receive (311-16) the information (311-10) describing the modification to the dataset (311-42) and the I/O payload (311-14) from the leader storage system (311-40), for example, via one or more messages that are sent from the leader storage system (311-40) to the follower storage system (311-38). The one or more messages may be sent from the leader storage system (311-40) to the follower storage system (311-38) via one or more dedicated data communications links between the two storage systems (311-38,311-40), by the leader storage system (311-40) writing the message to a predetermined memory location (e.g., the location of a queue) on the follower storage system (311-38) using RDMA or a similar mechanism, or in other ways. In one embodiment, the follower storage system (311-38) may receive (311-16) the information (311-10) describing the modification to the dataset (311-42) and the I/O payload (311-14) from the leader storage system (311-40) through the use of SCSI requests (writes from sender to receiver, or reads from receiver to sender) as a communication mechanism. In such an embodiment, a SCSI WRITE request is used to encode information that is intended to be sent (including whatever data and metadata need to be sent), and that request may be delivered to a special pseudo-device, over a specially configured SCSI network, or through any other agreed-upon addressing mechanism. Alternately, the model can issue a set of open SCSI READ requests from a receiver to a sender, also using special devices, specially configured SCSI networks, or other agreed-upon mechanisms. Encoded information including data and metadata will be delivered to the receiver as a response to one or more of these open SCSI requests. Such a model can be implemented over Fibre Channel SCSI networks, which are often deployed as the “dark fibre” storage network infrastructure between data centers. Such a model also allows the use of the same network lines for host-to-remote-array multipathing and bulk array-to-array communications. The example method depicted in FIG. 3E also includes processing (311-18), by the follower storage system (311-38), the request (311-04) to modify the dataset (311-42).
In the example method depicted in FIG. 3E, the follower storage system (311-38) may process (311-18) the request (311-04) to modify the dataset (311-42) by modifying the contents of one or more storage devices (e.g., an NVRAM device, an SSD, an HDD) that are included in the follower storage system (311-38) in dependence upon the information (311-10) describing the modification to the dataset (311-42) as well as the I/O payload (311-14) that was received from the leader storage system (311-40). Consider an example in which the request (311-04) to modify the dataset (311-42) is embodied as a write operation that is directed to a volume that is included in the dataset (311-42) and the information (311-10) describing the modification to the dataset (311-42) indicates that the write operation can only be executed after a previously issued write operation has been processed. In such an example, processing (311-18) the request (311-04) to modify the dataset (311-42) may be carried out by the follower storage system (311-38) first verifying that the previously issued write operation has been processed on the follower storage system (311-38) and subsequently writing the I/O payload (311-14) associated with the write operation to one or more storage devices that are included in the follower storage system (311-38). In such an example, the request (311-04) to modify the dataset (311-42) may be considered to have been completed and successfully processed, for example, when the I/O payload (311-14) has been committed to persistent storage within the follower storage system (311-38).

The example method depicted in FIG. 3E also includes acknowledging (311-20), by the follower storage system (311-38) to the leader storage system (311-40), completion of the request (311-04) to modify the dataset (311-42). In the example method depicted in FIG. 3E, acknowledging (311-20), by the follower storage system (311-38) to the leader storage system (311-40), completion of the request (311-04) to modify the dataset (311-42) may be carried out by the follower storage system (311-38) sending an acknowledgment (311-22) message to the leader storage system (311-40). Such messages may include, for example, information identifying the particular request (311-04) to modify the dataset (311-42) that was completed, as well as any additional information useful in acknowledging (311-20) the completion of the request (311-04) to modify the dataset (311-42) by the follower storage system (311-38). In the example method depicted in FIG. 3E, acknowledging (311-20) completion of the request (311-04) to modify the dataset (311-42) to the leader storage system (311-40) is illustrated by the follower storage system (311-38) issuing an acknowledgment (311-22) message to the leader storage system (311-40).

The example method depicted in FIG. 3E also includes processing (311-24), by the leader storage system (311-40), the request (311-04) to modify the dataset (311-42). In the example method depicted in FIG. 3E, the leader storage system (311-40) may process (311-24) the request (311-04) to modify the dataset (311-42) by modifying the contents of one or more storage devices (e.g., an NVRAM device, an SSD, an HDD) that are included in the leader storage system (311-40) in dependence upon the information (311-10) describing the modification to the dataset (311-42) as well as the I/O payload (311-14) that was received as part of the request (311-04) to modify the dataset (311-42).
Consider an example in which the request (311-04) to modify the dataset (311-42) is embodied as a write operation that is directed to a volume that is included in the dataset (311-42) and the information (311-10) describing the modification to the dataset (311-42) indicates that the write operation can only be executed after a previously issued write operation has been processed. In such an example, processing (311-24) the request (311-04) to modify the dataset (311-42) may be carried out by the leader storage system (311-40) first verifying that the previously issued write operation has been processed by the leader storage system (311-40) and subsequently writing I/O payload (311-14) associated with the write operation to one or more storage devices that are included in the leader storage system (311-40). In such an example, the request (311-04) to modify the dataset (311-42) may be considered to have been completed and successfully processed, for example, when the I/O payload (311-14) has been committed to persistent storage within the leader storage system (311-40).

The example method depicted in FIG. 3E also includes receiving (311-26), from the follower storage system (311-38), an indication that the follower storage system (311-38) has processed the request (311-04) to modify the dataset (311-42). In this example, the indication that the follower storage system (311-38) has processed the request (311-04) to modify the dataset (311-42) is embodied as an acknowledgement (311-22) message sent from the follower storage system (311-38) to the leader storage system (311-40). Readers will appreciate that although many of the steps described above are depicted and described as occurring in a particular order, no particular order is actually required. In fact, because the follower storage system (311-38) and the leader storage system (311-40) are independent storage systems, each storage system may be performing some of the steps described above in parallel. For example, the follower storage system (311-38) may receive (311-16) the information (311-10) describing the modification to the dataset (311-42), process (311-18) the request (311-04) to modify the dataset (311-42), or acknowledge (311-20) completion of the request (311-04) to modify the dataset (311-42) before the leader storage system (311-40) has processed (311-24) the request (311-04) to modify the dataset (311-42). Alternatively, the leader storage system (311-40) may have processed (311-24) the request (311-04) to modify the dataset (311-42) before the follower storage system (311-38) has received (311-16) the information (311-10) describing the modification to the dataset (311-42), processed (311-18) the request (311-04) to modify the dataset (311-42), or acknowledged (311-20) completion of the request (311-04) to modify the dataset (311-42).

The example method depicted in FIG. 3E also includes acknowledging (311-34), by the leader storage system (311-40), completion of the request (311-04) to modify the dataset (311-42). In the example method depicted in FIG. 3E, acknowledging (311-34) completion of the request (311-04) to modify the dataset (311-42) may be carried out through the use of one or more acknowledgement (311-36) messages that are sent from the leader storage system (311-40) to the host (311-02) or via some other appropriate mechanism.
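This acknowledgement, whose gating conditions are spelled out in the following paragraph, can be summarized in a short sketch; the event-handler names are illustrative assumptions, and real implementations would of course track many such writes concurrently.

```python
class PendingWrite:
    def __init__(self):
        self.leader_committed = False
        self.follower_acked = False
        self.host_acknowledged = False

    def on_leader_commit(self):
        self.leader_committed = True
        self._maybe_acknowledge()

    def on_follower_ack(self):
        self.follower_acked = True
        self._maybe_acknowledge()

    def _maybe_acknowledge(self):
        # Acknowledge to the host only once the modification is durable on
        # every in-sync system; the two events may arrive in either order.
        if self.leader_committed and self.follower_acked:
            self.host_acknowledged = True

w = PendingWrite()
w.on_follower_ack()          # the follower may well finish first
assert not w.host_acknowledged
w.on_leader_commit()
assert w.host_acknowledged
```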
In the example method depicted inFIG.3E, the leader storage system (311-40) may determine (311-28) whether the request (311-04) to modify the dataset (311-42) has been processed (311-18) by the follower storage system (311-38) prior to acknowledging (311-34) completion of the request (311-04) to modify the dataset (311-42). The leader storage system (311-40) may determine (311-28) whether the request (311-04) to modify the dataset (311-42) has been processed (311-18) by the follower storage system (311-38), for example, by determining whether the leader storage system (311-40) has received an acknowledgment message or other message from the follower storage system (311-38) indicating that the request (311-04) to modify the dataset (311-42) has been processed (311-18) by the follower storage system (311-38). In such an example, if the leader storage system (311-40) affirmatively (311-30) determines that the request (311-04) to modify the dataset (311-42) has been processed (311-18) by the follower storage system (311-38) and also processed (311-24) by the leader storage system (311-40), the leader storage system (311-40) may proceed by acknowledging (311-34) completion of the request (311-04) to modify the dataset (311-42) to the host (311-02) that initiated the request (311-04) to modify the dataset (311-42). If the leader storage system (311-40) determines that the request (311-04) to modify the dataset (311-42) has not (311-32) been processed (311-18) by the follower storage system (311-38) or has not been processed (311-24) by the leader storage system (311-40), however, the leader storage system (311-40) may not yet acknowledge (311-34) completion of the request (311-04) to modify the dataset (311-42) to the host (311-02) that initiated the request. The leader storage system (311-40) may only acknowledge (311-34) completion of the request (311-04) to modify the dataset (311-42) to the host (311-02) when the request (311-04) has been successfully processed on all storage systems (311-38,311-40) across which the dataset (311-42) is synchronously replicated. Readers will appreciate that in the example method depicted inFIG.3E, sending (311-12), from the leader storage system (311-40) to a follower storage system (311-38), information (311-10) describing the modification to the dataset (311-42) and acknowledging (311-20), by the follower storage system (311-38) to the leader storage system (311-40), completion of the request (311-04) to modify the dataset (311-42) can be carried out using single roundtrip messaging. Single roundtrip messaging may be used, for example, through the use of Fibre Channel as a data interconnect. Typically, SCSI protocols are used with Fibre Channel. Such interconnects are commonly provisioned between data centers because some older replication technologies may be built to essentially replicate data as SCSI transactions over Fibre Channel networks. Also, historically Fibre Channel SCSI infrastructure had less overhead and lower latencies than networks based on Ethernet and TCP/IP. Further, when data centers are internally connected to block storage arrays using Fibre Channel, the Fibre Channel networks may be stretched to other data centers so that hosts in one data center can switch to accessing storage arrays in a remote data center when local storage arrays fail.
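For illustration, the determination (311-28) and the gating of the acknowledgment (311-34) described above might be sketched as follows. This is a minimal sketch under stated assumptions: the send_modification and wait_for_ack primitives, which stand in for whatever messaging is in use (the single roundtrip messaging described above being one option), are hypothetical names, not elements of the embodiments recited above.

```python
class PendingAck:
    """Placeholder handle for an in-flight message to a follower."""

    def wait_for_ack(self, timeout: float = 5.0) -> bool:
        # A real system would block here until the follower's
        # acknowledgment message arrives over the interconnect.
        return True

class Leader:
    """Acknowledges the host only when every system has processed a request."""

    def __init__(self, followers: list) -> None:
        self.followers = followers

    def handle_request(self, info: dict, payload: bytes) -> str:
        # Forward the description and payload to each follower up front,
        # so the followers and the leader can process in parallel.
        pending = [f.send_modification(info, payload) for f in self.followers]
        self.process_locally(info, payload)
        # Acknowledge completion to the host only once the request has been
        # processed on all storage systems across which the dataset is
        # synchronously replicated.
        if all(p.wait_for_ack() for p in pending):
            return "completed"
        raise RuntimeError("request was not processed on every storage system")

    def process_locally(self, info: dict, payload: bytes) -> None:
        """Stand-in for the leader's own durable commit of the payload."""
```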
SCSI could be used as a general communication mechanism, even though it is normally designed for use with block storage protocols for storing and retrieving data in block-oriented volumes (or for tape). For example, SCSI READ or SCSI WRITE could be used to deliver or retrieve message data between storage controllers in paired storage systems. A typical implementation of SCSI WRITE requires two message round trips: a SCSI initiator sends a SCSI CDB describing the SCSI WRITE operation, and the SCSI target that receives that CDB sends a “Ready to Receive” message to the SCSI initiator. The SCSI initiator then sends data to the SCSI target and when SCSI WRITE is complete the SCSI target responds to the SCSI initiator with a Success completion. A SCSI READ request, on the other hand, requires only one round trip: the SCSI initiator sends a SCSI CDB describing the SCSI READ operation, and the SCSI target that receives that CDB responds with data and then a Success completion. As a result, over distance, a SCSI READ incurs half of the distance-related latency of a SCSI WRITE. Because of this, it may be faster for a data communications receiver to use SCSI READ requests to receive messages than for a sender of messages to use SCSI WRITE requests to send data. Using SCSI READ simply requires a message sender to operate as a SCSI target and a message receiver to operate as a SCSI initiator. A message receiver may send some number of SCSI CDB READ requests to any message sender, and the message sender would respond to one of the outstanding CDB READ requests when message data is available. Since SCSI subsystems may time out if a READ request is outstanding for too long (e.g., 10 seconds), READ requests should be responded to within a few seconds even if there is no message data to be sent. SCSI tape requests, as described in the SCSI Stream Commands standard from the T10 Technical Committee of the InterNational Committee on Information Technology Standards, support variable response data, which can be more flexible for returning variable-sized message data. The SCSI standard also supports an Immediate mode for SCSI WRITE requests, which could allow single-round-trip SCSI WRITE commands. Readers will appreciate that many of the embodiments described below also utilize single roundtrip messaging. For further explanation,FIG.4sets forth an example of a cloud-based storage system (403) in accordance with some embodiments of the present disclosure. In the example depicted inFIG.4, the cloud-based storage system (403) is created entirely in a cloud computing environment (402) such as, for example, Amazon Web Services (‘AWS’), Microsoft Azure, Google Cloud Platform, IBM Cloud, Oracle Cloud, and others. The cloud-based storage system (403) may be used to provide services similar to the services that may be provided by the storage systems described above. For example, the cloud-based storage system (403) may be used to provide block storage services to users of the cloud-based storage system (403), the cloud-based storage system (403) may be used to provide storage services to users of the cloud-based storage system (403) through the use of solid-state storage, and so on. The cloud-based storage system (403) depicted inFIG.4includes two cloud computing instances (404,406) that each are used to support the execution of a storage controller application (408,410).
The cloud computing instances (404,406) may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment (402) to support the execution of software applications such as the storage controller application (408,410). In one embodiment, the cloud computing instances (404,406) may be embodied as Amazon Elastic Compute Cloud (‘EC2’) instances. In such an example, an Amazon Machine Image (‘AMI’) that includes the storage controller application (408,410) may be booted to create and configure a virtual machine that may execute the storage controller application (408,410). In the example depicted inFIG.4, the storage controller application (408,410) may be embodied as a module of computer program instructions that, when executed, carries out various storage tasks. For example, the storage controller application (408,410) may be embodied as a module of computer program instructions that, when executed, carries out the same tasks as the controllers (110A,110B inFIG.1A) described above, such as writing data received from the users of the cloud-based storage system (403) to the cloud-based storage system (403), erasing data from the cloud-based storage system (403), retrieving data from the cloud-based storage system (403) and providing such data to users of the cloud-based storage system (403), monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, deduplicating data, and so forth. Readers will appreciate that because there are two cloud computing instances (404,406) that each include the storage controller application (408,410), in some embodiments one cloud computing instance (404) may operate as the primary controller as described above while the other cloud computing instance (406) may operate as the secondary controller as described above. In such an example, in order to save costs, the cloud computing instance (404) that operates as the primary controller may be deployed on a relatively high-performance and relatively expensive cloud computing instance while the cloud computing instance (406) that operates as the secondary controller may be deployed on a relatively low-performance and relatively inexpensive cloud computing instance. Readers will appreciate that the storage controller application (408,410) depicted inFIG.4may include identical source code that is executed within different cloud computing instances (404,406). Consider an example in which the cloud computing environment (402) is embodied as AWS and the cloud computing instances are embodied as EC2 instances. In such an example, AWS offers many types of EC2 instances. For example, AWS offers a suite of general purpose EC2 instances that include varying levels of memory and processing power. In such an example, the cloud computing instance (404) that operates as the primary controller may be deployed on one of the instance types that has a relatively large amount of memory and processing power while the cloud computing instance (406) that operates as the secondary controller may be deployed on one of the instance types that has a relatively small amount of memory and processing power.
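In an AWS-hosted embodiment of the sort described above, pairing a larger instance type for the primary controller with a smaller one for the secondary controller could be scripted with the boto3 EC2 client. The sketch below is illustrative only; the AMI identifier and instance types are placeholders, and a real deployment would also configure networking, IAM, and storage.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

CONTROLLER_AMI = "ami-0123456789abcdef0"  # placeholder AMI containing the
                                          # storage controller application

def launch_controller(instance_type: str, role: str) -> str:
    """Boot one controller instance of the given size and tag its role."""
    response = ec2.run_instances(
        ImageId=CONTROLLER_AMI,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "controller-role", "Value": role}],
        }],
    )
    return response["Instances"][0]["InstanceId"]

# A relatively powerful type for the primary controller and a relatively
# small, inexpensive type for the secondary controller.
primary_id = launch_controller("m5.4xlarge", "primary")
secondary_id = launch_controller("m5.large", "secondary")
```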
In such an example, upon the occurrence of a failover event where the roles of primary and secondary are switched, a double failover may actually be carried out such that: 1) a first failover event occurs in which the cloud computing instance (406) that formerly operated as the secondary controller begins to operate as the primary controller, and 2) a second failover event occurs in which a third cloud computing instance (not shown) that is of an instance type that has a relatively large amount of memory and processing power is spun up with a copy of the storage controller application and begins operating as the primary controller while the cloud computing instance (406) that originally operated as the secondary controller begins operating as the secondary controller again. In such an example, the cloud computing instance (404) that formerly operated as the primary controller may be terminated. Readers will appreciate that in alternative embodiments, the cloud computing instance (404) that is operating as the secondary controller after the failover event may continue to operate as the secondary controller and the cloud computing instance (406) that operated as the primary controller after the occurrence of the failover event may be terminated once the primary role has been assumed by the third cloud computing instance (not shown). Readers will appreciate that while the embodiments described above relate to embodiments where one cloud computing instance (404) operates as the primary controller and the second cloud computing instance (406) operates as the secondary controller, other embodiments are within the scope of the present disclosure. For example, each cloud computing instance (404,406) may operate as a primary controller for some portion of the address space supported by the cloud-based storage system (403), each cloud computing instance (404,406) may operate as a primary controller where the servicing of I/O operations directed to the cloud-based storage system (403) is divided in some other way, and so on. In fact, in other embodiments where cost savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application. In such an example, a controller failure may take more time to recover from as a new cloud computing instance that includes the storage controller application would need to be spun up rather than having an already created cloud computing instance take on the role of servicing I/O operations that would have otherwise been handled by the failed cloud computing instance. The cloud-based storage system (403) depicted inFIG.4includes cloud computing instances (424a,424b,424n) with local storage (414,418,422). The cloud computing instances (424a,424b,424n) depicted inFIG.4may be embodied, for example, as instances of cloud computing resources that may be provided by the cloud computing environment (402) to support the execution of software applications. The cloud computing instances (424a,424b,424n) ofFIG.4may differ from the cloud computing instances (404,406) described above as the cloud computing instances (424a,424b,424n) ofFIG.4have local storage (414,418,422) resources whereas the cloud computing instances (404,406) that support the execution of the storage controller application (408,410) need not have local storage resources.
The cloud computing instances (424a,424b,424n) with local storage (414,418,422) may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage (414,418,422) must be embodied as solid-state storage (e.g., SSDs) rather than storage that makes use of hard disk drives. In the example depicted inFIG.4, each of the cloud computing instances (424a,424b,424n) with local storage (414,418,422) can include a software daemon (412,416,420) that, when executed by a cloud computing instance (424a,424b,424n) can present itself to the storage controller applications (408,410) as if the cloud computing instance (424a,424b,424n) were a physical storage device (e.g., one or more SSDs). In such an example, the software daemon (412,416,420) may include computer program instructions similar to those that would normally be contained on a storage device such that the storage controller applications (408,410) can send and receive the same commands that a storage controller would send to storage devices. In such a way, the storage controller applications (408,410) may include code that is identical to (or substantially identical to) the code that would be executed by the controllers in the storage systems described above. In these and similar embodiments, communications between the storage controller applications (408,410) and the cloud computing instances (424a,424b,424n) with local storage (414,418,422) may utilize iSCSI, NVMe over TCP, messaging, a custom protocol, or some other mechanism. In the example depicted inFIG.4, each of the cloud computing instances (424a,424b,424n) with local storage (414,418,422) may also be coupled to block-storage (426,428,430) that is offered by the cloud computing environment (402). The block-storage (426,428,430) that is offered by the cloud computing environment (402) may be embodied, for example, as Amazon Elastic Block Store (‘EBS’) volumes. For example, a first EBS volume (426) may be coupled to a first cloud computing instance (424a), a second EBS volume (428) may be coupled to a second cloud computing instance (424b), and a third EBS volume (430) may be coupled to a third cloud computing instance (424n). In such an example, the block-storage (426,428,430) that is offered by the cloud computing environment (402) may be utilized in a manner that is similar to how the NVRAM devices described above are utilized, as the software daemon (412,416,420) (or some other module) that is executing within a particular cloud computing instance (424a,424b,424n) may, upon receiving a request to write data, initiate a write of the data to its attached EBS volume as well as a write of the data to its local storage (414,418,422) resources. In some alternative embodiments, data may only be written to the local storage (414,418,422) resources within a particular cloud computing instance (424a,424b,424n). In an alternative embodiment, rather than using the block-storage (426,428,430) that is offered by the cloud computing environment (402) as NVRAM, actual RAM on each of the cloud computing instances (424a,424b,424n) with local storage (414,418,422) may be used as NVRAM, thereby decreasing network utilization costs that would be associated with using an EBS volume as the NVRAM.
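To make the NVRAM-like use of an attached block-storage volume concrete, the sketch below mirrors each incoming write to an EBS-backed device and to the instance-local SSD before acknowledging. This is a minimal sketch only: the device paths are hypothetical, and a production daemon would batch writes and manage its own on-disk layout rather than writing raw offsets.

```python
import os

EBS_NVRAM = "/dev/xvdf"     # attached EBS volume standing in for NVRAM
LOCAL_SSD = "/dev/nvme1n1"  # instance-local SSD resource

def handle_write(offset: int, data: bytes) -> None:
    """Stage the write to the NVRAM-like volume, then to local storage."""
    for path in (EBS_NVRAM, LOCAL_SSD):
        fd = os.open(path, os.O_WRONLY)
        try:
            os.pwrite(fd, data, offset)  # positional write at the given offset
            os.fsync(fd)                 # force durability before acknowledging
        finally:
            os.close(fd)
```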
In the example depicted inFIG.4, the cloud computing instances (424a,424b,424n) with local storage (414,418,422) may be utilized, by cloud computing instances (404,406) that support the execution of the storage controller application (408,410), to service I/O operations that are directed to the cloud-based storage system (403). Consider an example in which a first cloud computing instance (404) that is executing the storage controller application (408) is operating as the primary controller. In such an example, the first cloud computing instance (404) that is executing the storage controller application (408) may receive (directly or indirectly via the secondary controller) requests to write data to the cloud-based storage system (403) from users of the cloud-based storage system (403). In such an example, the first cloud computing instance (404) that is executing the storage controller application (408) may perform various tasks such as, for example, deduplicating the data contained in the request, compressing the data contained in the request, determining where to write the data contained in the request, and so on, before ultimately sending a request to write a deduplicated, encrypted, or otherwise possibly updated version of the data to one or more of the cloud computing instances (424a,424b,424n) with local storage (414,418,422). Either cloud computing instance (404,406), in some embodiments, may receive a request to read data from the cloud-based storage system (403) and may ultimately send a request to read data to one or more of the cloud computing instances (424a,424b,424n) with local storage (414,418,422). Readers will appreciate that when a request to write data is received by a particular cloud computing instance (424a,424b,424n) with local storage (414,418,422), the software daemon (412,416,420) or some other module of computer program instructions that is executing on the particular cloud computing instance (424a,424b,424n) may be configured to not only write the data to its own local storage (414,418,422) resources and any appropriate block storage (426,428,430) that is offered by the cloud computing environment (402), but the software daemon (412,416,420) or some other module of computer program instructions that is executing on the particular cloud computing instance (424a,424b,424n) may also be configured to write the data to cloud-based object storage (432) that is attached to the particular cloud computing instance (424a,424b,424n). The cloud-based object storage (432) that is attached to the particular cloud computing instance (424a,424b,424n) may be embodied, for example, as Amazon Simple Storage Service (‘S3’) storage that is accessible by the particular cloud computing instance (424a,424b,424n). In other embodiments, the cloud computing instances (404,406) that each include the storage controller application (408,410) may initiate the storage of the data in the local storage (414,418,422) of the cloud computing instances (424a,424b,424n) and the cloud-based object storage (432). Readers will appreciate that the software daemon (412,416,420) or other module of computer program instructions that writes the data to block storage (e.g., local storage (414,418,422) resources) and also writes the data to cloud-based object storage (432) may be executed on processing units of dissimilar types (e.g., different types of cloud computing instances, cloud computing instances that contain different processing units).
In fact, the software daemon (412,416,420) or other module of computer program instructions that writes the data to block storage (e.g., local storage (414,418,422) resources) and also writes the data to cloud-based object storage (432) can be migrated between different types of cloud computing instances based on demand. Readers will appreciate that, as described above, the cloud-based storage system (403) may be used to provide block storage services to users of the cloud-based storage system (403). While the local storage (414,418,422) resources and the block-storage (426,428,430) resources that are utilized by the cloud computing instances (424a,424b,424n) may support block-level access, the cloud-based object storage (432) that is attached to the particular cloud computing instance (424a,424b,424n) supports only object-based access. In order to address this, the software daemon (412,416,420) or some other module of computer program instructions that is executing on the particular cloud computing instance (424a,424b,424n) may be configured to take blocks of data, package those blocks into objects, and write the objects to the cloud-based object storage (432) that is attached to the particular cloud computing instance (424a,424b,424n). Consider an example in which data is written to the local storage (414,418,422) resources and the block-storage (426,428,430) resources that are utilized by the cloud computing instances (424a,424b,424n) in 1 MB blocks. In such an example, assume that a user of the cloud-based storage system (403) issues a request to write data that, after being compressed and deduplicated by the storage controller application (408,410) results in the need to write 5 MB of data. In such an example, writing the data to the local storage (414,418,422) resources and the block-storage (426,428,430) resources that are utilized by the cloud computing instances (424a,424b,424n) is relatively straightforward as 5 blocks that are 1 MB in size are written to the local storage (414,418,422) resources and the block-storage (426,428,430) resources that are utilized by the cloud computing instances (424a,424b,424n). In such an example, the software daemon (412,416,420) or some other module of computer program instructions that is executing on the particular cloud computing instance (424a,424b,424n) may be configured to: 1) create a first object that includes the first 1 MB of data and write the first object to the cloud-based object storage (432), 2) create a second object that includes the second 1 MB of data and write the second object to the cloud-based object storage (432), 3) create a third object that includes the third 1 MB of data and write the third object to the cloud-based object storage (432), and so on. As such, in some embodiments, each object that is written to the cloud-based object storage (432) may be identical (or nearly identical) in size. Readers will appreciate that in such an example, metadata that is associated with the data itself may be included in each object (e.g., the first 1 MB of the object is data and the remaining portion is metadata associated with the data). Readers will appreciate that the cloud-based object storage (432) may be incorporated into the cloud-based storage system (403) to increase the durability of the cloud-based storage system (403). 
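The packaging step just described (a fixed-size data portion followed by metadata associated with the data, one object per block) might look like the following sketch using the boto3 S3 client. The bucket name, key scheme, and metadata layout are illustrative assumptions rather than a prescribed format.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "cloud-storage-system-objects"  # hypothetical bucket name
BLOCK_SIZE = 1024 * 1024                 # 1 MB blocks, as in the example above

def write_blocks_as_objects(volume_id: str, first_block: int, data: bytes) -> None:
    """Create one equal sized object per 1 MB block of incoming data."""
    for i in range(0, len(data), BLOCK_SIZE):
        block_no = first_block + i // BLOCK_SIZE
        block = data[i:i + BLOCK_SIZE]
        metadata = json.dumps({
            "volume": volume_id,
            "block": block_no,
            "length": len(block),
        }).encode()
        # The first 1 MB of the object is data; the remaining portion is
        # metadata that is associated with the data.
        s3.put_object(
            Bucket=BUCKET,
            Key=f"{volume_id}/{block_no:012d}",
            Body=block + metadata,
        )
```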
Continuing with the example described above where the cloud computing instances (424a,424b,424n) are EC2 instances, readers will understand that EC2 instances are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of the EC2 instance. As such, relying on the cloud computing instances (424a,424b,424n) with local storage (414,418,422) as the only source of persistent data storage in the cloud-based storage system (403) may result in a relatively unreliable storage system. Likewise, EBS volumes are designed for 99.999% availability. As such, even relying on EBS as the persistent data store in the cloud-based storage system (403) may result in a storage system that is not sufficiently durable. Amazon S3, however, is designed to provide 99.999999999% durability, meaning that a cloud-based storage system (403) that can incorporate S3 into its pool of storage is substantially more durable than various other options. Readers will appreciate that while a cloud-based storage system (403) that can incorporate S3 into its pool of storage is substantially more durable than various other options, utilizing S3 as the primary pool of storage may result in a storage system that has relatively slow response times and relatively long I/O latencies. As such, the cloud-based storage system (403) depicted inFIG.4not only stores data in S3 but the cloud-based storage system (403) also stores data in local storage (414,418,422) resources and block-storage (426,428,430) resources that are utilized by the cloud computing instances (424a,424b,424n), such that read operations can be serviced from local storage (414,418,422) resources and the block-storage (426,428,430) resources that are utilized by the cloud computing instances (424a,424b,424n), thereby reducing read latency when users of the cloud-based storage system (403) attempt to read data from the cloud-based storage system (403). In some embodiments, all data that is stored by the cloud-based storage system (403) may be stored in both: 1) the cloud-based object storage (432), and 2) at least one of the local storage (414,418,422) resources or block-storage (426,428,430) resources that are utilized by the cloud computing instances (424a,424b,424n). In such embodiments, the local storage (414,418,422) resources and block-storage (426,428,430) resources that are utilized by the cloud computing instances (424a,424b,424n) may effectively operate as cache that generally includes all data that is also stored in S3, such that all reads of data may be serviced by the cloud computing instances (424a,424b,424n) without requiring the cloud computing instances (424a,424b,424n) to access the cloud-based object storage (432). Readers will appreciate that in other embodiments, however, all data that is stored by the cloud-based storage system (403) may be stored in the cloud-based object storage (432), but less than all data that is stored by the cloud-based storage system (403) may be stored in at least one of the local storage (414,418,422) resources or block-storage (426,428,430) resources that are utilized by the cloud computing instances (424a,424b,424n).
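Because the block layer generally holds everything that is also in the object store, a read can be attempted against the low-latency layer first and fall back to the object store only on a miss. A minimal sketch of that read path follows, with an in-memory dict standing in for the local storage and block-storage resources and with the hypothetical bucket and key scheme from the earlier sketch.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "cloud-storage-system-objects"  # hypothetical bucket, as above
BLOCK_SIZE = 1024 * 1024
_block_layer: dict = {}                  # stand-in for local/block storage

def read_block(volume_id: str, block_no: int) -> bytes:
    """Serve reads from the low-latency block layer whenever possible."""
    key = (volume_id, block_no)
    if key in _block_layer:
        return _block_layer[key]         # hit: no object-storage access needed
    obj = s3.get_object(Bucket=BUCKET, Key=f"{volume_id}/{block_no:012d}")
    data = obj["Body"].read()[:BLOCK_SIZE]  # drop the trailing metadata
    _block_layer[key] = data             # repopulate the caching layer
    return data
```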
In such an example, various policies may be utilized to determine which subset of the data that is stored by the cloud-based storage system (403) should reside in both: 1) the cloud-based object storage (432), and 2) at least one of the local storage (414,418,422) resources or block-storage (426,428,430) resources that are utilized by the cloud computing instances (424a,424b,424n). As described above, when the cloud computing instances (424a,424b,424n) with local storage (414,418,422) are embodied as EC2 instances, the cloud computing instances (424a,424b,424n) with local storage (414,418,422) are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of each cloud computing instance (424a,424b,424n) with local storage (414,418,422). As such, one or more modules of computer program instructions that are executing within the cloud-based storage system (403) (e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances (424a,424b,424n) with local storage (414,418,422). In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances (424a,424b,424n) with local storage (414,418,422) by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances (424a,424b,424n) from the cloud-based object storage (432), and storing the data retrieved from the cloud-based object storage (432) in local storage on the newly created cloud computing instances. Readers will appreciate that many variants of this process may be implemented. Consider an example in which all cloud computing instances (424a,424b,424n) with local storage (414,418,422) failed. In such an example, the monitoring module may create new cloud computing instances with local storage, where high-bandwidth instance types are selected that allow for the maximum data transfer rates between the newly created high-bandwidth cloud computing instances with local storage and the cloud-based object storage (432). Readers will appreciate that instance types are selected that allow for the maximum data transfer rates between the new cloud computing instances and the cloud-based object storage (432) such that the new high-bandwidth cloud computing instances can be rehydrated with data from the cloud-based object storage (432) as quickly as possible. Once the new high-bandwidth cloud computing instances are rehydrated with data from the cloud-based object storage (432), less expensive lower-bandwidth cloud computing instances may be created, data may be migrated to the less expensive lower-bandwidth cloud computing instances, and the high-bandwidth cloud computing instances may be terminated. Readers will appreciate that in some embodiments, the number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system (403).
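A monitoring module's recovery step could be sketched as follows. The AMI identifier and instance type are placeholders, and the actual copying of each instance's share of objects back into local storage is reduced to a stub; this is an illustration of the described flow, not a prescribed implementation.

```python
import boto3

ec2 = boto3.client("ec2")
DAEMON_AMI = "ami-0fedcba9876543210"  # placeholder AMI with the software daemon

def rehydrate_from_object_storage(instance_id: str) -> None:
    """Stand-in for copying this instance's share of objects to local storage."""

def replace_failed_storage_instances(count: int) -> list:
    """Create high-bandwidth replacements to be refilled from object storage."""
    response = ec2.run_instances(
        ImageId=DAEMON_AMI,
        InstanceType="m5n.8xlarge",   # a high-network-bandwidth instance type
        MinCount=count,
        MaxCount=count,
    )
    instance_ids = [i["InstanceId"] for i in response["Instances"]]
    for instance_id in instance_ids:
        rehydrate_from_object_storage(instance_id)  # pull data back from S3
    return instance_ids
```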
The number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system (403) in order to more rapidly pull data from the cloud-based object storage (432) and into the new cloud computing instances, as each new cloud computing instance can (in parallel) retrieve some portion of the data stored by the cloud-based storage system (403). In such embodiments, once the data stored by the cloud-based storage system (403) has been pulled into the newly created cloud computing instances, the data may be consolidated within a subset of the newly created cloud computing instances and those newly created cloud computing instances that are excessive may be terminated. Consider an example in which 1,000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system (403) have written to the cloud-based storage system (403). In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created, where each cloud computing instance is responsible for retrieving, from the cloud-based object storage (432), a distinct 1/100,000th chunk of the valid data that users of the cloud-based storage system (403) have written to the cloud-based storage system (403) and locally storing the distinct chunk of the dataset that it retrieved. In such an example, because each of the 100,000 cloud computing instances can retrieve data from the cloud-based object storage (432) in parallel, the caching layer may be restored 100 times faster as compared to an embodiment where the monitoring module only creates 1,000 replacement cloud computing instances. In such an example, over time the data that is stored locally in the 100,000 cloud computing instances could be consolidated into 1,000 cloud computing instances and the remaining 99,000 cloud computing instances could be terminated. Readers will appreciate that various performance aspects of the cloud-based storage system (403) may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system (403) can be scaled-up or scaled-out as needed. Consider an example in which the monitoring module monitors the performance of the cloud-based storage system (403) via communications with one or more of the cloud computing instances (404,406) that each are used to support the execution of a storage controller application (408,410), via monitoring communications between cloud computing instances (404,406,424a,424b,424n), via monitoring communications between cloud computing instances (404,406,424a,424b,424n) and the cloud-based object storage (432), or in some other way. In such an example, assume that the monitoring module determines that the cloud computing instances (404,406) that are used to support the execution of a storage controller application (408,410) are undersized and not sufficiently servicing the I/O requests that are issued by users of the cloud-based storage system (403). In such an example, the monitoring module may create a new, more powerful cloud computing instance (e.g., a cloud computing instance of a type that includes more processing power, more memory, etc.) that includes the storage controller application such that the new, more powerful cloud computing instance can begin operating as the primary controller.
Likewise, if the monitoring module determines that the cloud computing instances (404,406) that are used to support the execution of a storage controller application (408,410) are oversized and that cost savings could be gained by switching to a smaller, less powerful cloud computing instance, the monitoring module may create a new, less powerful (and less expensive) cloud computing instance that includes the storage controller application such that the new, less powerful cloud computing instance can begin operating as the primary controller. Consider, as an additional example of dynamically sizing the cloud-based storage system (403), an example in which the monitoring module determines that the utilization of the local storage that is collectively provided by the cloud computing instances (424a,424b,424n) has reached a predetermined utilization threshold (e.g., 95%). In such an example, the monitoring module may create additional cloud computing instances with local storage to expand the pool of local storage that is offered by the cloud computing instances. Alternatively, the monitoring module may create one or more new cloud computing instances that have larger amounts of local storage than the already existing cloud computing instances (424a,424b,424n), such that data stored in an already existing cloud computing instance (424a,424b,424n) can be migrated to the one or more new cloud computing instances and the already existing cloud computing instance (424a,424b,424n) can be terminated, thereby expanding the pool of local storage that is offered by the cloud computing instances. Likewise, if the pool of local storage that is offered by the cloud computing instances is unnecessarily large, data can be consolidated and some cloud computing instances can be terminated. Readers will appreciate that the cloud-based storage system (403) may be sized up and down automatically by a monitoring module applying a predetermined set of rules that may be relatively simple or relatively complicated. In fact, the monitoring module may not only take into account the current state of the cloud-based storage system (403), but the monitoring module may also apply predictive policies that are based on, for example, observed behavior (e.g., every night from 10 PM until 6 AM usage of the storage system is relatively light), predetermined fingerprints (e.g., every time a virtual desktop infrastructure adds 100 virtual desktops, the number of IOPS directed to the storage system increases by X), and so on. In such an example, the dynamic scaling of the cloud-based storage system (403) may be based on current performance metrics, predicted workloads, and many other factors, including combinations thereof. Readers will further appreciate that because the cloud-based storage system (403) may be dynamically scaled, the cloud-based storage system (403) may even operate in a way that is more dynamic. Consider the example of garbage collection. In a traditional storage system, the amount of storage is fixed. As such, at some point the storage system may be forced to perform garbage collection as the amount of available storage has become so constrained that the storage system is on the verge of running out of storage. In contrast, the cloud-based storage system (403) described here can always ‘add’ additional storage (e.g., by adding more cloud computing instances with local storage).
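A rule set of the kind just described, combining the reactive 95% utilization threshold with one possible reading of the predictive overnight policy, might be sketched as follows. The specific counts, the 85% soft threshold, and the decision to expand during the quiet window are illustrative assumptions, not recited policy.

```python
UTILIZATION_THRESHOLD = 0.95  # expand the pool once it is 95% utilized

def storage_instances_to_add(used_bytes: int, total_bytes: int, hour: int) -> int:
    """Decide how many local-storage instances to add at this moment."""
    utilization = used_bytes / total_bytes
    if utilization >= UTILIZATION_THRESHOLD:
        return 2                  # reactive rule: expand immediately
    # Predictive rule: usage is observed to be light from 10 PM to 6 AM,
    # so perform optional expansion within that quiet window instead.
    if utilization >= 0.85 and (hour >= 22 or hour < 6):
        return 1
    return 0
```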
Because the cloud-based storage system (403) described here can always ‘add’ additional storage, the cloud-based storage system (403) can make more intelligent decisions regarding when to perform garbage collection. For example, the cloud-based storage system (403) may implement a policy that garbage collection only be performed when the number of IOPS being serviced by the cloud-based storage system (403) falls below a certain level. In some embodiments, other system-level functions (e.g., deduplication, compression) may also be turned off and on in response to system load, given that the size of the cloud-based storage system (403) is not constrained in the same way that traditional storage systems are constrained. Readers will appreciate that embodiments of the present disclosure resolve an issue with block-storage services offered by some cloud computing environments as some cloud computing environments only allow for one cloud computing instance to connect to a block-storage volume at a single time. For example, in Amazon AWS, only a single EC2 instance may be connected to an EBS volume. Through the use of EC2 instances with local storage, embodiments of the present disclosure can offer multi-connect capabilities where multiple EC2 instances can connect to another EC2 instance with local storage (‘a drive instance’). In such embodiments, the drive instances may include software executing within the drive instance that allows the drive instance to support I/O directed to a particular volume from each connected EC2 instance. As such, some embodiments of the present disclosure may be embodied as multi-connect block storage services that may not include all of the components depicted inFIG.4. In some embodiments, especially in embodiments where the cloud-based object storage (432) resources are embodied as Amazon S3, the cloud-based storage system (403) may include one or more modules (e.g., a module of computer program instructions executing on an EC2 instance) that are configured to ensure that when the local storage of a particular cloud computing instance is rehydrated with data from S3, the appropriate data is actually in S3. This issue arises largely because S3 implements an eventual consistency model where, when overwriting an existing object, reads of the object will eventually (but not necessarily immediately) become consistent and will eventually (but not necessarily immediately) return the updated version of the object. To address this issue, in some embodiments of the present disclosure, objects in S3 are never overwritten. Instead, a traditional ‘overwrite’ would result in the creation of a new object (that includes the updated version of the data) and the eventual deletion of the old object (that includes the previous version of the data). In some embodiments of the present disclosure, as part of an attempt to never (or almost never) overwrite an object, when data is written to S3 the resultant object may be tagged with a sequence number. In some embodiments, these sequence numbers may be persisted elsewhere (e.g., in a database) such that at any point in time, the sequence number associated with the most up-to-date version of some piece of data can be known. In such a way, a determination can be made as to whether S3 has the most recent version of some piece of data by merely reading the sequence number associated with an object—and without actually reading the data from S3.
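A sketch of this never-overwrite scheme follows: each logical piece of data maps to a monotonically increasing sequence number, the object key embeds that number, and freshness can be checked against the persisted sequence numbers without reading any object data. The key scheme and the in-memory dict standing in for the sequence-number database are assumptions made for brevity.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "cloud-storage-system-objects"  # hypothetical bucket
_latest_seq: dict = {}                   # stand-in for the sequence database

def write_versioned(logical_id: str, data: bytes) -> None:
    """Never overwrite: each update becomes a new object with a new sequence."""
    seq = _latest_seq.get(logical_id, 0) + 1
    s3.put_object(Bucket=BUCKET, Key=f"{logical_id}.{seq:016d}", Body=data)
    _latest_seq[logical_id] = seq        # persisted in a database in practice
    # The object carrying seq - 1 can now be deleted asynchronously.

def object_store_is_current(logical_id: str, observed_seq: int) -> bool:
    """Determine freshness from the sequence number alone, without reads."""
    return observed_seq == _latest_seq.get(logical_id, 0)
```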
The ability to make this determination may be particularly important when a cloud computing instance with local storage crashes, as it would be undesirable to rehydrate the local storage of a replacement cloud computing instance with out-of-date data. In fact, because the cloud-based storage system (403) does not need to access the data to verify its validity, the data can stay encrypted and access charges can be avoided. In the example depicted inFIG.4, and as described above, the cloud computing instances (404,406) that are used to support the execution of the storage controller applications (408,410) may operate in a primary/secondary configuration where one of the cloud computing instances (404,406) that are used to support the execution of the storage controller applications (408,410) is responsible for writing data to the local storage (414,418,422) that is attached to the cloud computing instances with local storage (424a,424b,424n). In such an example, however, because each of the cloud computing instances (404,406) that are used to support the execution of the storage controller applications (408,410) can access the cloud computing instances with local storage (424a,424b,424n), both of the cloud computing instances (404,406) that are used to support the execution of the storage controller applications (408,410) can service requests to read data from the cloud-based storage system (403). For further explanation,FIG.5sets forth an example of an additional cloud-based storage system (502) in accordance with some embodiments of the present disclosure. In the example depicted inFIG.5, the cloud-based storage system (502) is created entirely in a cloud computing environment (402) such as, for example, AWS, Microsoft Azure, Google Cloud Platform, IBM Cloud, Oracle Cloud, and others. The cloud-based storage system (502) may be used to provide services similar to the services that may be provided by the storage systems described above. For example, the cloud-based storage system (502) may be used to provide block storage services to users of the cloud-based storage system (502), the cloud-based storage system (502) may be used to provide storage services to users of the cloud-based storage system (502) through the use of solid-state storage, and so on. The cloud-based storage system (502) depicted inFIG.5may operate in a manner that is somewhat similar to the cloud-based storage system (403) depicted inFIG.4, as the cloud-based storage system (502) depicted inFIG.5includes a storage controller application (506) that is being executed in a cloud computing instance (504). In the example depicted inFIG.5, however, the cloud computing instance (504) that executes the storage controller application (506) is a cloud computing instance (504) with local storage (508). In such an example, data written to the cloud-based storage system (502) may be stored in both the local storage (508) of the cloud computing instance (504) and also in cloud-based object storage (510) in the same manner that the cloud-based object storage (510) was used above. In some embodiments, for example, the storage controller application (506) may be responsible for writing data to the local storage (508) of the cloud computing instance (504) while a software daemon (512) may be responsible for ensuring that the data is written to the cloud-based object storage (510) in the same manner that the cloud-based object storage (510) was used above.
In other embodiments, the same entity (e.g., the storage controller application) may be responsible for writing data to the local storage (508) of the cloud computing instance (504) and also responsible for ensuring that the data is written to the cloud-based object storage (510) in the same manner that the cloud-based object storage (510) was used above. Readers will appreciate that the cloud-based storage system (502) depicted inFIG.5may represent a less expensive, less robust version of a cloud-based storage system than was depicted inFIG.4. In yet alternative embodiments, the cloud-based storage system (502) depicted inFIG.5could include additional cloud computing instances with local storage that support the execution of the storage controller application (506), such that failover can occur if the cloud computing instance (504) that executes the storage controller application (506) fails. Likewise, in other embodiments, the cloud-based storage system (502) depicted inFIG.5can include additional cloud computing instances with local storage to expand the amount of local storage that is offered by the cloud computing instances in the cloud-based storage system (502). Readers will appreciate that many of the failure scenarios described above with reference toFIG.4would also apply to the cloud-based storage system (502) depicted inFIG.5. Likewise, the cloud-based storage system (502) depicted inFIG.5may be dynamically scaled up and down in a similar manner as described above. The performance of various system-level tasks may also be executed by the cloud-based storage system (502) depicted inFIG.5in an intelligent way, as described above. Readers will appreciate that, in an effort to increase the resiliency of the cloud-based storage systems described above, various components may be located within different availability zones. For example, a first cloud computing instance that supports the execution of the storage controller application may be located within a first availability zone while a second cloud computing instance that also supports the execution of the storage controller application may be located within a second availability zone. Likewise, the cloud computing instances with local storage may be distributed across multiple availability zones. In fact, in some embodiments, an entire second cloud-based storage system could be created in a different availability zone, where data in the original cloud-based storage system is replicated (synchronously or asynchronously) to the second cloud-based storage system so that if the entire original cloud-based storage system went down, a replacement cloud-based storage system (the second cloud-based storage system) could be brought up in a trivial amount of time. Readers will appreciate that the cloud-based storage systems described herein may be used as part of a fleet of storage systems. In fact, the cloud-based storage systems described herein may be paired with on-premises storage systems. In such an example, data stored in the on-premises storage may be replicated (synchronously or asynchronously) to the cloud-based storage system, and vice versa. For further explanation,FIG.6sets forth a flow chart illustrating an example method of servicing I/O operations in a cloud-based storage system (604). Although depicted in less detail, the cloud-based storage system (604) depicted inFIG.6may be similar to the cloud-based storage systems described above and may be supported by a cloud computing environment (602).
The example method depicted inFIG.6includes receiving (606), by the cloud-based storage system (604), a request to write data to the cloud-based storage system (604). The request to write data may be received, for example, from an application executing in the cloud computing environment, by a user of the storage system that is communicatively coupled to the cloud computing environment, and in other ways. In such an example, the request can include the data that is to be written to the cloud-based storage system (604). In other embodiments, the request to write data to the cloud-based storage system (604) may occur at boot-time when the cloud-based storage system (604) is being brought up. The example method depicted inFIG.6also includes deduplicating (608) the data. Data deduplication is a data reduction technique for eliminating duplicate copies of repeating data. The cloud-based storage system (604) may deduplicate (608) the data, for example, by comparing one or more portions of the data to data that is already stored in the cloud-based storage system (604), by comparing a fingerprint for one or more portions of the data to fingerprints for data that is already stored in the cloud-based storage system (604), or in other ways. In such an example, duplicate data may be removed and replaced by a reference to an already existing copy of the data that is already stored in the cloud-based storage system (604). The example method depicted inFIG.6also includes compressing (610) the data. Data compression is a data reduction technique whereby information is encoded using fewer bits than the original representation. The cloud-based storage system (604) may compress (610) the data by applying one or more data compression algorithms to the data, which at this point may not include data that is already stored in the cloud-based storage system (604). The example method depicted inFIG.6also includes encrypting (612) the data. Data encryption is a technique that involves the conversion of data from a readable format into an encoded format that can only be read or processed after the data has been decrypted. The cloud-based storage system (604) may encrypt (612) the data, which at this point may have already been deduplicated and compressed, using an encryption key. Readers will appreciate that although the embodiment depicted inFIG.6involves deduplicating (608) the data, compressing (610) the data, and encrypting (612) the data, other embodiments exist in which fewer of these steps are performed and embodiments exist in which the same number of steps, or fewer, are performed in a different order. The example method depicted inFIG.6also includes storing (614), in block storage of the cloud-based storage system (604), the data. Storing (614) the data in block storage of the cloud-based storage system (604) may be carried out, for example, by storing (616) the data in solid-state storage such as local storage (e.g., SSDs) of one or more cloud computing instances, as described in more detail above. In such an example, the data may be spread across the local storage of many cloud computing instances, along with parity data, to implement RAID or RAID-like data redundancy. The example method depicted inFIG.6also includes storing (618), in object storage of the cloud-based storage system (604), the data. Storing (618) the data in object storage of the cloud-based storage system can include creating (620) one or more equal sized objects, where each equal sized object includes a distinct chunk of the data.
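One way to realize the deduplicate-compress-encrypt sequence is sketched below. The SHA-256 fingerprint set, zlib compression, and Fernet encryption are illustrative stand-ins chosen for brevity, not the specific techniques of the embodiments described above, and a real system would persist its fingerprints and manage keys externally.

```python
import hashlib
import zlib
from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())  # stand-in for a managed encryption key
_fingerprints: set = set()              # fingerprints of data already stored

def reduce_and_encrypt(chunks: list) -> list:
    """Deduplicate each chunk by fingerprint, then compress and encrypt it."""
    out = []
    for chunk in chunks:
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint in _fingerprints:
            out.append(("ref", fingerprint))   # duplicate: store a reference
            continue
        _fingerprints.add(fingerprint)
        sealed = fernet.encrypt(zlib.compress(chunk))
        out.append(("data", fingerprint, sealed))
    return out
```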
In such an example, because each object includes data and metadata, the data portion of each object may be equal sized. In other embodiments, the data portion of each created object may not be equal sized. For example, each object could include the data from a predetermined number of blocks in the block storage that was used in the preceding paragraph, or be formed in some other way. The example method depicted inFIG.6also includes receiving (622), by the cloud-based storage system, a request to read data from the cloud-based storage system (604). The request to read data from the cloud-based storage system (604) may be received, for example, from an application executing in the cloud computing environment, by a user of the storage system that is communicatively coupled to the cloud computing environment, and in other ways. The request can include, for example, a logical address of the data that is to be read from the cloud-based storage system (604). The example method depicted inFIG.6also includes retrieving (624), from block storage of the cloud-based storage system (604), the data. Readers will appreciate that the cloud-based storage system (604) may retrieve (624) the data from block storage of the cloud-based storage system (604), for example, by the storage controller application forwarding the read request to the cloud computing instance that includes the requested data in its local storage. Readers will appreciate that by retrieving (624) the data from block storage of the cloud-based storage system (604), the data may be retrieved more rapidly than if the data were read from cloud-based object storage, even though the cloud-based object storage does include a copy of the data. Readers will appreciate that in the example method depicted inFIG.6, the block storage of the cloud-based storage system (604) is characterized by a low read latency relative to the object storage of the cloud-based storage system. As such, by servicing read operations from the block storage rather than the object storage, the cloud-based storage system (604) may be able to service read operations using low latency block storage, while still offering the resiliency that is associated with object storage solutions offered by cloud services providers. Furthermore, the block storage of the cloud-based storage system (604) may offer relatively high bandwidth. The block storage of the cloud-based storage system (604) may be implemented in a variety of ways as will occur to readers of this disclosure. For further explanation,FIG.7sets forth a flow chart illustrating an additional example method of servicing I/O operations in a cloud-based storage system (604). The example method depicted inFIG.7is similar to the example method depicted inFIG.6, as the example method depicted inFIG.7also includes receiving (606) a request to write data to the cloud-based storage system (604), storing (614) the data in block storage of the cloud-based storage system (604), and storing (618) the data in object storage of the cloud-based storage system (604). The example method depicted inFIG.7also includes detecting (702) that at least some portion of the block storage of the cloud-based storage system has become unavailable. Detecting (702) that at least some portion of the block storage of the cloud-based storage system has become unavailable may be carried out, for example, by detecting that one or more of the cloud computing instances that include local storage have become unavailable, as described in greater detail below.
The example method depicted inFIG.7also includes identifying (704) data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable. Identifying (704) data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable may be carried out, for example, through the use of metadata that maps some identifier of a piece of data (e.g., a sequence number, an address) to the location where the data is stored. Such metadata, or separate metadata, may also map the piece of data to one or more object identifiers that identify objects stored in the object storage of the cloud-based storage system that contain the piece of data. The example method depicted inFIG.7also includes retrieving (706), from object storage of the cloud-based storage system, the data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable. Retrieving (706) the data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable from object storage of the cloud-based storage system may be carried out, for example, through the use of metadata described above that maps the data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable to one or more objects stored in the object storage of the cloud-based storage system that contain the piece of data. In such an example, retrieving (706) the data may be carried out by reading the objects that map to the data from the object storage of the cloud-based storage system. The example method depicted inFIG.7also includes storing (708), in block storage of the cloud-based storage system, the retrieved data. Storing (708) the retrieved data in block storage of the cloud-based storage system may be carried out, for example, by creating replacement cloud computing instances with local storage and storing the data in the local storage of one or more of the replacement cloud computing instances, as described in greater detail above. Readers will appreciate that although the embodiments described above relate to embodiments in which data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable is essentially brought back into the block storage layer of the cloud-based storage system by retrieving the data from the object storage layer of the cloud-based storage system, other embodiments are within the scope of the present disclosure. For example, because data may be distributed across the local storage of multiple cloud computing instances using data redundancy techniques such as RAID, in some embodiments the lost data may be brought back into the block storage layer of the cloud-based storage system through a RAID rebuild. For further explanation,FIG.8sets forth a flow chart illustrating an example method of servicing I/O operations in a cloud-based storage system (804). Although depicted in less detail, the cloud-based storage system (804) depicted inFIG.8may be similar to the cloud-based storage systems described above and may be supported by a cloud computing environment (802). The example method depicted inFIG.8includes receiving (806), by the cloud-based storage system (804), a request to write data to the cloud-based storage system (804). 
For further explanation, FIG. 8 sets forth a flow chart illustrating an example method of servicing I/O operations in a cloud-based storage system (804). Although depicted in less detail, the cloud-based storage system (804) depicted in FIG. 8 may be similar to the cloud-based storage systems described above and may be supported by a cloud computing environment (802).

The example method depicted in FIG. 8 includes receiving (806), by the cloud-based storage system (804), a request to write data to the cloud-based storage system (804). The request to write data may be received, for example, from an application executing in the cloud computing environment, from a user of the storage system that is communicatively coupled to the cloud computing environment, and in other ways. In such an example, the request can include the data that is to be written to the cloud-based storage system (804). In other embodiments, the request to write data to the cloud-based storage system (804) may occur at boot-time when the cloud-based storage system (804) is being brought up.

The example method depicted in FIG. 8 also includes deduplicating (808) the data. Data deduplication is a data reduction technique for eliminating duplicate copies of repeating data. The cloud-based storage system (804) may deduplicate (808) the data, for example, by comparing one or more portions of the data to data that is already stored in the cloud-based storage system (804), by comparing a fingerprint for one or more portions of the data to fingerprints for data that is already stored in the cloud-based storage system (804), or in other ways. In such an example, duplicate data may be removed and replaced by a reference to an already existing copy of the data that is already stored in the cloud-based storage system (804).

The example method depicted in FIG. 8 also includes compressing (810) the data. Data compression is a data reduction technique whereby information is encoded using fewer bits than the original representation. The cloud-based storage system (804) may compress (810) the data by applying one or more data compression algorithms to the data, which at this point may not include data that is already stored in the cloud-based storage system (804).

The example method depicted in FIG. 8 also includes encrypting (812) the data. Data encryption is a technique that involves the conversion of data from a readable format into an encoded format that can only be read or processed after the data has been decrypted. The cloud-based storage system (804) may encrypt (812) the data, which at this point may have already been deduplicated and compressed, using an encryption key. Readers will appreciate that although the embodiment depicted in FIG. 8 involves deduplicating (808) the data, compressing (810) the data, and encrypting (812) the data, other embodiments exist in which fewer of these steps are performed and embodiments exist in which the same number of steps, or fewer, are performed in a different order.

The example method depicted in FIG. 8 also includes storing (814), in block storage of the cloud-based storage system (804), the data. Storing (814) the data in block storage of the cloud-based storage system (804) may be carried out, for example, by storing (816) the data in local storage (e.g., SSDs) of one or more cloud computing instances, as described in more detail above. In such an example, the data may be spread across local storage of multiple cloud computing instances, along with parity data, to implement RAID or RAID-like data redundancy.

The example method depicted in FIG. 8 also includes storing (818), in object storage of the cloud-based storage system (804), the data. Storing (818) the data in object storage of the cloud-based storage system can include creating (820) one or more equal sized objects, wherein each equal sized object includes a distinct chunk of the data, as described in greater detail above.
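The write path above deduplicates (808), compresses (810), and then encrypts (812) before storing. The following hedged sketch applies the three steps in that order, assuming SHA-256 fingerprints for deduplication, zlib for compression, and the third-party cryptography package for encryption; the text above does not prescribe particular algorithms, so these are illustrative choices.

```python
# Sketch of a dedupe -> compress -> encrypt ingest pipeline.
import hashlib
import zlib

from cryptography.fernet import Fernet

fingerprints: dict[bytes, str] = {}     # fingerprint -> stored-copy reference
cipher = Fernet(Fernet.generate_key())  # key management is out of scope here


def ingest(chunk: bytes, reference: str) -> bytes | None:
    # (808) deduplicate: an already-seen fingerprint becomes a reference
    digest = hashlib.sha256(chunk).digest()
    if digest in fingerprints:
        return None  # duplicate; the caller records a reference instead
    fingerprints[digest] = reference
    # (810) compress what survives deduplication
    compressed = zlib.compress(chunk)
    # (812) encrypt the deduplicated, compressed data
    return cipher.encrypt(compressed)
```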
The example method depicted in FIG. 8 also includes receiving (822), by the cloud-based storage system, a request to read data from the cloud-based storage system (804). The request to read data from the cloud-based storage system (804) may be received, for example, from an application executing in the cloud computing environment, from a user of the storage system that is communicatively coupled to the cloud computing environment, and in other ways. The request can include, for example, a logical address of the data that is to be read from the cloud-based storage system (804).

The example method depicted in FIG. 8 also includes retrieving (824), from block storage of the cloud-based storage system (804), the data. Readers will appreciate that the cloud-based storage system (804) may retrieve (824) the data from block storage of the cloud-based storage system (804), for example, by the storage controller application forwarding the read request to the cloud computing instance that includes the requested data in its local storage. Readers will appreciate that by retrieving (824) the data from block storage of the cloud-based storage system (804), the data may be retrieved more rapidly than if the data were read from cloud-based object storage, even though the cloud-based object storage does include a copy of the data.

For further explanation, FIG. 9 sets forth a flow chart illustrating an additional example method of servicing I/O operations in a cloud-based storage system (804). The example method depicted in FIG. 9 is similar to the example method depicted in FIG. 8, as the example method depicted in FIG. 9 also includes receiving (806) a request to write data to the cloud-based storage system (804), storing (814) the data in block storage of the cloud-based storage system (804), and storing (818) the data in object storage of the cloud-based storage system (804).

The example method depicted in FIG. 9 also includes detecting (902) that at least some portion of the block storage of the cloud-based storage system has become unavailable. Detecting (902) that at least some portion of the block storage of the cloud-based storage system has become unavailable may be carried out, for example, by detecting that one or more of the cloud computing instances that includes local storage has become unavailable, as described in greater detail below.

The example method depicted in FIG. 9 also includes identifying (904) data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable. Identifying (904) data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable may be carried out, for example, through the use of metadata that maps some identifier of a piece of data (e.g., a sequence number, an address) to the location where the data is stored. Such metadata, or separate metadata, may also map the piece of data to one or more object identifiers that identify objects stored in the object storage of the cloud-based storage system that contain the piece of data.
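A minimal sketch of one way detecting (902) might be carried out follows, assuming each cloud computing instance with local storage posts periodic heartbeats; the heartbeat table and timeout are illustrative assumptions rather than a mechanism the passage prescribes.

```python
# Sketch of heartbeat-based detection of unavailable drive instances.
import time

HEARTBEAT_TIMEOUT_S = 30.0
last_heartbeat: dict[str, float] = {}  # instance endpoint -> last seen


def record_heartbeat(endpoint: str) -> None:
    last_heartbeat[endpoint] = time.monotonic()


def unavailable_instances() -> list[str]:
    """Instances whose local storage should be treated as unavailable."""
    now = time.monotonic()
    return [ep for ep, seen in last_heartbeat.items()
            if now - seen > HEARTBEAT_TIMEOUT_S]
```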
The example method depicted in FIG. 9 also includes retrieving (906), from object storage of the cloud-based storage system, the data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable. Retrieving (906) the data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable from object storage of the cloud-based storage system may be carried out, for example, through the use of metadata described above that maps the data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable to one or more objects stored in the object storage of the cloud-based storage system that contain the piece of data. In such an example, retrieving (906) the data may be carried out by reading the objects that map to the data from the object storage of the cloud-based storage system.

The example method depicted in FIG. 9 also includes storing (908), in block storage of the cloud-based storage system, the retrieved data. Storing (908) the retrieved data in block storage of the cloud-based storage system may be carried out, for example, by creating replacement cloud computing instances with local storage and storing the data in the local storage of one or more of the replacement cloud computing instances, as described in greater detail above.

For further explanation, FIG. 10 sets forth a flow chart illustrating an additional example method of servicing I/O operations in a cloud-based storage system (604). The example method depicted in FIG. 10 is similar to the example method depicted in many of the figures above, as the example method depicted in FIG. 10 also includes receiving (606) a request to write data to the cloud-based storage system (604), storing (614) the data in block storage of the cloud-based storage system (604), and storing (618) the data in object storage of the cloud-based storage system (604).

In the example method depicted in FIG. 10, receiving (606) the request to write data to the cloud-based storage system can include receiving (1002), by a storage controller application executing in a cloud computing instance, the request to write data to the cloud-based storage. The storage controller application that is executing in a cloud computing instance may be similar to the storage controller applications described above and may be executing, for example, in an EC2 instance as described above in greater detail. In fact, the cloud-based storage system (604) may actually include multiple EC2 instances or similar cloud computing instances, where multiple cloud computing instances are each executing the storage controller application.

In the example method depicted in FIG. 10, storing (614), in block storage of the cloud-based storage system, the data can include issuing (1004), by the storage controller application executing in the cloud computing instance, an instruction to write the data to local storage within one or more cloud computing instances with local storage. The one or more cloud computing instances with local storage may be similar to the cloud computing instances with local storage that are described above. In the example method depicted in FIG. 10, the storage controller application executing in the cloud computing instance may be coupled for data communications with a plurality of cloud computing instances with local storage. In such a way, the storage controller application that is executing in the cloud computing instance may treat the plurality of cloud computing instances with local storage as individual storage devices, such that the storage controller application that is executing in the cloud computing instance may issue (1004) an instruction to write the data to local storage within one or more cloud computing instances with local storage by issuing the same set of commands that the storage controller application would issue when writing data to a connected storage device.
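A hedged sketch of issuing (1004) writes follows: the storage controller application treats each cloud computing instance with local storage as a storage device and issues the same style of command to each. The HTTP transport, the endpoints, and the /write route are illustrative assumptions, not a protocol the passage describes.

```python
# Sketch of a controller forwarding a write to drive instances, using the
# third-party requests package as a stand-in transport.
import requests

drive_instances = ["http://drive-0:8080", "http://drive-1:8080",
                   "http://drive-2:8080"]  # hypothetical endpoints


def issue_write(data: bytes, offset: int, targets: list[str]) -> None:
    for endpoint in targets:
        resp = requests.put(f"{endpoint}/write",
                            params={"offset": offset}, data=data)
        resp.raise_for_status()  # treat a failed write like a faulted device
```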
Readers will appreciate that because the storage controller application that is executing in the cloud computing instance may be coupled for data communications with a plurality of cloud computing instances with local storage, the storage array controller may be connected to multiple sources of block storage, whereas the storage array controller could only be connected to a single EBS volume if the storage array controller were configured to use EBS as its block storage. In the example method depicted in FIG. 10, one or more of the plurality of cloud computing instances with local storage may be coupled for data communications with a plurality of cloud computing instances that are each executing the storage controller application. Readers will appreciate that in some embodiments, because there are a plurality of cloud computing instances that are each executing the storage controller application, a storage controller application that is executing on a first cloud computing instance may serve as the primary controller whereas additional storage controller applications that are executing on additional cloud computing instances may serve as the secondary controllers that can take over for the primary controller upon the occurrence of some event (e.g., failure of the primary controller).

For further explanation, FIG. 11 sets forth a flow chart illustrating an additional example method of servicing I/O operations in a cloud-based storage system (604). The example method depicted in FIG. 11 is similar to the example method depicted in many of the figures above, as the example method depicted in FIG. 11 also includes receiving (606) a request to write data to the cloud-based storage system (604), storing (614) the data in block storage of the cloud-based storage system (604), and storing (618) the data in object storage of the cloud-based storage system (604).

In the example method depicted in FIG. 11, storing (614), in block storage of the cloud-based storage system, the data can include writing (1102), into one or more blocks of the block storage, the data using a block-level protocol. In the example method depicted in FIG. 11, the block storage may be embodied as one or more block storage devices such as NAND flash memory where data is stored in blocks that can each be used to store data of a maximum size (i.e., a block size). Data may be written (1102) to such storage devices using a block-level protocol such as, for example, iSCSI, Fibre Channel and FCoE (Fibre Channel over Ethernet), and so on. Readers will appreciate that by writing (1102) the data into one or more blocks of the block storage using a block-level protocol, the data that is written to the block storage of the cloud-based storage system is therefore stored in blocks.
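As a minimal sketch of the block-level semantics behind writing (1102), the following writes whole, fixed-size blocks addressed by block number. Writing to a raw device node this way assumes a Linux host and sufficient privileges; the iSCSI/FC transports mentioned above are abstracted away, and the block size is an assumption.

```python
# Sketch of a block-level write: whole blocks at block-aligned offsets.
import os

BLOCK_SIZE = 4096  # an assumed block size


def write_block(device_path: str, block_number: int, data: bytes) -> None:
    assert len(data) == BLOCK_SIZE, "block-level writes are whole blocks"
    fd = os.open(device_path, os.O_WRONLY)
    try:
        # pwrite targets an absolute offset, mirroring block addressing
        os.pwrite(fd, data, block_number * BLOCK_SIZE)
    finally:
        os.close(fd)
```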
In the example method depicted in FIG. 11, storing (618), in object storage of the cloud-based storage system, the data can include writing (1104), into one or more objects in the object storage, the data using an object-level protocol. In the example method depicted in FIG. 11, the object storage may be configured to manage data as objects, as opposed to other storage architectures like file systems which manage data as a file hierarchy, and block storage which manages data as blocks. Such object storage can be implemented at the device level (object storage device), the system level, the interface level, or in some other way. Data may be written (1104) to the object storage using an object-level protocol such as, for example, the SCSI command set for Object Storage Devices, RESTful/HTTP protocols, AWS S3 APIs, the Cloud Data Management Interface for accessing cloud storage, and others. Readers will appreciate that by writing (1104) one or more objects into the object storage using an object-level protocol, the data that is written to the object storage of the cloud-based storage system is therefore stored in objects—rather than blocks as was the case in the preceding paragraph.

In the example method depicted in FIG. 11, for each block of data, the data contained in a particular block may be written into a unique object. Readers will appreciate that each object that is written (1104) to object storage includes the data itself, as well as its associated metadata, and each object may be associated with a globally unique identifier—rather than a file name, a file path, a block number, and so on. As such, the data that is contained in a particular block may be written into a unique object in the sense that the unique object includes the data itself, metadata associated with the data, and a globally unique identifier. In such embodiments, the cloud-based storage system may therefore maintain a mapping between each block of data that is stored in the cloud-based storage system's block storage and each object that is stored in the cloud-based storage system's object storage. In some embodiments, each object may include the data that is contained in multiple blocks, but the data that is contained in multiple blocks need only be stored in a single object.
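A hedged sketch of writing (1104) at the object level, together with the block-to-object mapping just described, follows. The bucket name, the uuid-based globally unique identifier, and the use of boto3's put_object are illustrative assumptions, with the S3 API being one of the object-level protocols the passage names.

```python
# Sketch of an object-level write that also records the block -> object map.
import uuid

import boto3

s3 = boto3.client("s3")
block_to_object: dict[int, str] = {}  # block number -> object identifier


def write_object(bucket: str, block_number: int, data: bytes,
                 metadata: dict[str, str]) -> str:
    # Objects carry a globally unique identifier rather than a path or
    # block number; the object holds the data plus its associated metadata.
    object_id = str(uuid.uuid4())
    s3.put_object(Bucket=bucket, Key=object_id, Body=data, Metadata=metadata)
    block_to_object[block_number] = object_id
    return object_id
```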
For further explanation, FIG. 12 illustrates an example virtual storage system architecture 1200 in accordance with some embodiments. The virtual storage system architecture may include similar cloud-based computing resources as the cloud-based storage systems described above with reference to FIGS. 4-11.

As described above with reference to FIGS. 1A-3E, in some embodiments a physical storage system may include one or more controllers providing storage services to one or more hosts, with the physical storage system including durable storage devices, such as solid state drives or hard disks, and also including some fast durable storage, such as NVRAM. In some examples, the fast durable storage may be used for staging or transactional commits or for speeding up acknowledgement of operation durability to reduce latency for host requests. Generally, fast durable storage is often used for intent logging, fast completions, or quickly ensuring transactional consistency, where such (and similar) purposes are referred to herein as staging memory.

Generally, both physical and virtual storage systems may have one or more controllers, and may have specialized storage components, such as, in the case of physical storage systems, specialized storage devices. Further, in some cases, in physical and virtual storage systems, staging memory may be organized and reorganized in a variety of ways, such as in examples described later. In some examples, in whatever way that memory components or memory devices are constructed, generated, or organized, there may be a set of storage system logic that executes to implement a set of advertised storage services and that stores bulk data for indefinite durations, and there may also be some quantity of staging memory.

In some examples, controller logic that operates a physical storage system, such as the physical storage systems described with reference to FIGS. 1A-3E, may be carried out within a virtual storage system by providing suitable virtual components to, individually or in the aggregate, serve as substitutes for hardware components in a physical storage system, where the virtual components are configured to operate the controller logic and to interact with other virtual components that are configured to replace physical components other than the controller. Continuing with this example, virtual components, executing controller logic, may implement and/or adapt high availability models used to keep a virtual storage system operating in case of failures. As another example, virtual components, executing controller logic, may implement protocols to keep the virtual storage system from losing data in the face of transient failures that may exceed what the virtual storage system may tolerate while continuing to operate.

In some implementations, and particularly with regard to the various virtual storage system architectures described with reference to FIGS. 12-17, a computing environment may include a set of available, advertised constructs that are typical to cloud-based infrastructure-as-a-service platforms, such as cloud infrastructures provided by Amazon Web Services™, Microsoft Azure™, and/or Google Cloud Platform™. In some implementations, example constructs and construct characteristics within such cloud platforms may include:
Compute instances, where a compute instance may execute or run as virtual machines flexibly allocated to physical host servers;
Division of computing resources into separate geographic regions, where computing resources may be distributed or divided among separate geographic regions, such that users within the same region or zone as a given cloud computing resource may experience faster and/or higher bandwidth access as compared to users in a different region or zone than the computing resources;
Division of resources within geographic regions into "availability" zones with separate availability and survivability in cases of wide-scale data center outages, network failures, power grid failures, administrative mistakes, and so on. Further, in some examples, resources within a particular cloud platform that are in separate availability zones within a same geographic region generally have fairly high bandwidth and reasonably low latency between each other;
Local instance storage, such as hard drives, solid-state drives, or rack-local storage, that may provide private storage to a compute instance.
Other examples of local instance storage are described above with reference to FIGS. 4-11;
Block stores that are relatively high-speed and durable, and which may be connected to a virtual machine, but whose access may be migrated. Some examples include EBS (Elastic Block Store™) in AWS, Managed Disks in Microsoft Azure™, and Compute Engine persistent disks in Google Cloud Platform™. EBS in AWS operates within a single availability zone, but is otherwise reasonably reliable and available, and intended for long-term use by compute instances, even if those compute instances can move between physical systems and racks;
Object stores, such as Amazon S3™ or an object store using a protocol derived from, or compatible with, S3, or that has some similar characteristics to S3 (for example, Microsoft's Azure Blob Storage™). Generally, object stores are very durable, surviving widespread outages through inter-availability zone and cross-geography replication;
Cloud platforms, which may support a variety of object stores or other storage types that may vary in their combinations of capacity prices, access prices, expected latency, expected throughput, availability guarantees, or durability guarantees. For example, in AWS™, Standard and Infrequent Access S3 storage classes (referenced herein as standard and write-mostly storage classes) differ in availability (but not durability) as well as in capacity and access prices (with the infrequent access storage tier being less expensive on capacity, but more expensive for retrieval, and with 1/10th the expected availability). Infrequent Access S3 also supports an even less expensive variant that is not tolerant to complete loss of an availability zone, which is referred to herein as a single-availability-zone durable store. AWS further supports archive tiers such as Glacier™ and Deep Glacier™ that provide their lowest capacity prices, but with very high access latency on the order of minutes to hours for Glacier, and up to 12 hours with limits on retrieval frequency for Deep Glacier. Glacier and Deep Glacier are referred to herein as examples of archive and deep archive storage classes;
Databases, and often multiple different types of databases, including high-scale key-value store databases with reasonable durability (similar to high-speed, durable block stores) and convenient sets of atomic update primitives. Some examples of durable key-value databases include AWS DynamoDB™, Google Cloud Platform Big Table™, and/or Microsoft Azure's CosmoDB™; and
Dynamic functions, such as code snippets that can be configured to run dynamically within the cloud platform infrastructure in response to events or actions associated with the configuration. For example, in AWS, these dynamic functions are called AWS Lambdas™, and Microsoft Azure and Google Cloud Platform refer to such dynamic functions as Azure Functions™ and Cloud Functions™, respectively.

In some implementations, local instance storage is not intended to be provisioned for long-term use, and in some examples, local instance storage may not be migrated as virtual machines migrate between host systems. In some cases, local instance storage may also not be shared between virtual machines, and may come with few durability guarantees due to its local nature (likely surviving local power and software faults, but not necessarily more widespread failures).
Further, in some examples, local instance storage, as compared to object storage, may be reasonably inexpensive and may not be billed based on I/Os issued against it, which is often the case with the more durable block storage services.

In some implementations, objects within object stores are easy to create (for example, a web service PUT operation to create an object with a name within some bucket associated with an account) and to retrieve (for example, a web service GET operation), and parallel creates and retrievals across a sufficient number of objects may yield enormous bandwidth. However, in some cases, latency is generally very poor, and modifications or replacement of objects may complete in unpredictable amounts of time, or it may be difficult to determine when an object is fully durable and consistently available across the cloud platform infrastructure. Further, generally, availability, as opposed to durability, of object stores is often low, which is often an issue with many services running in cloud environments.

In some implementations, as an example baseline, a virtual storage system may include one or more of the following virtual components and concepts for constructing, provisioning, and/or defining a virtual storage system built on a cloud platform:
Virtual controllers, such as a virtual storage system controller running on a compute instance within a cloud platform's infrastructure or cloud computing environment. In some examples, a virtual controller may run on virtual machines, in containers, or on bare metal servers;
Virtual drives, where a virtual drive may be a specific storage object that is provided to a virtual storage system controller to represent a dataset; for example, a virtual drive may be a volume or an emulated disk drive that within the virtual storage system may serve analogously to a physical storage system "storage device". Further, virtual drives may be provided to virtual storage system controllers by "virtual drive servers";
Virtual drive servers, which may be implemented by compute instances, where virtual drive servers may present storage, such as virtual drives, out of available components provided by a cloud platform, such as various types of local storage options, and where virtual drive servers implement logic that provides virtual drives to one or more virtual storage system controllers, or in some cases, provides virtual drives to one or more virtual storage systems;
Staging memory, which may be fast and durable, or at least reasonably fast and reasonably durable, where reasonably durable may be specified according to a durability metric, and where reasonably fast may be specified according to a performance metric, such as IOPS;
Virtual storage system datasets, where a dataset may be a defined collection of data and metadata representing coherently managed content that comprises a collection of file systems, volumes, objects, and other similar addressable portions of memory;
Object storage, which may provide back-end, durable object storage to the staging memory. As illustrated in FIG. 12, cloud-based object storage 432 may be managed by the virtual drives 1210-1216;
Segments, which may be specified as medium-sized chunks of data.
For example, a segment may be defined to be within a range of 1 MB-64 MB, where a segment may hold a combination of data and metadata; and
Virtual storage system logic, which may be a set of algorithms running at least on the one or more virtual controllers 408, 410, and in some cases, with some virtual storage system logic also running on one or more virtual drives 1210-1216.

In some implementations, a virtual controller may take in or receive I/O operations and/or configuration requests from client hosts 1260, 1262 (possibly through intermediary servers, not depicted) or from administrative interfaces or tools, and then ensure that I/O requests and other operations run through to completion. In some examples, virtual controllers may present file systems, block-based volumes, object stores, and/or certain kinds of bulk storage databases or key/value stores, and may provide data services such as snapshots, replication, migration services, provisioning, host connectivity management, deduplication, compression, encryption, secure sharing, and other such storage system services.

In the example virtual storage system 1200 architecture illustrated in FIG. 12, the virtual storage system 1200 includes two virtual controllers, where one virtual controller is running within one availability zone, zone 1251, and another virtual controller is running within another availability zone, zone 1252. In this example, the two virtual controllers are depicted as, respectively, storage controller application 408 running within cloud computing instance 404 and storage controller application 410 running within cloud computing instance 406.

In some implementations, a virtual drive server, as discussed above, may represent to a host something similar to a physical storage device, such as a disk drive or a solid-state drive, where the physical storage device is operating within the context of a physical storage system. However, while in this example the virtual drive presents to a host similarly to a physical storage device, the virtual drive is implemented by a virtual storage system architecture, where the virtual storage system architecture may be any of those depicted among FIGS. 4-16. Further, in contrast to virtual drives, which have as an analog a physical storage device, a virtual drive server, as implemented within the example virtual storage system architectures, may not have an analog within the context of a physical storage system. Specifically, in some examples, a virtual drive server may implement logic that goes beyond what is typical of storage devices in physical storage systems, and may in some cases rely on atypical storage system protocols between the virtual drive server and virtual storage system controllers that do not have an analog in physical storage systems. However, conceptually, a virtual drive server may share similarities with scale-out, shared-nothing, or software-defined storage systems.

In some implementations, with reference to FIG. 12, the respective virtual drive servers 1210-1216 may implement respective software applications or daemons 1230-1236 to provide virtual drives whose functionality is similar or even identical to that of a physical storage device, which allows for greater ease in porting storage system software or applications that are designed for physical storage systems. For example, the daemons could implement a standard SAS, SCSI or NVMe protocol, or they could implement these protocols with minor or significant non-standard extensions.
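A minimal sketch of the kind of daemon (1230-1236) a virtual drive server might run to present a virtual drive to virtual storage system controllers follows. The one-byte read/write opcodes and socket framing are illustrative assumptions standing in for the SAS, SCSI, or NVMe protocols mentioned above, and an in-memory buffer stands in for local instance storage.

```python
# Sketch of a virtual drive daemon exposing device-like block read/write.
import socketserver
import struct

BLOCK = 4096
backing = bytearray(1024 * BLOCK)  # stands in for local instance storage


class VirtualDriveHandler(socketserver.StreamRequestHandler):
    def handle(self) -> None:
        # command frame: 1-byte opcode ('R'/'W'), 8-byte block number
        header = self.rfile.read(9)
        opcode, block_no = struct.unpack(">cQ", header)
        offset = block_no * BLOCK
        if opcode == b"R":
            self.wfile.write(backing[offset:offset + BLOCK])
        elif opcode == b"W":
            # a production daemon would loop until a full block arrives
            backing[offset:offset + BLOCK] = self.rfile.read(BLOCK)


if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 9000), VirtualDriveHandler) as srv:
        srv.serve_forever()
```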
In some implementations, with reference to FIG. 12, staging memory may be implemented by one or more virtual drives 1210-1216, where the one or more virtual drives 1210-1216 store data within respective block-store volumes 1240-1246 and local storage 1220-1226. In this example, the block storage volumes may be AWS EBS volumes that may be attached, one after another, as depicted in FIG. 12, to two or more other virtual drives. As illustrated in FIG. 12, block storage volume 1240 is attached to virtual drive 1212, block storage volume 1242 is attached to virtual drive 1214, and so on.

In some implementations, a segment may be specified to be part of an erasure coded set, such as based on a RAID-style implementation, where a segment may store calculated parity content based on erasure codes (e.g., RAID-5 P and Q data) computed from content of other segments. In some examples, contents of segments may be created once, and after the segment is created and filled in, not modified until the segment is discarded or garbage collected.

In some implementations, virtual storage system logic may also run on other virtual storage system components, such as dynamic functions. Virtual storage system logic may provide a complete implementation of the capabilities and services advertised by the virtual storage system 1200, where the virtual storage system 1200 uses one or more available cloud platform components, such as those described above, to implement these services reliably and with appropriate durability.

While the example virtual storage system 1200 illustrated in FIG. 12 includes two virtual controllers, more generally, other virtual storage system architectures may have more or fewer virtual controllers, as illustrated in FIGS. 13-16. Further, in some implementations, and similar to the physical storage systems described in FIGS. 1A-4, a virtual storage system may include an active virtual controller and one or more passive virtual controllers.

For further explanation, FIG. 13 illustrates an example virtual storage system architecture 1300 in accordance with some embodiments. The virtual storage system architecture may include similar cloud-based computing resources as the cloud-based storage systems described above with reference to FIGS. 4-12. In this implementation, a virtual storage system may run virtual storage system logic, as specified above with reference to FIG. 12, concurrently on multiple virtual controllers, such as by dividing up a dataset or by careful implementation of concurrent distributed algorithms. In this example, the multiple virtual controllers 1320, 408, 410, 1322 are implemented within respective cloud computing instances 1310, 404, 406, 1312.

As described above with reference to FIG. 12, in some implementations, a particular set of hosts may be directed preferentially or exclusively to a subset of virtual controllers for a dataset, while a particular different set of hosts may be directed preferentially or exclusively to a different subset of controllers for that same dataset. For example, SCSI ALUA (Asymmetric Logical Unit Access), or NVMe ANA (Asymmetric Namespace Access), or some similar mechanism, could be used to establish preferred (sometimes called "optimized") path preferences from one host to a subset of controllers, where traffic is generally directed to the preferred subset of controllers but where, such as in the case of faulted requests or network failures or virtual storage system controller failures, that traffic could be redirected to a different subset of virtual storage system controllers.
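A minimal sketch of the preferred-path behavior just described follows: each host tries its preferred ("optimized") subset of virtual storage controllers first and is redirected to the non-preferred subset on faulted requests. The host and controller names mirror the figure's reference numerals, but the dispatch logic is an illustrative assumption, not the ALUA/ANA protocol itself.

```python
# Sketch of ALUA/ANA-style path preference with fallback on faults.
preferred = {"host1260": ["ctrl1320", "ctrl408"],
             "host1262": ["ctrl410", "ctrl1322"]}
non_preferred = {"host1260": ["ctrl410", "ctrl1322"],
                 "host1262": ["ctrl408", "ctrl1320"]}


def submit_io(host: str, request: bytes, send) -> bytes:
    """send(controller, request) is the transport; it raises on a fault."""
    for controller in preferred[host] + non_preferred[host]:
        try:
            return send(controller, request)
        except IOError:
            continue  # redirect to the next controller, preferred ones first
    raise IOError("no virtual storage system controller could be reached")
```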
Alternatively, SCSI/NVMe volume advertisements or network restrictions, or some similar mechanism, could force all traffic from a particular set of hosts exclusively to one subset of controllers, or could force traffic from a different particular set of hosts to a different subset of controllers. As illustrated in FIG. 13, a virtual storage system may preferentially or exclusively direct I/O requests from host 1260 to virtual storage controllers 1320 and 408, with storage controllers 410 and perhaps 1322 potentially being available to host 1260 for use in cases of faulted requests, and may preferentially or exclusively direct I/O requests from host 1262 to virtual storage controllers 410 and 1322, with storage controllers 408 and perhaps 1320 potentially being available to host 1262 for use in cases of faulted requests. In some implementations, a host may be directed to issue I/O requests to one or more virtual storage controllers within the same availability zone as the host, with virtual storage controllers in a different availability zone from the host being available for use in cases of faults.

For further explanation, FIG. 14 illustrates an example virtual storage system architecture 1400 in accordance with some embodiments. The virtual storage system architecture may include similar cloud-based computing resources as the cloud-based storage systems described above with reference to FIGS. 4-13.

In some implementations, boundaries between virtual controllers and virtual drive servers that host virtual drives may be flexible. Further, in some examples, the boundaries between virtual components may not be visible to client hosts 1450a-1450p, and client hosts 1450a-1450p may not detect any distinction between two differently architected virtual storage systems that provide the same set of storage system services. For example, virtual controllers and virtual drives may be merged into a single virtual entity that may provide similar functionality to a traditional, blade-based scale-out storage system. In this example, virtual storage system 1400 includes n virtual blades, virtual blades 1402a-1402n, where each respective virtual blade 1402a-1402n may include a respective virtual controller 1404a-1404n, and also include respective local storage 1220-1226, 1240-1246, but where the storage function may make use of a platform-provided object store, as might be the case with virtual drive implementations described previously.

In some implementations, because virtual drive servers support general-purpose compute, this virtual storage system architecture supports functions migrating between virtual storage system controllers and virtual drive servers. Further, in other cases, this virtual storage system architecture supports other kinds of optimizations, such as optimizations described above that may be performed within staging memory. Further, virtual blades may be configured with varying levels of processing power, where the performance specifications of a given one or more virtual blades may be based on expected optimizations to be performed.

For further explanation, FIG. 15 illustrates an example virtual storage system architecture 1500 in accordance with some embodiments. The virtual storage system architecture may include similar cloud-based computing resources as the cloud-based storage systems described above with reference to FIGS. 4-14.
In this implementation, a virtual storage system 1500 may be adapted to different availability zones, where such a virtual storage system 1500 may use cross-storage-system synchronous replication logic to isolate as many parts of an instance of a virtual storage system as possible within one availability zone. For example, the presented virtual storage system 1500 may be constructed from a first virtual storage system 1502 in one availability zone, zone 1, that synchronously replicates data to a second virtual storage system 1504 in another availability zone, zone 2, such that the presented virtual storage system can continue running and providing its services even in the event of a loss of data or availability in one availability zone or the other. Such an implementation could be further implemented to share use of durable objects, such that the storing of data into the object store is coordinated so that the two virtual storage systems do not duplicate the stored content. Further, in such an implementation, the two synchronously replicating storage systems may synchronously replicate updates to the staging memories and perhaps local instance stores within each of their availability zones, to greatly reduce the chance of data loss, while coordinating updates to object stores as a later asynchronous activity to greatly reduce the cost of capacity stored in the object store.

In this example, virtual storage system 1504 is implemented within cloud computing environment 1501. Further, in this example, virtual storage system 1502 may use cloud-based object storage 1550, and virtual storage system 1504 may use cloud-based object storage 1552, where in some cases, such as AWS S3, the different object storages 1550, 1552 may be the same cloud object storage with different buckets. Continuing with this example, virtual storage system 1502 may, in some cases, synchronously replicate data to other virtual storage systems, or physical storage systems, in other availability zones (not depicted).

In some implementations, the virtual storage system architectures of virtual storage systems 1502 and 1504 may be distinct, and even incompatible, where synchronous replication may depend instead on the synchronous replication models being protocol compatible. Synchronous replication is described in greater detail above with reference to FIGS. 3D and 3E. In some implementations, virtual storage system 1502 may be implemented similarly to virtual storage system 1400, described above with reference to FIG. 14, and virtual storage system 1504 may be implemented similarly to virtual storage system 1200, described above with reference to FIG. 12.
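A hedged sketch of the zone-to-zone arrangement above follows: a write is acknowledged only after both virtual storage systems have durably staged it, while the object-store update is deferred and coordinated so the shared content is not stored twice. The stage() method and the queue are illustrative assumptions about the two systems' interfaces.

```python
# Sketch of synchronous staging across two availability zones with a
# coordinated, asynchronous object-store update.
def replicated_write(data: bytes, zone1_system, zone2_system,
                     async_object_queue: list) -> None:
    # Synchronous part: both availability zones must durably stage the data
    # before the host sees an acknowledgement; stage() is assumed to block
    # until the data is durable in that zone's staging memory.
    zone1_system.stage(data)
    zone2_system.stage(data)
    # Asynchronous part: exactly one coordinated object-store update, so the
    # shared object storage holds a single copy of the content.
    async_object_queue.append(data)
```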
For further explanation, FIG. 16 illustrates an example virtual storage system architecture 1600 in accordance with some embodiments. The virtual storage system architecture may include similar cloud-based computing resources as the cloud-based storage systems described above with reference to FIGS. 4-15. In some implementations, similar to the example virtual storage system 1500 described above with reference to FIG. 15, a virtual storage system 1600 may include multiple virtual storage systems 1502, 1504 that coordinate to perform synchronous replication from one virtual storage system to another virtual storage system. However, in contrast to the example virtual storage system 1500 described above, the virtual storage system 1600 illustrated in FIG. 16 provides a single cloud-based object storage 1650 that is shared among the virtual storage systems 1502, 1504.

In this example, the shared cloud-based object storage 1650 may be treated as an additional data replica target, with delayed updates using mechanisms and logic associated with consistent, but non-synchronous, replication models. In this way, a single cloud-based object storage 1650 may be shared consistently between multiple, individual virtual storage systems 1502, 1504 of a virtual storage system 1600.

In each of these example virtual storage systems, virtual storage system logic may generally incorporate distributed programming concepts to carry out the implementation of the core logic of the virtual storage system. In other words, as applied to the virtual storage systems, the virtual system logic may be distributed between virtual storage system controllers, scale-out implementations that combine virtual system controllers and virtual drive servers, and implementations that split or otherwise optimize processing between the virtual storage system controllers and virtual drive servers.

For further explanation, FIG. 17 sets forth a flow chart illustrating an example method of data flow within a virtual storage system 1700. The example method depicted in FIG. 17 may be implemented on any of the virtual storage systems described above with reference to FIGS. 12-16. In other words, virtual storage system 1700 may be implemented by any of virtual storage systems 1200, 1300, 1400, 1500, or 1600.

As depicted in FIG. 17, the example method includes receiving (1702), by a virtual storage system 1700, a request to write data to the virtual storage system 1700; storing (1704), within staging memory provided by one or more virtual drives of the virtual storage system 1700, the data 1754; and migrating (1706), from the staging memory to more durable data storage provided by a cloud service provider, at least a portion of data stored within the staging memory.

Receiving (1702), by the virtual storage system 1700, the request to write data to the virtual storage system 1700 may be carried out as described above with reference to FIGS. 4-16, where the data may be included within one or more received storage operations 1752, and the request may be received using one or more communication protocols, or one or more API calls provided by a cloud computing environment 402 that is hosting the virtual storage system 1700.

Storing (1704), within staging memory provided by one or more virtual drives of the virtual storage system 1700, the data 1754 may be carried out as described above with reference to virtual storage systems 1200-1600, where a virtual storage system, for example, virtual storage system 1200, receives data from a client host 1260 at a virtual controller 408, 410, and where the virtual controller 408, 410 stores the data among the local storage of the layer of virtual drives 1210-1216. Staging memory provided by virtual drives is described in greater detail above with reference to FIG. 12.

Migrating (1706), from the staging memory to more durable data storage provided by a cloud service provider, at least a portion of data stored within the staging memory may be carried out as described above with reference to FIGS. 4-16, where data is migrated from staging memory to a cloud-based object storage. Additional examples of receiving data and storing the data within staging memory, and subsequently migrating data from staging memory to more durable storage, are described within co-pending patent application Ser. No. 16/524,861, which is incorporated in its entirety for all purposes herein.
Specifically, the migration techniques described in co-pending patent application Ser. No. 16/524,861 describe storing data within staging memory, also referred to as a first tier of storage, and optionally processing, modifying, or optimizing the data within the staging memory before, based on a migration event, the staging memory data is migrated to more durable memory, such as cloud-based object storage.

For further explanation, FIG. 18 sets forth a flow chart illustrating an example method of data flow within a virtual storage system 1700. The example method depicted in FIG. 18 may be implemented by any of the virtual storage systems described above with reference to FIGS. 4-16. In other words, virtual storage system 1700 may be implemented at least by virtual storage system 1200, 1300, 1400, 1500, 1502, 1504, or 1600, either individually or by a combination of individual features.

The above example with regard to FIG. 17 describes an implementation of data flow through storage tiers of a virtual storage system, and more specifically, data flowing from staging memory to more durable object storage. However, more generally, data flow through a virtual storage system may occur in stages between any pair of multiple, different tiers of storage. Specifically, in this example, the different tiers of storage may be: (1) virtual controller storage, (2) staging memory for transactional consistency and fast completions, (3) storage within virtual drives provided by virtual drive servers, (4) virtual drive server local instance store(s), and (5) an object store that is provided by a cloud services provider.

As depicted in FIG. 18, the example method includes: receiving (1802), by a virtual storage system 1700, a request to write data to the virtual storage system 1700; storing (1804), within storage provided by a first tier of storage of the virtual storage system 1700, the data 1854; and migrating (1806), from the first tier of storage to a second tier of storage, at least a portion of data stored within the first tier of storage.

Receiving (1802), by the virtual storage system 1700, the request to write data 1854 to the virtual storage system 1700 may be carried out as described above with reference to FIGS. 4-17, where the data may be included within one or more storage operations 1852 received from a host computer or application, and the request may be received using one or more communication protocols, or one or more API calls provided by a cloud computing environment 402 that is hosting the virtual storage system 1700.

Storing (1804), within storage provided by a first tier of storage of the virtual storage system 1700, the data 1854 may be carried out as described above with reference to FIGS. 4-17, where one or more virtual controllers may be configured to receive and handle storage operations 1852, including processing write requests and storing corresponding write data into one or more storage tiers of the virtual storage system 1700. Five example storage tiers of the virtual storage system are described above, with reference to the beginning description for FIG. 18.

Migrating (1806), from the first tier of storage to a second tier of storage, at least a portion of data stored within the first tier of storage may be carried out as described above with regard to movement of data through various tiers of storage.
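A minimal sketch of the staged flow of steps (1804) and (1806) follows, assuming just two tiers and a simple size threshold as the migration event; both are illustrative stand-ins for the five tiers listed above and for whatever migration events an implementation actually uses.

```python
# Sketch of a tiered write path: land data in a first tier, migrate to a
# second tier when a migration event (here, a size threshold) fires.
MIGRATION_THRESHOLD = 64 * 1024 * 1024  # e.g., one segment's worth of data

staging: list[bytes] = []  # first tier of storage (staging memory)
staged_bytes = 0


def write(data: bytes, object_tier) -> None:
    """object_tier.put(bytes) is an assumed second-tier interface."""
    global staged_bytes
    staging.append(data)            # (1804) store in the first tier
    staged_bytes += len(data)
    if staged_bytes >= MIGRATION_THRESHOLD:    # a migration event
        object_tier.put(b"".join(staging))     # (1806) migrate to second tier
        staging.clear()
        staged_bytes = 0
```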
Further, in some examples, as described above, data may be transformed in various ways at one or more of the storage tiers, including deduplication, overwriting, aggregating into segments, and generating recovery metadata or continuous-data-protection metadata, among other transformations, as data flows from the one or more virtual controllers through the virtual storage system 1700 into backend storage, including one or more of object storage and any of the storage class options described below.

A virtual storage system may dynamically adjust cloud platform resource usage in response to changes in cost requirements based upon cloud platform pricing structures, as described in greater detail below. Under various conditions, budgets, capacities, usage and/or performance needs may change, and a user may be presented with cost projections and a variety of costing scenarios that may include increasing a number of server or storage components, the available types of components, the platforms that may provide suitable components, and/or models for both how alternatives to a current setup might work and cost in the future. In some examples, such cost projections may include costs of migrating between alternatives, given that network transfers incur a cost, that migrations tend to include administrative overhead, and that, for the duration of a transfer of data between types of storage or vendors, additional total capacity may be needed until the necessary services are fully operational.

Further, in some implementations, instead of pricing out what is being used and providing options for configurations based on potential costs, a user may instead provide a budget, or otherwise specify an expense threshold, and the storage system service may generate a virtual storage system configuration with specified resource usage such that the storage system service operates within the budget or expense threshold.

Continuing with this example of a storage system service operating within a budget or expense threshold: with regard to compute resources, while limiting compute resources limits performance, costs may be managed based on modifying configurations of virtual application servers, virtual storage system controllers, and other virtual storage system components by adding, removing, or replacing with faster or slower virtual storage system components. In some examples, if costs or budgets are considered over given lengths of time, such as monthly, quarterly, or yearly billing, then by ratcheting down the cost of virtual compute resources in response to lowered workloads, more compute resources may be made available in response to increases in workloads. Further, in some examples, in response to determining that given workloads may be executed at flexible times, those workloads may be scheduled to execute during periods of time that are less expensive for operating or initiating compute resources within the virtual storage system. In some examples, costs and usage may be monitored over the course of a billing period to determine whether usage earlier in the billing period may affect the ability to run at expected or acceptable performance levels later in the billing period, or whether lower than expected usage during parts of a billing period suggests there is sufficient budget remaining to run optional work or suggests that renegotiating terms would reduce costs.
Continuing with this example, such a model of dynamic adjustments to a virtual storage system in response to cost or resource constraints may be extended from compute resources to also include storage resources. However, a different consideration for storage resources is that storage resources have less elastic costs than compute resources because stored data continues to occupy storage resources over a given period of time. Further, in some examples, there may be transfer costs within cloud platforms associated with migrating data between storage services that have different capacity and transfer prices. Each of these costs of maintaining virtual storage system resources must be considered and may serve as a basis for configuring, deploying, and modifying compute and/or storage resources within a virtual storage system.

In some cases, the virtual storage system may adjust in response to storage costs based on cost projections that compare the cost of continuing to use existing storage resources with a combination of the transfer costs of the storage content and the storage costs of less expensive storage resources (such as storage provided by a different cloud platform, or to or from storage hardware in customer-managed data centers, or to or from customer-managed hardware kept in a collocated shared management data center). In this way, over a given time span that is long enough to support data transfers, and in some cases based on predictable use patterns, a budget-limit-based virtual storage system model may adjust in response to different cost or budget constraints or requirements.

In some implementations, as capacity grows in response to an accumulation of stored data, and as workloads, over a period of time, fluctuate around some average or trend line, a dynamically configurable virtual storage system may calculate whether transferring an amount of data to some less expensive type of storage class or less expensive location of storage may be possible within a given budget or within a given budget change. In some examples, the virtual storage system may determine storage transfers based on costs over a period of time that includes a billing cycle or multiple billing cycles, and in this way prevent a budget or cost from being exceeded in a subsequent billing cycle.

In some implementations, a cost-managed or cost-constrained virtual storage system, in other words, a virtual storage system that reconfigures itself in response to cost constraints or other resource constraints, may also make use of write-mostly, archive, or deep archive storage classes that are available from cloud infrastructure providers. Further, in some cases, the virtual storage system may operate in accordance with the models and limitations described elsewhere with regard to implementing a storage system to work with differently behaving storage classes. For example, a virtual storage system may make automatic use of a write-mostly storage class based on a determination that a cost or budget may be saved and reused for other purposes if data that is determined to have a low likelihood of access is consolidated, such as into segments that consolidate data with similar access patterns or similar access likelihood characteristics. Further, in some cases, consolidated segments of data may then be migrated to a write-mostly storage class, or other lower cost storage class.
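A hedged sketch of the cost projection described above follows: continuing to keep a dataset on its current storage class is compared against a one-time transfer cost plus the cheaper class's ongoing cost over a planning horizon. All prices are caller-supplied placeholders, not real platform prices.

```python
# Sketch of the stay-versus-migrate cost comparison over a planning horizon.
def migration_saves_money(capacity_gb: float,
                          current_price_gb_month: float,
                          target_price_gb_month: float,
                          transfer_price_gb: float,
                          horizon_months: float) -> bool:
    # Cost of keeping the data where it is for the whole horizon.
    stay_cost = capacity_gb * current_price_gb_month * horizon_months
    # One-time transfer cost plus the cheaper class's ongoing cost.
    move_cost = (capacity_gb * transfer_price_gb
                 + capacity_gb * target_price_gb_month * horizon_months)
    return move_cost < stay_cost


# Example: a short horizon may not amortize the transfer cost, while a
# longer horizon for the same dataset may.
# migration_saves_money(1000, 0.02, 0.01, 0.05, 3)   -> likely False
# migration_saves_money(1000, 0.02, 0.01, 0.05, 12)  -> likely True
```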
In some examples, use of local instance stores on virtual drives may result in cost reductions that allow virtual storage system resource adjustments that result in reducing costs to satisfy cost or budget change constraints. In some cases, the local instance stores may use write-mostly object stores as a backend, and because read load is often taken up entirely by the local instance stores, the local instance stores may operate mostly as a cache rather than storing complete copies of a current dataset.

In some examples, a single-availability-zone durable store may also be used if a dataset can be identified that is not required or expected to survive loss of an availability zone, and such use may serve as a cost savings basis in dynamically reconfiguring a virtual storage system. In some cases, use of a single availability zone for a dataset may include an explicit designation of the dataset, or an indirect designation through some storage policy. Further, the designation or storage policy may also include an association with a specific availability zone; however, in some cases, the specific availability zone may be determined by a dataset association with, for example, host systems that are accessing a virtual storage system from within a particular availability zone. In other words, in this example, the specific availability zone may be determined to be the same availability zone that includes a host system.

In some implementations, a virtual storage system may base a dynamic reconfiguration on use of archive or deep archive storage classes, if the virtual storage system is able to provide or satisfy performance requirements while storage operations are limited by the constraints of archive and/or deep archive storage classes. Further, in some cases, old snapshot or continuous data protection datasets, or other datasets that are no longer active, may be transferred to archive storage classes based on a storage policy specifying a data transfer in response to a particular activity level, or based on a storage policy specifying a data transfer in response to data not being accessed for a specified period of time. In other examples, the virtual storage system may transfer data to an archive storage class in response to a specific user request. Further, given that retrieval from an archive storage class may take minutes, hours, or days, users of the particular dataset being stored in an archive or deep archive storage class may be requested by the virtual storage system to provide specific approval of the time required to retrieve the dataset. In some examples, in the case of using deep archive storage classes, there may also be limits on how frequently data access is allowed, which may put further constraints on the circumstances in which the dataset may be stored in archive or deep archive storage classes.

Implementing a virtual storage system to work with differently behaving storage classes may be carried out using a variety of techniques, as described in greater detail below. In various implementations, some types of storage, such as a write-mostly storage class, may have lower prices for storing and keeping data than for accessing and retrieving data. In some examples, if data can be identified or determined to be rarely retrieved, or retrieved below a specified threshold frequency, then costs may be reduced by storing the data within a write-mostly storage class.
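A minimal sketch of such a storage policy follows, in which a dataset's time since last access selects its storage class; the thresholds are illustrative assumptions a real policy would make configurable.

```python
# Sketch of an access-age-driven storage-class policy.
from datetime import datetime, timedelta

WRITE_MOSTLY_AFTER = timedelta(days=30)    # illustrative thresholds
ARCHIVE_AFTER = timedelta(days=180)
DEEP_ARCHIVE_AFTER = timedelta(days=365)


def storage_class_for(last_access: datetime, now: datetime) -> str:
    idle = now - last_access
    if idle >= DEEP_ARCHIVE_AFTER:
        return "deep-archive"
    if idle >= ARCHIVE_AFTER:
        return "archive"
    if idle >= WRITE_MOSTLY_AFTER:
        return "write-mostly"
    return "standard"
```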
In some cases, such a write-mostly storage class may become an additional tier of storage that may be used by virtual storage systems with access to one or more cloud infrastructures that provide such storage classes. For example, a storage policy may specify that a write-mostly storage class, or other archive storage class, may be used for storing segments of data from snapshots, checkpoints, or historical continuous data protection datasets that have been overwritten or deleted from recent instances of the datasets they track. Further, in some cases, these segments may be transferred based on exceeding a time limit without being accessed, where the time limit may be specified in a storage policy and corresponds to a low likelihood of retrieval outside of scenarios such as: inadvertent deletion or corruption that may require access to an older historical copy of a dataset; a fault or larger-scale disaster that may require some forensic investigation; a criminal event; an administrative error such as inadvertently deleting more recent data; or the encryption or deletion of part or all of a dataset and its more recent snapshots, clones, or continuous data protection tracking images as part of a ransomware attack.

In some implementations, use of a cloud-platform write-mostly storage class may create cost savings that may then be used to provision compute resources to improve performance of the virtual storage system. In some examples, if a virtual storage system tracks and maintains storage access information, such as by using an age-aware and snapshot/clone/continuous-data-protection-aware garbage collector or a segment consolidation and/or migration algorithm, then the virtual storage system may use a segment model as part of establishing efficient metadata references while minimizing the amount of data transferred to the write-mostly storage class. Further, in some implementations, a virtual storage system that integrates snapshot, clone, or continuous-data-protection tracking information may also reduce the amount of data that must be read back from a write-mostly storage repository: data already resident in less expensive storage classes, such as local instance stores on virtual drives or objects stored in a cloud platform's standard storage class, may be used instead for data that is still available from these local storage sources and has not been overwritten or deleted since the snapshot, clone, or continuous-data-protection recovery point was written to write-mostly storage. Further, in some examples, data retrieved from a write-mostly storage class may be written into some other storage class, such as virtual drive local instance stores, for further use, and in some cases, to avoid being charged again for retrieval.

In some implementations, an additional level of recoverable content may be provided based on the methods and techniques described above with regard to recovering from loss of staging memory content, where the additional level of recoverable content may be used to provide reliability back to some consistent points in the past entirely from data stored in one of these secondary stores, including objects stored in these other storage classes. Further, in this example, recoverability may be based on recording the information necessary to roll back to some consistent point, such as a snapshot or checkpoint, using information that is held entirely within that storage class.
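The segment consolidation step described above can be illustrated with a small sketch that partitions segments by time since last read, keeping recently read segments local and grouping idle segments for migration to a write-mostly storage class. The 30-day cutoff and the segment map layout are assumptions for illustration only:

```python
# Hypothetical sketch of consolidating segments with similar access
# likelihood so that cold segments can be migrated together to a
# write-mostly storage class. The scoring rule is an assumption.
from datetime import datetime, timedelta

def partition_segments(segments, now, cold_after=timedelta(days=30)):
    """Split segments into (hot, cold) by time since last read."""
    hot, cold = [], []
    for seg_id, last_read in segments.items():
        (cold if now - last_read > cold_after else hot).append(seg_id)
    return hot, cold

now = datetime(2023, 6, 1)
segments = {
    "seg-001": datetime(2023, 5, 30),   # recently read -> stays local
    "seg-002": datetime(2023, 1, 2),    # idle -> migrate to write-mostly
}
hot, cold = partition_segments(segments, now)
print("keep local:", hot, "| migrate:", cold)
```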
In some examples, such a recoverability implementation may be based on a storage class including a complete past image of a dataset, instead of only data that has been overwritten or deleted and is therefore no longer present in more recent content of the dataset. While this example implementation may increase costs, in return the virtual storage system may provide a valuable service such as recovery from a ransomware attack, where protection from a ransomware attack may be based on requiring additional levels of permission or access that restrict objects stored in the given storage class from being deleted or overwritten.

In some implementations, in addition to or instead of using a write-mostly storage class, a virtual storage system may also use archive storage classes and/or deep archive storage classes for content that is, relative to write-mostly storage classes, even less likely to be accessed, or that may only be needed in the event of disasters that are expected to be rare but for which a high expense is worth the ability to retrieve the content. Examples of such low-access content may include historical versions of a dataset, or snapshots, or clones that may, for example, be needed in rare instances, such as a discovery phase in litigation or some other similarly rare event, particularly if another party may be expected to pay for retrieval. However, as noted above, keeping historical versions of a dataset, or snapshots, or clones in the event of a ransomware attack may be another example. In some examples, such as in the event of litigation, and to reduce an amount of data stored, a virtual storage system may only store prior versions of data within datasets that have been overwritten or deleted. In other examples, such as in the event of ransomware or disaster recovery, as described above, a virtual storage system may store a complete dataset in an archive or deep archive storage class, in addition to enforcing controls to eliminate the likelihood of unauthorized deletions or overwrites of the objects stored in the given archive or deep archive storage class, including storing any data needed to recover a consistent dataset from at least a few different points in time.

In some implementations, a difference between how a virtual storage system makes use of: (a) objects stored in a write-mostly storage class and (b) objects stored in archive or deep archive storage classes, may lie in how a snapshot, clone, or continuous-data-protection checkpoint that references a given storage class is accessed. In the example of a write-mostly storage class, objects may be retrieved with a similar, or perhaps identical, latency to objects stored in a standard storage class provided by the virtual storage system cloud platform, where the cost for retrieval from the write-mostly storage class may be higher than from the standard storage class. In some examples, a virtual storage system may implement use of the write-mostly storage class as a minor variant of a regular model for accessing content that corresponds to segments currently available only from objects in the standard storage class. In particular, in this example, data may be retrieved when some operation is reading that data, such as by reading from a logical offset of a snapshot of a tracking volume.
In some cases, a virtual storage system may request agreement from a user to pay extra fees for any such retrievals at the time access to the snapshot, or other type of stored image, is requested, and the retrieved data may be stored into local instance stores associated with a virtual drive or copied (or converted) into objects in a standard storage class to avoid continuing to pay the higher retrieval fees associated with the other storage class, which is not otherwise integrated into the architecture of the virtual storage system.

In some implementations, in contrast to the negligible latencies in write-mostly storage classes discussed above, the latencies or procedures associated with retrieving objects from archive or deep archive storage classes may make such an on-demand implementation impractical. In some cases, if it requires hours or days to retrieve objects from an archive or deep archive storage class, then an alternative procedure may be implemented. For example, a user may request access to a snapshot that is known to require at least some segments stored in objects in an archive or deep archive storage class, and in response, instead of reading any such segments on demand, the virtual storage system may determine a list of segments that comprise the requested dataset (or snapshot, clone, or continuous data protection recovery point) and that are stored in objects in the archive or deep archive storage. In this way, in this example, the virtual storage system may request that the segments in the determined list be retrieved and copied into, say, objects in a standard storage class or into virtual drives to be stored in local instance stores. In this example, the retrieval of the list of segments may take hours or days, but from a performance and cost standpoint, it is preferable to request the entire list of segments at once instead of making individual requests on demand. Finishing with this example, after the list of segments has been retrieved from the archive or deep archive storage, access may be provided to the retrieved snapshot, clone, or continuous data protection recovery point.
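Returning to the archive retrieval flow just described, the following sketch illustrates the batch approach: compute the full list of archived segments behind a requested recovery point, issue a single bulk restore, and copy the results into a faster storage class before exposing the dataset. The bulk_restore and copy_to_standard callables are hypothetical stand-ins for platform APIs, not real interfaces:

```python
# Hypothetical sketch of the batch-restore flow: rather than reading
# archived segments on demand, compute the full segment list for a
# requested recovery point, request one bulk restore, and grant access
# only once everything has landed in a faster storage class.

def restore_recovery_point(recovery_point, segment_index, bulk_restore, copy_to_standard):
    """Restore all archived segments behind a snapshot/clone/CDP point."""
    needed = segment_index[recovery_point]          # full list, computed up front
    archived = [s for s in needed if s.startswith("archive/")]
    bulk_restore(archived)                          # single request; may take hours or days
    for seg in archived:
        copy_to_standard(seg)                       # avoid re-paying retrieval later
    return needed                                   # caller may now expose the dataset

segment_index = {"snap-42": ["standard/seg-1", "archive/seg-2", "archive/seg-3"]}
restored = restore_recovery_point(
    "snap-42", segment_index,
    bulk_restore=lambda segs: print("bulk restore requested:", segs),
    copy_to_standard=lambda seg: print("copied to standard class:", seg),
)
```

Requesting the whole list in one bulk operation, rather than faulting segments in one at a time, matches the performance and cost reasoning above.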
Readers will appreciate that although the embodiments described above relate to embodiments in which data that was stored in the portion of the block storage of the cloud-based storage system that has become unavailable is essentially brought back into the block-storage layer of the cloud-based storage system by retrieving the data from the object storage layer of the cloud-based storage system, other embodiments are within the scope of the present disclosure. For example, because data may be distributed across the local storage of multiple cloud computing instances using data redundancy techniques such as RAID, in some embodiments the lost data may be brought back into the block-storage layer of the cloud-based storage system through a RAID rebuild.

Readers will further appreciate that although the preceding paragraphs describe cloud-based storage systems and the operation thereof, the cloud-based storage systems described above may be used to offer block storage as-a-service, as the cloud-based storage systems may be spun up and utilized to provide block storage in an on-demand, as-needed fashion. In such an example, providing block storage as a service in a cloud computing environment can include: receiving, from a user, a request for block storage services; creating a volume for use by the user; receiving I/O operations directed to the volume; and forwarding the I/O operations to a storage system that is co-located with hardware resources for the cloud computing environment.

For further explanation,FIG.19illustrates an example container-based storage system architecture1900in accordance with some embodiments. The container-based storage system architecture may include similar computing and storage resources as the physical storage systems described above with reference toFIGS.1A-3E, cloud-based storage systems described above with reference toFIGS.4-11, or virtual storage systems described above with reference toFIGS.12-18. The example container-based storage system architecture1900ofFIG.19includes a cluster1902of nodes1910-1918that are coupled or couplable to storage resources in backing storage1904and are operable to support the execution of containerized storage controller applications that provide access to the storage resources in the backing storage1904. A node may be, for example, a server attached to the backing storage or a storage system that includes the backing storage. The backing storage1904may include a variety of storage types, classes, and tiers that are accessible by a particular node using various types of connectivity: for example, some nodes may be coupled to a collection of direct-attached disks, whereas others may be coupled to a SAN or to cloud-based storage. In the example ofFIG.19, the backing storage1904includes direct-attached storage resources of one or more nodes (e.g., the direct attached storage1940of node1910), network-attached or SAN storage resources coupled to one or more nodes (e.g., the network-based storage1942-1944coupled to nodes1914,1916), or cloud-based storage resources available to one or more nodes (e.g., cloud-based storage1946available to the cluster1902of nodes1910-1918). The storage resources may include various storage tiers and classes of both physical and cloud-based storage, including block storage, file systems, object storage, bucket storage, archival storage, and others described previously. As will be explained in detail below, the containerized storage controller applications may be deployed on particular nodes to provide data services for the storage resources that are available to those particular nodes. For example, through a network interface of the nodes1910-1918, a cluster of instances of the containerized storage controller applications may provide data services for the disparate storage resources in the backing storage1904.

The nodes1910-1918may be physical or virtual machines that host a container platform1906. In some examples, the container platform is a runtime environment provided by an operating system level virtualization service embodied, for example, as a module of computer software that, when executed on computer hardware, provides a managed environment for the deployment, scaling, and management of containerized applications. Examples of operating system level virtualization services include containerization services such as Docker™, hybrid cloud container orchestration such as Mesosphere™, and container orchestration services such as Kubernetes™. The container platform1906provides a runtime environment for executing containers1950-1958on nodes1910-1916.
The container platform1906may also provide a proxy to allow client host communication with the containers1950-1958. The container platform1906may also include an agent that communicates with a control plane implemented, for example, in a manager node. In the example ofFIG.19, the node1918is a manager node that includes a control plane1980. The control plane1980includes an API server1982that facilitates communication among internal cluster components as well as provides an interface for users/administrators to communicate with the control plane. The control plane1980also includes a cluster manager1984that creates, coordinates, and scales containers, forms and manages a distributed configuration database for the cluster, defines and distributes data services policies, and coordinates cluster activities. The control plane1980also includes a scheduler1986that assigns containers to run on nodes and monitors the resource needs of a container and resources consumed on a node. In assigning containers to run on nodes, the scheduler1986may consider resource requirements of containers and resource availability of nodes, policy constraints defined by the manager1984, data locality for a container's workload, and workload conflicts among containers. The control plane1980may be distributed on multiple nodes, may be implemented on a node that is also a workload node (i.e., a node that runs containers), or may be implemented on a node that is independent of workloads (as illustrated inFIG.19).

The containers1950-1958respectively include a storage controller application such as a data services microcontroller (DSM)1920-1928, which is a stateless data services provider that processes I/O between one or more hosts1960-1964and the backing storage1904. As such, a DSM is a containerized storage controller application that provides storage services to one or more hosts1960-1964for data stored in the backing storage1904. A DSM may provide storage services typical of the storage controllers discussed above, or may provide a subset of these storage services such that a cluster of DSMs collectively provides a microservice architecture for storage services offered to a client host. For example, a single DSM may provide storage services such as read/write access to data in the backing storage1904, data protection and recovery, quality of service (QOS), class of service (COS), encryption, compression, replication, migration, indexing, caching, tiering, and so on. Or, the DSM may provide one of these services (e.g., data protection and recovery) while other DSMs carry out the other services. Storage services configuration data is stored persistently on the backing storage such that, in the event of a failure of a DSM or a scale-out, another DSM may be launched using the storage services configuration data. The nodes1910-1916provide network connectivity for the DSMs1920-1928through which client hosts1960-1964communicate with the DSMs. The nodes1910-1916also provide the DSMs access to the backing storage1904. A single node (e.g., node1910) may host multiple DSMs (e.g., DSMs1920-1922) or a node (e.g., node1916) may host a single DSM (e.g., DSM1928) based on the resources required by a DSM, the resources available on the node, availability requirements, data locality, and other requirements. For example, where a DSM provides a complete set of volume-servicing storage services, DSMs could be provisioned on separate nodes to satisfy availability requirements.
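A minimal sketch of the placement decision just described might weigh a DSM's resource requirements against each node's spare capacity while preferring nodes already attached to the dataset's backing storage. The Node fields and the tie-breaking rule are assumptions, not the behavior of any particular scheduler:

```python
# Hypothetical sketch of a scheduler's placement decision for a DSM:
# pick a node with enough free resources, preferring nodes that already
# hold the dataset's backing storage. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu: float
    free_mem_gib: float
    attached_datasets: frozenset

def place_dsm(nodes, need_cpu, need_mem_gib, dataset):
    """Choose a node for a DSM; favor data locality, then spare capacity."""
    fits = [n for n in nodes if n.free_cpu >= need_cpu and n.free_mem_gib >= need_mem_gib]
    if not fits:
        return None
    local = [n for n in fits if dataset in n.attached_datasets]
    pool = local or fits
    return max(pool, key=lambda n: (n.free_cpu, n.free_mem_gib))

nodes = [Node("node-1910", 2.0, 8.0, frozenset({"vol-a"})),
         Node("node-1914", 6.0, 32.0, frozenset({"vol-b"}))]
chosen = place_dsm(nodes, need_cpu=1.0, need_mem_gib=4.0, dataset="vol-a")
print("deploy DSM on:", chosen.name if chosen else "no capacity")
```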
Where a DSM provides a storage microservice either as part of a cluster of DSMs or as a service offloaded from a volume-servicing DSM, multiple DSMs might be provisioned on the same node. Each DSM1920-1928presents one or more virtualized volumes1930-1938for which it offers the storage and data services. The virtualized volumes1930-1938are a virtualization of storage resources in the backing storage1904, which may be a raw device (e.g., a disk), a filesystem, block storage, an object bucket, or some other structured or unstructured storage construct. As such, the backing storage1904provides the capacity required to store data written by a client host into the virtualized volume. In the example ofFIG.19, the DSMs1920,1922each present a single volume, whereas the DSM1924presents multiple volumes1934-1936. A single DSM can present and manage one or more volumes to one or more front-end client hosts1960-1964, or multiple DSMs can service a single client host for performance or capacity scaling. The virtualized volumes1930-1938are used by a client host1960-1964for storing and accessing data. In some examples, a DSM may not present a volume and may instead perform a service offloaded from a volume-servicing DSM (e.g., indexing, access and security analytics, trending and reporting).

With respect to the virtualized volumes1930-1938and the corresponding data in the backing storage1904, the DSMs1920-1928may provide storage services such as performance services (e.g., QOS, COS, DRAM/SCM caching, intelligent load balancing), mobility services (e.g., non-disruptive migration, multi-platform and cloud mobility), efficiency services (e.g., compression, cloning, scaling, tiering, archiving), data intelligence services (e.g., indexing and search, access and security analytics, trending and reporting), and protection services (e.g., encryption and key management, high availability, erasure coding, disaster recovery, snapshots, cataloging, backup, recovery). In some examples, a virtualized volume1936may be paired with another volume1938to create a replication pair1970, for example, on separate respective nodes1914,1916for high availability. In these examples, the virtualized volume1936and the virtualized volume1938are replicas. In some cases, the DSMs1924,1928may act as active/active storage controllers for the replication pair1970in accordance with multipathing protocols described above. In other cases, the DSM1928may act as a failover controller for the DSM1924with respect to the replication pair1970. In still other cases, where the virtualized volume1938is a clone of the virtualized volume1936, the DSM1928and virtualized volume1938may be used for disaster recovery testing, analytics, application development and testing, and so on.

In some examples, the DSM manager1984creates, coordinates, and scales DSMs1920-1928. Data services processing capacity provided by a DSM deployment can scale linearly with provisioned demand as the virtualized volumes1930-1938can be distributed across massively scalable DSMs1920-1928. Individual DSM deployments can be scaled up or down in resourcing to conserve hardware resources for light workloads or consume additional resources for heavy workloads. In some cases, service-specific DSMs can be deployed as a means to offload services such as data protection, indexing, or archiving from volume-servicing DSMs.
Moreover, latency common in shared-everything scale-out architectures is eliminated as each DSM1920-1928works as an independent entity with cluster communication occurring entirely out of band. DSMs1920-1928can be intelligently provisioned across a large pool of nodes and backing storage1904to ensure balanced capacity and performance, and dynamically migrated to maintain optimum balance as workloads change over time. Management of the DSM cluster1902can be performed through an API or user interface of the cluster manager1984, which stores configuration and policy data on a distributed key-value store and distributes configuration metadata to DSMs1920-1928for persistent storage on the backing storage1904. DSMs1920-1928are managed through cluster configuration to define data services policies, failure domains, compute resources, backing storage resources and location, scheduling, versioning, monitoring, reporting, and security configuration. DSM access and management can be securely isolated and exposed to end users, thus enabling users to directly view and manage the data services and service levels applied to their volumes. DSM and data services usage can be monitored and reported on, for both operational uses and customer billing, based on the services that customers configure and consume.

The DSM architecture1900enhances service availability and data integrity through a variety of mechanisms and attributes. For example, a DSM1920-1928is not dependent on the DSM manager1984or cluster services in order to maintain data services operations. A cluster failure would only result in the loss of management or configurability, but would not impact data services. Even the entire cluster configuration database, if lost, could be recovered and rebuilt from the DSM configuration metadata that is persistently stored in the backing storage1904along with the contents of each virtualized volume. The health of DSMs1920-1928can be monitored (e.g., by the DSM manager1984) using liveness, readiness, and startup probes, and a DSM1920-1928can be automatically restarted by container orchestration services in the event it becomes unhealthy. Because each DSM1920-1928is stateless and containerized, hardware and software failures can be rapidly recovered from simply by restarting the failed DSM on either the same DSM host node or another node in the cluster1902. DSMs1920-1928work in write-through mode to ensure that I/O acknowledged to the client hosts1960-1964is first persistently committed to the backing storage1904, guaranteeing that the sudden loss of a DSM1920-1928will not result in data loss. Moreover, DSMs1920-1928each run independent storage services software stacks. Thus, a failure in an individual DSM's software will be limited only to that DSM, thereby limiting the impact of a code failure or bug only to the affected DSM and the volume(s) it services. The availability of front-end virtualized volumes1930-1938is further enhanced with DSM data services policies that provide mirroring, erasure coding, data protection, replication, or active/active clustering of the serviced volumes between DSMs.
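The write-through rule described above can be sketched in a few lines: the DSM persists a write to backing storage, and only then acknowledges the client, so that a stateless DSM can be lost at any moment without losing data. The InMemoryBackingStore here is a toy stand-in for real backing storage, not a real interface:

```python
# Hypothetical sketch of write-through acknowledgment: persist first,
# acknowledge second, so losing the stateless DSM cannot lose data.

def handle_write(volume_offset, data, backing_store):
    """Persist first, acknowledge second; never buffer unacknowledged state."""
    backing_store.write(volume_offset, data)
    backing_store.flush()            # block until the commit is durable
    return "ACK"                     # safe: data already survives DSM loss

class InMemoryBackingStore:          # toy stand-in for real backing storage
    def __init__(self):
        self.blocks, self.durable = {}, False
    def write(self, off, data):
        self.blocks[off] = data
        self.durable = False
    def flush(self):
        self.durable = True

store = InMemoryBackingStore()
print(handle_write(0, b"hello", store), "durable:", store.durable)
```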
The DSM architecture1900enhances portability, as a DSM1920-1928serves as an abstraction layer between the backing storage1904and the front-end client hosts1960-1964. This enables seamless portability across different platforms with minimal to no changes to front-end client hosts1960-1964. The backing storage1904includes both the data contents and all necessary metadata relating to the data services policy configured for a virtualized volume1930-1938. This ensures that the required data services for that virtualized volume are configured and enforced regardless of where that volume moves across nodes, and that all connected datasets (e.g., backup, archiving, and tiering destinations) remain configured and connected. A DSM1920-1928can attach to and virtualize heterogeneous volumes and filesystems already serviced by existing on-premises and third-party storage platforms for the purposes of migrating or cloning those volumes into the DSM cluster architecture1900.

The DSM architecture1900enhances upgradability due to each DSM1920-1928being isolated to a separate image of the data services software stack. Thus, DSM deployments existing in the cluster1902can operate on various software versions at the same time and be upgraded independently. Upgrades to DSMs1920-1928may be facilitated through rolling updates to DSM deployments in conjunction with stateful sets to ensure each individual DSM deployment has unique network addressing and dedicated backing storage that is appropriately protected during rolling updates. In the event a rolling update fails, a rollback can be performed on the DSM deployment to return it to a known good software version.
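A rolling update with rollback, as described above, might be sketched as follows. Real orchestration would rely on the container platform's own rollout and stateful-set machinery, so this loop, the image tags, and the health_check hook are illustrative assumptions only:

```python
# Hypothetical sketch of a rolling DSM update that rolls back to a known
# good image on failure. A real platform's rollout machinery would be
# used instead of this toy loop.

def rolling_update(deployments, new_image, health_check):
    """Upgrade DSM deployments one at a time; roll back on failure."""
    for dep in deployments:
        previous = dep["image"]
        dep["image"] = new_image                 # restart with upgraded stack
        if not health_check(dep):
            dep["image"] = previous              # rollback to known good version
            return False                         # halt the rollout
    return True

deployments = [{"name": "dsm-1920", "image": "dsm:2.3"},
               {"name": "dsm-1922", "image": "dsm:2.3"}]
ok = rolling_update(deployments, "dsm:2.4", health_check=lambda d: True)
print("rollout complete" if ok else "rolled back", deployments)
```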
For further explanation,FIG.20illustrates an example container-based storage system architecture2000in accordance with some embodiments. The container-based storage system architecture may include similar computing and storage resources as the physical storage systems described above with reference toFIGS.1A-3E, cloud-based storage systems described above with reference toFIGS.4-11, virtual storage systems described above with reference toFIGS.12-18, and the container-based storage system architecture1900ofFIG.19. Like the cluster1902in the container-based storage system architecture1900ofFIG.19, the cluster2002includes nodes1910-1916that support the execution of containers1950-1958, which include the DSMs1920-1928that present virtualized volumes1930-1938. In the example ofFIG.20, the nodes1910-1916may be on-premises nodes that provide a container run-time environment such as the container platform1906. In addition to the DSMs supported by the on-premises environment2070, additional DSMs in the cluster2002may be supported in a cloud-based environment. Thus, in the example ofFIG.20, the cluster2002also includes one or more cloud-based container platform instances2010-2016that provide a cloud-based container runtime environment for containers2050-2058. The containers2050-2058include DSMs2020-2028that present virtualized volumes2030-2038. The cloud-based container platform instances2010-2016may be coupled to cloud-based storage2046that includes, for example, cloud-based block storage, cloud-based file systems, cloud-based object storage, and so on. In the example ofFIG.20, the control plane1980additionally includes a cloud controller2088that coordinates communication between the on-premises environment2070and the cloud environment2072that includes the cloud-based container platform instances2010-2016, containers2050-2058, DSMs2020-2028that present virtualized volumes2030-2038, and cloud-based storage2046. In this way, where datasets may be migrated back and forth between the on-premises environment2070and the cloud environment2072, DSMs and their virtualized volumes may also be moved in and out of the cloud to service volumes that virtualize those datasets.

For further explanation,FIG.21illustrates an example method of providing scalable and reliable container-based storage services in accordance with some embodiments. The example ofFIG.21includes a container-based storage cluster2100, which may include similar components as the container-based storage system architecture1900ofFIG.19or the container-based storage system architecture2000ofFIG.20. Indeed, the container-based storage cluster2100may include additional or fewer components than the container-based architectures described with reference toFIGS.19and20.

The example method ofFIG.21includes deploying2102a containerized storage controller on a first node among a plurality of nodes operable to support execution of the containerized storage controller. In some examples, the containerized storage controller is a storage controller application that executes within a container. In these examples, the containerized storage controller may be a DSM (e.g., DSMs1920-1928ofFIG.19) or a virtual storage controller as discussed above. As such, in some examples, deploying2102a containerized storage controller on a first node among a plurality of nodes operable to support execution of the containerized storage controller is carried out by deploying a DSM on a node in a cluster of nodes (e.g., nodes1910-1918ofFIG.19) that supports the execution of the DSM. A node supports execution of the DSM by hosting a container runtime environment (e.g., container platform1906ofFIG.19) that virtualizes the host operating system of the node. The DSM or other storage controller application executes in the container supported by this runtime environment. In these examples, a node may be a bare metal server or a virtual machine. In some examples, deploying2102a containerized storage controller on a first node among a plurality of nodes operable to support execution of the containerized storage controller is carried out by a container orchestration service (e.g., the cluster manager1984in the control plane1980ofFIG.19).

The example method ofFIG.21also includes associating2104a dataset stored in backing storage accessible by the first node with one or more virtualized volumes presented by the containerized storage controller. A containerized storage controller such as a DSM may utilize storage resources that are available to the node (e.g., direct-attached storage, network-attached or SAN storage, or cloud-based storage). These storage resources provide backing storage for the virtualized volume(s) presented by the DSM. A DSM may attach a dataset stored in these storage resources, such as raw disks or devices, volumes, file systems, object buckets, and other structured or unstructured bodies of data. The storage resources may include various storage tiers and classes of both physical and cloud-based storage, including block storage, file systems, object storage, bucket storage, archival storage, and others described previously. The DSM presents the dataset that is physically stored in the backing storage as one or more virtualized volumes.
For example, the DSM1920depicted inFIG.19presents the virtualized volume1930as a virtualized entity corresponding to a dataset (e.g., a volume or file system) physically stored in the direct attached storage1940accessible by the node1910. With continued reference toFIG.19, to provide access to a dataset stored in network-based storage1942, the DSM1924deployed on node1914may present the virtualized volume1934as a virtualization of that dataset. Thus, the DSM may be deployed on a particular node based on the availability of a dataset to that node in order to virtualize the dataset. In this way, if the dataset moves from one physical storage location to another, the DSM may also be moved (e.g., to a node connected to the new storage location) without altering the presentation of the virtualized volume. The example method ofFIG.21also includes providing2106, by the containerized storage controller to one or more client hosts, a set of storage services for the one or more virtualized volumes. Although a containerized storage controller such as a DSM does not have ownership over any particular dataset, the DSM presents the one or more virtualized volumes to one or more client hosts as if it were a typical storage controller. In other words, the DSM can provide the services of a typical storage controller with respect to data represented in the one or more virtualized volumes. Such storage services may include read/write/modify access, performance services (e.g., QOS, COS, load balancing), mobility services (e.g., non-disruptive migration, multi-platform and cloud mobility), efficiency services (e.g., compression, cloning, scaling, tiering, archiving), data intelligence services (e.g., indexing and search, access and security analytics, trending and reporting), and protection services (e.g., encryption and key management, high availability, erasure coding, disaster recovery, snapshots, cataloging, backup). As such, in some examples, providing2106, by the containerized storage controller to one or more client hosts, a set of storage services for the one or more virtualized volumes is carried out by providing an API that allows a client host to request such services for the one or more virtualized volumes presented by the DSM to the client host. While in some examples a DSM provides an aggregate of storage services for a particular virtualized volume, in other examples many DSMs each provide a particular service for that virtualized volume. For example, for a particular virtualized volume, one DSM may be dedicated to read/write/modify access while another may be dedicated to backup and snapshotting, while yet another may be dedicated to indexing, while still another may be dedicated to analytics, and so on. In this way, multiple deployed DSMs may provide a set of storage services as a storage microservice architecture. For further explanation,FIG.22illustrates another example method of providing scalable and reliable container-based storage services in accordance with some embodiments. 
Like the example method ofFIG.21, the example method ofFIG.22also includes deploying2102a containerized storage controller on a first node among a plurality of nodes operable to support execution of the containerized storage controller; associating2104a dataset stored in backing storage accessible by the first node with one or more virtualized volumes presented by the containerized storage controller; and providing2106, by the containerized storage controller to one or more client hosts, a set of storage services for the one or more virtualized volumes.

The example method ofFIG.22also includes storing2202configuration metadata for the containerized storage controller in the backing storage, wherein the configuration metadata includes at least a data services policy relating to the one or more volumes. Configuration metadata for the DSM is persistently stored in the backing storage accessible by the node. Thus, in the event of a failure or scale-out of the DSM, the DSM may be relaunched on the node, or on another node that can access the backing storage including the configuration metadata and the data virtualized by the virtualized volume(s). The configuration metadata includes, for example, a reference to a container image for a particular DSM and one or more data services policies. The data services policies define what data should be included in the virtualized volumes presented by a DSM and the set of storage services offered for that data. For example, a data services policy may indicate a particular volume of data stored in the backing storage to be included in the virtualized volume, the data services and storage services to be provided by the DSM for the virtualized volume, authorized client hosts and permissions for the virtualized volume, and so on. In some examples, a data services policy may include mirroring, erasure coding, data protection, replication, and/or active/active clustering policies for the virtualized volumes. In some examples, the configuration data is generated by a cluster manager (e.g., the cluster manager1984ofFIG.19). In some examples, the cluster manager communicates the configuration data to each node, which stores the configuration data in backing storage available to the node. In these examples, storing2202configuration metadata for the containerized storage controller in the backing storage, wherein the configuration metadata includes at least a data services policy relating to the one or more volumes, is carried out by the node receiving the configuration metadata from the cluster manager and storing the configuration data in backing storage resources that are accessible by the node.
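One plausible shape for persisting this configuration metadata alongside the data it governs is sketched below; the JSON fields and the file-naming convention are assumptions for illustration, not a normative schema:

```python
# Hypothetical sketch of persisting DSM configuration metadata next to the
# data it governs, so a replacement DSM can be relaunched from the backing
# storage alone. The schema is illustrative.
import json
import tempfile
from pathlib import Path

def store_config_metadata(backing_dir, volume, policy):
    """Write a data services policy alongside the volume's data."""
    meta = {
        "container_image": policy.get("container_image", "dsm:2.3"),
        "volume": volume,
        "services": policy.get("services", []),
        "authorized_hosts": policy.get("authorized_hosts", []),
    }
    path = Path(backing_dir) / f"{volume}.dsm-config.json"
    path.write_text(json.dumps(meta, indent=2))
    return path

backing_dir = tempfile.mkdtemp()        # stand-in for a backing storage mount
path = store_config_metadata(backing_dir, "vol-1930",
                             {"services": ["read_write", "snapshots"],
                              "authorized_hosts": ["host-1960"]})
print("persisted:", path)
```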
For further explanation,FIG.23illustrates another example method of providing scalable and reliable container-based storage services in accordance with some embodiments. Like the example method ofFIG.22, the example method ofFIG.23also includes deploying2102a containerized storage controller on a first node among a plurality of nodes operable to support execution of the containerized storage controller; associating2104a dataset stored in backing storage accessible by the first node with one or more virtualized volumes presented by the containerized storage controller; providing2106, by the containerized storage controller to one or more client hosts, a set of storage services for the one or more virtualized volumes; and storing2202configuration metadata for the containerized storage controller in the backing storage, wherein the configuration metadata includes at least a data services policy relating to the one or more volumes.

The example method ofFIG.23also includes deploying2302a second instance of the containerized storage controller on a second node using the configuration metadata. Deploying2302a second instance of the containerized storage controller on a second node using the configuration metadata may be carried out in response to a variety of conditions. For example, the DSM or the node hosting the DSM may fail, making it necessary for the second instance to take over servicing the virtualized volume(s) of the unavailable DSM. In another example, the second instance may be deployed to scale out service to the virtualized volume(s) to satisfy an increased demand. In yet another example, the second instance may be deployed on the second node to alleviate resource consumption on the first node. In some examples, the first node and the second node share the same backing storage resources. In these examples, the configuration metadata stored on the shared backing storage resources may be used to launch the second instance of the DSM. In other examples, the first node and the second node do not share the same backing storage resources, and thus the configuration metadata stored on the backing storage resources of the first node may not be available to the second node. However, the second instance of the DSM may be launched on the second node if the backing storage resources of the second node also contain the configuration metadata and the data virtualized by the virtualized volume(s). For example, data stored in the backing storage resources of the first node may be replicated to the backing storage resources of the second node as part of a replication policy. In this example, the configuration metadata for the DSM may also be stored in the backing storage resources of each node such that the DSM may be launched on the second node to present a volume that virtualizes the replicated data.
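The relaunch path just described might look like the following sketch, in which a replacement DSM instance is started on any node that can reach the backing storage holding both the configuration metadata and the replicated data. Function and field names are hypothetical:

```python
# Hypothetical sketch of relaunching a failed DSM on another node from the
# configuration metadata persisted in shared or replicated backing storage.

def relaunch_dsm(config, candidate_nodes, can_reach_backing):
    """Start a second DSM instance on a node that can reach the dataset."""
    for node in candidate_nodes:
        if can_reach_backing(node, config["volume"]):
            return {"node": node,
                    "image": config["container_image"],
                    "presents": [config["volume"]]}
    raise RuntimeError("no node can reach the backing storage for this volume")

config = {"container_image": "dsm:2.3", "volume": "vol-1936"}
instance = relaunch_dsm(config, ["node-1914", "node-1916"],
                        can_reach_backing=lambda n, v: n == "node-1916")
print("second instance:", instance)
```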
In some implementations, a particular client host may be directed preferentially or exclusively to the first DSM that presents the virtualized volume(s), while a different client host may be directed preferentially or exclusively to the second instance of the DSM that presents those same virtualized volume(s). For example, SCSI ALUA or some similar mechanism may be used to establish preferred (sometimes called "optimized") path preferences from one host to a DSM, where traffic is generally directed to the preferred DSM but, such as in the case of faulted requests, network or node failures, or DSM failures, could be redirected to a second instance of the DSM. Alternatively, SCSI volume advertisements or network restrictions, or some similar alternative mechanism, could force all traffic from a particular client host exclusively to one DSM and force traffic from a different client host to the second instance of the DSM. Thus, the DSM and the second instance of the DSM may be paired in that requests from a client host may be preferentially or exclusively directed to the first DSM, with the second instance of the DSM potentially being available to the client host for use in cases of faulted requests. In some implementations, a host may be directed to issue I/O requests to a DSM on a node that is within the same availability zone as the host, with DSMs on nodes in a different availability zone from the client host being available for use in cases of faults. In some examples, deploying2302a second instance of the containerized storage controller on a second node using the configuration metadata is carried out by the cluster manager (e.g., manager1984) distributing the configuration metadata to multiple nodes and/or by the scheduler (e.g., scheduler1986) launching the second instance of the DSM on the second node. The configuration metadata may be distributed to the second node, for example, as part of an initial cluster configuration, as part of adding the second node to the cluster, in response to a failure of a DSM or node, and under a variety of other circumstances.

For further explanation,FIG.24illustrates another example method of providing scalable and reliable container-based storage services in accordance with some embodiments. Like the example method ofFIG.21, the example method ofFIG.24also includes deploying2102a containerized storage controller on a first node among a plurality of nodes operable to support execution of the containerized storage controller; associating2104a dataset stored in backing storage accessible by the first node with one or more virtualized volumes presented by the containerized storage controller; and providing2106, by the containerized storage controller to one or more client hosts, a set of storage services for the one or more virtualized volumes.

The example method ofFIG.24also includes deploying2402an additional containerized storage controller. In some examples, deploying2402an additional containerized storage controller is carried out by the scheduler deploying another DSM either on the same node or on a different node. The additional DSM may be a redundant DSM, or may be a different DSM with respect to the data services policies of the DSM, storage services offered, container image source, or other aspects. For example, the first DSM may provide a particular set of storage services for the virtualized volumes, whereas the additional DSM may provide different storage services or may provide a subset of the storage services provided by the first DSM. In some examples, the first DSM and the additional DSM share at least one virtualized volume.

The example method ofFIG.24also includes offloading2404at least one storage service to the additional containerized storage controller. In some examples, offloading2404at least one storage service to the additional containerized storage controller is carried out by tasking the additional DSM with one or more storage services for the virtualized volume(s) and redirecting client host requests for those services from the first DSM to the additional DSM. Consider an example where the first DSM is a DSM that provides a robust set of storage services for a virtualized volume.
To alleviate the workload of the first DSM, one or more specific services (e.g., backup or snapshotting) for the virtualized volume may be offloaded to the additional DSM. Consider another example where the first DSM is a volume-servicing DSM that does not perform background services. In response to a client host request for such services, one or more background services (e.g., indexing or analytics) for the virtualized volume may be offloaded to the additional DSM.

For further explanation,FIG.25illustrates another example method of providing scalable and reliable container-based storage services in accordance with some embodiments. Like the example method ofFIG.21, the example method ofFIG.25also includes deploying2102a containerized storage controller on a first node among a plurality of nodes operable to support execution of the containerized storage controller; associating2104a dataset stored in backing storage accessible by the first node with one or more virtualized volumes presented by the containerized storage controller; and providing2106, by the containerized storage controller to one or more client hosts, a set of storage services for the one or more virtualized volumes.

The example method ofFIG.25also includes deploying2502an upgraded version of the containerized storage controller that presents the one or more virtualized volumes. In some examples, deploying2502an upgraded version of the containerized storage controller that presents the one or more virtualized volumes is carried out by distributing a container image that includes an upgraded software stack for the DSM and launching the upgraded version of the DSM on a node to execute contemporaneously with the original version of the DSM. Because neither DSM owns the data and storage resources it virtualizes, each DSM may present the same virtualized volume(s).

The example method ofFIG.25also includes redirecting2504a client host from the containerized storage controller to the upgraded version. In some examples, redirecting2504a client host from the containerized storage controller to the upgraded version is carried out by redirecting a client host from the original DSM to the upgraded version that presents the same virtualized volumes. For example, redirecting the client host may be carried out by signaling to the client host to send requests to the upgraded version, through SCSI ALUA mechanisms to indicate that the upgraded version of the DSM is a preferred or 'optimized' controller, through volume advertisements, or through some other mechanism.
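The redirect step might be sketched as a flip of a per-host path table, loosely analogous to updating SCSI ALUA state so that the upgraded instance becomes the preferred ("optimized") endpoint while the original remains a standby. The table layout is an assumption:

```python
# Hypothetical sketch of redirecting a client host to an upgraded DSM that
# presents the same virtualized volume. The path-table layout is illustrative
# and does not model any real multipathing protocol.

def redirect_host(path_table, host, volume, upgraded_dsm):
    """Make the upgraded DSM the preferred path; keep the old one as standby."""
    old = path_table[(host, volume)]["preferred"]
    path_table[(host, volume)] = {"preferred": upgraded_dsm, "standby": old}
    return path_table[(host, volume)]

paths = {("host-1960", "vol-1930"): {"preferred": "dsm-1920", "standby": None}}
print(redirect_host(paths, "host-1960", "vol-1930", "dsm-1920-v2"))
```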
For further explanation,FIG.26illustrates another example method of providing scalable and reliable container-based storage services in accordance with some embodiments. Like the example method ofFIG.21, the example method ofFIG.26also includes deploying2102a containerized storage controller on a first node among a plurality of nodes operable to support execution of the containerized storage controller; associating2104a dataset stored in backing storage accessible by the first node with one or more virtualized volumes presented by the containerized storage controller; and providing2106, by the containerized storage controller to one or more client hosts, a set of storage services for the one or more virtualized volumes.

The example method ofFIG.26also includes deploying2602a plurality of containerized storage controllers on one or more nodes operable to support execution of the plurality of containerized storage controllers, wherein each of the plurality of containerized storage controllers presents one or more virtualized volumes attached to storage resources in backing storage of the respective nodes. In some examples, deploying2602a plurality of containerized storage controllers on one or more nodes operable to support execution of the plurality of containerized storage controllers, wherein each of the plurality of containerized storage controllers presents one or more virtualized volumes attached to storage resources in backing storage of the respective nodes, is carried out by a container orchestration service, as discussed above, such as the cluster manager1984inFIG.19.

The example method ofFIG.26also includes constructing2604a cluster configuration database that identifies each containerized storage controller and the one or more virtualized volumes presented. As discussed above, configuration metadata for each DSM and its virtualized volume(s) is stored in the backing storage resources of each node. The cluster configuration database may include information such as an identification of a virtualized volume, a node or nodes connected to the backing storage resources that include the data for the virtualized volume, a set of data services policies for the virtualized volume, an identification of a DSM that enacts one or more of those data services policies, and a container image for such a DSM. The cluster configuration database provides a representation of the accessible data in the container-based storage system and the endpoints through which that data can be accessed, as well as the data services policies for that data. As data moves from one location to another in the backing storage, the cluster configuration database may be updated to indicate the new location of this data and the nodes that are attached to it. As new nodes are added to the cluster, the backing storage resources that are available to each new node may be added to the cluster configuration database. The DSM may be deployed on the various nodes in dependence upon such information contained in the cluster configuration database. In some examples, constructing2604a cluster configuration database that identifies each containerized storage controller and the one or more virtualized volumes presented is carried out by a container orchestration service, as discussed above, such as the cluster manager1984inFIG.19.
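One way to picture the cluster configuration database described above is as an index from volumes to the nodes, controller, image, and policy associated with them, as in the following hedged sketch. The record shape is illustrative only:

```python
# Hypothetical sketch of assembling a cluster configuration database from
# per-volume metadata: which nodes can reach each volume's backing storage,
# which DSM serves it, and under what policy. The record shape is illustrative.

def build_cluster_config_db(volume_records):
    """Index volumes by name for lookup by the scheduler and manager."""
    db = {}
    for rec in volume_records:
        db[rec["volume"]] = {
            "nodes": rec["nodes"],            # nodes attached to the backing storage
            "dsm": rec["dsm"],                # controller enacting the policy
            "image": rec["image"],            # container image for redeployment
            "policy": rec["policy"],          # data services policy
        }
    return db

db = build_cluster_config_db([
    {"volume": "vol-1930", "nodes": ["node-1910"], "dsm": "dsm-1920",
     "image": "dsm:2.3", "policy": {"services": ["read_write", "snapshots"]}},
])
print(db["vol-1930"]["nodes"])
```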
Example embodiments are described largely in the context of a fully functional computer system. Readers of skill in the art will recognize, however, that the present disclosure also may be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the example embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present disclosure.

Embodiments can be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.

The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.

Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.

Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to some embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. 
For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.

Advantages and features of the present disclosure can be further described by the following statements:

Statement 1. A method of servicing I/O operations in a virtual storage system, the method comprising: receiving, by the virtual storage system, a request to write data to the virtual storage system; storing, within storage provided by a first tier of storage of the virtual storage system, the data; and migrating, from the first tier of storage to a second tier of storage that is more durable than the first tier of storage of the virtual storage system, at least a portion of data stored within the first tier of storage.

Statement 2. The method of statement 1, wherein migrating the at least the portion of data stored within the staging memory is responsive to detecting a condition for transferring data from the staging memory to the durable data storage provided by the cloud services provider.

Statement 3. The method of statement 2 or statement 1, wherein the staging memory includes multiple virtual drive servers.

Statement 4. The method of statement 3, statement 2, or statement 1, wherein the multiple virtual drive servers include respective local storage.

Statement 5. The method of statement 4, statement 3, statement 2, or statement 1, wherein the multiple virtual drive servers provide virtual drives as block-type data storage.

Statement 6. The method of statement 5, statement 4, statement 3, statement 2, or statement 1, wherein the request to write data to the virtual storage system is received by one or more virtual controllers running within a virtual machine, a container, or a bare metal server.

Statement 7. The method of statement 6, statement 5, statement 4, statement 3, statement 2, or statement 1, wherein staging memory is provided by multiple virtual drive servers that respectively include both a virtual controller and local memory.

Statement 8. The method of statement 7, statement 6, statement 5, statement 4, statement 3, statement 2, or statement 1, wherein the at least the portion of the data stored within the staging memory is deduplicated, encrypted, or compressed prior to migration from the staging memory to the durable data storage.

Statement 9. The method of statement 8, statement 7, statement 6, statement 5, statement 4, statement 3, statement 2, or statement 1, wherein the staging memory of the virtual storage system is characterized by a low read latency relative to the durable data storage provided by the cloud services provider.

Statement 10. The method of statement 9, statement 8, statement 7, statement 6, statement 5, statement 4, statement 3, statement 2, or statement 1, wherein the first tier of storage includes staging memory that provides transactional consistency and write acknowledgments, and wherein the second tier of storage includes virtual drives provided by virtual drive servers of the virtual storage system.

Statement 11.
The method of statement 10, statement 9, statement 8, statement 7, statement 6, statement 5, statement 4, statement 3, statement 2, or statement 1, wherein the first tier of storage includes the virtual drives provided by virtual drive servers of the virtual storage system, and wherein the second tier includes object storage provided by a cloud services provider that provides object storage independent of the virtual storage system. Advantages and features of the present disclosure can be further described by the following statements: Statement 1. A method comprising: deploying a containerized storage controller on a first node among a plurality of nodes operable to support execution of the containerized storage controller; associating a dataset stored in backing storage accessible by the first node with one or more virtualized volumes presented by the containerized storage controller; and providing, by the containerized storage controller to one or more client hosts, a set of storage services for the one or more virtualized volumes. Statement 2. The method of statement 1, wherein the containerized storage controller is a data services microcontroller providing one or more storage microservices. Statement 3. The method of statement 2 or statement 1, further comprising storing configuration metadata for the containerized storage controller in the backing storage, wherein the configuration metadata includes at least a data services policy relating to the one or more virtualized volumes. Statement 4. The method of statement 3, further comprising: deploying a second instance of the containerized storage controller on a second node among the plurality of nodes using the configuration metadata. Statement 5. The method of statement 4, wherein the containerized storage controller and the second instance of the containerized storage controller are paired; and wherein the one or more virtualized volumes presented by the containerized storage controller and one or more virtualized volumes presented by the second instance of the containerized storage controller are replicas. Statement 6. The method of statement 5, statement 4, statement 3, statement 2, or statement 1, further comprising: deploying an additional containerized storage controller; and offloading at least one storage service to the additional containerized storage controller. Statement 7. The method of statement 6, statement 5, statement 4, statement 3, statement 2, or statement 1, further comprising: deploying an upgraded version of the containerized storage controller that presents the one or more virtualized volumes; and redirecting a client host from the containerized storage controller to the upgraded version. Statement 8. The method of statement 7, statement 6, statement 5, statement 4, statement 3, statement 2, or statement 1, further comprising: deploying a plurality of containerized storage controllers on the plurality of nodes operable to support execution of the plurality of containerized storage controllers, wherein each of the plurality of containerized storage controllers presents one or more virtualized volumes that virtualize backing storage resources of the respective nodes; and constructing a cluster configuration database that identifies each containerized storage controller and the one or more virtualized volumes presented. Statement 9.
The method of statement 8, statement 7, statement 6, statement 5, statement 4, statement 3, statement 2, or statement 1, wherein the containerized storage controllers are deployed on a plurality of nodes that include on-premises servers and cloud-based computing instances. Statement 10. The method of statement 9, statement 8, statement 7, statement 6, statement 5, statement 4, statement 3, statement 2, or statement 1, wherein the backing storage includes a plurality of storage tiers. | 420,924 |
11861222 | DETAILED DESCRIPTION Systems, apparatuses, and methods related to object management in tiered memory systems are described. An example method can include writing a memory object to a first memory device of a first type of memory medium. The example method can include determining that a size of the memory object meets or exceeds a threshold data size. The example method can include writing the memory object to a second memory device that comprises a second type of memory medium different than the first type. The first memory medium can be a non-volatile memory comprising phase-change memory or resistive random access memory (RAM) and the second memory medium can be NAND Flash or NOR Flash. In some embodiments, the first memory device can include a first type of memory medium including an emerging memory device, such as a three-dimensional (3D) cross-point memory, a phase-change memory, resistive random access memory (RAM), etc. A second memory device can include a second type of memory medium including NAND Flash or NOR Flash. The memory system can include an address space that is split between or contiguous across the first memory device and the second memory device. As an example, the address space can span both the first memory device and the second memory device. A memory object to be stored in the memory system can be associated with a particular address location in the address space irrespective of which of the first memory device and the second memory device the memory object is stored in. Embodiments described herein can further include writing each of a plurality of memory objects to one of a first memory device and a second memory device. A particular one of the plurality of memory objects can be written (e.g., transferred) to another of the second memory device or the first memory device, respectively, in response to a comparison of a size of the particular one memory object with a threshold data size. As an example, data associated with a memory object can be written to a first memory device, such as an emerging memory device. When the data of a memory object reaches a threshold data size, such as a page size of a flash-based memory device, the memory object and the data can be transferred to the non-volatile memory device. In this way, smaller portions of data can be initially written to the emerging memory device and, when the smaller portions of the data can be combined into a full flash-based page (e.g., a NAND page), the combined data can be transferred to the flash-based memory device. In an example, data associated with the memory object can be written to a second memory device, such as a non-volatile memory device which can include a flash-based memory device. When the data of the memory object is requested by a host from the flash-based (e.g., NOR or NAND) memory device, the data can be accessed from the flash-based memory device if the data is the same data size as a page size, or within a threshold range of the page size. Further, when the data is requested by the host, if the data is a smaller data size than the page size, or a threshold data size, the data can be written to (e.g., transferred to) the emerging memory device prior to being accessed by the host. In this way, data of a smaller data size can be accessed from the emerging memory device where smaller portions of data can be accessed without accessing a full page size of data.
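By way of illustration only, the following Python sketch models the size-threshold tiering just described: small writes accumulate in the fast (emerging) medium and migrate to the flash-based medium once they reach a page size, while sub-page objects read from flash are first moved to the fast medium. All names and the 16 KB page size are assumptions for illustration, not the disclosed implementation.

PAGE_SIZE = 16 * 1024  # assumed threshold data size (e.g., a NAND page size)

class TieredStore:
    def __init__(self):
        self.emerging = {}  # stands in for the emerging memory device
        self.flash = {}     # stands in for the flash-based memory device

    def write(self, key, data):
        # Smaller portions of data land in the emerging medium first.
        buf = self.emerging.get(key, b"") + data
        if len(buf) >= PAGE_SIZE:
            # Combined data reaches a full page: transfer to flash.
            self.flash[key] = buf
            self.emerging.pop(key, None)
        else:
            self.emerging[key] = buf

    def read(self, key):
        if key in self.emerging:
            return self.emerging[key]
        data = self.flash[key]
        if len(data) < PAGE_SIZE:
            # Sub-page object: move it to the emerging medium so the
            # host need not pay for a full-page flash access.
            self.emerging[key] = self.flash.pop(key)
        return data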
In an approach where the data remains in the flash-based memory device, the host may be accessing a full page size of data in order to access a portion of data that is smaller than the full page size, thereby transferring data that has not been requested by the host and consuming unnecessary resources of the memory system. As used herein, the term “memory object” and variants thereof, generally refer to a contiguously addressed region of data that is uniquely identified on the device and can be read or written. As used herein, “semantics” generally refer to the format of a memory object, an instruction, a command, or a signal in reference to the meaning of the memory object, instruction, command, or signal. For example, memory objects or instructions that can be understood by a first memory device may not be understood by the second memory device, and vice versa. By configuring the semantics associated with the memory object into semantics that can be understood by the first memory device or the second memory device, the memory objects can be selectively written to the first memory device or the second memory device. As described herein, embodiments can include using a memory system including a host and memory devices that use a key value database system. A key value database is a data storage method for storing, retrieving, and managing associative arrays and a data structure which can be referred to as a dictionary or hash table, where the dictionary can include memory objects. These memory objects can be stored and retrieved using a key that uniquely identifies the record and can be used to find the data within the database, as will be described in further detail below in association withFIG.2. In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and structural changes may be made without departing from the scope of the present disclosure. It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” can include both singular and plural referents, unless the context clearly dictates otherwise. In addition, “a number of,” “at least one,” and “one or more”, e.g., a number of memory banks, can refer to one or more memory banks, whereas a “plurality of” is intended to refer to more than one of such things. Furthermore, the words “can” and “may” are used throughout this application in a permissive sense, e.g., having the potential to, being able to, not in a mandatory sense, e.g., must. The term “include,” and derivations thereof, means “including, but not limited to.” The terms “coupled” and “coupling” mean to be directly or indirectly connected physically or for access to and movement (transmission) of commands and/or data, as appropriate to the context. The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. 
Similar elements or components between different figures may be identified by the use of similar digits. For example,120may reference element “20” inFIG.1A, and a similar element may be referenced as220inFIG.2. A group or plurality of similar elements or components may generally be referred to herein with a single element number. For example, a plurality of reference elements117-1to117-2may be referred to generally as117. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, the proportion and/or the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present disclosure and should not be taken in a limiting sense. FIG.1Ais a block diagram in the form of a computing system100including a host120and an apparatus including a memory system110in accordance with a number of embodiments of the present disclosure. As used herein, an “apparatus” can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example. The memory system110can include a storage class memory (“SCM”) controller115-1, a storage controller115-2, an emerging memory device130, and a non-volatile (“NV”) memory device140, which includes a flash-based memory device, as will be described below. The SCM controller115-1can include a processor117-1(e.g., a processing device or processing unit) configured to execute instructions stored in a local memory119-1. Likewise, the storage controller115-2can include a processor117-2(e.g., processing device or processing unit) configured to execute instructions stored in a local memory119-2. In the illustrated example, the local memory119-1,119-2of the SCM controller/storage controller115-1,115-2each include an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory system110, including handling communications between the memory system110and the host120. The host120can communicate with the memory system110through a kernel121, as will be described further below. In some embodiments, the local memory119-1,119-2can include memory registers storing memory pointers, fetched data, etc. The local memory119-1,119-2can also include read-only memory (ROM) for storing micro-code. While the example memory system110inFIG.1Ahas been illustrated as including the controllers115-1,115-2, in another embodiment of the present disclosure, a memory system110does not include a memory system controller, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the controllers115-1,115-2can receive commands or operations from the host120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the emerging memory device130and/or the NV memory device140.
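Purely as a hypothetical illustration of this command conversion, the helper below maps a host-level operation onto a device command tuple; the command vocabulary is an assumption for illustration, not the controllers' actual instruction set.

def convert_host_operation(op, address, data=None):
    # Translate a host operation into an appropriate device command.
    if op == "write":
        return ("PROGRAM", address, data)
    if op == "read":
        return ("READ", address)
    if op == "erase":
        return ("ERASE", address)
    raise ValueError(f"unsupported host operation: {op}")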
The controllers115-1,115-2can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., logical block address (LBA), namespace) and a physical address (e.g., physical block address, physical media locations, etc.) that are associated with the memory devices130,140. The controllers115-1,115-2can further include host interface circuitry to communicate with the host120via the physical host interface (e.g., host interface111inFIG.1B). The host interface circuitry can convert the commands received from the host into command instructions to access the memory device130and/or the memory device140as well as convert responses associated with the memory device130and/or the memory device140into information for the host120. The host120can designate a location in an address space for a memory object to be stored in the memory system110. The memory system110can use an address space that is split between the first memory device130and the second memory device140. As an example, the address space can span across both the first memory device130and the second memory device140. The host120can be a host system such as a personal laptop computer, a vehicle, a desktop computer, a digital camera, a mobile telephone, an internet-of-things (IoT) enabled device, or a memory card reader, graphics processing unit, e.g., a video card, among various other types of hosts. The host120can include a system motherboard and/or backplane and can include a number of memory access devices such as a number of processing resources, e.g., one or more processors, microprocessors, image processor, and/or some other type of controlling circuitry. One of ordinary skill in the art will appreciate that “a processor” can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc. The host120can be coupled to a host interface (e.g., host interface111inFIG.1B) of the memory system110by a communication channel103. A kernel121of the host120can communicate to the host interface (e.g., host interface111ofFIG.1B). As used herein an “IoT enabled device” can refer to devices embedded with electronics, software, sensors, actuators, and/or network connectivity which enable such devices to connect to a network and/or exchange data. Examples of IoT enabled devices include mobile phones, smart phones, tablets, phablets, computing devices, implantable devices, vehicles, home appliances, smart home devices, monitoring devices, wearable devices, devices enabling intelligent shopping systems, among other cyber-physical systems. The host120can be responsible for executing an operating system for a computing system100that includes the memory system110. Accordingly, in some embodiments, the host120can be responsible for controlling operation of the memory system110. For example, the host120can execute instructions, e.g., in the form of an operating system, that manage the hardware of the computing system100such as scheduling tasks, executing applications, controlling peripherals, etc. The emerging memory device130can include a three-dimensional (3D) cross-point memory, phase-change memory, and resistive random access memory (RAM), and the NV memory device140can include a NAND or NOR memory device. 
As used herein, the term “emerging memory device” generally refers to resistive variable memory, such as 3-D cross-point (cross-point memory device, 3D XP device, etc.), phase-change memory, resistive RAM, a memory device that includes an array of self-selecting memory (SSM), ferroelectric random access memory (FeRAM), etc., or any combination thereof. Memory system110can be located at a location that is remote, e.g., part of a cloud database, from a host and/or from a location of a user that is accessing the memory system110. A non-limiting example of multiple memory devices having various types is described inFIG.1A. Resistance variable memory devices can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, resistance variable non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. In contrast to flash-based memories and resistance variable memories, self-selecting memory cells can include memory cells that have a single chalcogenide material that serves as both the switch and storage element for the memory cell. In one example, the emerging memory device130is not used as a cache for the memory system and the emerging memory device130is not used as a cache for the NV memory device140. In one example, an address space for each of the plurality of objects to be written to in the first memory device or the second memory device can be a contiguous address space across the first memory device and the second memory device. That is, the address space of both the emerging memory device130and the NV memory device140can make up a total address space and be used seamlessly as if the two memory devices were a same memory device. While two memory device types, e.g., emerging memory and NAND, are illustrated, embodiments are not so limited, and there can be more or fewer than two memory media types. For instance, a number of embodiments provide that memory devices that include a different type of emerging memory and/or a different type of non-volatile or volatile memory can be used. That is, for example, other types of volatile and/or non-volatile memory media devices are contemplated. As illustrated inFIG.1A, in a number of embodiments, the controllers115-1,115-2, the memory devices130,140, and/or the host interface (111inFIG.1B) can be physically located on a single die or within a single package, e.g., a managed memory application. Also, in a number of embodiments, a plurality of memory devices130,140can be included on a single memory system110. Also, in some embodiments, more than one memory device can include a same type of array of memory cells. FIG.1Bis a block diagram in the form of a computing system101including a host120and an apparatus including a memory system110in accordance with a number of embodiments of the present disclosure. The computing system101can be similar to computing system100inFIG.1A, except that a single memory controller115can communicate with each of emerging memory device130and non-volatile (“NV”) memory device140. As an example, the memory controller115can generate commands and/or signals in order to read and write data to and from each of the emerging memory device130and NV memory device140.
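As a minimal sketch of this contiguous address space, the routing below assumes a single 1 GB space with an arbitrary split point; the split value and device labels are illustrative assumptions only.

MB = 1024 * 1024
SPLIT_POINT = 300 * MB   # assumed share backed by the emerging device
TOTAL_SPACE = 1024 * MB  # assumed total user addressable space

def route(address):
    # The address itself is device-agnostic; only the split point
    # decides which medium backs it, so the two devices appear to the
    # host as one seamless space.
    assert 0 <= address < TOTAL_SPACE
    if address < SPLIT_POINT:
        return ("emerging", address)
    return ("nv", address - SPLIT_POINT)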
The memory controller115can be capable of communicating with both emerging memory cells (e.g., 3D cross-point memory cells, phase-change memory cells, resistive RAM memory cells) and NV memory cells (e.g., NAND memory cells). Further, the memory devices130,140may each include respective control circuitry that the memory controller115communicates with in order to perform memory read and write operations within each of the memory devices130,140. However, embodiments are not so limited. For instance, embodiments provide that a number of memory devices include the control circuitry, while a number of different memory devices do not include the control circuitry. Operations discussed herein may be performed by the controller, the control circuitry, or combinations thereof. FIG.2is a block diagram202representing object management in tiered memory systems in accordance with a number of embodiments of the present disclosure. The block diagram202includes a host220and a memory system210. The host220can be analogous to the host120inFIGS.1A and1B. The memory system210can be analogous to the memory system110inFIGS.1A and1B. The host220includes a host application231, a mapping file system233, and a kernel235. The host application231can use a key value database approach, as described below, to read or request data from the memory devices230,240and to send or store data in the memory devices230,240. The memory system210includes an emerging memory device230and a non-volatile (“NV”) memory device240. A key value database is a type of nonrelational database that uses a key value method to store data. The key value database stores data as a collection of key value pairs in which a key serves as a unique identifier. The key value database associates a value (which can be anything from a number or simple string, to a complex object) with a key, which is used to keep track of the object. The key value database can use compact, efficient index structures to be able to locate a value by its key, making the key value database useful for systems that find and retrieve data in constant time. Both keys and values can be anything, ranging from simple objects to complex compound objects. Key value databases are partitionable and can allow horizontal scaling at scales that other types of databases may not be able to achieve. The key value database can allow programs or users of programs to retrieve data by keys, which are essentially names, or identifiers, that point to some stored value. The key value database can be associated with a set of operations including: retrieving a value (if there is one) stored and associated with a given key, deleting the value (if there is one) stored and associated with a given key, and setting, updating, and replacing the value (if there is one) associated with a given key. A host application231can request data to be stored or retrieved from a memory device, such as the emerging memory device230or the NV memory device240. The mapping file system233can designate a key for a particular memory object and indicate a location for that memory object to either be stored or retrieved from. A first mapping list243can be used to designate that a memory object is stored in an emerging memory device230and a second mapping list245can be used to designate that a memory object is stored in a NV memory device240. As an example, as illustrated inFIG.2, a first key (e.g., “File 45”) in a first mapping list243can be designated as being stored at logical address 8 (e.g., “LA8”).
A second key (e.g., “File 5”) in the first mapping list243can be designated as being stored at logical address 10 (e.g., “LA10”) and a third key (e.g., “File 9”) can be designated as being stored at logical address 234 (e.g., “LA234”). Further, a first key (e.g., “File 0”), a second key (e.g., “File 1”), and a third key (e.g., “File 2”) of a second mapping list245can be designated as being stored at logical block addresses 0, 1, and 2, respectively (e.g., “LBA 0”, “LBA 1”, and “LBA 2”, respectively). The “LA” portion can indicate that the memory object is to be located (either stored at or retrieved from) in the emerging memory device230and an “LBA” portion can indicate that the memory object is to be located in the NV memory device240. Each of these key values (e.g., “File 45,” “File 5,” “File 9,” “File 0,” “File 1,” “File 2”) can be sent to a kernel235of a host (e.g., host120inFIGS.1A and1B) in order to retrieve or store the associated memory object. While the “LA” can designate the first memory device230and the “LBA” can designate the second memory device240, the address space used to address a location for a memory object can be split between, or span across, both the first memory device230and the second memory device240. For example, when using a total user addressable space of 1 gigabyte (GB), the memory objects can be split 300 megabytes (MB) into the first memory device230and 700 MB into the second memory device, 250 MB into the first memory device and 750 MB into the second memory device, 900 MB into the first memory device and 100 MB into the second memory device, or any ratio. Further, for a given percentage of small data size memory objects, a percentage of address space could be allocated to the first memory device and the remaining address space could be allocated to the second memory device. The kernel235can be software and/or code that performs operations (e.g., low level operations) and interacts with hardware and/or software components of the operating system (OS) and is controlled and/or executed by the computing system. The kernel235can coordinate memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit, and can connect the application software to the hardware of a computer. The kernel235can perform tasks, such as running processes, managing hardware devices such as the hard disk, and handling interrupts. The interface of the kernel235can be a low-level abstraction layer. The kernel235can communicate with the emerging memory device230using a double-data rate (DDR) software protocol237used to communicate with emerging memories. The kernel235can communicate with the NV memory device240using a non-volatile memory express (NVMe) software protocol239. The NVMe software protocol239is an open logical-device interface specification for accessing non-volatile memory media attached via a PCI Express (PCIe) bus. In some embodiments, a determination of whether to store a key and associated data in the emerging memory device230or the NV memory device240can be based on a type of characteristic set. Embodiments provide that a type of characteristic set can include one or more characteristics including, but not limited to, access frequency, memory access size (e.g., a quantity of bits associated with a memory object), and/or whether a memory access includes sequential or non-sequential accesses.
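Stepping back to the mapping lists of FIG.2, a minimal sketch follows, assuming plain dictionaries stand in for the first and second mapping lists; the “LA”/“LBA” prefix alone picks the device, mirroring the example keys above.

first_mapping_list = {"File 45": "LA8", "File 5": "LA10", "File 9": "LA234"}
second_mapping_list = {"File 0": "LBA 0", "File 1": "LBA 1", "File 2": "LBA 2"}

def locate(key):
    # Returns an illustrative (device, address) pair for a key.
    for mapping_list in (first_mapping_list, second_mapping_list):
        if key in mapping_list:
            address = mapping_list[key]
            # "LA..." -> emerging memory device; "LBA ..." -> NV device.
            device = "emerging" if address.startswith("LA") else "nv"
            return (device, address)
    raise KeyError(key)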
For example, a memory object accessed with a first access frequency during a particular period of time that is greater than a second access frequency during the particular period of time can be stored in the emerging memory device230. As an example, a higher access frequency can include several times a day, several times a week, etc. A lower access frequency can include once a month, once a year, etc. A more frequently accessed memory object can be referred to as “hot” and can refer to a memory object that is updated more frequently by the host and/or other external devices. A memory object accessed with the second access frequency can be stored in the NV memory device240. The memory object with the second access frequency can be referred to as “cold” and can refer to a memory object that is updated less frequently by the host and/or external device. In this way, an access frequency can be used to designate whether the key value pair of the memory object indicates an “LA” (and store in the emerging memory device230) or an “LBA” (and store in the NV memory device240). Further, access frequency can indicate how often an associated address space is accessed during a particular time interval. Embodiments provide that for a first characteristic set, the particular time interval can be smaller, i.e., a shorter time passage, as compared to time intervals for a second characteristic set. The particular time interval can have various values, e.g., for different applications. As an example, a stock account database can be frequently updated or accessed as data may change quickly. A health database can be less frequently updated or accessed as the data may be updated when a patient visits a healthcare facility, etc. In one example, such as an aviation database, a hybrid of both small, frequently updated memory objects (e.g., such as with data associated with flight tracking coordinates), and large, less frequently updated memory objects (e.g., such as with maintenance data) can be accessed. As a further example, the particular time intervals may be 5 microseconds, 10 microseconds, 1 second, 1 minute, a day, a month, a year, among other values. Further, embodiments provide that the particular time interval may change over time, e.g., based upon changing workloads, benchmarks, and/or the host data traffic behavior, for instance. Generally, a greater access frequency will make a memory object more “hot,” as compared to another memory object having a lesser access frequency, which may be more “cold.” In other words, a memory object with greater or the greatest access frequency will generally have the first designation (and be stored in the emerging memory device230) and memory objects with lower or the least access frequency will generally be stored in the NV memory device240. Embodiments provide that a type of characteristic set can include one or more characteristics including, but not limited to, a size of a memory object, access frequency, a type of key value data, etc. For example, a memory object with a first data size less than a second data size can be stored in the emerging memory device230. As an example, a smaller data size can include 1 kilobyte (KB), 2 KB, 4 KB, or 16 KB. As an example, a larger data size can include data sizes ranging from 16 KB to several gigabytes (GBs). A memory object with the second data size can be stored in the NV memory device240.
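The placement decision just described can be sketched, under assumed thresholds, as a small policy function; the cutoff values are taken loosely from the examples above and are not prescribed by the embodiments.

HOT_ACCESSES_PER_WEEK = 2        # assumed: several accesses a week or more is "hot"
SMALL_OBJECT_BYTES = 16 * 1024   # assumed: up to 16 KB counts as small

def choose_device(accesses_per_week, size_bytes):
    # Hot or small memory objects favor the emerging memory device;
    # cold or large ones favor the NV (flash-based) memory device.
    if accesses_per_week >= HOT_ACCESSES_PER_WEEK or size_bytes < SMALL_OBJECT_BYTES:
        return "emerging"
    return "nv"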
This can be particularly useful when using hybrid workloads that can run more efficiently when optimized for both large data blocks (e.g., writing to a NAND memory device in the description herein) and smaller data segments (e.g., writing to an emerging memory device in the description herein). In this way, a data size can be used to designate whether the key value pair of the memory object indicates an “LA” (and store in the emerging memory device230) or an “LBA” (and store in the NV memory device240). While examples describe storing initially in the emerging memory device230or the NV memory device240, embodiments are not so limited. For example, a memory object can be initially stored in the emerging memory device230and can be sent to a host and expanded to include a larger memory object (but still associated with a same key) and be subsequently stored in the NV memory device240. Likewise, a memory object can be initially stored in the emerging memory device230and can be sent to a host and then be accessed with less frequency and subsequently stored in the NV memory device240. The size of the memory object can correspond to a quantity of bits or other information contained within the memory object. Generally, a smaller size will make a memory object more “hot,” as compared to another memory object having a greater size, as the memory object may also be accessed more frequently if it is smaller. Embodiments provide that memory objects may change designations over time. Over time, the type of characteristic set associated with a memory object may change. In other words, over time one or more characteristics associated with a memory object may change. For instance, the access frequency of a memory object over a short-term time interval may decrease over time or a data size of a memory object may increase or decrease over time. As an example, a decrease in access frequency may contribute to that memory object becoming less “hot,” as compared to the memory object prior to the decrease in access frequency. As such, this decrease can result in the memory objects being transferred from one memory device type to another. Embodiments provide that a type of characteristic set can include one or more characteristics including, but not limited to, a size of a memory object, an access frequency, and/or a type of key value data. For example, a key of a memory object can be stored in the emerging memory device230and data associated with the key can be stored in the NV memory device240. At this point, in this example, the key value data may no longer be a memory object as a memory object in a key value database includes both the key and the data associated with the key. In this way, a key value data type can be used to designate whether the key value pair indicates an “LA” (and store in the emerging memory device230) or an “LBA” (and store in the NV memory device240). A hash table used to associate the key with the data can be stored in the emerging memory device230as well. In this way, a determination of whether to locate data in the NV memory device240can be performed quickly. Further, updates to the keys can be stored in the emerging memory device230in a cached update table during a foreground operation, e.g., while the memory system is performing additional operations.
Updates to the data in the NV memory device240associated with the updated keys (stored in the emerging memory device230) can be performed in the background using the cached update table, e.g., while the memory device is not performing additional operations, is in a sleep mode or other reduced power state, etc. In this way, memory resources can be preserved for operations currently being performed and used for updating the data to the NV memory device240when the operations have completed. FIG.3is a flow diagram351representing an example method for object management in tiered memory systems in accordance with a number of embodiments of the present disclosure. The method351can be performed by processing logic that can include hardware, e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc., software, e.g., instructions run or executed on a processing device, or a combination thereof. In some embodiments, the method351is performed by a processor of the host120inFIG.1A. In some embodiments, the method351is performed by control circuitry of the host120, illustrated inFIG.1A. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At block353, the method351can include writing a memory object to a first memory device. The first memory device can be analogous to the first memory device130and230inFIGS.1A/1B and2, respectively. The first memory device can be a first type of memory medium including an emerging memory such as cross-point memory, phase-change memory, and resistive RAM, etc. The second memory device can be analogous to the second memory device140and240inFIGS.1A/1B and2, respectively. The second memory device can be a second type of memory medium including a non-volatile memory including NAND Flash or NOR Flash. The flash-based memory device can be a NAND memory device or a NOR memory device. In some examples, writing the memory object can include initially writing the memory object to an emerging memory device. At block355, the method351can include determining that a size of the memory object is equal to or exceeds a threshold data size. As an example, the threshold data size can be a page size (such as a NAND page size or NOR page size). The size of the memory object can then be determined to be equal to or greater than (exceeding) a page size. At block357, the method351can include writing the memory object to the second memory device. For example, data stored to the first memory device can be written to (or transferred to) the second memory device. The memory object can be written in response to a data size of the memory object being equal to or greater than a threshold data size. As an example, the memory object can be written from the first memory device to the second memory device in response to the memory object being equal to or greater than 16 kilobytes (KBs) (which in some examples can refer to a page size).
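As an illustrative condensation only, blocks353,355, and357of method351can be sketched as a single hypothetical helper; the 16 KB threshold and the dict stand-ins for the two devices are assumptions mirroring the example above.

def method_351(obj_id, data, first_device, second_device,
               threshold=16 * 1024):
    # Block353: write the memory object to the first memory device.
    first_device[obj_id] = data
    # Block355: determine whether its size meets or exceeds the threshold.
    if len(data) >= threshold:
        # Block357: write (transfer) the object to the second memory device.
        second_device[obj_id] = first_device.pop(obj_id)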
Put another way, data can be written to a first memory device (e.g., cross-point memory device) until the data reaches a data size equal to a page size of a second memory device (e.g., a flash-based memory device), at which point the data can be written to (or transferred to) the second memory device (e.g., the flash-based memory device). In another example, the particular memory object can be transferred from the second memory device to the first memory device in response to data associated with the particular one memory object being less than 16 KBs. The data can be transferred from a flash-based memory device to a cross-point memory device so that a host requesting access to the data can access a smaller data size than a full page size. In response to the data being equal to a page size, the data can remain in the flash-based memory device and be accessed by the host from the flash-based memory device. FIG.4is a flow diagram471representing an example of object management in tiered memory systems in accordance with a number of embodiments of the present disclosure. The operations of the flow diagram471can be performed by processing logic that can include hardware, e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc., software, e.g., instructions run or executed on a processing device, or a combination thereof. In some embodiments, the method is performed by a processor of the host120inFIG.1A. In some embodiments, the method is performed by control circuitry of the host120, illustrated inFIG.1A. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation473, a plurality of memory objects can be written to a flash-based memory device. At operation475, a request to access data of a memory object in the flash-based memory device can be received. The request can be sent by a host to a controller of a memory system that includes the flash-based memory device. Example data sizes can include 1 kilobyte (KB), 2 KB, 4 KB, or 16 KB, and can also range from 16 KB to several gigabytes (GBs). At operation477, whether the data size is less than a threshold data size can be determined. In response to the data size being less than the threshold data size (indicated by “YES”), as illustrated at operation479, the memory object can be transferred to a cross-point memory device. The data transferred to the cross-point memory device can be accessed in the cross-point memory device by a host. The host can request to access a portion of data that may be a portion of a full page size of data. The data accessed by the host can be a portion of the full page of data that would have been read had the data remained in the flash-based memory device. That is, the host can access the data requested while not accessing data that would have been part of a full page of data had the data remained in the flash-based memory device.
In response to the data size being equal to or greater than the threshold data size (indicated by “NO”), as illustrated at operation481, the memory object can be read from the flash-based memory device. In this instance, the memory object can be read from the flash-based memory device without transferring the memory object to the cross-point memory device. Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. | 39,553 |
11861223 | DETAILED DESCRIPTION The specific structural or functional description disclosed herein is merely illustrative for the purpose of describing embodiments according to the concept of the present disclosure. The embodiments according to the concept of the present disclosure can be implemented in various forms, and cannot be construed as limited to the embodiments set forth herein. FIG.1is a diagram illustrating a storage system in accordance with an embodiment. The storage system may be implemented as a data processing system including, for example, a personal computer (PC), a data center, an enterprise type data storage system, a direct attached storage (DAS), a data processing system including a storage area network (SAN), a data processing system including a network attached storage, or another type of system or device. Referring toFIG.1, the storage system may include a storage device1000and a host400. The storage device1000may store data according to a request of the host400, such as a mobile phone, a smart phone, an MP3 player, a laptop computer, a desktop computer, a game console, a TV, a tablet PC or an in-vehicle infotainment system. The storage device1000may be manufactured as any one of various types of storage devices according to a host interface that is a communication scheme with the host400. Examples include a Solid State Drive (SSD), a Multi-Media Card (MMC), an Embedded MMC (eMMC), a Reduced Size MMC (RS-MMC), a micro-MMC, a Secure Digital (SD) card, a mini-SD card, a micro-SD card, a Universal Serial Bus (USB) storage device, a Universal Flash Storage (UFS) device, a Compact Flash (CF) card, a Smart Media Card (SMC), a memory stick, and the like. The storage device1000may be manufactured as any one of various kinds of package types. Examples include a Package-On-Package (POP), a System-In-Package (SIP), a System-On-Chip (SOC), a Multi-Chip Package (MCP), a Chip-On-Board (COB), a Wafer-level Fabricated Package (WFP), and a Wafer-level Stack Package (WSP). In an embodiment, one storage device1000may be provided as shown inFIG.1. However, the present disclosure is not limited thereto, and two or more storage devices1000may be provided. A plurality of storage devices1000may operate by using a redundant array of independent disks (RAID) scheme or a redundant array of inexpensive disks (RAID) scheme, in which the plurality of storage devices1000operate as one storage device. The storage device1000may include a memory device100and a memory controller200. The memory device100may operate under the control of the memory controller200. For example, the memory device100may receive a command and an address from the memory controller200, and access a memory cell selected by the address among memory cells. The memory device100may perform an operation instructed by the command on the memory cell selected by the address. The command may be, for example, a program command, a read command, or an erase command. A program command may instruct the memory device100to perform a program operation (or write operation). A read command may instruct the memory device100to perform a read operation. An erase command may instruct the memory device100to perform an erase operation. Thus, operations instructed by corresponding ones of the commands may be, for example, a program operation (or write operation), a read operation, or an erase operation.
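Purely as an illustrative model of this command/operation pairing, the toy device below receives a command and an address and performs the instructed operation; the command names and the dict standing in for the memory cells are assumptions, not the disclosed interface.

class ToyMemoryDevice:
    def __init__(self):
        self.cells = {}  # stand-in for memory cells selected by address

    def execute(self, command, address, data=None):
        if command == "PROGRAM":          # program (write) operation
            self.cells[address] = data
        elif command == "READ":           # read operation
            return self.cells.get(address)
        elif command == "ERASE":          # erase operation
            self.cells.pop(address, None)
        else:
            raise ValueError(f"unknown command: {command}")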
Additionally, a program operation may be an operation in which the memory device100stores data provided from the host400under the control of the memory controller200. In one embodiment, the program operation may be an operation of storing data in any one memory block among a plurality of memory blocks in the memory device100. For example, the memory device100may receive a program command, an address, and data, and program the data in a memory cell selected by the address. The data to be programmed in the selected memory cell may be referred to as write data. The write data may include data (or user data) provided from the host400and meta data of the data. A read operation may be an operation in which the memory device100reads read data stored in the memory device100under the control of the memory controller200. For example, the memory device100may receive a read command and an address, and read data from an area selected by the address in a memory cell array. The data to be read from the selected area among data stored in the memory device100may be defined as read data. An erase operation may be an operation in which the memory device100erases data stored in the memory device100under the control of the memory controller200. In one embodiment, an erase operation may erase data stored in any one memory block among the plurality of memory blocks in the memory device100. For example, the memory device100may receive an erase command and an address, and erase data stored in an area selected by the address. The memory device100may be implemented as a volatile memory device or a nonvolatile memory device. Examples of a volatile memory device include a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), a Low Power Double Data Rate 4 (LPDDR4) SDRAM, a Graphics Double Data Rate (GDDR) SDRAM, a Low Power DDR (LPDDR), a Rambus Dynamic Random Access Memory (RDRAM), and the like. Examples of a nonvolatile memory device may include a Resistive Random Access Memory (RRAM), a Phase-Change Random Access Memory (PRAM), a Magnetoresistive Random Access Memory (MRAM), a Ferroelectric Random Access Memory (FRAM), a Spin Transfer Torque Random Access Memory (STT-RAM), and a flash memory. The flash memory may include, for example, a NAND flash memory, a vertical NAND flash memory, a NOR flash memory, and the like. For illustrative purposes, it is assumed that the memory device100is a NAND flash memory. The memory device100may store write data under the control of the memory controller200, or may read stored read data and provide the read data to the memory controller200. The memory device100may include a plurality of planes101,102,103, and104. The number of planes may be 4 as shown inFIG.1, but the present disclosure is not limited thereto. Each plane may include a memory cell array including memory cells for storing write data. The memory cell array may include a plurality of memory blocks. A memory block may be a unit for performing an erase operation of erasing data. A memory block may include a plurality of pages, with each page corresponding to a unit for performing a program operation of storing write data or a read operation of reading stored read data. The memory cell may be configured as any one of a Single Level Cell (SLC) storing 1-bit data, a Multi-Level Cell (MLC) storing 2-bit data, a Triple Level Cell (TLC) storing 3-bit data, and a Quadruple Level Cell (QLC) storing 4-bit data. However, the present disclosure is not limited thereto, and the memory cell may store 5 or more-bit data.
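For a rough sense of the plane/block/page hierarchy and cell types just described, the sketch below computes a plane's capacity from assumed example counts; none of the figures are taken from the disclosure.

BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def plane_capacity_bytes(blocks_per_plane, pages_per_block,
                         cells_per_page, cell_type):
    bits = (blocks_per_plane * pages_per_block * cells_per_page
            * BITS_PER_CELL[cell_type])
    return bits // 8

# Example (all counts assumed): 1000 blocks x 512 pages x 131072 cells
# as TLC -> 25,165,824,000 bytes (about 25 GB) per plane.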
In an embodiment, the memory device100may perform an operation instructed by a command using, for example, a plane interleaving scheme. A plane interleaving scheme may be a scheme in which operations on respective ones of two or more planes at least partially overlap with each other. For example, the memory device100may perform a read operation on a zeroth plane101and a read operation on a first plane102to overlap with each other. However, the present disclosure is not limited thereto. The memory controller200may control overall operation of the storage device1000. For example, when power is applied to the storage device1000, the memory controller200may execute instructions (e.g., firmware). When the memory device100is a flash memory device, the firmware may include a host interface layer, a flash translation layer, and a flash interface layer. The power may be supplied, for example, from an external supply. The host interface layer may control an operation between the host400and the memory controller200. The flash translation layer may translate a logical address provided from the host400into a physical address, and may control communications between the memory controller200and the memory device100. The memory controller200may control the memory device100to perform a program operation, a read operation, and an erase operation respectively in response to a write request, a read request, and an erase request of the host400. In a program operation, the memory controller200may provide the memory device100with a program command, a physical address, and write data. In an embodiment, in a program operation, the memory controller200may provide the memory device100with a program command and a physical address. Also, the memory controller200may provide a flush command to a buffer memory to provide (or flush) data temporarily stored in the buffer memory to the memory device100. When the data temporarily stored in the buffer memory is provided to the memory device100, the data temporarily stored in the buffer memory may be erased. In a read operation, the memory controller200may provide the memory device100with a read command and a physical address. In an erase operation, the memory controller200may provide the memory device100with an erase command and a physical address. In an embodiment, the physical address may include a plane number, a block number, a page number, and a sub-page number. The memory controller200may autonomously generate a command, an address, and data regardless of any request provided from the host400. The memory controller200may transmit the command, the address, and the data, which are autonomously generated, to the memory device100. For example, the memory controller200may generate a command for performing a background operation, an address, and data. Also, the memory controller200may provide the memory device100with the command, the address, and the data. The command for performing the background operation may be, for example, a program command or a read command. The background operation may be at least one of wear leveling, read reclaim, and garbage collection. Wear leveling may include, for example, static wear leveling, dynamic wear leveling, etc. Static wear leveling may include an operation of storing a number of times memory blocks are erased, and moving cold data, on which erase or write operations are rarely performed, to a memory block which is erased a largest number of times.
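A minimal sketch of the static wear leveling just described, assuming erase counts are kept per block number: cold data is moved to the block erased the largest number of times.

def most_erased_block(erase_counts):
    # erase_counts: dict mapping block number -> number of times erased
    return max(erase_counts, key=erase_counts.get)

def static_wear_level(cold_block, erase_counts, move_data):
    # move_data is an assumed callback that relocates a block's data.
    target = most_erased_block(erase_counts)
    if target != cold_block:
        move_data(cold_block, target)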
Dynamic wear leveling may include an operation of storing a number of times memory blocks are erased, and programming data in a memory block which is erased a smallest number of times. Read reclaim may include an operation of moving data stored in a memory block to another memory block before an uncorrectable error occurs in the data stored in the memory block. Garbage collection may include an operation of copying valid data included in a bad block among memory blocks to a free block, and erasing invalid data in the bad block. Copying valid data in a bad block to a free block may include moving the valid data in the bad block to the free block. In an embodiment, the memory controller200may control the memory device100to perform a background operation in a predetermined period, e.g., an idle period or another period. An idle period may include, for example, a period in which any request of the host400is not provided. In one embodiment, the idle period may include a period corresponding to that from a time at which a response to a request of the host400is provided to the host400to a time at which a subsequent request of the host400is provided to the storage device1000. In an embodiment, the memory controller200may control two or more memory devices100. The memory controller200may control the memory devices100according, for example, to an interleaving scheme to improve operational performance. An interleaving scheme may include controlling operations on the two or more memory devices100to overlap with each other. The memory controller200may sequentially store a command and a physical address in at least one command queue, and may provide the memory device100with the command and the physical address, which are stored in the command queue, according to a scheduled sequence. The command and the physical address, which are stored in the command queue, may be output according to a first-in first-out (FIFO) scheme. However, the present disclosure is not limited thereto. For example, the memory controller200may sequentially store a read command and a physical address in a read command queue, and sequentially provide the read command and the physical address, which are stored in the read command queue, to the memory device100. In one example, the memory controller200may store a program command and a physical address in a program command queue, and store a read command and a physical address in a read command queue. Also, the memory controller200may first provide the program command and the physical address to the memory device100and then provide the read command and the physical address to the memory device100. The memory controller200may provide an erase command to the memory device100. While the memory device100performs an erase operation in response to the erase command, the memory controller200may receive a request (e.g., a read request) from the host400. The memory controller200may provide a suspend command in response to the request of the host400. The suspend command may instruct the memory device100to suspend the erase operation. After the suspend command is provided to the memory device100, the memory controller200may provide the memory device100with a command (e.g., a read command) instructing the memory device100to perform an operation corresponding to the request of the host400. The memory device100may suspend the erase operation in response to the suspend command, and perform an operation (e.g., a read operation) in response to the command (e.g., the read command).
After the operation corresponding to the request is completed, the memory controller200may provide a resume command to the memory device100. A resume command may instruct the memory device100to resume the erase operation. The memory device100may resume the erase operation in response to the resume command. When the erase operation is resumed in response to the resume command, there may be a certain preparation time until the erase operation is normally performed. This preparation time may be a delay until the erase operation is normally performed. When the host400provides a request (e.g., a read request) to the memory controller200during the preparation time, a command (e.g., a read command) instructing the memory device100to perform an operation corresponding to the request may be continuously stored in a command queue. When commands are continuously queued in the command queue, a response to the request of the host400may be delayed.

When commands instructing an operation on any one plane among the plurality of planes101,102,103, and104are sequentially stored in a command queue, a command to be output next may be provided to the memory device100only when an operation instructed by a command output first is completed. For example, while the operation instructed by the command output first is performed, the command to be output next may be queued in the command queue. In one embodiment, when a program command instructing a program operation on the zeroth plane101and a read command instructing a read operation on the zeroth plane101are sequentially stored in each command queue, the read command may be queued in the command queue while the program operation based on the program command is performed. While the operation instructed by the command output first is not completed but continuously performed, a read request of the host400may be provided to the memory controller200. A read command for the read request of the host400may be stored in a read command queue, and a response to the read request of the host400may be delayed when read commands are continuously queued in the read command queue.

In an embodiment, the memory controller200may store, in a first read command queue, a first read command and a first physical address mapped to a logical address provided from the host400, in response to a read request from the host400. The memory controller200may compare physical addresses stored in the first read command queue. The memory controller200may compare a page number of a second physical address, which is scheduled in a first priority index number among index numbers of the first read command queue, with a page number of the first physical address. When the page number of the first physical address and the page number of the second physical address are the same, the memory controller200may schedule the first physical address in the first priority index number among the index numbers of the first read command queue. The memory controller200may sequentially provide the read commands and the physical addresses, which are stored in the first read command queue, to the memory device100according to a scheduled sequence. In one embodiment, the memory controller200may generate a first read command in response to a read request provided from the host400, translate a logical address provided from the host400into a first physical address, and store the first read command and the first physical address in the first read command queue.
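A minimal sketch of the comparison just described, assuming a simple list-based queue (all structures and names here are hypothetical): when the page number of the newly translated first physical address matches that of the address scheduled in the first priority index number, both share that index; otherwise the new entry joins the FIFO tail.

```python
# Hypothetical sketch: merge a new read into the first priority index
# of the first read command queue when page numbers match (FIFO otherwise).
def enqueue_read(queue, new_addr):
    if queue and queue[0]["addrs"][0]["page"] == new_addr["page"]:
        queue[0]["addrs"].append(new_addr)   # share the priority slot
    else:
        queue.append({"addrs": [new_addr]})  # ordinary FIFO tail insert
    return queue

queue = [{"addrs": [{"plane": 0, "block": 100, "page": 30}]}]
enqueue_read(queue, {"plane": 1, "block": 201, "page": 30})
print(len(queue), queue[0]["addrs"])  # 1; both page-30 reads share index 0
```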
Also, in response to occurrence of a scheduling event, the memory controller200may search for a first physical address group which includes the first physical address and at least one second physical address having a page number equal to that of the first physical address, among the physical addresses stored in the first read command queue. Also, the memory controller200may sequentially schedule, in consecutive index numbers of a second read command queue, the first physical address group and a second physical address group which includes all physical addresses, among the physical addresses stored in the first read command queue, having a plane number different from every plane number of the first physical address group.

In one embodiment, the memory controller200may store, in the first read command queue, a first read command instructing the memory device100to perform a read operation and a first physical address, while a background operation is performed. Also, the memory controller200may store a second read command and a second physical address in the second read command queue in response to a read request provided from the host400while the background operation is performed. Also, the memory controller200may schedule the first physical address, the second physical address, and the second read command in a third read command queue according to a result obtained by comparing the first physical address and the second physical address.

The number of physical addresses in a physical address group may be one or more, and the number of physical address groups may be one or more. The scheduling event may occur after the resume command is provided to the memory device100. For example, the scheduling event may occur in a period from a time at which the resume command is provided to the memory device100to a time at which the suspend command is provided to the memory device100. The scheduling event may occur, for example, at a time at which the erase operation is completed or before the program operation (or write operation) is completed.

The memory controller200may include a command generation controller210, a command storage220, and a command schedule controller230. The command generation controller210may generate a command in response to a request of the host400. For example, the command generation controller210may generate a read command in response to a read request of the host400. For example, the command generation controller210may generate a program command in response to a write request of the host400. For example, the command generation controller210may generate an erase command in response to an erase request of the host400. For example, the command generation controller210may generate a suspend command or a resume command. The command generation controller210may translate a logical address provided from the host400into a physical address. In one embodiment, the command generation controller210may be implemented as a flash translation layer. The command generation controller210may provide the memory device100with a command and a physical address, which are stored in the command storage220. In an embodiment, the command generation controller210may provide an erase command to the memory device100, provide a suspend command to the memory device100in response to a request provided by the host400during an erase operation, and provide a resume command to the memory device100when an operation corresponding to the request is completed.
In an embodiment, after the resume command is provided to the memory device100, the command generation controller210may provide a scheduling event signal to the command schedule controller230. An embodiment will be described with reference toFIG.5. In one embodiment, the command generation controller210may provide the scheduling event signal to the command schedule controller230after a predetermined period elapses, measured from a time at which a command instructing the memory device100to perform an erase operation or a write operation is provided to the memory device100. An embodiment will be described with reference toFIGS.6and7.

The command storage220may store a command and a physical address. The command storage220may include at least one read command queue. For example, the command storage220may include one or more of a read command queue, a program command queue, or an erase command queue.

The command schedule controller230may search for at least one second physical address including a page number equal to that of a first physical address, among physical addresses stored in a first read command queue, in response to the scheduling event signal. The first physical address and the at least one second physical address may be in a first physical address group. The command schedule controller230may search for a second physical address group among the physical addresses stored in the first read command queue. The second physical address group may include all physical addresses including a plane number different from all plane numbers of the first physical address group. The command schedule controller230may sequentially schedule the first physical address group and the second physical address group, for example, in consecutive index numbers of a second read command queue.

In one embodiment, the storage device1000may include a buffer memory for storing data only while power is supplied from a power source. The buffer memory may be in the memory controller200. In one embodiment, the buffer memory may be outside the memory controller200and coupled to it. The buffer memory may be, for example, a volatile memory device, e.g., a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), a Low Power Double Data Rate 4 (LPDDR4) SDRAM, a Graphics Double Data Rate (GDDR) SDRAM, a Low Power DDR (LPDDR), and a Rambus Dynamic Random Access Memory (RDRAM).

The host400may communicate with the storage device1000through an interface. The interface may be implemented, for example, as a serial advanced technology attachment (SATA) interface, a SATA express (SATAe) interface, a serial attached SCSI (SAS) interface, a peripheral component interconnect express (PCIe) interface, a non-volatile memory express (NVMe) interface, an advanced host controller interface (AHCI), or a multimedia card interface. However, the present disclosure is not limited thereto. The host400may store data in the storage device1000or may communicate with the storage device1000to acquire data stored in the storage device1000. In an embodiment, the host400may provide the storage device1000with a write request for requesting the storage device1000to store data. Also, the host400may provide the storage device1000with a write request, data, and a logical address for identifying the data.
The storage device1000may store write data (e.g., including the data provided by the host400and meta data) in the memory device100and may provide the host400with a response representing that the storing of the write data has been completed, in response to the write request from the host400. In an embodiment, the host400may provide the storage device1000with a read request for requesting the storage device1000to provide data stored in the storage device1000to the host400. Also, the host400may provide the read request and a read address to the storage device1000. In response to the read request provided from the host400, the storage device1000may read, from the memory device100, read data corresponding to the read address provided by the host400and may provide the host400with the read data as a response to the read request.

FIG.2is a diagram illustrating a plurality of planes PLANE0, PLANE1, PLANE2, and PLANE3in accordance with an embodiment. The planes PLANE0, PLANE1, PLANE2, and PLANE3may correspond, for example, to the planes101,102,103, and104shown inFIG.1. Referring toFIG.2, the planes PLANE0, PLANE1, PLANE2, and PLANE3may be connected to one channel. In an embodiment, data output from each plane may be sequentially provided to the memory controller200through the one channel. For example, after data output from any one plane among the planes PLANE0, PLANE1, PLANE2, and PLANE3is provided to the memory controller200through the one channel, data output from another one of the planes PLANE0, PLANE1, PLANE2, and PLANE3may be provided to the memory controller200through the one channel. Thus, in one embodiment, two or more planes may not simultaneously output data through the one channel.

Each of the planes PLANE0, PLANE1, PLANE2, and PLANE3may include a plurality of memory blocks MB1, MB2, MB3, . . . , and MBm, where m is a natural number of 2 or more. In an embodiment, the memory blocks MB1, MB2, MB3, . . . , and MBm may include one or more system blocks and one or more user blocks. For example, a first memory block MB1and a second memory block MB2may be system blocks and third to mth memory blocks MB3to MBm may be user blocks. A system block may store meta data including map data, validity data, uncorrectable error data, operation data, and the like. The map data may be data representing a mapping relationship between logical and physical addresses. The validity data may be data representing validity of data (or user data) provided from the host400. The uncorrectable error data may be data representing that data (or user data) provided from the host400is data having an uncorrectable error. The operation data may be data representing whether a physical address stored in a command queue is an address translated from a logical address from the host400or an address generated to perform a background operation. However, the present disclosure is not limited thereto. A user block may store, for example, data provided from the host400and meta data. A plurality of user blocks may be included.

Each of the memory blocks MB1, MB2, MB3, . . . , and MBm may include a plurality of pages PAGE1, PAGE2, . . . , and PAGEn, where n is a natural number of 2 or more. Each of the pages PAGE1, PAGE2, . . . , and PAGEn may be divided into virtual sub-pages SP0, SP1, SP2, and SP3according to a read unit as a unit for performing a read operation. The read unit may be predetermined based on a size of the page and a number of the sub-pages.
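The address fields and the sub-page division described above can be modeled as in the following sketch; the class name, field names, and the page-size and sub-page figures (matching the example given next) are illustrative assumptions, not part of this disclosure.

```python
# Hypothetical model of a physical address and the read-unit derivation;
# names and sizes are illustrative, not from this disclosure.
from dataclasses import dataclass

PAGE_SIZE_KB = 16                              # example page size
NUM_SUB_PAGES = 4                              # sub-pages per page
READ_UNIT_KB = PAGE_SIZE_KB // NUM_SUB_PAGES   # 4 KB read unit

@dataclass(frozen=True)
class PhysicalAddress:
    plane: int        # plane number (e.g., 0..3)
    block: int        # memory block number within the plane
    page: int         # page number within the block
    sub_pages: tuple  # sub-page numbers to read, e.g., (0, 3)

addr = PhysicalAddress(plane=1, block=201, page=30, sub_pages=(2,))
print(READ_UNIT_KB, addr.page)  # 4 30
```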
For example, when the size of the page is 16 KB and the number of the sub-pages is 4, the read unit may be 4 KB. However, the present disclosure is not limited thereto. In one or more embodiments, the terms “sub-page,” “slice,” “section,” and the like may have the same meaning.

In an embodiment, the physical address may include a plane number, a block number, a page number, and a sub-page number. A plane number may indicate any one plane among the planes PLANE0, PLANE1, PLANE2, and PLANE3. A block number may indicate any one memory block among a plurality of memory blocks MB1, MB2, MB3, . . . , and MBm in one plane. A page number may indicate any one page among a plurality of pages PAGE1, PAGE2, . . . , and PAGEn in one memory block. A sub-page number may indicate one of sub-pages SP0, SP1, SP2, and SP3in one page.

FIG.3is a diagram illustrating an embodiment of memory device100, which may include a memory cell array110, a peripheral circuit120, and a control logic130. The memory cell array110may include a plurality of memory blocks MB1to MBk (k is a positive integer). The number of memory blocks MB1to MBk shown is merely an example and may be a different number in another embodiment. Each of the memory blocks MB1to MBk may be connected to local lines LL and bit lines BL1to BLn (n is a positive integer). The local lines LL may be connected to a row decoder122and to each of the memory blocks MB1to MBk. The local lines LL may include a first select line, a second select line, and a plurality of word lines arranged between the first select line and the second select line. The local lines LL may further include dummy lines arranged between the first select line and the word lines, dummy lines arranged between the second select line and the word lines, and pipe lines. The bit lines BL1to BLn may be commonly connected to the memory blocks MB1to MBk. The memory blocks MB1to MBk may be implemented in a two-dimensional or three-dimensional structure. For example, memory cells in the memory blocks MB1to MBk having the two-dimensional structure may be arranged in a direction parallel to a substrate. Memory cells in the memory blocks MB1to MBk having the three-dimensional structure may, for example, be stacked in a direction vertical to a substrate.

The peripheral circuit120may include a voltage generator121, the row decoder122, a page buffer group123, a column decoder124, an input/output circuit125, and a sensing circuit126. The voltage generator121may generate various operating voltages Vop used for a program operation, a read operation, and an erase operation in response to an operation command OP_CMD. Also, the voltage generator121may selectively discharge the local lines LL in response to the operation command OP_CMD. For example, the voltage generator121may generate a program voltage, a verify voltage, pass voltages, a turn-on voltage, a read voltage, an erase voltage, a source line voltage, and the like under the control of the control logic130. In an embodiment, the voltage generator121may generate an internal power voltage by regulating an external power voltage. The internal power voltage generated by the voltage generator121may be used as an operating voltage of the memory device100. In an embodiment, the voltage generator121may generate a plurality of voltages using the external power voltage or the internal power voltage.
For example, the voltage generator121may include a plurality of pumping capacitors for receiving the internal power voltage, and may generate a plurality of voltages by selectively activating the pumping capacitors under control of the control logic130. The generated voltages may be supplied to the memory cell array110by the row decoder122.

The row decoder122may transfer the operating voltages Vop to the local lines LL in response to a row address RADD. The operating voltages Vop may be transferred to a selected memory block among the memory blocks MB1to MBk through the local lines LL. For example, in a program operation, the row decoder122may apply a program voltage to a selected word line, and apply a program pass voltage having a level lower than that of the program voltage to unselected word lines. In a program verify operation, the row decoder122may apply a verify voltage to the selected word line, and apply a verify pass voltage higher than the verify voltage to the unselected word lines. In a read operation, the row decoder122may apply a read voltage to the selected word line, and apply a read pass voltage higher than the read voltage to the unselected word lines. In an erase operation, the row decoder122may select one memory block according to a decoded address. In the erase operation, the row decoder122may apply a reference (e.g., ground) voltage to word lines connected to the selected memory block.

The page buffer group123may include first to nth page buffers PB1to PBn connected to the memory cell array110respectively through first to nth bit lines BL1to BLn. The first to nth page buffers PB1to PBn may operate under the control of the control logic130. For example, the first to nth page buffers PB1to PBn may operate in response to page buffer control signals PBSIGNALS. The first to nth page buffers PB1to PBn may, for example, temporarily store data received through the first to nth bit lines BL1to BLn, or sense a voltage or current of the bit lines BL1to BLn in a read operation or a verify operation. In a program operation, when the program voltage is applied to the selected word line, the first to nth page buffers PB1to PBn may transfer data DATA received through the column decoder124and the input/output circuit125to selected memory cells through the first to nth bit lines BL1to BLn. Memory cells of a selected page are programmed according to the transferred data DATA. A memory cell connected to a bit line to which a program allow voltage (e.g., a ground voltage) is applied may have an increased threshold voltage. A threshold voltage of a memory cell connected to a bit line to which a program inhibit voltage (e.g., a power voltage) is applied may be maintained. In a verify operation, the first to nth page buffers PB1to PBn may sense data stored in the selected memory cells through the first to nth bit lines BL1to BLn. In a read operation, the first to nth page buffers PB1to PBn may sense data DATA from memory cells of a selected page through the first to nth bit lines BL1to BLn, and output the sensed data DATA to the input/output circuit125under the control of the column decoder124. In an erase operation, the first to nth page buffers PB1to PBn may float the first to nth bit lines BL1to BLn.

The column decoder124may transfer data between the input/output circuit125and the page buffer group123in response to a column address CADD.
For example, the column decoder124may exchange data with the page buffers PB1to PBn through data lines DL, or exchange data with the input/output circuit125through column lines CL. The input/output circuit125may transfer a command CMD and an address ADD, which are transferred from the memory controller200, to the control logic130, or exchange data DATA with the column decoder124. In a read operation or a verify operation, the sensing circuit126may generate a reference current in response to an allow bit VRY_BIT<#>, and output a pass signal PASS or a fail signal FAIL by comparing a sensing voltage VPB received from the page buffer group123with a reference voltage generated by the reference current. The control logic130may control the peripheral circuit120by outputting the operation command OP_CMD, the row address RADD, the page buffer control signal PBSIGNALS, and the allow bit VRY_BIT<#> in response to the command CMD and the address ADD.

FIG.4is a diagram illustrating an embodiment of the memory controller200, which may include a command generation controller210, a command storage220, and a command schedule controller230. Referring toFIG.4, the command generation controller210may receive a read request from the host400. The command generation controller210may generate a first read command instructing the memory device100to read data stored in the memory device100in response to the read request. The first read command may be generated in response to the read request currently provided by the host400. The command generation controller210may translate a logical address provided from the host400into a first physical address. The first physical address may be a physical address to be provided together with the first read command. In one embodiment, the first physical address may be a physical address to be provided together with a read command stored in the command storage220. The command generation controller210may provide the first read command and the first physical address to the command storage220. The command generation controller210may generate a scheduling event signal, and provide the scheduling event signal to the command schedule controller230.

The command storage220may store read commands and physical addresses. The read commands and the physical addresses, which are stored in the command storage220, may be generated before the first read command and the first physical address are generated. In an embodiment, the command storage220may include a first read command queue and a second read command queue. The first read command queue may store read commands and physical addresses before a scheduling event occurs. The second read command queue may be a read command queue in which the read commands and the physical addresses are realigned and stored, after the scheduling event occurs. The command schedule controller230may search for a first physical address group and a second physical address group among the physical addresses stored in the first read command queue. The command schedule controller230may schedule the first physical address group and the second physical address group in the second read command queue.

FIG.5is a diagram illustrating an embodiment of an operation of providing a scheduling event signal. Referring toFIG.5, at time T1, the command generation controller210may provide an erase command ECMD to the memory device100. The memory device100may start an erase operation in response to the erase command ECMD.
At time T2, the host400may provide a request to the command generation controller210. The request of the host400may be, for example, a write request or a read request. The command generation controller210may provide a suspend command SPD_CMD to the memory device100. The memory device100may suspend the erase operation started at the time T1in response to the suspend command SPD_CMD. At time T3, the command generation controller210may provide the memory device100with a command CMD corresponding to the request of the host400, which is provided at the time T2. When the request of the host400is the write request, the command CMD may be a program command. In one embodiment, when the request of the host400is the read request, the command CMD may be a read command. The memory device100may perform an operation instructed by the command CMD in response to the command CMD. At time T4, the memory device100may complete the operation instructed by the command CMD. The command generation controller210may provide a resume command RSM_CMD to the memory device100. The memory device100may resume the erase operation suspended at the time T2in response to the resume command RSM_CMD.

A certain preparation time may be included for the memory device100to normally perform the erase operation. This preparation time may be referred to as a resume delay time. For example, when the erase operation is resumed at time T4, the erase operation may be normally performed from T6after the resume delay time elapses. During the resume delay time, the host400may provide a read request to the command generation controller210. A read command and a physical address may be sequentially stored in the first read command queue included in the command storage220, whenever the read request of the host400is provided to the command generation controller210. When read commands and physical addresses, which are stored in the first read command queue, are continuously queued in the first read command queue, a response to the read request of the host400may be delayed. In an embodiment, at a time T5after the resume command RSM_CMD is provided to the memory device100, the command generation controller210may provide a scheduling event signal EVT_SIG to the command schedule controller230.

FIG.6is a diagram illustrating an embodiment of an operation of providing the scheduling event signal. Referring toFIG.6, the command generation controller210may not generate the suspend command SPD_CMD and the resume command RSM_CMD. After a predetermined period elapses from when the command CMD shown inFIG.5is provided to the memory device100, the command generation controller210may output the scheduling event signal EVT_SIG. At time T1′, the command generation controller210may provide a program command PCMD to the memory device100. The memory device100may start a program operation (or write operation) in response to the program command PCMD. A certain time may be required for the program operation to be normally completed. This time may be referred to as a program operation time tPROG. Information (or data) on the program operation time tPROG may be stored in a memory block allocated as a CAM block among a plurality of memory blocks MB1, MB2, MB3, . . . , and MBm in the memory device100. The memory controller200may acquire the information on the program operation time tPROG from the memory device100during booting. The program operation time tPROG may be, for example, a period corresponding to that from the time T1′ to a time T3′.
Before the program operation time tPROG elapses after a time at which the program command PCMD is provided to the memory device100, the command generation controller210may provide the scheduling event signal EVT_SIG to the command schedule controller230. In an embodiment, at a time at which a predetermined first reference time tSET1elapses after the time at which the program command PCMD is provided to the memory device100, the command generation controller210may provide the scheduling event signal EVT_SIG to the command schedule controller230. For example, the time at which the predetermined first reference time tSET1elapses after the time T1′ may be T2′. Information (or data) on the first reference time tSET1may be stored in the memory block allocated as the CAM block among the plurality of memory blocks MB1, MB2, MB3, . . . , and MBm in the memory device100. The memory controller200may acquire the information on the first reference time tSET1from the memory device100during booting. In an embodiment, the first reference time tSET1may be shorter than the program operation time tPROG. At time T3′, the command generation controller210may provide the memory device100with a command for checking a result of the program operation.

FIG.7is a diagram illustrating an embodiment of an operation of providing the scheduling event signal. Referring toFIG.7, at time T1′, the command generation controller210may provide an erase command ECMD to the memory device100. The memory device100may start an erase operation in response to the erase command ECMD. A certain time may be required for the erase operation to be normally completed. This time may be referred to as an erase operation time tER. Information (or data) on the erase operation time tER may be stored in a memory block allocated as a CAM block among a plurality of memory blocks MB1, MB2, MB3, . . . , and MBm in the memory device100. The memory controller200may acquire the information on the erase operation time tER from the memory device100during booting. The erase operation time tER may be, for example, a period corresponding to that from the time T1′ to a time T5′. In one embodiment, the erase operation time tER may be longer than the program operation time tPROG shown inFIG.6.

Before the erase operation time tER elapses after a time at which the erase command ECMD is provided to the memory device100, the command generation controller210may provide the scheduling event signal EVT_SIG to the command schedule controller230. In an embodiment, at a time at which a predetermined second reference time tSET2elapses after the time at which the erase command ECMD is provided to the memory device100, the command generation controller210may provide the scheduling event signal EVT_SIG to the command schedule controller230. For example, the time at which the predetermined second reference time tSET2elapses after the time T1′ may be T4′. Information (or data) on the second reference time tSET2may be stored in the memory block allocated as the CAM block among the plurality of memory blocks MB1, MB2, MB3, . . . , and MBm in the memory device100. The memory controller200may acquire the information on the second reference time tSET2from the memory device100during booting. In an embodiment, the second reference time tSET2may be shorter than the erase operation time tER. In an embodiment, the second reference time tSET2may be equal to the first reference time tSET1or be longer than the first reference time tSET1.
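The event timing across FIGS.5to7can be condensed into a short sketch. The microsecond values below are hypothetical placeholders; only the ordering constraints (tSET1 shorter than tPROG, tSET2 shorter than tER, tSET2 not shorter than tSET1) come from the description above.

```python
# Hypothetical timing sketch for raising EVT_SIG before a long operation
# completes; numeric values are placeholders, only the ordering
# constraints asserted below come from the description.
T_PROG = 500    # program operation time tPROG (e.g., in microseconds)
T_ER   = 3500   # erase operation time tER
T_SET1 = 300    # first reference time, shorter than tPROG
T_SET2 = 2000   # second reference time, shorter than tER

assert T_SET1 < T_PROG and T_SET2 < T_ER and T_SET1 <= T_SET2

def evt_sig_delay(op):
    # EVT_SIG is raised tSET after the command is issued, i.e., before
    # the program or erase operation itself has completed.
    return T_SET1 if op == "program" else T_SET2

print(evt_sig_delay("program"), evt_sig_delay("erase"))  # 300 2000
```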
FIG.8is a diagram illustrating an embodiment of the command storage220, which may include a first read command queue221and a second read command queue222. The first read command queue221may sequentially store read commands RCMD and physical addresses based on index numbers. For example, a physical address including plane number0P0, block number100BLK100, page number30PG30, sub-page numbers0to3S0to S3, and a read command RCMD1may be stored in index number0Index0of the first read command queue221. A physical address including plane number0P0, block number200BLK200, page number10PG10, sub-page number3S3, and a read command RCMD2may be stored in index number1Index1of the first read command queue221. A physical address including plane number1P1, block number301BLK301, page number50PG50, sub-page numbers0and3S0and S3, and a read command RCMD3may be stored in index number2Index2of the first read command queue221. A physical address including plane number3P3, block number903BLK903, page number75PG75, sub-page number1S1, and a read command RCMD4may be stored in index number3Index3of the first read command queue221.

The read commands RCMD1, RCMD2, RCMD3, and RCMD4and the physical addresses, which are respectively stored in standby columns for Index0, Index1, Index2, and Index3of the first read command queue221, may be sequentially provided to the memory device100. For example, the read commands RCMD1, RCMD2, RCMD3, and RCMD4and corresponding ones of the physical addresses may be sequentially provided according to a predetermined order, e.g., beginning with a lowest number of the standby columns Index0, Index1, Index2, and Index3of the first read command queue221. The predetermined order may be a different order in another embodiment. The second read command queue222may be empty before a scheduling event occurs.

FIG.9is a diagram illustrating an embodiment of an operation of storing read commands and corresponding physical addresses in the first read command queue. Referring toFIG.9, while the read commands RCMD1, RCMD2, RCMD3, and RCMD4and the physical addresses are respectively stored in the standby columns Index0, Index1, Index2, and Index3of the first read command queue221, the command generation controller210may store, in the first read command queue221, a read command and a physical address mapped to a logical address provided from the host400. For example, a physical address including plane number1P1, block number201BLK201, page number30PG30, sub-page number2S2, and a read command may be stored in index number4Index4of the first read command queue221. In one embodiment, commands and physical addresses stored in a command queue may be output according to the first-in first-out (FIFO) scheme. For example, the physical address and read command RCMD1stored in index number0Index0of the first read command queue221may be output first. The physical address and read command RCMD2stored in index number1Index1of the first read command queue221may be output next, and so on.

FIG.10is a diagram illustrating an embodiment of a read operation performed, for example, in accordance with the embodiment ofFIG.9. Referring toFIGS.9and10, the physical address and read command RCMD1stored in index number0Index0of the first read command queue221may be provided to the memory device100.
The memory device100may perform a read operation on a page of a single plane (e.g., zeroth plane101) having the physical address including plane number0P0, block number100BLK100, page number30PG30, and sub-page number0S0in response to the read command RCMD1. The read operation performed on the page of the single plane may be referred to as a single plane read operation SP read. Data according to the single plane read operation SP read may be output (DATA OUT). The output data may be provided to the memory controller200through the channel.

Next, the read command RCMD2and physical address stored in index number1Index1of the first read command queue221may be provided to the memory device100. The memory device100may perform a read operation PGS read on a sub-page corresponding to the physical address including plane number0P0, block number200BLK200, page number10PG10, and sub-page number3S3in response to the read command RCMD2. Data stored in the sub-page may be output by the read operation PGS read (DATA OUT).

After the read command RCMD2stored in index number1Index1of the first read command queue221is provided to the memory device100, the read command RCMD3and physical address stored in index number2Index2of the first read command queue221may be provided to the memory device100. The memory device100may perform a single plane read operation SP read on a page of a single plane (e.g., first plane102) having the physical address including plane number1P1, block number301BLK301, page number50PG50, and sub-page numbers0and1S0and S1in response to the read command RCMD3. Since the plane number of the physical address stored in index number1Index1of the first read command queue221and the plane number of the physical address stored in index number2Index2of the first read command queue221are different from each other, the memory device100may perform a read operation on the zeroth plane101having plane number0P0and the first plane102having the plane number1P1using the plane interleaving scheme. For example, the read operation PGS read on the zeroth plane101and the single plane read operation SP read on the first plane102may partially overlap with each other. Data to be output by the single plane read operation SP read performed on the first plane102is output after data output by the read operation PGS read performed on the zeroth plane101is provided to the memory controller200. The reason is that a plurality of planes (e.g., planes101to104shown inFIG.1or planes PLANE0, PLANE1, PLANE2, and PLANE3shown inFIG.2) are connected to one channel.

Next, the read command RCMD4and physical address stored in index number3Index3of the first read command queue221may be provided to the memory device100. The memory device100may perform a read operation PGS read on a sub-page corresponding to the physical address including plane number3P3, block number903BLK903, page number75PG75, and sub-page number1S1in response to the read command RCMD4. Data stored in the sub-page may be output by the read operation PGS read (DATA OUT). Next, the read command and physical address stored in index number4Index4of the first read command queue221may be provided to the memory device100. The memory device100may perform a read operation PGS read, and data stored in a sub-page may be output by the read operation PGS read (DATA OUT). The read commands and physical addresses stored in the first read command queue221may be ones queued while the memory device100performs the erase operation, for example, as described above with reference toFIG.5.
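The single-channel behavior just illustrated can be sketched as follows: sensing on different planes may overlap under the plane interleaving scheme, but DATA OUT is serialized over the one shared channel. The durations and tuples below are hypothetical placeholders.

```python
# Hypothetical sketch of the FIG.10 constraint: overlapped plane sensing,
# serialized data output through the one shared channel.
SENSE_TIME, XFER_TIME = 50, 10  # placeholder durations

def run(reads):
    channel_free = 0
    for cmd, plane, start in reads:
        sensed = start + SENSE_TIME            # sensing windows may overlap
        out_start = max(sensed, channel_free)  # but DATA OUT waits for the channel
        channel_free = out_start + XFER_TIME
        print(f"{cmd} plane{plane}: sense {start}-{sensed}, "
              f"DATA OUT {out_start}-{channel_free}")

run([("RCMD2", 0, 0), ("RCMD3", 1, 5)])  # overlapped sensing, serial output
```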
The read commands and physical addresses may continuously stand by in the respective index numbers of the first read command queue221, until an operation currently performed in the memory device100is completed. Accordingly, a time (or a read busy time tR) for which a read operation on a read request of the host400is performed is increased, the performance of the read operation is decreased, and a read response to the read request of the host400is delayed. In order to alleviate this concern, the first read command queue221may be scheduled again according to whether the physical addresses stored in the first read command queue221can be provided to the memory device100according to a specific output sequence.

FIG.11is a diagram illustrating an embodiment of an operation of scheduling the read commands and physical addresses, which are stored in the first read command queue, in the second read command queue. For purposes of illustration, inFIG.11, it is assumed that a first physical address is the physical address including plane number1P1, block number201BLK201, page number30PG30, and sub-page number2S2. Referring toFIG.11, the command schedule controller230may search for at least one second physical address including a page number equal to that of the first physical address among the physical addresses stored in the first read command queue221. In an embodiment, the first physical address and the at least one second physical address may be physical addresses having different plane numbers. For example, the physical address stored in index number0Index0of the first read command queue221may be a second physical address having a plane number different from that of the first physical address and a page number equal to that of the first physical address.

The command schedule controller230may schedule a first read command, at least one second read command corresponding to the at least one second physical address, and a first physical address group in any one index number among index numbers of the second read command queue222. For example, the physical address stored in index number0Index0of the first read command queue221and the first physical address may be scheduled in index number0Index0of the second read command queue222. A first read command corresponding to the first physical address and a second read command corresponding to the physical address stored in index number0Index0of the first read command queue221may also be scheduled in index number0Index0of the second read command queue222. A read command RCMD1stored in index number0Index0of the second read command queue222may include the first read command and the second read command. Index number0Index0of the second read command queue222may be a first priority index number.

In one embodiment, the first physical address and the at least one second physical address may have the same plane number and/or the same block number. The command schedule controller230may schedule the first physical address group in any one index number among the index numbers of the second read command queue222. Also, the command schedule controller230may schedule the first read command or the at least one second read command corresponding to the at least one second physical address in one index number among the index numbers of the second read command queue222.
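A compact sketch of the FIG.11-style realignment, using the FIG.8 queue contents: the first group collects addresses sharing the first physical address's page number on other planes, and the second group collects addresses on planes absent from the first group. The helper names and dictionary layout are hypothetical, and the ordering of leftover entries is simplified relative to FIG.13.

```python
# Hypothetical sketch of realigning the first read command queue into the
# second read command queue: group 1 = same page number, different planes;
# group 2 = planes disjoint from group 1; leftovers keep a simplified order.
def realign(first_addr, queued):
    group1 = [first_addr] + [a for a in queued
                             if a["page"] == first_addr["page"]
                             and a["plane"] != first_addr["plane"]]
    planes1 = {a["plane"] for a in group1}
    group2 = [a for a in queued if a["plane"] not in planes1 and a not in group1]
    rest = [a for a in queued if a not in group1 and a not in group2]
    return [group1] + [[a] for a in group2] + [[a] for a in rest]

queued = [  # the FIG.8 contents of the first read command queue
    {"plane": 0, "block": 100, "page": 30},  # Index0
    {"plane": 0, "block": 200, "page": 10},  # Index1
    {"plane": 1, "block": 301, "page": 50},  # Index2
    {"plane": 3, "block": 903, "page": 75},  # Index3
]
first_addr = {"plane": 1, "block": 201, "page": 30}
for idx, entry in enumerate(realign(first_addr, queued)):
    print(f"Index{idx}:", entry)
# Index0 holds both page-30 reads (multi-plane read); Index1 the plane-3 read.
```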
In one embodiment, when any physical address including a page number equal to that of the first physical address (among the physical addresses stored in the first read command queue221) does not exist, the first physical address may be scheduled in index number4Index4of the second read command queue222, for example, as shown inFIG.9.

FIG.12is a diagram illustrating an embodiment of a read operation performed in accordance with the embodiment shown inFIG.11. Referring toFIGS.11and12, the read command RCMD1and the physical addresses, which are stored in the index number0Index0of the second read command queue222, may be provided to the memory device100. The memory device100may simultaneously perform a read operation MP read on pages of each of planes having the physical addresses stored in the index number0Index0of the second read command queue222in response to the read command RCMD1. The read operation simultaneously performed on the pages of each of the planes may be defined as a multi-plane read operation MP read. Data may be output by a multi-plane read operation MP read performed on the zeroth plane101(DATA OUT). Next, data may be output by a multi-plane read operation MP read performed on the first plane102(DATA OUT). The read command and physical address stored in each of the index number1Index1, the index number2Index2, and an index number3Index3of the second read command queue may be sequentially provided to the memory device100, and the memory device100may sequentially perform a read operation in response to each read command. This has been described above with reference toFIG.10.

As described above, since a read command queue is realigned, the number of sensing operations performed for a read operation on a read request of the host400can be decreased. As a result, a phenomenon in which a response to the read request of the host400is delayed can be reduced or prevented, and thus performance of the read operation can be increased.

FIG.13is a diagram illustrating an embodiment of an operation of scheduling read commands and physical addresses, which are stored in the first read command queue, in the second read command queue. For purposes of illustration, it is assumed that a first physical address is the physical address including plane number1P1, block number201BLK201, page number30PG30, and sub-page number2S2. Referring toFIG.13, as described above with reference toFIG.11, the physical address stored in index number0Index0of the first read command queue221and the first physical address may be scheduled in index number0Index0of the second read command queue222. A first read command corresponding to the first physical address and a second read command corresponding to the physical address stored in index number0Index0of the first read command queue221may also be scheduled in index number0Index0of the second read command queue222.

In an embodiment, the command schedule controller230may search for a second physical address group among the physical addresses stored in the first read command queue221. Also, the command schedule controller230may sequentially schedule a first physical address group and the second physical address group in consecutive index numbers of the second read command queue222. For example, the physical address stored in index number3Index3of the first read command queue221may be included in the second physical address group.
When the first physical address group is stored in index number0Index0of the second read command queue222, the physical address and read command stored in index number3Index3of the first read command queue221may be scheduled in index number1Index1of the second read command queue222. The number of read commands RCMD2stored in index number1Index1of the second read command queue222may be one. The physical address and read command stored in index number1Index1of the first read command queue221may be scheduled in index number3Index3of the second read command queue222. The physical address and read command stored in index number2Index2of the first read command queue221may be scheduled in index number2Index2of the second read command queue222.

FIG.14is a diagram illustrating an embodiment of a read operation performed in accordance with the embodiment shown inFIG.13. Referring toFIGS.13and14, a read command RCMD1and physical addresses, which are stored in the index number0Index0of the second read command queue222, may be provided to the memory device100. The memory device100may perform a multi-plane read operation MP read. Data may be sequentially output by the multi-plane read operation MP read (DATA OUT). After the read command RCMD1stored in the index number0Index0of the second read command queue222is provided to the memory device100, a read command RCMD2and a physical address, which are stored in the index number1Index1of the second read command queue222, may be provided to the memory device100. The memory device100may perform a read operation PGS read. The memory device100may perform a read operation on each of the zeroth plane101having the plane number0P0, the first plane102having the plane number1P1, and the third plane104having the plane number3P3using the plane interleaving scheme. Thus, the multi-plane read operation MP read and the read operation PGS read may partially overlap with each other. Data according to the read operation PGS read may be output after data according to the multi-plane read operation MP read are sequentially output (DATA OUT). When a read command RCMD3and physical addresses, which are stored in the index number2Index2of the second read command queue222, are provided to the memory device100, the memory device100may perform a read operation PGS read, and data according to the read operation PGS read may be output. When a read command RCMD4and a physical address, which are stored in the index number3Index3of the second read command queue222, are provided to the memory device100, the memory device100may perform a read operation PGS read, and data according to the read operation PGS read may be output.

As described above, since a read command queue is realigned, the number of sensing operations required to perform a read operation on a read request of the host400can be decreased, a phenomenon in which a response to the read request of the host400is delayed can be prevented, and the performance of the read operation can be increased.

FIG.15is a diagram illustrating an embodiment of the command storage220, which may include a first read command queue221′, a second read command queue222′, and a third read command queue223′. The first read command queue221′ may be a read command queue in which a first read command instructing the memory device100to perform a read operation while a background operation is performed and a first physical address are stored.
For example, the first physical address stored in the first read command queue221′ may include the plane number0P0, the block number100BLK100, a page number5PG5, the sub-page number0S0, and the sub-page number1S1. The first read command BRCMD1may be a command instructing the memory device100to read valid data included in a victim block in a garbage collection operation. However, the present disclosure is not limited thereto. The second read command queue222′ may be the same as the first read command queue221described above with reference toFIG.8. In one embodiment, the third read command queue223′ may be the same as the second read command queue222described above with reference toFIG.8.

FIG.16is a diagram illustrating an embodiment of an operation of scheduling a read command and a physical address. Referring toFIG.16, a first read command instructing the memory device100to perform a read operation while a background operation is performed and a first physical address may be stored in the first read command queue221′. The first physical address may include the plane number0P0, the block number100BLK100, the page number5PG5, the sub-page number0S0, and the sub-page number1S1. While the background operation is performed, the host400may provide a read request to the command generation controller210. The command generation controller210may generate a second read command in response to the read request of the host400. The command generation controller210may translate a logical address provided from the host400into a second physical address. The command generation controller210may store the second read command and the second physical address in the second read command queue222′. The second physical address may include, for example, the plane number0P0, a block number101BLK101, the page number5PG5, and the sub-page number0S0.

The command schedule controller230may compare a first page number of the first physical address with a second page number of the second physical address. For example, it may be checked whether the first page number and the second page number are the same. The command schedule controller230may schedule the first physical address, the second physical address, and the second read command in the third read command queue223′ according to whether the first page number and the second page number are the same.

In an embodiment, a first plane number of the first physical address may be different from a second plane number of the second physical address. The command schedule controller230may schedule the first read command, the first physical address, the second physical address, and the second read command in a first priority index number of the third read command queue223′. For example, the first physical address and the second physical address may be scheduled in index number0Index0of the third read command queue223′. In addition, the read commands may be scheduled in index number0Index0of the third read command queue223′. The read command RCMD1stored in index number0Index0of the third read command queue223′ may include a read command corresponding to the first physical address and a read command corresponding to the second physical address. In one embodiment, the first plane number of the first physical address may be equal to the second plane number of the second physical address, and a first block number of the first physical address may be equal to a second block number of the second physical address.
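A sketch of the page-number comparison just described, using the FIG.16 example numbers: a background read queued in queue 221′ and a host read queued in queue 222′ share the first priority index of queue 223′ when their page numbers match. The structures are hypothetical, and only the page-number check is modeled here.

```python
# Hypothetical sketch of the FIG.16 merge between a background read
# (queue 221') and a host read (queue 222') into queue 223'.
bg_read   = {"cmd": "BRCMD1", "plane": 0, "block": 100, "page": 5}
host_read = {"cmd": "RCMD",   "plane": 0, "block": 101, "page": 5}

third_queue = []
if bg_read["page"] == host_read["page"]:
    third_queue.append([host_read, bg_read])  # one combined first-priority slot
else:
    third_queue.append([host_read])           # the host request goes first
    third_queue.append([bg_read])

print(third_queue[0])  # Index0 serves both the host and background reads
```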
The command schedule controller230may schedule the first physical address, the second physical address, and the second read command (or the first read command) in the first priority index number of the third read command queue223′. The first physical address and the second physical address may be combined in the third read command queue223′. The read command BRCMD1and the physical address, which are stored in the first read command queue221′, may be discarded.

FIG.17is a diagram illustrating an embodiment of a read operation performed in accordance with the embodiment shown inFIG.16. Referring toFIGS.16and17, the read command RCMD1, the first physical address, and the second physical address, which are stored in the index number0Index0of the third read command queue223′, may be provided to the memory device100. The read command RCMD1may include a read command (e.g., a first read command) instructing the memory device100to read a page of a first plane having the first physical address and a read command (e.g., a second read command) instructing the memory device100to read a page of a second plane having the second physical address. The memory device100may simultaneously perform a read operation on the page of the first plane having the first physical address and the page of the second plane having the second physical address in response to the read command RCMD1. For example, the memory device100may perform a multi-plane read operation MP read on the first plane having the first physical address and the second plane having the second physical address. When the multi-plane read operation MP read is performed, the memory device100may sequentially provide the memory controller200with data stored in the page of the first plane and data stored in the page of the second plane. For example, when the multi-plane read operation MP read performed on the zeroth plane101and the first plane102is completed, data stored in a page of the first plane102may be output (DATA OUT), and then data stored in a page of the zeroth plane101may be output (DATA OUT). For example, there may be a difference ΔT between a time at which the data is output by a read operation on the zeroth plane101and a time at which the data is output by a read operation on the first plane102. This is because a response to the read request provided from the host400takes priority over the background operation.

As described above, since a read command queue is realigned, the number of sensing operations needed to perform a read operation on a read request of the host400can be decreased. As a result, a phenomenon in which a response to the read request of the host400is delayed can be reduced or prevented, which, in turn, may increase performance of the read operation.

FIG.18is a diagram illustrating an embodiment of a method of operating the memory controller200. Referring toFIG.18, the method includes, at S110, the memory controller200receiving a read request and a logical address from the host400. At S120, the memory controller200generates a read command, and translates the logical address into a physical address. At S130, the memory controller200determines whether the translated physical address and a physical address pre-stored in a read command queue are combinable.
As described above, whether the physical addresses are combinable may be determined by checking whether plane numbers of the physical addresses are different from each other and page numbers of the physical addresses are the same, by checking whether the physical addresses all have the same plane number, the same block number, and the same page number, or by checking whether a read operation is to be performed using the plane interleaving scheme.

At S140, when the physical addresses are combinable (S130, YES), the memory controller200realigns the read command queue by combining the physical addresses. As described above, realigning the read command queue by combining the physical addresses may include scheduling the translated physical address in a standby column of the physical address pre-stored in the read command queue, such that a multi-plane read operation MP read or a single plane read operation SP read can be performed. At S150, when the physical addresses are not combinable (S130, NO), the memory controller200stores the generated read command and the translated physical address in the read command queue. For example, the memory controller200stores the generated read command and the translated physical address in an empty standby column in the read command queue. This may be the same as described above. At S160, the memory controller200may sequentially output the read command and the physical address, which are stored in the read command queue, to the memory device100according to a scheduled sequence.

FIG.19is a diagram illustrating an embodiment of memory controller200, which may include a processor201, a RAM202, an error correction code (ECC) circuit203, a host interface204, a ROM205, and a flash interface206. The processor201may control overall operation of the memory controller200. The RAM202may be used as a buffer memory, a cache memory, a working memory, etc. of the memory controller200. Exemplarily, the RAM202may be a buffer memory.

The ECC circuit203may generate an ECC for correcting a fail bit or error bit of data received from the memory device100. The ECC circuit203may generate data to which a parity bit is added by performing ECC encoding of data provided to the memory device100. The parity bit may be stored in the memory device100. The ECC circuit203may perform ECC decoding on data output from the memory device100. The ECC circuit203may correct an error using the parity bit. For example, the ECC circuit203may correct an error using various coded modulations such as an LDPC code, a BCH code, a turbo code, a Reed-Solomon code, a convolution code, an RSC, a TCM, and a BCM. The ECC circuit203may calculate an ECC value of data to be programmed to the memory device100in a program operation. The ECC circuit203may perform an error correction operation on data read from the memory device100in a read operation, based on the ECC value. The ECC circuit203may perform an error correction operation on data recovered from the memory device100in a recovery operation for data which fails.

The memory controller200may communicate with an external device (e.g., the host400, an application processor, or the like) through the host interface204. The ROM205may store, in the form of firmware, various information for an operation of the memory controller200. The memory controller200may communicate with the memory device100through the flash interface206.
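The S110-S160 method of FIG.18 above can be condensed into a short end-to-end sketch; the translate step, combinability rule, and queue layout are simplified, hypothetical stand-ins rather than the disclosed implementation.

```python
# Hypothetical end-to-end sketch of the FIG.18 method (S110-S160).
def on_read_request(logical_addr, queue, translate):
    phys = translate(logical_addr)            # S120: command + translation
    for entry in queue:                       # S130: combinable?
        if (entry["addrs"][0]["page"] == phys["page"]
                and all(a["plane"] != phys["plane"] for a in entry["addrs"])):
            entry["addrs"].append(phys)       # S140: realign by combining
            break
    else:
        queue.append({"addrs": [phys]})       # S150: store in an empty slot
    return queue                              # S160: output in scheduled order

queue = [{"addrs": [{"plane": 0, "block": 100, "page": 30}]}]
translate = lambda lba: {"plane": 1, "block": 201, "page": 30}
print(on_read_request(4096, queue, translate))
```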
The memory controller200may transmit a command CMD, an address ADDR, a control signal CTRL, and the like to the memory device100through the flash interface206, and receive data DATA. The flash interface206may include, for example, a NAND interface.

FIG.20is a block diagram illustrating an embodiment of a memory card system2000, which may include a memory device2100, a memory controller2200, and a connector2300. The memory device2100may be implemented with various nonvolatile memory devices, such as an Electrically Erasable and Programmable ROM (EEPROM), a NAND flash memory, a NOR flash memory, a Phase-change RAM (PRAM), a Resistive RAM (ReRAM), a Ferroelectric RAM (FRAM), and a Spin Torque Transfer magnetic RAM (STT-MRAM). The memory controller2200is connected to and may access the memory device2100. For example, the memory controller2200may control read, write, erase, and background operations of the memory device2100. The memory controller2200provides an interface between the memory device2100and a host. The memory controller2200may execute instructions (e.g., drive firmware) for controlling the memory device2100. The memory controller2200may be implemented, for example, as the memory controller200described with reference toFIG.1. The memory controller2200may include components such as a Random Access Memory (RAM), a processing unit, a host interface, a memory interface, and an error corrector. Also, the memory controller2200may communicate with an external device through the connector2300. The memory controller2200may communicate with the external device (e.g., the host400) according to a specific communication protocol. Exemplarily, the memory controller2200may communicate with the external device through at least one of various communication protocols. Examples include a Universal Serial Bus (USB), a Multi-Media Card (MMC), an embedded MMC (eMMC), a Peripheral Component Interconnection (PCI), a PCI express (PCIe), an Advanced Technology Attachment (ATA), a Serial-ATA (SATA), a Parallel-ATA (PATA), a Small Computer System Interface (SCSI), an Enhanced Small Disk Interface (ESDI), an Integrated Drive Electronics (IDE), FireWire, a Universal Flash Storage (UFS), Wi-Fi, Bluetooth, and NVMe. Exemplarily, the connector2300may be defined by at least one of the above-described various communication protocols. The memory device2100and the memory controller2200may be integrated into a single semiconductor device to constitute a memory card. Examples of the memory card include a PC card (Personal Computer Memory Card International Association (PCMCIA)), a Compact Flash (CF) card, a Smart Media Card (SM and SMC), a memory stick, a Multi-Media Card (MMC, RS-MMC, MMCmicro and eMMC), an SD card (SD, miniSD, microSD and SDHC), and a Universal Flash Storage (UFS).

FIG.21is a block diagram illustrating an embodiment of a Solid State Drive (SSD) system, which may include a host400and an SSD3000. The SSD3000exchanges a signal SIG with the host400through a signal connector3001and receives power PWR through a power connector3002. The SSD3000may include an SSD controller3200, a plurality of flash memories3100_1,3100_2, and3100_n, an auxiliary power supply3300, and a buffer memory3400. In accordance with an embodiment, the SSD controller3200may perform the same function as the memory controller200described with reference toFIG.1. Also, the SSD controller3200may control the plurality of flash memories3100_1,3100_2, and3100_nin response to a signal SIG received from the host400.
For example, the signal SIG may be a signal based on an interface between the host400and the SSD3000. For example, the signal SIG may be a signal defined by at least one of various interfaces, such as a Universal Serial Bus (USB), a Multi-Media Card (MMC), an embedded MMC (eMMC), a Peripheral Component Interconnection (PCI), a PCI express (PCIe), an Advanced Technology Attachment (ATA), a Serial-ATA (SATA), a Parallel-ATA (PATA), a Small Computer System Interface (SCSI), an Enhanced Small Disk Interface (ESDI), an Integrated Drive Electronics (IDE), FireWire, a Universal Flash Storage (UFS), Wi-Fi, Bluetooth, and NVMe. The auxiliary power supply3300is connected to the host400through the power connector3002. The auxiliary power supply3300may receive the power PWR input from the host400and use the power PWR to perform a charging operation. When the supply of power from the host400is not smooth, the auxiliary power supply3300may provide the power of the SSD3000. Exemplarily, the auxiliary power supply3300may be located in the SSD3000, or be located outside the SSD3000. For example, the auxiliary power supply3300may be located on a main board, and provide auxiliary power to the SSD3000. The buffer memory3400may temporarily store data. For example, the buffer memory3400may temporarily store data received from the host400or data received from the plurality of flash memories3100_1,3100_2, and3100_n, or temporarily store meta data (e.g., a mapping table) of the flash memories3100_1,3100_2, and3100_n. The buffer memory3400may include volatile memories such as a DRAM, an SDRAM, a DDR SDRAM, an LPDDR SDRAM, and a GRAM or nonvolatile memories such as a FRAM, a ReRAM, an STT-MRAM, and a PRAM.

FIG.22is a block diagram illustrating an embodiment of a user system4000to which any of the embodiments of the storage device described herein may be applied. Referring toFIG.22, the user system4000includes an application processor4100, a memory module4200, a network module4300, a storage module4400, and a user interface4500. The application processor4100may drive components included in the user system4000, an operating system (OS), a user program, or the like. For example, the application processor4100may include controllers for controlling components included in the user system4000, interfaces, a graphic engine, and the like. The application processor4100may be provided as a System-on-Chip (SoC). The memory module4200may operate as a main memory, working memory, buffer memory, or cache memory of the user system4000. The memory module4200may include one or more volatile random access memories, such as a DRAM, an SDRAM, a DDR SDRAM, a DDR2 SDRAM, a DDR3 SDRAM, an LPDDR SDRAM, an LPDDR2 SDRAM, and an LPDDR3 SDRAM, or nonvolatile random access memories such as a PRAM, a ReRAM, an MRAM, and a FRAM. The application processor4100and the memory module4200may be provided, for example, as one semiconductor package by being packaged based on a Package on Package (PoP). The network module4300may communicate with external devices. For example, the network module4300may support wireless communications such as Code Division Multiple Access (CDMA), Global System for Mobile communication (GSM), Wideband CDMA (WCDMA), CDMA-2000, Time Division Multiple Access (TDMA), Long Term Evolution (LTE), WiMAX, WLAN, UWB, Bluetooth, and Wi-Fi. The network module4300may be included, for example, in the application processor4100. The storage module4400may store data.
For example, the storage module4400may store data received from the application processor4100. Alternatively, the storage module4400may transmit data stored therein to the application processor4100. For example, the storage module4400may be implemented with a nonvolatile semiconductor memory device, such as a Phase-change RAM (PRAM), a Magnetic RAM (MRAM), a Resistive RAM (RRAM), a NAND flash, a NOR flash, or a NAND flash having a three-dimensional structure. Exemplarily, the storage module4400may be provided as a removable drive such as a memory card of the user system4000or an external drive. The storage module4400may operate, for example, as the storage device1000described with reference toFIG.1. The storage module4400may include a plurality of nonvolatile memory devices, which, for example, may operate as the memory device100described with reference toFIG.1. The user interface4500may include one or more interfaces for inputting data or commands to the application processor4100and/or outputting data to an external device. For example, the user interface4500may include user input interfaces such as a keyboard, a keypad, a button, a touch panel, a touch screen, a touch pad, a touch ball, a camera, a microphone, a gyroscope sensor, a vibration sensor, and a piezoelectric element. The user interface4500may include user output interfaces such as a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display device, an Active Matrix OLED (AMOLED) display device, an LED, a speaker, and a monitor.

In accordance with one embodiment, an apparatus includes a controller that is configured to perform a realignment operation for information stored in a first read command queue. The controller may be, for example, the command schedule controller as described in accordance with the embodiments described herein, and the first read command queue may also be in accordance with any of the embodiments. In operation, the controller may execute instructions stored in a non-transitory computer-readable medium to perform the realignment operation relative to the first read command queue and a second read command queue. For example, the controller may execute the instructions to receive a first physical address corresponding to a read command and a first plane number of a memory device, determine a first physical address group in the first read command queue having a second plane number, and store information in a second read command queue that represents realigned information stored in the first read command queue, the realigned information grouping the first physical address into the first physical address group. In addition, the controller may execute the instructions to schedule execution of read operations for physical addresses in the first physical address group including the first physical address. In one embodiment, the first physical address in the first physical address group may be scheduled before a second physical address group that is included in both the first read command queue and the second read command queue. Also, the first plane number is equal to the second plane number. In accordance with one or more embodiments, a memory controller is provided which is capable of improving performance of a read operation, and a storage device is also provided to include such a memory controller. The methods, processes, and/or operations described herein may be performed by code or instructions to be executed by a computer, processor, controller, or other signal processing device.
The computer, processor, controller, or other signal processing device may be those described herein or one in addition to the elements described herein. Because the algorithms that form the basis of the methods (or operations of the computer, processor, controller, or other signal processing device) are described in detail, the code or instructions for implementing the operations of the method embodiments may transform the computer, processor, controller, or other signal processing device into a special-purpose processor for performing the methods described herein. When implemented at least partially in software, the controllers, processors, devices, modules, schedulers, units, multiplexers, generators, logic, interfaces, decoders, drivers, and other signal generating and signal processing features may include, for example, a memory or other storage device for storing code or instructions to be executed, for example, by a computer, processor, microprocessor, controller, or other signal processing device.

While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. Therefore, the scope of the present disclosure should not be limited to the above-described exemplary embodiments but should be determined by not only the appended claims but also the equivalents thereof. In the above-described embodiments, all steps may be selectively performed, or part of the steps may be omitted. In each embodiment, the steps are not necessarily performed in accordance with the described order and may be rearranged. The embodiments in this specification and drawings are only examples to facilitate an understanding of the present disclosure, and the present disclosure is not limited thereto. That is, it should be apparent to those skilled in the art that various modifications can be made on the basis of the technological scope of the present disclosure. Meanwhile, the exemplary embodiments of the present disclosure have been described in the drawings and specification. Although specific terminologies are used here, those are only to explain the embodiments of the present disclosure. Therefore, the present disclosure is not restricted to the above-described embodiments, and many variations are possible within the spirit and scope of the present disclosure. It should be apparent to those skilled in the art that various modifications can be made on the basis of the technological scope of the present disclosure in addition to the embodiments disclosed herein. The embodiments may be combined to form additional embodiments.
11861224

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.

DETAILED DESCRIPTION In the following, reference is made to embodiments of the disclosure. However, it should be understood that the disclosure is not limited to specific described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure. Furthermore, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the disclosure” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s). The present disclosure generally relates to efficient data transfer. Rather than processing each command, the data storage device fetches part of the host buffers and then makes a determination regarding the attributes of the fetched buffers. Upon the determination, the command is classified as optimized, not-optimized, or somewhere in between. Optimized commands are permitted to retrieve data out of order while non-optimized commands remain in a strict in-order data retrieval process. In-between commands can be processed with some out-of-order data retrieval. In so doing, data transfers are effectively managed and optimized by taking into account the current attributes of the host buffers per command.

FIG.1is a schematic block diagram illustrating a storage system100in which a host device104is in communication with a data storage device106, according to certain embodiments. For instance, the host device104may utilize a non-volatile memory (NVM)110included in data storage device106to store and retrieve data. The host device104comprises a host DRAM138. In some examples, the storage system100may include a plurality of storage devices, such as the data storage device106, which may operate as a storage array. For instance, the storage system100may include a plurality of data storage devices106configured as a redundant array of inexpensive/independent disks (RAID) that collectively function as a mass storage device for the host device104. The host device104may store and/or retrieve data to and/or from one or more storage devices, such as the data storage device106. As illustrated inFIG.1, the host device104may communicate with the data storage device106via an interface114.
The host device104may comprise any of a wide range of devices, including computer servers, network-attached storage (NAS) units, desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called “smart” phones, so-called “smart” pads, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, or other devices capable of sending or receiving data from a data storage device. The data storage device106includes a controller108, NVM110, a power supply111, volatile memory112, the interface114, and a write buffer116. In some examples, the data storage device106may include additional components not shown inFIG.1for the sake of clarity. For example, the data storage device106may include a printed circuit board (PCB) to which components of the data storage device106are mechanically attached and which includes electrically conductive traces that electrically interconnect components of the data storage device106or the like. In some examples, the physical dimensions and connector configurations of the data storage device106may conform to one or more standard form factors. Some example standard form factors include, but are not limited to, 3.5″ data storage device (e.g., an HDD or SSD), 2.5″ data storage device, 1.8″ data storage device, peripheral component interconnect (PCI), PCI-extended (PCI-X), PCI Express (PCIe) (e.g., PCIe x1, x4, x8, x16, PCIe Mini Card, MiniPCI, etc.). In some examples, the data storage device106may be directly coupled (e.g., directly soldered or plugged into a connector) to a motherboard of the host device104. Interface114may include one or both of a data bus for exchanging data with the host device104and a control bus for exchanging commands with the host device104. Interface114may operate in accordance with any suitable protocol. For example, the interface114may operate in accordance with one or more of the following protocols: advanced technology attachment (ATA) (e.g., serial-ATA (SATA) and parallel-ATA (PATA)), Fibre Channel Protocol (FCP), small computer system interface (SCSI), serially attached SCSI (SAS), PCI, PCIe, non-volatile memory express (NVMe), OpenCAPI, GenZ, Cache Coherent Interconnect for Accelerators (CCIX), Open Channel SSD (OCSSD), or the like. Interface114(e.g., the data bus, the control bus, or both) is electrically connected to the controller108, providing an electrical connection between the host device104and the controller108, allowing data to be exchanged between the host device104and the controller108. In some examples, the electrical connection of interface114may also permit the data storage device106to receive power from the host device104. For example, as illustrated inFIG.1, the power supply111may receive power from the host device104via interface114. The NVM110may include a plurality of memory devices or memory units. NVM110may be configured to store and/or retrieve data. For instance, a memory unit of NVM110may receive data and a message from controller108that instructs the memory unit to store the data. Similarly, the memory unit may receive a message from controller108that instructs the memory unit to retrieve data. In some examples, each of the memory units may be referred to as a die. In some examples, the NVM110may include a plurality of dies (i.e., a plurality of memory units).
In some examples, each memory unit may be configured to store relatively large amounts of data (e.g., 128 MB, 256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB, 16 GB, 32 GB, 64 GB, 128 GB, 256 GB, 512 GB, 1 TB, etc.). In some examples, each memory unit may include any type of non-volatile memory devices, such as flash memory devices, phase-change memory (PCM) devices, resistive random-access memory (ReRAM) devices, magneto-resistive random-access memory (MRAM) devices, ferroelectric random-access memory (F-RAM), holographic memory devices, and any other type of non-volatile memory devices. The NVM110may comprise a plurality of flash memory devices or memory units. NVM Flash memory devices may include NAND or NOR-based flash memory devices and may store data based on a charge contained in a floating gate of a transistor for each flash memory cell. In NVM flash memory devices, the flash memory device may be divided into a plurality of dies, where each die of the plurality of dies includes a plurality of physical or logical blocks, which may be further divided into a plurality of pages. Each block of the plurality of blocks within a particular memory device may include a plurality of NVM cells. Rows of NVM cells may be electrically connected using a word line to define a page of a plurality of pages. Respective cells in each of the plurality of pages may be electrically connected to respective bit lines. Furthermore, NVM flash memory devices may be 2D or 3D devices and may be single level cell (SLC), multi-level cell (MLC), triple level cell (TLC), or quad level cell (QLC). The controller108may write data to and read data from NVM flash memory devices at the page level and erase data from NVM flash memory devices at the block level. The power supply111may provide power to one or more components of the data storage device106. When operating in a standard mode, the power supply111may provide power to one or more components using power provided by an external device, such as the host device104. For instance, the power supply111may provide power to the one or more components using power received from the host device104via interface114. In some examples, the power supply111may include one or more power storage components configured to provide power to the one or more components when operating in a shutdown mode, such as where power ceases to be received from the external device. In this way, the power supply111may function as an onboard backup power source. Some examples of the one or more power storage components include, but are not limited to, capacitors, super-capacitors, batteries, and the like. In some examples, the amount of power that may be stored by the one or more power storage components may be a function of the cost and/or the size (e.g., area/volume) of the one or more power storage components. In other words, as the amount of power stored by the one or more power storage components increases, the cost and/or the size of the one or more power storage components also increases. The volatile memory112may be used by controller108to store information. Volatile memory112may include one or more volatile memory devices. In some examples, controller108may use volatile memory112as a cache. For instance, controller108may store cached information in volatile memory112until the cached information is written to the NVM110. As illustrated inFIG.1, volatile memory112may consume power received from the power supply111. 
Examples of volatile memory112include, but are not limited to, random-access memory (RAM), dynamic random access memory (DRAM), static RAM (SRAM), and synchronous dynamic RAM (SDRAM) (e.g., DDR1, DDR2, DDR3, DDR3L, LPDDR3, DDR4, LPDDR4, and the like). Controller108may manage one or more operations of the data storage device106. For instance, controller108may manage the reading of data from and/or the writing of data to the NVM110. In some embodiments, when the data storage device106receives a write command from the host device104, the controller108may initiate a data storage command to store data to the NVM110and monitor the progress of the data storage command. Controller108may determine at least one operational characteristic of the storage system100and store at least one operational characteristic in the NVM110. In some embodiments, when the data storage device106receives a write command from the host device104, the controller108temporarily stores the data associated with the write command in the internal memory or write buffer116before sending the data to the NVM110.

FIG.2is a block diagram illustrating a method200of operating a storage device to execute a read, write, or compare command, according to one embodiment. Method200may be used with the storage system100having a host device104and a data storage device106comprising a controller108. Method200may also be used with a host device and a data storage device comprising a command processor. Method200begins at operation250, where the host device writes a command into a submission queue as an entry. The host device may write one or more commands into the submission queue at operation250. The commands may be read commands, write commands, or compare commands. The host device may comprise one or more submission queues. The host device may write one or more commands to the submission queue in any order (i.e., a submission order), regardless of the sequential write order of the one or more commands (i.e., a sequential processing order). In operation252, the host device writes one or more updated submission queue tail pointers and rings a doorbell or sends an interrupt signal to notify or signal the storage device of the new command that is ready to be executed. The host may write an updated submission queue tail pointer and send a doorbell or interrupt signal for each of the submission queues if there are more than one submission queues. In operation254, in response to receiving the doorbell or interrupt signal, a controller of the storage device fetches the command from the one or more submission queues, and the controller receives or DMA reads the command. In operation256, the controller processes the command and writes or transfers data associated with a read command to the host device memory, or retrieves data for a compare command. The controller may process more than one command at a time. The controller may process one or more commands in the submission order or in the sequential order. Processing a write command may comprise identifying a stream to write the data associated with the command to and writing the data to one or more logical block addresses (LBAs) of the stream. In operation258, once the command has been fully processed, the controller writes a completion entry corresponding to the executed command to a completion queue (CQ) of the host device and moves or updates the CQ head pointer to point to the newly written completion entry.
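As a concrete illustration, the following is a minimal C sketch of the device-side portion of method200(operations254-258, plus the interrupt signal of operation260described next). It is a sketch only: the structures, queue depth, and helper routines (sq_read, cq_write, raise_interrupt, process_command) are hypothetical stand-ins, not interfaces from the disclosure.

#include <stdint.h>

#define SQ_DEPTH 64   /* assumed queue depth */

struct nvme_cmd { uint8_t opcode; uint64_t lba; uint32_t len; uint16_t cid; };
struct nvme_cqe { uint16_t cid; uint16_t status; };

extern struct nvme_cmd sq_read(uint16_t slot);          /* DMA-read one SQ entry  */
extern void cq_write(struct nvme_cqe e);                /* write one CQ entry     */
extern void raise_interrupt(void);                      /* signal the host device */
extern void process_command(const struct nvme_cmd *c);  /* read/write/compare     */

/* Called when the host rings the submission queue doorbell (252). */
void on_sq_doorbell(uint16_t new_tail, uint16_t *sq_head)
{
    while (*sq_head != new_tail) {
        struct nvme_cmd cmd = sq_read(*sq_head);        /* 254: fetch command     */
        *sq_head = (uint16_t)((*sq_head + 1) % SQ_DEPTH);

        process_command(&cmd);                          /* 256: execute           */

        struct nvme_cqe cqe = { .cid = cmd.cid, .status = 0 };
        cq_write(cqe);                                  /* 258: completion entry  */
        raise_interrupt();                              /* 260: notify the host   */
    }
}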
In operation260, the controller generates and sends an interrupt signal or doorbell to the host device. The interrupt signal indicates that the command has been executed and data associated with the command is available in the memory device. The interrupt signal further notifies the host device that the completion queue is ready to be read or processed. In operation262, the host device processes the completion entry. In operation264, the host device writes an updated CQ head pointer to the storage device and rings the doorbell or sends an interrupt signal to the storage device to release the completion entry.

FIG.3is a schematic illustration of host buffers using physical region page (PRP) entries according to one embodiment. When using PRP entries, it is assumed that each host buffer has the same fixed size, except for the first host buffer. PRP1points to the first host buffer302, which, as noted, may be less than 4 KB, exactly 4 KB, or more than 4 KB. Additionally, the first host buffer302may be the exact same size as the remaining host buffers308, but may also be different. PRP2, however, may point to a PRP list304. In the scenario shown inFIG.3, there are two PRP lists, but it is to be understood that more or fewer PRP lists may be present. The number of PRP lists depends upon the data transfer size of the relevant command. As shown inFIG.3, the last pointer in the PRP list304points to another PRP list306. Each additional host buffer308is the same size, which in the embodiment shown inFIG.3, is 4 KB. It is to be understood that while 4 KB is shown as the example size of the host buffers308, other sizes are contemplated. It is also to be noted that the host buffers308will be the same size, regardless of what the size happens to be.

FIG.4is a schematic illustration of host buffers using scatter gather list (SGL) entries according to one embodiment. When using SGL entries, there is no assumption about the size of the host buffers, in contrast to PRP entries. Generally, each host buffer may have any size, as illustrated inFIG.4. Each host buffer is described by a dedicated SGL data block descriptor in which the address and the size of the host buffer are specified. The SGL is a more advanced method for describing host buffers while not making any assumptions. As shown inFIG.4, the SGL pointer points to SGL descriptors that include the physical addresses and sizes of the host buffers. The host buffers are in the host physical memory, with each host buffer having a different size as illustrated inFIG.4. The host buffers in the host physical memory correspond to host buffers that are disposed in the process virtual memory. The NVMe standard defines a few types of SGL descriptors. The most basic descriptor is the SGL data block descriptor, which describes a single host buffer. The SGL descriptors may be stored non-contiguously in the host memory. Each bunch of SGL descriptors is defined as an SGL segment and described by an SGL segment descriptor.

FIG.5is a schematic illustration of an SGL method according to one embodiment. The NVMe command contains an SGL segment descriptor which points to a bunch of SGL data block descriptors. The last descriptor in this chunk is an SGL last segment descriptor which points to the last bunch of SGL data block descriptors. As discussed herein, a method for host buffers and data transfer management that makes the tradeoff between simplicity, performance requirements, and other parameters is disclosed.
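The buffer-description formats ofFIGS.3-5can be summarized in C. This is a hedged sketch: the field layout follows the general descriptor shapes described above (a PRP entry is a single 8-byte physical address; an SGL descriptor carries an address, an explicit byte length, and a type code), but the names and the exact type-code encoding are illustrative and should be checked against the NVMe specification rather than taken from this disclosure.

#include <stdint.h>

/* PRP: every buffer after the first has the same fixed size (4 KB in
 * FIG. 3), so one 8-byte physical address per entry is enough. The
 * last slot of a full PRP list may chain to a further list, as PRP
 * list 304 chains to list 306. */
typedef uint64_t prp_entry_t;

/* SGL data block descriptor: address plus an explicit length, so
 * buffers of any size can be described (FIG. 4). */
struct sgl_desc {
    uint64_t addr;     /* physical address of buffer or next segment */
    uint32_t length;   /* size of the buffer in bytes                */
    uint8_t  rsvd[3];
    uint8_t  id;       /* identifier byte carrying the descriptor type */
};

/* Illustrative type codes: a data block describes one host buffer; a
 * segment descriptor points to the next bunch of descriptors; a last
 * segment descriptor points to the final bunch (FIG. 5). */
enum sgl_type { SGL_DATA_BLOCK, SGL_SEGMENT, SGL_LAST_SEGMENT };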
The data transfer of each command depends on the host buffer list provided by the host device. The more optimized the host device buffer list, the less constrained the data transfer. When the host device provides an optimized host buffer list, the data transfer can be unlimited and provides the best performance and quality of service (QoS). When the host device provides non-optimized buffer lists, the data transfer will be limited, and in the worst case scenario the data transfer would be in order. As will be additionally discussed below, the method involves partial fetching of host buffers. In one example, the data storage device fetches the host buffers associated with the first 16 KB data transfer associated with each command. The command is classified based upon the attributes of the fetched host buffers. When the host buffers are fully optimized (i.e., PRP or PRP-like), then the command is classified as an optimized command. When the host buffers are non-optimized (i.e., too many unaligned host buffers and byte granularity), the command is classified as a non-optimized command. Several thresholds may be defined between those scenarios. Based upon the command classification, the data transfer attribute for each command is defined so that order enforcement in the data transfer occurs. For instance, no-order rules, partial order rules, or order enforcement (in-order transfers) occurs. In-order transfers simplify the transfers when having non-optimized host buffers and eliminate the search operations because everything is in order.

FIG.6is a schematic illustration of host buffers and data transfer management according to one embodiment. The command is initially fetched from the host memory. The logic parses the command and determines the type of the command and the host buffer method used (i.e., PRP or SGL). Part of the host buffers are fetched from the host memory in the SGL scenario. In one example, the host buffers associated with the first 16 KB of the data transfer are fetched, but the amount fetched may depend upon the size of each SGL entry. The command is classified based on the fetched host buffers. The command could be a PRP, optimized SGL, or non-optimized SGL command. In some embodiments, more than the PRP, optimized SGL, and non-optimized SGL command thresholds is contemplated. The order of the data transfer is defined based on the classification. In PRP or optimized SGL, the data transfer may not be limited and full out of order data transfer is supported. Full out of order data transfer can be done because the host buffers are optimized and no special search is necessary. In the worst case scenario, in-order data transfer is enforced. In-order data transfer simplifies the data path and buffer management because the search operation is not required. The drawback is in performance and QoS, but performance and QoS are less important because the host buffers are not optimized for performance and QoS. In other scenarios, the data transfer might be limited (e.g., out of order within 128 KB but not between two consecutive 128 KB regions). The data transfer is enforced based upon the above decision, but the decision might be changed during the fetching of the next SGL host buffers in some embodiments. A completion message is posted to the host device along with the interrupt signal once the data transfer of the command is completed.

FIG.7is a flowchart illustrating command classification according to one embodiment.
For PRP or PRP-like situations, the data transfer is not limited and full out of order data transfer is permitted. When having optimized SGL, some constraints are added to the data transfer. Otherwise, the data transfer is in order only. SGL classification considers several parameters such as the number of host buffers per X-MB, host buffer alignment, and SGL attributes such as the number of segments, SGL types, etc. It is contemplated that more thresholds can be defined for the host buffer attributes considering buffer size, buffer alignment, number of SGL segments, etc. Such thresholds determine the data transfer ordering and priority. The more optimal the host buffers, the less limitation is placed on the data transfer.

FIG.8is a high level block diagram of a storage system according to one embodiment. As shown inFIG.8, a host buffer manager and an ordering enforcement (per command) module are disposed in the host interface module (HIM). The host buffer manager is responsible for fetching the host buffers for all commands. Based on the first host buffers fetched for a command, the command is classified and the order rules for the data transfer are set. Both the command scheduler and the ordering enforcement module receive the decision and act accordingly. In one embodiment, the order enforcement module and the host buffer manager are logic. The command scheduler limits the sense requests towards the flash interface module (FIM), and the ordering enforcement module reorders the transfers accordingly.

FIG.9is a flowchart900illustrating efficient data transfer according to one embodiment. Initially, a doorbell is rung and the data storage device then fetches commands from the submission queue of the host device at902. Before retrieving any data associated with the command, the data storage device determines whether the command retrieved is a PRP command at904. If the command is a PRP command, then the data associated with the command can be transferred in any order, as a full out of order data transfer is permitted at906. With the permission of full out of order data transfer, data may be transferred in order, in a completely random order, or a hybrid thereof. If the command is not a PRP command, the data storage device endeavors to determine whether there is any way in which data can be transferred out of order. Therefore, a predetermined amount of host buffers is fetched at908. In one embodiment, the predetermined amount is 16 KB. In such a scenario, the first 16 KB associated with the host buffer(s) of the command is fetched. A determination is then made regarding the host buffers, specifically whether the host buffers are aligned and have substantially the same size. If the host buffers are aligned and have substantially the same size, then even though the command is an SGL entry, the SGL entry is PRP-like (e.g., an SGL entry that is arranged as a PRP entry would be arranged) and thus can be treated in the same manner as a PRP entry such that full out of order data transfer is permitted at906. If the host buffers are not aligned or the host buffers are not substantially the same size at910, then a determination is made regarding whether the number of host buffers is below a predetermined threshold at912. In one embodiment, the threshold is 20 SGL entries per 4 KB of data transfer. It is to be understood that 20 SGL entries per 4 KB is just one example, as other possibilities are contemplated, such as 20 SGL entries per 4 KB on average.
If the number of host buffers is below the predetermined threshold, then a partial out of order data transfer is permitted at914. As an example of a partial out of order data transfer, data may be fetched in any order from the memory device (i.e., NAND device) but then reordered when the data is transferred to the host device. As another example of partial out of order data transfer, the number of requests issued towards the memory device (i.e., NAND device) is limited, and all data is then retrieved in order. However, if the number of buffers is at or above the predetermined threshold at912, then a determination is made regarding whether the host buffers are aligned at916. If the host buffers are aligned, then a partial out of order data transfer is permitted at914. However, if the host buffers are not aligned, then a determination is made regarding whether the number of SGL segments is below a predetermined threshold at918. In one embodiment, the threshold is 8 SGL segments per command. If the number of SGL segments is below the predetermined threshold, then a partial out of order data transfer is permitted at914. However, if the number of SGL segments is at or above the predetermined threshold at918, then a strict in order data transfer is enforced at920. By taking into account the attributes of host buffers for a command, data transfers are effectively managed. Optimized commands are permitted to retrieve data out of order, while non-optimized commands remain in a strict in-order data retrieval process. In-between commands can be processed with some out of order data retrieval.

In one embodiment, a data storage device comprises: a memory device; and a controller coupled to the memory device, wherein the controller is configured to: fetch a command and at least a portion of one or more host buffers from a host device; determine whether the command corresponds to a physical region page (PRP) entry in a host buffer; and either: permit full out of order data retrieval from the memory device; permit partial out of order data retrieval from the memory device; or enforce full in order data retrieval from the memory device. The determining comprises determining that the command does not correspond to a PRP entry. The controller is configured to determine that the command corresponds to a scatter-gather list (SGL) command. The controller is further configured to determine whether the SGL command corresponds to aligned buffers. The controller is further configured to permit partial out of order data retrieval upon determining that the SGL command corresponds to aligned buffers. The controller is further configured to determine whether the SGL command corresponds to buffers having substantially the same size. The controller is further configured to enforce full in order data retrieval upon determining that the SGL command corresponds to buffers having different sizes. The controller is further configured to determine that the command corresponds to a scatter-gather list (SGL) command, wherein the controller is further configured to determine whether the SGL command corresponds to aligned buffers and buffers of substantially the same size, and wherein the controller is further configured to permit full out of order data retrieval upon determining that the SGL command corresponds to aligned buffers and buffers of substantially the same size.
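The decision tree of flowchart900can be condensed into a short classification routine. The following C sketch is illustrative only: the struct fields, helper names, and the 128 KB region check are assumptions layered on the thresholds named above (16 KB partial fetch, 20 SGL entries per 4 KB, 8 SGL segments per command), not firmware from the disclosure.

#include <stdbool.h>
#include <stdint.h>

enum xfer_order { FULL_OUT_OF_ORDER, PARTIAL_OUT_OF_ORDER, STRICT_IN_ORDER };

struct buf_attrs {              /* attributes of the partially fetched buffers */
    bool     is_prp;            /* command uses PRP entries (904)              */
    bool     aligned;           /* buffers are aligned (910/916)               */
    bool     same_size;         /* buffers have substantially equal size (910) */
    unsigned entries_per_4k;    /* SGL entries per 4 KB of data transfer (912) */
    unsigned sgl_segments;      /* SGL segments in the command (918)           */
};

#define MAX_ENTRIES_PER_4K 20   /* example threshold at 912 */
#define MAX_SGL_SEGMENTS    8   /* example threshold at 918 */

enum xfer_order classify(const struct buf_attrs *a)
{
    if (a->is_prp)                                /* 904 -> 906        */
        return FULL_OUT_OF_ORDER;
    if (a->aligned && a->same_size)               /* 910: PRP-like SGL */
        return FULL_OUT_OF_ORDER;
    if (a->entries_per_4k < MAX_ENTRIES_PER_4K)   /* 912 -> 914        */
        return PARTIAL_OUT_OF_ORDER;
    if (a->aligned)                               /* 916 -> 914        */
        return PARTIAL_OUT_OF_ORDER;
    if (a->sgl_segments < MAX_SGL_SEGMENTS)       /* 918 -> 914        */
        return PARTIAL_OUT_OF_ORDER;
    return STRICT_IN_ORDER;                       /* 920               */
}

/* One way to bound a "limited" transfer: allow reordering of two
 * chunks only when they fall in the same 128 KB region, matching the
 * "out of order within 128 KB but not between two consecutive 128 KB
 * regions" example given earlier. */
static bool may_reorder(uint64_t off_a, uint64_t off_b)
{
    return (off_a >> 17) == (off_b >> 17);   /* 128 KB = 2^17 bytes */
}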
In another embodiment, a data storage device comprises: a memory device; and a controller coupled to the memory device, wherein the controller includes: a host interface module (HIM) having: a host buffer manager; and an order enforcing module; a command scheduler; and a flash interface module (FIM). The order enforcing module is logic contained within the HIM. The host buffer manager is configured to fetch host buffers for commands received from a host device. The host buffer manager is configured to fetch less than all host buffers for the commands. The order enforcing module is configured to enforce data transfer as full out of order, partial out of order, or full in order based upon buffers fetched by the host buffer manager. The controller is configured to classify commands received by a host device as physical region page (PRP), scatter-gather list (SGL) optimized, or SGL non-optimized. The controller is further configured to set a data transfer order for sending data to the host device based upon the classifying. In another embodiment, a data storage device comprises: memory means; and a controller coupled to the memory means, wherein the controller is configured to: fetch a command from a host submission queue; fetch a portion of data associated with the host command; determine whether the portion of data corresponds to a physical region page (PRP) entry, an optimized scatter-gather list (SGL) entry, or a not-optimized SGL entry; classify the command according to the determination; enforce a data retrieval order based upon the classification; transfer data to a host device; and post a completion entry to a host completion queue. The enforcing comprises retrieving data in any order, fully in order, or partially out of order based upon the classifying. The controller is further configured to determine a size of host buffers associated with the portion of data. The controller is further configured to determine an alignment of host buffers associated with the portion of data. The controller comprises a command scheduler, wherein the command scheduler limits sense requests towards a flash interface module (FIM) based upon the enforcing. While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
11861225

DETAILED DESCRIPTION Aspects of the present disclosure are directed to performing management unit based media management operations in memory devices. A memory sub-system can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction withFIG.1. In general, a host system can utilize a memory sub-system that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. A memory sub-system can include high density non-volatile memory devices where retention of data is desired when no power is supplied to the memory device. One example of non-volatile memory devices is a negative-and (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction withFIG.1. A non-volatile memory device is a package of one or more dies. Each die can include two or more planes. For some types of non-volatile memory devices (e.g., NAND devices), each plane includes a set of physical blocks. In some implementations, each block can include multiple sub-blocks. Each plane carries a matrix of memory cells formed onto a silicon wafer and joined by conductors referred to as wordlines and bitlines, such that a wordline joins multiple memory cells forming a row of the matrix of memory cells, while a bitline joins multiple memory cells forming a column of the matrix of memory cells. Depending on the cell type, each memory cell can store one or more bits of binary information, and has various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as “0” and “1”, or combinations of such values. A set of memory cells referred to as a memory page can be programmed together in a single operation, e.g., by selecting consecutive bitlines. The non-volatile memory devices can include three-dimensional cross-point (“3D cross-point”) memory devices that are a cross-point array of non-volatile memory that can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Such non-volatile memory devices can be divided into multiple management units (MUs), such as pages or blocks of the memory device. An MU can be a group of pages across dice and/or channels. An MU can represent an individual segment of the memory device that can be written or erased in a single operation. In one example, an MU can correspond to a logical block size (e.g., a data transfer size of a host and/or a data management size of a memory system), which can be, for example, 4 KB. The MU can be mapped to a physical set of memory cells. However, embodiments are not so limited. For example, an MU can correspond to more than a logical block size when a group of memory cells storing user data and overhead data (e.g., data informative of other data stored within the group of memory cells) corresponds to more than a logical block size. Although memory devices such as 3D cross-point type memory are described, an MU can be defined for other types of memory, such as negative-and (NAND) and random access memory (RAM). For example, an MU can be a page of data in NAND media or a logical block of data in RAM. MUs can be grouped into larger groups of data management units referred to herein as a super management unit (SMU).
While an MU can be the unit of media that controls decoding and storing of data, an SMU can be used to perform wear leveling features, refresh operations, and other larger scale management of the memory device. An SMU can be used for these larger scale operations because large amounts of resources could be required to perform these operations on each individual MU. In some memory sub-systems, an SMU memory access operation is split into multiple MU memory access operations. Memory access operations can be performed by the memory sub-system. The memory access operations can be host-initiated operations or memory sub-system controller-initiated operations. For example, the host system can initiate a memory access operation (e.g., write operation, read operation, erase operation, etc.) on a memory sub-system. The host system can send memory access commands (e.g., write command, read command) to the memory sub-system, such as to store data on a memory device at the memory sub-system and to read data from the memory device on the memory sub-system. The data to be read or written, as specified by a host request, is hereinafter referred to as “host data”. A host request can include logical address information (e.g., logical block address (LBA), namespace) for the host data, which is the location the host system associates with the host data. The logical address information (e.g., LBA, namespace) can be part of metadata for the host data. Metadata can also include error handling data (e.g., ECC codeword, parity code), data version (e.g., used to distinguish age of data written), valid bitmap (which LBAs or logical transfer units contain valid data), etc. Memory access operations initiated by the memory sub-system controller can relate to maintenance operations, such as garbage collection, wear leveling, bad block management, block refresh operations, etc. When an SMU memory access command is split into multiple MU memory access commands, one or more of the MU memory access commands can experience an error, causing the SMU memory access operation to time out. For example, a two-kilobyte (KiB) SMU memory access command can be split into 2047 MU memory access commands. The memory sub-system controller can issue all 2047 MU memory access commands, and three of the MU memory access commands can experience an error. Current memory sub-systems may be unable to determine that an MU memory access command experienced the error until a time-out command is issued after a significant delay. Furthermore, current memory sub-systems cannot determine which MU memory access command experienced the error without performing a media scan on the memory device, which further increases the memory sub-system latency. Aspects of the present disclosure address the above and other deficiencies by implementing media management operations, in memory devices, based on an MU index value and operation window. In some embodiments, responsive to generating an SMU memory access command, the memory sub-system controller can split the SMU memory access command into multiple MU memory access commands (based on, for example, memory size, logical addresses, physical addresses, etc.). For example, a two-kilobyte (KiB) SMU memory access command can be split into 2047 MU memory access commands. Each MU memory access command can be indexed, by the memory sub-system controller, in a data structure (e.g., a metadata table). In one example, each MU memory access command can be sequentially numbered in the metadata table.
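The split-and-index step just described can be sketched in a few lines of C. This is illustrative only: the table layout, field names, and the assumption that MU commands are numbered in physical-address order are hypothetical, not structures from the disclosure.

#include <stdint.h>

enum mu_state { MU_UNISSUED, MU_PENDING, MU_DONE };

/* One row of the metadata table: a sequentially numbered MU command. */
struct mu_cmd {
    uint32_t      index;       /* sequential index within the SMU command */
    uint64_t      phys_addr;   /* target MU physical address              */
    enum mu_state state;
};

/* Split one SMU access into `count` MU accesses, one per MU-sized
 * chunk, numbering them sequentially in address order. */
void split_smu(struct mu_cmd *table, uint64_t smu_base,
               uint32_t count, uint32_t mu_size)
{
    for (uint32_t i = 0; i < count; i++) {
        table[i].index     = i;
        table[i].phys_addr = smu_base + (uint64_t)i * mu_size;
        table[i].state     = MU_UNISSUED;
    }
}

An SMU access split into 2047 MU commands, as in the example above, would populate table rows indexed 0 through 2046.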
The MU memory access commands can be ordered based on logical addresses, physical addresses, positions in the SMU, priority, etc. The memory sub-system controller can include a control register to track the status of the MU memory access commands issued to the memory device. In some embodiments, the control register can use status bits to indicate whether an MU memory access command is pending, completed, has yet to be issued, etc. For example, the control register can use a binary value of 1 to indicate that an MU memory access command is pending, and a binary value of 0 to indicate that an MU memory access command is completed or has yet to be issued. The memory sub-system controller can track, using the control register, pending MU memory access commands (e.g., MU memory access commands issued to the memory device) and issue a new MU memory access command. The number of MU memory access commands issued, by the memory sub-system controller to the memory device, can be limited by an operation window. The operation window can be a value that defines the maximum number of MU memory access commands that the memory sub-system controller can issue based on the lowest index of a pending MU memory access command. Specifically, the memory sub-system controller can issue a certain number of indexed memory access commands starting from the lowest indexed pending MU memory access command. For example, if the value of the operation window is 10, and the lowest indexed pending MU memory access command is 22, then the memory sub-system controller can issue 9 additional memory access commands indexed 23-31, for a total of 10 pending MU memory access commands. Each of the ten pending memory access commands can be correlated, by the memory sub-system controller, to a status bit in the control register, where the status bits are set to the value 1 (indicating that an MU memory access command is pending). The value of the lowest indexed pending MU memory access command can limit additional MU memory access commands regardless of whether any additional MU memory access commands are completed. For example, if the value of the operation window is 10, the lowest indexed pending MU memory access command is 22, and the MU memory access commands indexed 23-31 are completed, the memory sub-system controller will not issue the MU memory access command indexed 32 (or any other higher indexed MU memory access commands) until the MU memory access command indexed 22 is completed. Responsive to the memory device experiencing an error processing an issued MU memory access command, the memory sub-system controller can refer to the control register to determine which MU memory access command experienced the error. The memory sub-system controller can then perform error-correcting techniques (e.g., error correction code (ECC), etc.) or reissue the MU memory access command. Advantages of the present disclosure include, but are not limited to, an improved performance of the memory device and an improved quality of service for the host system, achieved by tracking MU memory access commands and quickly determining which MU memory access command experienced an error. This allows the memory device to process multiple memory access commands without the latency produced by scanning the memory device to determine the MU memory access command responsible for the error. Thus, embodiments of the present disclosure reduce the amount of time the memory device spends processing multiple MU memory access commands, which improves the performance of the memory device.
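The window-limited issue logic can be sketched as follows, using the running example of a window of 10 anchored at the lowest indexed pending command. This C sketch is illustrative: the 64-bit pending_bits mask stands in for the control register's status bits (1 = pending), issue_mu_cmd is a hypothetical helper rather than an interface from the disclosure, and the operation window is assumed to be at most 64.

#include <stdint.h>

#define OP_WINDOW 10U   /* example window size; must be <= 64 here */

extern void issue_mu_cmd(uint32_t index);   /* send one MU command */

struct issue_state {
    uint64_t pending_bits;   /* bit i set => command (low + i) is pending */
    uint32_t low;            /* lowest index not yet completed            */
    uint32_t next;           /* next index to issue                       */
    uint32_t total;          /* MU commands in the SMU command            */
};

/* Issue as many commands as the window anchored at `low` allows.
 * With low = 22 and a window of 10, indexes up to 31 may be issued;
 * index 32 must wait until command 22 completes. */
static void pump(struct issue_state *s)
{
    while (s->next < s->total && s->next < s->low + OP_WINDOW) {
        s->pending_bits |= 1ull << (s->next - s->low);
        issue_mu_cmd(s->next);
        s->next++;
    }
}

/* Completion handler: clear the command's status bit; if the lowest
 * pending command finished, slide the window forward and issue more. */
void on_complete(struct issue_state *s, uint32_t index)
{
    s->pending_bits &= ~(1ull << (index - s->low));
    while (s->low < s->next && (s->pending_bits & 1ull) == 0) {
        s->pending_bits >>= 1;   /* bit 0 clear: command `low` is done */
        s->low++;
    }
    pump(s);
}

A command that errors simply never completes, so its status bit stays set at the bottom of the window; the controller can read that lowest set bit to identify the failing index and reissue the command or apply error correction, as described above.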
FIG.1illustrates an example computing system100that includes a memory sub-system110in accordance with some embodiments of the present disclosure. The memory sub-system110can include media, such as one or more volatile memory devices (e.g., memory device140), one or more non-volatile memory devices (e.g., memory device130), or a combination of such. A memory sub-system110can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory modules (NVDIMMs). The computing system100can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device. The computing system100can include a host system120that is coupled to one or more memory sub-systems110. In some embodiments, the host system120is coupled to different types of memory sub-systems110.FIG.1illustrates one example of a host system120coupled to one memory sub-system110. As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. The host system120can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system120uses the memory sub-system110, for example, to write data to the memory sub-system110and read data from the memory sub-system110. The host system120can be coupled to the memory sub-system110via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system120and the memory sub-system110. The host system120can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices130) when the memory sub-system110is coupled with the host system120by the physical host interface (e.g., PCIe bus). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system110and the host system120.FIG.1illustrates a memory sub-system110as an example.
In general, the host system120can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. The memory devices130,140can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device130) include negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Each of the memory devices130can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC) can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs) can store multiple bits per cell. In some embodiments, each of the memory devices130can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs or any combination of such. In some embodiments, a particular memory device can include an SLC portion, an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices130can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks. Some types of memory, such as 3D cross-point, can group pages across dice and channels to form management units (MUs). MUs are one example of a data unit. Super management units (SMUs) are another example of data units, and can include a set of multiple MUs (e.g., 1000 MUs, 2000 MUs, etc.). The memory device130can include one or more decks. A deck can be defined as an array of memory cells with electronically conductive access lines. Multiple decks can be stacked within memory device130. Each deck can have inherently different levels of endurance (e.g., an indication of approximately how many times the deck can be written to, read, and/or erased before physical wear causes the deck to fail). Although non-volatile memory components such as 3D cross-point array of non-volatile memory cells and NAND type flash memory (e.g.
2D NAND, 3D NAND) are described, the memory device130can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM). The memory sub-system controller115(or controller115for simplicity) can communicate with the memory devices130to perform operations such as reading data, writing data, or erasing data at the memory devices130and other such operations. The memory sub-system controller115can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller115can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The memory sub-system controller115can be a processing device, which includes one or more processors (e.g., processor117), configured to execute instructions stored in local memory119. In the illustrated example, the local memory119of the memory sub-system controller115includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system110, including handling communications between the memory sub-system110and the host system120. In some embodiments, the local memory119can include memory registers storing memory pointers, fetched data, etc. The local memory119can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system110inFIG.1has been illustrated as including the memory sub-system controller115, in another embodiment of the present disclosure, a memory sub-system110does not include a memory sub-system controller115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the memory sub-system controller115can receive commands or operations from the host system120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices130.
The memory sub-system controller115can be responsible for other operations such as media management operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., logical block address (LBA), namespace) and a physical address (e.g., physical MU address, physical block address) that are associated with the memory devices130. The memory sub-system controller115can further include host interface circuitry to communicate with the host system120via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices130as well as convert responses associated with the memory devices130into information for the host system120. The memory sub-system110can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system110can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller115and decode the address to access the memory devices130. In some embodiments, the memory devices130include local media controllers135that operate in conjunction with memory sub-system controller115to execute operations on one or more memory cells of the memory devices130. An external controller (e.g., memory sub-system controller115) can externally manage the memory device130(e.g., perform media management operations on the memory device130). In some embodiments, memory sub-system110is a managed memory device, which includes a raw memory device130having control logic (e.g., local media controller135) on the die and a controller (e.g., memory sub-system controller115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. The memory sub-system110includes a media management component113that can be used to implement media management operations, in memory device130, based on an MU operation window. In some embodiments, the memory sub-system controller115includes at least a portion of the media management component113. In some embodiments, the media management component113is part of the host system120, an application, or an operating system. In other embodiments, local media controller135includes at least a portion of media management component113and is configured to perform the functionality described herein. The media management component113can generate SMU memory access commands. For example, the media management component113can generate an SMU memory access command for media management purposes. Media management component113can split the generated SMU memory access command into multiple MU memory access commands based on, for example, memory size, logical addresses, physical addresses, etc. In one example, media management component113can split the SMU memory access command into multiple MU memory access commands of a predetermined byte size (e.g., one-byte MU memory access commands, two-byte MU memory access commands, etc.). Once split, media management component113can index each MU memory access command in, for example, a data structure.
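As a rough, non-authoritative sketch of the split-and-index step just described (the sizes, identifiers, and dict-based metadata table below are assumptions made for illustration, not the disclosed implementation):

```python
from dataclasses import dataclass

@dataclass
class MUCommand:
    identifier: str  # e.g., "SMU-0/MU-3"
    offset: int      # starting byte offset within the SMU
    size: int        # bytes covered by this MU command

def split_smu(smu_id: str, smu_size: int, mu_size: int) -> dict[int, MUCommand]:
    """Split an SMU memory access command into fixed-size MU commands and
    index them sequentially in a metadata table (index -> command)."""
    return {
        index: MUCommand(f"{smu_id}/MU-{index}", offset, mu_size)
        for index, offset in enumerate(range(0, smu_size, mu_size))
    }

# A 64-byte SMU command split into two-byte MU commands yields indices 0-31.
index_table = split_smu("SMU-0", smu_size=64, mu_size=2)
```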
For example, media management component113can maintain a data structure (e.g., a metadata table) composed of multiple records, where each record correlates a MU memory access command (via an identifier) to a corresponding index value. The index values can be a sequential list of numbers. An example of the index metadata table is shown inFIG.3. The memory sub-system controller115can include control register114to track the status of the MU memory access command issued (or scheduled to issue) to the memory device. In other embodiments, local media controller135includes at least a portion of control register114and is configured to perform the functionality described herein. In some embodiments, control register114can use status bits to indicate whether a MU memory access command is pending, completed, or is to be issued. In one embodiment, control register114can use a binary value of 1 to indicate that a MU memory access command is pending, and a binary value of 0 to indicate that a MU memory access command is completed or is to be issued. The media management component113uses a control register metadata table to correlate an MU memory access command to a bit of the control register114. An example of the control register is shown inFIG.4Aand an example of the control register metadata table is shown inFIG.4B. Responsive to memory device130,140experiencing an error processing an issued MU memory access command, the media management component113can refer to control register114to determine which MU memory access command experienced the error. Media management component113can then perform error-correcting techniques (e.g., error correction code (ECC), etc.) or reissue the MU memory access command. The media management component113can track, using the control register114, pending MU memory access commands. Media management component113can further use an operation window value and the value of the lowest indexed pending memory access command to determine which additional MU memory access commands to issue to memory device130. Specifically, the highest indexed memory access command that the media management component113can issue is defined by the lowest indexed pending MU memory access command plus the value of the operation window, minus one. The operation window value can be a programmable value configurable by the media management component113. In some embodiments, the operation window value can be equal to or less than the number of bits managed by control register114. FIG.2is a flow diagram of an example method200illustrating processes performed for media management operations, in accordance with some embodiments of the present disclosure. The method200can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method200is performed by the media management component113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment.
Other process flows are possible. At operation210, the processing logic generates an SMU memory access command. For example, the processing logic can receive one or more write commands, read commands, erase commands, etc., and generate the SMU memory access command. The SMU memory access command can specify a logical address. At operation220, the processing logic splits the SMU memory access command into multiple MU memory access commands. In some embodiments, the processing logic can split the SMU memory access command based on, for example, memory size, logical addresses, physical addresses, etc. In one example, the processing logic can split the SMU memory access command into MU memory access commands of a predetermined byte size. Each MU memory access command can specify a logical address. The processing logic can determine a physical address associated with the logical address. In some embodiments, the physical address can be located in an address space of one or more dies of a memory device. The processing logic can determine a portion of the memory device that is referenced by the physical address. For example, the processing logic can use a table to determine which portion of the memory device includes the address space. At operation230, the processing logic indexes each MU memory access command in a data structure (e.g., a metadata table). The index metadata table can be composed of multiple records, where each record correlates a MU memory access command (via an identifier) to a corresponding index value, where the index values are a sequential list of numbers. At operation240, the processing logic issues a MU memory access command. In some embodiments, the processing logic can issue an available MU memory access command with the lowest index value in the metadata table. In one example, where no MU memory access commands have been issued yet, the processing logic can issue the MU memory access command corresponding to index value 0 in the metadata table. In another example, where at least one MU memory access command has been issued, the processing logic can determine whether to issue the next available MU memory access command based on whether the operation window value and the index value of the lowest indexed pending MU memory access command satisfy a criterion. For example, the processing logic can determine whether the index value of the next available MU memory access command is equal to or less than the sum of the lowest indexed pending MU memory access command plus the value of the operation window, minus one. At operation250, the processing logic indicates the status of the issued MU memory access command in the control register (e.g., control register114). For example, the processing logic can, using a control register metadata table, assign the identifier of the issued MU memory access command to a status bit, and set the corresponding status bit of the control register to the value of 1. At operation260, the processing logic determines whether to issue a next available MU memory access command. In some embodiments, the processing logic can use an operation window value and the value of the lowest indexed pending MU memory access command to determine whether to issue the next available MU memory access command to memory device130. For example, the processing logic can determine, using the control register and the metadata tables, the value of the lowest indexed pending memory access command.
The processing logic can then add the operation window value to the index value of the memory access command, subtract one from the sum, and, based on the calculated value, determine whether the MU memory access command correlating to the determined value is issued and/or completed. Responsive to the MU memory access command being unissued, the processing logic proceeds to operation240and issues the MU memory access command to the memory device. Responsive to the MU memory access command being issued and/or completed, the processing logic proceeds to operation270. At operation270, the processing logic receives data, from the memory device, relating to a previously issued MU memory access command. The processing logic can further indicate the status of the completed MU memory access command in the control register. For example, the processing logic can set the status bit assigned the identifier of the completed MU memory access command to the value of 0. The processing logic then proceeds to operation260, to determine whether to issue the next available MU memory access command based on whether the completed MU memory access command satisfies a criterion. For example, the processing logic can determine whether the index value of the next available MU memory access command is equal to or less than the sum of the lowest indexed pending MU memory access command plus the value of the operation window, minus one. In some embodiments, the processing logic can clear, from the control register metadata table, the assigned identifier of the completed MU memory access command after completing an iteration of operation260. In other embodiments, the processing logic can clear, from the control register metadata table, the assigned identifier of the completed MU memory access command after completing a full cycle of assigning each of the other status bits in the control register. FIG.3schematically illustrates example metadata maintained by the memory sub-system controller, in accordance with some embodiments of the present disclosure. In some embodiments, media management component113can maintain an index metadata table300. In some embodiments, metadata table300can be stored in memory of the memory sub-system (e.g., at memory device130,140, local memory119, etc.) and can be referenced by media management component113to determine the index value of a referenced MU memory access command identifier. As illustrated inFIG.3, metadata table300, by way of example, maintains entries that correlate each MU memory access command of an SMU memory access command to an index value. Each MU memory access command can include an identifier. Media management component113can determine whether to issue, to the memory device, subsequent MU memory access commands using metadata table300. FIG.4Aschematically illustrates an example control register400maintained by the memory sub-system controller, in accordance with some embodiments of the present disclosure.FIG.4Bschematically illustrates example metadata maintained by the memory sub-system controller, in accordance with some embodiments of the present disclosure. In some embodiments, media management component113can maintain a control register metadata table410. In some embodiments, media management component113can manage control register400. As illustrated inFIG.4A, control register400, by way of example, maintains a bit that indicates the status of an assigned MU memory access command.
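Before turning to the hardware details, the cooperation of the control register and its metadata table can be pictured with the following sketch, reusing the MUCommand entries from the earlier sketch; the bit list, dictionary, and helper names are illustrative assumptions rather than the disclosed implementation:

```python
class ControlRegisterModel:
    """Toy model of control register 400 plus metadata table 410: one
    status bit per in-flight MU command (1 = pending, 0 = completed or
    not yet issued), with a side table mapping bits to identifiers."""

    def __init__(self, num_bits: int):
        self.bits = [0] * num_bits
        self.assigned = {}  # bit position -> MU command identifier

    def mark_issued(self, bit: int, identifier: str) -> None:
        self.assigned[bit] = identifier
        self.bits[bit] = 1

    def mark_completed(self, bit: int) -> None:
        self.bits[bit] = 0

    def pending_identifiers(self) -> set[str]:
        return {self.assigned[b] for b, v in enumerate(self.bits) if v == 1}

def lowest_pending_index(index_table: dict, pending: set) -> int | None:
    """Lowest index whose command is still pending; when the memory device
    reports an error, this is the command taken to have experienced it."""
    live = [i for i, cmd in index_table.items() if cmd.identifier in pending]
    return min(live) if live else None
```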
In some embodiments, control register400can be a hardware element referenced by media management component113to determine the status of a corresponding MU memory access command. In some embodiments, control register metadata table410can be stored in memory of the memory sub-system (e.g., at memory device130,140, local memory119, etc.) and can be referenced by media management component113to determine the MU memory access command (via an identifier) that correlates to a bit of control register400. As illustrated inFIG.4B, control register metadata table410, by way of example, maintains entries that correlate each bit of the control register400to an assigned MU memory access command of an SMU memory access command. A MU memory access command can be correlated to a bit of the control register400via an identifier. Media management component113can determine whether to issue, to the memory device, subsequent MU memory access commands using control register400and control register metadata table410. FIG.5is a flow diagram of an example method500illustrating processes performed for determining which MU memory access command experienced an error, in accordance with some embodiments of the present disclosure. The method500can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method500is performed by the media management component113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation510, the processing logic can determine that the memory device experienced an error processing a MU memory access command. For example, the processing logic can determine that a timer, activated when the MU memory access command was issued, has expired (e.g., a timeout error). At operation520, the processing logic can determine which MU memory access command experienced the error. For example, the processing logic can look up, using the control register and metadata table(s), which pending memory access command has the lowest index value. At operation530, the processing logic can perform error-correcting techniques (e.g., error correction code (ECC), etc.) or reissue the MU memory access command. FIG.6illustrates an example machine of a computer system600within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system600can correspond to a host system (e.g., the host system120ofFIG.1) that includes or utilizes a memory sub-system (e.g., the memory sub-system110ofFIG.1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to media management component113ofFIG.1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet.
The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system600includes a processing device602, a main memory604(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory606(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system618, which communicate with each other via a bus630. Processing device602represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device602can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device602is configured to execute instructions626for performing the operations and steps discussed herein. The computer system600can further include a network interface device608to communicate over the network620. The data storage system618can include a machine-readable storage medium624(also known as a computer-readable medium) on which is stored one or more sets of instructions626or software embodying any one or more of the methodologies or functions described herein. The instructions626can also reside, completely or at least partially, within the main memory604and/or within the processing device602during execution thereof by the computer system600, the main memory604and the processing device602also constituting machine-readable storage media. The machine-readable storage medium624, data storage system618, and/or main memory604can correspond to the memory sub-system110ofFIG.1. In one embodiment, the instructions626include instructions to implement functionality corresponding to media management component113ofFIG.1. While the machine-readable storage medium624is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. 
The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). 
For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 45,415 |
11861226 | DETAILED DESCRIPTION A semiconductor memory device according to an embodiment comprises: a first pad configured to receive a first signal; a second pad configured to receive a second signal; a first memory cell array; a first sense amplifier connected to the first memory cell array; a first data register connected to the first sense amplifier and configured to store data read from the first memory cell array; and a control circuit configured to execute an operation targeting the first memory cell array. The first memory cell array comprises a plurality of first memory strings. The first memory strings each comprise a plurality of first memory cell transistors. Moreover, in a first mode of this semiconductor memory device, a command set instructing the operation is inputted via the first pad. Moreover, in a second mode of this semiconductor memory device, the command set is inputted via the second pad. Next, semiconductor memory devices according to embodiments will be described in detail with reference to the drawings. Note that the following embodiments are merely examples, and are not shown with the intention of limiting the present invention. Moreover, when a “semiconductor memory device” is referred to in the present specification, it will sometimes mean a memory die (a memory chip), and will sometimes mean a memory system including a controller die, of the likes of a memory card or an SSD. Furthermore, it will sometimes mean a configuration including a host computer, of the likes of a smartphone, a tablet terminal, or a personal computer. Moreover, in the present specification, when a first configuration is said to be “electrically connected” to a second configuration, the first configuration may be connected to the second configuration directly, or the first configuration may be connected to the second configuration via the likes of a wiring, a semiconductor member, or a transistor. For example, in the case of three serially connected transistors, the first transistor is still “electrically connected” to the third transistor even when the second transistor is in an OFF state. Moreover, in the present specification, when a first configuration is said to be “connected between” a second configuration and a third configuration, it will sometimes mean that the first configuration, the second configuration, and the third configuration are serially connected, and the second configuration is connected to the third configuration via the first configuration. Moreover, in the present specification, when a circuit, or the like, is said to “make electrically continuous” two wirings, or the like, this will sometimes mean, for example, that this circuit, or the like, includes a transistor, or the like, that this transistor, or the like, is provided in a current path between the two wirings, and that this transistor, or the like, is in an ON state. First Embodiment [Memory System10] FIG.1is a schematic block diagram showing a configuration of a memory system10according to a first embodiment. The memory system10performs read, write, erase, and so on, of user data, according to a signal transmitted from a host computer20. The memory system10is a memory card, an SSD, or another system configured to store user data, for example. The memory system10comprises: a plurality of memory dies MD storing user data; and a controller die CD connected to these plurality of memory dies MD and to the host computer20.
The controller die CD comprises the likes of a processor and a RAM, for example, and performs processing, such as conversion of a logical address and a physical address, bit error detection/correction, garbage collection (compaction), and wear leveling. FIG.2is a schematic side view showing a configuration example of the memory system10according to the present embodiment.FIG.3is a schematic plan view showing same configuration example. For convenience of description, some configurations are omitted inFIGS.2and3. As shown inFIG.2, the memory system10according to the present embodiment comprises: a mounting substrate MSB; a plurality of the memory dies MD stacked on the mounting substrate MSB; and the controller die CD stacked on the memory dies MD. A region of an end portion in a Y direction, of an upper surface of the mounting substrate MSB is provided with a pad electrode P, and some of another region of the upper surface of the mounting substrate MSB is adhered to a lower surface of the memory die MD, via an adhesive agent, or the like. A region of an end portion in the Y direction, of an upper surface of the memory die MD is provided with the pad electrode P, and another region of the upper surface of the memory die MD is adhered to a lower surface of another memory die MD or of the controller die CD, via an adhesive agent, or the like. A region of an end portion in the Y direction, of an upper surface of the controller die CD is provided with the pad electrode P. As shown inFIG.3, the mounting substrate MSB, the plurality of memory dies MD, and the controller die CD each comprise a plurality of the pad electrodes P aligned in an X direction. Pluralities of the pad electrodes P provided to the mounting substrate MSB, the plurality of memory dies MD, and the controller die CD are respectively connected to each other via bonding wires B. Note that the configuration shown inFIGS.2and3is merely an exemplification, and that a specific configuration is appropriately adjustable. For example, in the example shown inFIGS.2and3, the controller die CD is stacked on the plurality of memory dies MD, and these configurations are connected by the bonding wires B. In such a configuration, the plurality of memory dies MD and the controller die CD are included in a single package. However, the controller die CD may be included in a separate package from the memory dies MD. Moreover, the plurality of memory dies MD and the controller die CD may be connected to each other via through-electrodes, or the like, not the bonding wires B. [Configuration of Memory Die MD] FIG.4is a schematic block diagram showing a configuration of the memory die MD according to the first embodiment.FIG.5is a schematic circuit diagram showing a configuration of part of the memory die MD.FIG.6is a schematic perspective view showing a configuration of part of the memory die MD.FIGS.7and8are schematic circuit diagrams showing configurations of parts of the memory die MD. For convenience of description, some configurations are omitted inFIGS.4to8. Note that inFIG.4, a plurality of control terminals, and so on, are illustrated. These plurality of control terminals are in some cases indicated as a control terminal corresponding to a high active signal (a positive logic signal), in some cases indicated as a control terminal corresponding to a low active signal (a negative logic signal), and in some cases indicated as a control terminal corresponding to both a high active signal and a low active signal. 
InFIG.4, a symbol of a control terminal corresponding to a low active signal includes an overline. In the present specification, a symbol of a control terminal corresponding to a low active signal includes a slash (“/”). Note that description ofFIG.4is an exemplification, and that a specific mode is appropriately adjustable. For example, it is possible too for some or all of the high active signals to be configured as low active signals, or for some or all of the low active signals to be configured as high active signals. Moreover, arrows indicating input/output directions are illustrated alongside the plurality of control terminals shown inFIG.4. InFIG.4, a control terminal assigned with an arrow from left to right is usable in input of data or another signal from the controller die CD to the memory die MD. InFIG.4, a control terminal assigned with an arrow from right to left is usable in output of data or another signal from the memory die MD to the controller die CD. InFIG.4, a control terminal assigned with an arrow in both left and right directions is usable in both input of data or another signal from the controller die CD to the memory die MD and output of data or another signal from the memory die MD to the controller die CD. As shown inFIG.4, the memory die MD comprises: memory cell arrays MCA0, MCA1that store user data; and a peripheral circuit PC connected to the memory cell arrays MCA0, MCA1. Note that in the description below, the memory cell arrays MCA0, MCA1will sometimes be called memory cell arrays MCA. Moreover, the memory cell arrays MCA0, MCA1will sometimes be called planes PLN0, PLN1. [Configuration of Memory Cell Array MCA] As shown inFIG.5, the memory cell array MCA comprises a plurality of memory blocks BLK. These plurality of memory blocks BLK each comprise a plurality of string units SU. These plurality of string units SU each comprise a plurality of memory strings MS. One ends of these plurality of memory strings MS are respectively connected to the peripheral circuit PC via bit lines BL. Moreover, the other ends of these plurality of memory strings MS are each connected to the peripheral circuit PC via a common source line SL. The memory string MS comprises a drain side select transistor STD, a plurality of memory cells MC (memory cell transistors), a source side select transistor STS, and a source side select transistor STSb that are connected in series between the bit line BL and the source line SL. Hereafter, the drain side select transistor STD, the source side select transistor STS, and the source side select transistor STSb will sometimes simply be called select transistors (STD, STS, STSb). The memory cell MC is a field effect type transistor comprising a semiconductor layer, a gate insulating film, and a gate electrode. The semiconductor layer functions as a channel region. The gate insulating film includes a charge accumulating film. A threshold voltage of the memory cell MC changes according to an amount of charge in the charge accumulating film. The memory cell MC stores one bit or a plurality of bits of user data. Note that the gate electrodes of the plurality of memory cells MC corresponding to one memory string MS are respectively connected with word lines WL. These word lines WL are respectively commonly connected to all of the memory strings MS in one memory block BLK. The select transistors (STD, STS, STSb) are field effect type transistors each comprising a semiconductor layer, a gate insulating film, and a gate electrode. 
The semiconductor layer functions as a channel region. The gate electrodes of the select transistors (STD, STS, STSb) are respectively connected with select gate lines (SGD, SGS, SGSb). A drain side select gate line SGD, which is provided correspondingly to the string unit SU, is commonly connected to all of the memory strings MS in one string unit SU. A source side select gate line SGS is commonly connected to all of the memory strings MS in the memory block BLK. A source side select gate line SGSb is commonly connected to all of the memory strings MS in the memory block BLK. As shown inFIG.6, for example, the memory cell array MCA is provided above a semiconductor substrate100. Note that in the example ofFIG.6, a plurality of transistors Tr configuring the peripheral circuit PC are provided between the semiconductor substrate100and the memory cell array MCA. The memory cell array MCA comprises a plurality of the memory blocks BLK aligned in the Y direction. Moreover, an inter-block insulating layer ST of the likes of silicon oxide (SiO2) is provided between two memory blocks BLK adjacent in the Y direction. As shown inFIG.6, for example, the memory block BLK comprises: a plurality of conductive layers110aligned in a Z direction; a plurality of semiconductor columns120extending in the Z direction; and a plurality of gate insulating films130respectively provided between the plurality of conductive layers110and the plurality of semiconductor columns120. The conductive layer110is a substantially plate-like conductive layer extending in the X direction. The conductive layer110may include a stacked film of a barrier conductive film of the likes of titanium nitride (TiN) and a metal film of the likes of tungsten (W), or the like. Moreover, the conductive layer110may include the likes of polycrystalline silicon including an impurity such as phosphorus (P) or boron (B), for example. Insulating layers101of the likes of silicon oxide (SiO2) are provided between the plurality of conductive layers110aligned in the Z direction. Moreover, two or more of the conductive layers110positioned in a lowermost layer, of the plurality of conductive layers110function as the source side select gate lines SGS, SGSb (FIG.5) and as the gate electrodes of the pluralities of source side select transistors STS, STSb (FIG.5) connected to these source side select gate lines SGS, SGSb. These conductive layers110are electrically independent every memory block BLK. Moreover, a plurality of the conductive layers110positioned more upwardly than these lowermost layer-positioned conductive layers110function as the word lines WL (FIG.5) and as the gate electrodes of the pluralities of memory cells MC (FIG.5) connected to these word lines WL. These conductive layers110are each electrically independent every memory block BLK. Moreover, one or a plurality of the conductive layers110positioned more upwardly than these word line WL-functioning conductive layers110function as the drain side select gate line SGD (FIG.5) and as the gate electrodes of the plurality of drain side select transistors STD (FIG.5) connected to this drain side select gate line SGD. These conductive layers110have a smaller width in the Y direction than the other conductive layers110. A semiconductor layer112is provided below the conductive layers110. The semiconductor layer112may include the likes of polycrystalline silicon including an impurity such as phosphorus (P) or boron (B), for example. 
Moreover, the insulating layer101of the likes of silicon oxide (SiO2) is provided between the semiconductor layer112and the conductive layers110. The semiconductor layer112functions as the source line SL (FIG.5). The source line SL is commonly provided for all of the memory blocks BLK included in the memory cell array MCA, for example. As shown inFIG.6, for example, the semiconductor columns120are aligned in a certain pattern in the X direction and the Y direction. The semiconductor column120functions as the channel regions of the plurality of memory cells MC and the select transistors (STD, STS, STSb) included in one memory string MS (FIG.5). The semiconductor column120is a semiconductor layer of the likes of polycrystalline silicon (Si), for example. As shown inFIG.6, for example, the semiconductor column120has a substantially bottomed cylindrical shape, and has its central portion provided with an insulating layer125of the likes of silicon oxide. Moreover, an outer peripheral surface of the respective semiconductor column120is surrounded by the plurality of conductive layers110, and faces the plurality of conductive layers110. An upper end portion of the semiconductor column120is provided with an impurity region121that includes an N type impurity of the likes of phosphorus (P). The impurity region121is connected to the bit line BL via a contact Ch and a contact Cb. The gate insulating film130has a substantially bottomed cylindrical shape covering the outer peripheral surface of the semiconductor column120. The gate insulating film130comprises a tunnel insulating film, a charge accumulating film, and a block insulating film that are stacked between the semiconductor column120and the conductive layers110. The tunnel insulating film and the block insulating film are insulating films of the likes of silicon oxide (SiO2), for example. The charge accumulating film is a film capable of accumulating a charge, of the likes of silicon nitride (Si3N4), for example. The tunnel insulating film, the charge accumulating film, and the block insulating film have substantially cylindrical shapes, and extend in the Z direction along the outer peripheral surface of the semiconductor column120excluding a contacting portion of the semiconductor column120and the semiconductor layer112. Note that the gate insulating film130may comprise a floating gate of the likes of polycrystalline silicon including an N type or P type impurity, for example. End portions in the X direction of the plurality of conductive layers110are provided with a plurality of contacts CC. The plurality of conductive layers110are connected to the peripheral circuit PC via these plurality of contacts CC. As shown inFIG.6, these plurality of contacts CC extend in the Z direction, and have their lower ends connected to the conductive layers110. The contact CC may include, for example, a stacked film of a barrier conductive film of the likes of titanium nitride (TiN) and a metal film of the likes of tungsten (W), or the like. [Configuration of Peripheral Circuit PC] As shown inFIG.4, for example, the peripheral circuit PC comprises row decoders RD0, RD1and sense amplifiers SA0, SA1that are respectively connected to the memory cell arrays MCA0, MCA1. In addition, the peripheral circuit PC comprises a voltage generating circuit VG and a sequencer SQC.
Moreover, the peripheral circuit PC comprises an input/output control circuit I/O, a logic circuit CTR, an address register ADR, a command register CMR, a status register STR, and a data output timing adjustment unit TCT. Note that in the description below, the row decoders RD0, RD1will sometimes be called row decoders RD, and the sense amplifiers SA0, SA1will sometimes be called sense amplifiers SA. [Configuration of Row Decoder RD] As shown inFIG.5, for example, the row decoder RD (FIG.4) comprises: an address decoder22that decodes address data Add (FIG.4); and a block select circuit23and voltage select circuit24that transfer an operation voltage to the memory cell array MCA in response to an output signal of the address decoder22. The address decoder22comprises a plurality of block select lines BLKSEL and a plurality of voltage select lines33. The address decoder22sequentially refers to a row address RA of the address register ADR (FIG.4) in accordance with a control signal from the sequencer SQC, and decodes this row address RA to set a certain block select transistor35and voltage select transistor37corresponding to the row address RA to an ON state, and set another block select transistor35and voltage select transistor37to an OFF state, for example. For example, voltages of the certain block select line BLKSEL and voltage select line33are set to an “H” state, and voltages of the other block select line BLKSEL and voltage select line33are set to an “L” state. Note that when transistors of P channel type and not N channel type are employed, these wirings are applied with reverse voltages. Note that in the example illustrated, the block select lines BLKSEL are provided one each for each one of the memory blocks BLK, in the address decoder22. However, this configuration may be appropriately changed. For example, the block select lines BLKSEL may be provided one each for every two or more of the memory blocks BLK. The block select circuit23comprises a plurality of block select units34that correspond to the memory blocks BLK. The plurality of block select units34each comprise a plurality of the block select transistors35that correspond to the word lines WL and the select gate lines (SGD, SGS, SGSb). The block select transistor35is a field effect type voltage-withstanding transistor, for example. Drain electrodes of the block select transistors35are each electrically connected to a corresponding one of the word lines WL or select gate lines (SGD, SGS, SGSb). Source electrodes of the block select transistors35are each electrically connected to the voltage supply lines31via a wiring CG and the voltage select circuit24. Gate electrodes of the block select transistors35are commonly connected to a corresponding one of the block select lines BLKSEL. Note that the block select circuit23further comprises an unillustrated plurality of transistors. These plurality of transistors are field effect type voltage-withstanding transistors that are connected between the select gate lines (SGD, SGS, SGSb) and a voltage supply line supplied with a ground voltage VSS. These plurality of transistors supply the ground voltage VSSto the select gate lines (SGD, SGS, SGSb) included in unselected memory blocks BLK. Note that the plurality of word lines WL included in the unselected memory blocks BLK are in a floating state. The voltage select circuit24comprises a plurality of voltage select units36that correspond to the word lines WL and the select gate lines (SGD, SGS, SGSb). 
These plurality of voltage select units36each comprise a plurality of the voltage select transistors37. The voltage select transistor37is a field effect type voltage-withstanding transistor, for example. Drain terminals of the voltage select transistors37are each electrically connected to a corresponding one of the word lines WL or select gate lines (SGD, SGS, SGSb) via one of the wirings CG and the block select circuit23. Source terminals of the voltage select transistors37are each electrically connected to a corresponding one of the voltage supply lines31. Gate electrodes of the voltage select transistors37are each connected to a corresponding one of the voltage select lines33. [Configuration of Sense Amplifier SA] The sense amplifiers SA0, SA1(FIG.4) respectively comprise sense amplifier modules SAM0, SAM1and cache memories CM0, CM1(data registers). The cache memories CM0, CM1respectively comprise latch circuits XDL0, XDL1. Note that in the description below, the sense amplifier modules SAM0, SAM1will sometimes be called sense amplifier modules SAM, the cache memories CM0, CM1will sometimes be called cache memories CM, and the latch circuits XDL0, XDL1will sometimes be called latch circuits XDL. The sense amplifier module SAM comprises, for example: sense circuits respectively corresponding to the plurality of bit lines BL; and a plurality of latch circuits or the like connected to the sense circuits. The cache memory CM comprises a plurality of the latch circuits XDL. The plurality of latch circuits XDL are respectively connected to the latch circuits within the sense amplifier module SAM. The latch circuits XDL store user data Dat to be written to the memory cell MC or user data Dat that has been read from the memory cell MC, for example. As shown inFIG.7, for example, the cache memory CM is connected with a column decoder COLD. The column decoder COLD decodes a column address CA stored in the address register ADR (FIG.4), and selects the latch circuit XDL corresponding to the column address CA. Note that user data Dat included in these plurality of latch circuits XDL is sequentially transferred to the latch circuits within the sense amplifier module SAM during a write operation. Moreover, user data Dat included in the latch circuits within the sense amplifier module SAM is sequentially transferred to the latch circuits XDL during a read operation. Moreover, user data Dat included in the latch circuits XDL is sequentially transferred to the input/output control circuit I/O via the column decoder COLD and a multiplexer MPX during a later-mentioned data-out operation. [Configuration of Voltage Generating Circuit VG] As shown inFIG.5, for example, the voltage generating circuit VG (FIG.4) is connected to a plurality of the voltage supply lines31. The voltage generating circuit VG includes a step-down circuit such as a regulator, and a booster circuit such as a charge pump circuit32, for example. These step-down circuit and booster circuit are each connected to voltage supply lines supplied with a power supply voltage Vccand the ground voltage Vss(FIG.4). These voltage supply lines are connected to the pad electrodes P described with reference toFIGS.2and3, for example. 
The voltage generating circuit VG generates and simultaneously outputs to the plurality of voltage supply lines31a plurality of types of operation voltages that are applied to the bit lines BL, the source line SL, the word lines WL, and the select gate lines (SGD, SGS, SGSb) during a read operation, a write operation, and an erase operation on the memory cell array MCA, according to a control signal from the sequencer SQC, for example. The operation voltages outputted from the voltage supply lines31are appropriately adjusted according to the control signal from the sequencer SQC. [Configuration of Sequencer SQC] The sequencer SQC (FIG.4) outputs an internal control signal to the row decoders RD0, RD1, the sense amplifier modules SAM0, SAM1, and the voltage generating circuit VG, in accordance with command data Cmd stored in the command register CMR. In addition, the sequencer SQC appropriately outputs to the status register STR status data Stt indicating a state of the memory die MD. Moreover, the sequencer SQC generates a ready/busy signal, and outputs the ready/busy signal to a terminal RY//BY. The terminal RY//BY is in an “L” state during execution of an operation for supplying a voltage to the memory cell array MCA, such as a read operation, a write operation, or an erase operation, and is in an “H” state in other cases, for example. Note that even if the memory cell array MCA undergoes execution of an operation in which it is not supplied with a voltage, such as the later-mentioned data-out operation, status-read, and so on, the terminal RY//BY will not be in an “L” state. In a period when the terminal RY//BY is in an “L” state (a busy period), access to the memory die MD is basically prohibited. Moreover, in a period when the terminal RY//BY is in an “H” state (a ready period), access to the memory die MD is allowed. Note that the terminal RY//BY is realized by the pad electrode P described with reference toFIGS.2and3, for example. Moreover, the sequencer SQC comprises a feature register FR. The feature register FR is a register holding a value indicating in which mode, of the later-mentioned operating mode MODEa and operating mode MODEb, operation is being performed. [Configuration of Address Register ADR] As shown inFIG.4, the address register ADR is connected to the input/output control circuit I/O and stores address data Add that has been inputted from the input/output control circuit I/O. The address register ADR comprises a plurality of 8-bit register columns, for example. The register column holds address data Add corresponding to an under-execution internal operation such as a read operation, a write operation, or an erase operation, when the internal operation is executed, for example. Note that the address data Add includes the column address CA (FIG.4) and the row address RA (FIG.4), for example. The row address RA includes, for example: a block address specifying the memory block BLK (FIG.5); a page address specifying the string unit SU and the word line WL; a plane address specifying the memory cell array MCA (plane); and a chip address specifying the memory die MD. [Configuration of Command Register CMR] The command register CMR is connected to the input/output control circuit I/O and stores command data Cmd that has been inputted from the input/output control circuit I/O. The command register CMR comprises at least one set of 8-bit register columns, for example.
When command data Cmd is stored in the command register CMR, a control signal is transmitted to the sequencer SQC. [Configuration of Status Register STR] The status register STR is connected to the input/output control circuit I/O and stores status data Stt to be outputted to the input/output control circuit I/O. The status register STR comprises a plurality of 8-bit register columns, for example. The register column holds status data Stt relating to an under-execution internal operation such as a read operation, a write operation, or an erase operation, when the internal operation is executed, for example. Moreover, the register column holds ready/busy information of the memory cell arrays MCA0, MCA1, for example. [Configuration of Data Output Timing Adjustment Unit TCT] The data output timing adjustment unit TCT is connected to a bus wiring DB between the cache memories CM0, CM1and the input/output control circuit I/O. In such cases as when, for example, the cache memories CM0, CM1consecutively undergo execution of the later-mentioned data-out operation, the data output timing adjustment unit TCT adjusts a start timing of the data-out operation on the cache memory CM1in order for the data-out operation of the cache memory CM1to be started without delay after completion of the data-out operation of the cache memory CM0. [Configuration of Input/Output Control Circuit I/O] The input/output control circuit I/O (FIG.4) comprises data signal input/output terminals DQ0-DQ7, data strobe signal input/output terminals DQS, /DQS, a shift register, and a buffer circuit. The data signal input/output terminals DQ0-DQ7and the data strobe signal input/output terminals DQS, /DQS are each realized by the pad electrode P described with reference toFIGS.2and3, for example. Data that has been inputted via the data signal input/output terminals DQ0-DQ7is inputted to the cache memory CM, the address register ADR, or the command register CMR from the buffer circuit, depending on an internal control signal from the logic circuit CTR. Moreover, data to be outputted via the data signal input/output terminals DQ0-DQ7is inputted to the buffer circuit from the cache memory CM or the status register STR, depending on an internal control signal from the logic circuit CTR. Signals that have been inputted via the data strobe signal input/output terminals DQS, /DQS (for example, a data strobe signal and complementary signal thereof) are employed in input of data via the data signal input/output terminals DQ0-DQ7. The data that has been inputted via the data signal input/output terminals DQ0-DQ7is imported into the shift register in the input/output control circuit I/O at a timing of a rising edge of voltage (switching of input signal) of the data strobe signal input/output terminal DQS and falling edge of voltage (switching of input signal) of the data strobe signal input/output terminal /DQS and a timing of a falling edge of voltage (switching of input signal) of the data strobe signal input/output terminal DQS and rising edge of voltage (switching of input signal) of the data strobe signal input/output terminal /DQS. As shown inFIG.8, for example, the data signal input/output terminals DQ0-DQ7and the data strobe signal input/output terminals DQS, /DQS are each connected to an input circuit201and an output circuit202. The input circuit201is a receiver such as a comparator, for example. The output circuit202is a driver such as an OCD (Off Chip Driver) circuit, for example. 
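The double-edge capture via the data strobe terminals DQS, /DQS described above can be pictured with the following minimal Python sketch, which imports one byte into the shift register on every toggle of the strobe, in either direction. The event-list representation is an assumption made for illustration.

    # Each event is (dqs_level, dq_byte); /DQS is modeled as the complement of DQS.
    def capture(events):
        shift_register, prev = [], None
        for dqs, dq in events:
            if prev is not None and dqs != prev:   # rising or falling edge of DQS
                shift_register.append(dq)          # byte on DQ0-DQ7 is imported
            prev = dqs
        return shift_register

    # Four strobe toggles latch four bytes.
    print(capture([(0, 0x00), (1, 0xA5), (0, 0x3C), (1, 0xFF), (0, 0x01)]))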
[Configuration of Logic Circuit CTR] The logic circuit CTR (FIG.4) comprises: a plurality of external control terminals/CE, CLE, ALE, /WE, /RE, RE, /WP; and a logic circuit connected to these plurality of external control terminals/CE, CLE, ALE, /WE, /RE, RE, /WP. The logic circuit CTR receives an external control signal from the controller die CD via the external control terminals/CE, CLE, ALE, /WE, /RE, RE, /WP and outputs an internal control signal to the input/output control circuit I/O depending on this external control signal. As shown inFIG.8, for example, the external control terminals/CE, CLE, ALE, /WE, /RE, RE, /WP are each connected to the input circuit201. Note that the external control terminals/CE, CLE, ALE, /WE, /RE, RE, /WP are each realized by the pad electrode P described with reference toFIGS.2and3, for example. A signal that has been inputted via the external control terminal /CE (for example, a chip enable signal) is employed in selection of the memory die MD. A memory die MD whose external control terminal /CE has been inputted with “L” is in a state where input/output of command data Cmd and address data Add (hereafter, sometimes simply called “data”) thereto/therefrom is possible. A memory die MD whose external control terminal /CE has been inputted with “H” is in a state where input/output of data thereto/therefrom is not possible. Note that as shown inFIG.8, the external control terminal /CE is connected to the input circuit201. A signal that has been inputted via the external control terminal CLE (for example, a command latch enable signal) is employed in use of the command register CMR, and so on. A function, and so on, of the external control terminal CLE will be mentioned later. A signal that has been inputted via the external control terminal ALE (for example, an address latch enable signal) is employed in use of the address register ADR, and so on. A function, and so on, of the external control terminal ALE will be mentioned later. A signal that has been inputted via the external control terminal /WE (for example, a write enable signal) is employed in input of data from the controller die CD to the memory die MD, and so on. A function, and so on, of the external control terminal /WE will be mentioned later. Signals that have been inputted via the external control terminals/RE, RE (for example, a read enable signal and complementary signal thereof) are employed in output of data via the data signal input/output terminals DQ0-DQ7. Data to be outputted from the data signal input/output terminals DQ0-DQ7is switched at a timing of a falling edge of voltage (switching of input signal) of the external control terminal /RE and rising edge of voltage (switching of input signal) of the external control terminal RE and a timing of a rising edge of voltage (switching of input signal) of the external control terminal /RE and falling edge of voltage (switching of input signal) of the external control terminal RE. A signal that has been inputted via the external control terminal /WP (for example, a write protect signal) is employed in restriction of input of user data Dat from the controller die CD to the memory die MD, and so on. [Operating Mode MODEa and Operating Mode MODEb] The semiconductor memory device according to the present embodiment is capable of being operated in operating mode MODEa and operating mode MODEb. Operating mode MODEa and operating mode MODEb will be described below with reference toFIGS.9to19. 
[Roles of External Terminals in Each Mode] FIG.9is a schematic view for explaining roles of the signal input/output terminals and the external control terminals in operating mode MODEa.FIG.10is a schematic view for explaining roles of the signal input/output terminals and the external control terminals in operating mode MODEb. Note that in the description below, the data signal input/output terminals DQ0-DQ7will sometimes be notated as data signal input/output terminals DQ<7:0>. In operating mode MODEa, as shown inFIG.9, for example, the data signal input/output terminals DQ<7:0> are used in input of command data Cmd and address data Add, as well as in input/output of user data Dat. On the other hand, in operating mode MODEb, as shown inFIG.10, for example, although the data signal input/output terminals DQ<7:0> are used in input/output of user data Dat, they are not used in input of command data Cmd and address data Add. In operating mode MODEb, the external control terminals CLE, ALE are used in input of command data Cmd and address data Add. [Roles of External Terminals in Operating Mode MODEa] FIG.11is a truth table for explaining roles of the external terminals in operating mode MODEa. Note that inFIG.11, “Z” indicates a case where either of “H” and “L” may be inputted. “X” indicates a case where an inputted signal is fixed at “H” or “L”. “Input” indicates a case where input of data is performed. “Output” indicates a case where output of data is performed. When command data Cmd is inputted in operating mode MODEa, the controller die CD raises the external control terminal /WE from “L” to “H” in a state where voltages of the data signal input/output terminals DQ<7:0> have been set to “H” or “L” depending on each of bits of the 8-bit command data Cmd, the external control terminal CLE has been inputted with “H”, and the external control terminal ALE has been inputted with “L”, for example. When the external control terminals CLE, ALE are being inputted with “H, L”, data that has been inputted via the data signal input/output terminals DQ<7:0> is stored in a buffer memory in the input/output control circuit I/O as command data Cmd, and transferred to the command register CMR (FIG.4). Moreover, when address data Add is inputted, the controller die CD raises the external control terminal /WE from “L” to “H” in a state where voltages of the data signal input/output terminals DQ<7:0> have been set to “H” or “L” depending on each of bits of 8-bit data configuring the address data Add, the external control terminal CLE has been inputted with “L”, and the external control terminal ALE has been inputted with “H”, for example. When the external control terminals CLE, ALE are being inputted with “L, H”, data that has been inputted via the data signal input/output terminals DQ<7:0> is stored in the buffer memory in the input/output control circuit I/O as address data Add, and transferred to the address register ADR (FIG.4). Moreover, when user data Dat is inputted, the controller die CD switches (toggles) input signals of the data strobe signal input/output terminals DQS, /DQS in a state where voltages of the data signal input/output terminals DQ<7:0> have been set to “H” or “L” depending on each of bits of 8-bit data configuring the user data Dat, the external control terminal CLE has been inputted with “L”, and the external control terminal ALE has been inputted with “L”, for example. 
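To summarize the truth table ofFIG.11in executable form, the following Python sketch routes a byte presented on DQ<7:0> according to the CLE/ALE pair. The routing function and register containers are illustrative stand-ins, not the actual logic circuit, and user data Dat is in fact latched by DQS toggles rather than by /WE, although it routes the same way.

    def route_mode_a(cle, ale, dq_byte, regs):
        # regs holds illustrative stand-ins for the destination registers.
        if cle and not ale:
            regs["CMR"].append(dq_byte)   # command data Cmd -> command register CMR
        elif ale and not cle:
            regs["ADR"].append(dq_byte)   # address data Add -> address register ADR
        elif not cle and not ale:
            regs["CM"].append(dq_byte)    # user data Dat -> cache memory CM

    regs = {"CMR": [], "ADR": [], "CM": []}
    route_mode_a(1, 0, 0x00, regs)        # one command cycle ("00h")
    route_mode_a(0, 1, 0x12, regs)        # one address cycle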
When the external control terminals CLE, ALE are both being inputted with “L”, data that has been inputted via the data signal input/output terminals DQ<7:0> is stored in the buffer memory in the input/output control circuit I/O as user data Dat, and transferred to the cache memory CM (FIG.4) via the bus wiring DB. Moreover, when user data Dat or status data Stt is outputted, the controller die CD switches (toggles) input signals of the external control terminals/RE, RE, for example. As a result, the eight bits of outputted user data Dat or status data Stt are outputted to the data signal input/output terminals DQ<7:0>. In addition, output signals of the data strobe signal input/output terminals DQS, /DQS are switched. Moreover, when the memory die MD is set to a standby state, the controller die CD inputs “H” to the external control terminal /CE, for example. Moreover, when the memory die MD is set to a bus idle state, the controller die CD inputs “H” to the external control terminal /WE, for example. [Roles of External Terminals in Operating Mode MODEb] FIGS.12and13are truth tables for explaining roles of the external terminals in operating mode MODEb. Note that inFIGS.12and13, “Z” indicates a case where either of “H” and “L” may be inputted. “X” indicates a case where an inputted signal is fixed at “H” or “L”. “Input” indicates a case where input of data is performed. “Output” indicates a case where output of data is performed. As mentioned above, in operating mode MODEb, the external control terminals CLE, ALE are used in input of command data Cmd and address data Add. Now, as will be mentioned later with reference toFIG.15, when input of command data Cmd or address data Add is performed in operating mode MODEb, the controller die CD inputs the memory die MD with a signal indicating whether next-to-be-inputted data will be command data Cmd or will be address data Add. Hereafter, such a signal will be called an input/output data select signal. FIG.12shows roles of the external control terminals in a period MSel (FIG.15) when the input/output data select signal is inputted.FIG.13shows roles of the external control terminals in a period S_In (FIG.15) after input of the input/output data select signal. In the period MSel, when an input/output data select signal to the effect that command data Cmd is to be inputted is inputted, the controller die CD raises the external control terminal /WE from “L” to “H” in a state where the external control terminal CLE has been inputted with “H”, and the external control terminal ALE has been inputted with “L”, for example. In the period MSel, when the external control terminal CLE has been inputted with “H” and the external control terminal ALE has been inputted with “L”, data inputted in the period S_In immediately after this period MSel is stored in the buffer memory in the input/output control circuit I/O as command data Cmd, and transferred to the command register CMR (FIG.4). Moreover, in period MSel, when an input/output data select signal to the effect that address data Add is to be inputted is inputted, the controller die CD raises the external control terminal /WE from “L” to “H” in a state where the external control terminal CLE has been inputted with “L”, and the external control terminal ALE has been inputted with “H”, for example.
In the period MSel, when the external control terminal CLE has been inputted with “L” and the external control terminal ALE has been inputted with “H”, data inputted in the period S_In immediately after this period MSel is stored in the buffer memory in the input/output control circuit I/O as address data Add, and transferred to the address register ADR (FIG.4). In the period S_In, when command data Cmd or address data Add is inputted, the controller die CD sets voltages of the external control terminals CLE, ALE to “H” or “L” depending on each of bits of 2-bit data configuring the command data Cmd or address data Add, and raises the external control terminal /WE from “L” to “H”, for example. Note that when user data Dat is inputted in operating mode MODEb, the controller die CD switches input signals of the data strobe signal input/output terminals DQS, /DQS in a state where voltages of the data signal input/output terminals DQ<7:0> have been set to “H” or “L” depending on each of bits of 8-bit data configuring the user data Dat, and the external control terminals/RE, RE have been inputted with “H, L”, for example. This operation is executable both in the period MSel and in the period S_In. In operating mode MODEb, data that has been inputted via the data signal input/output terminals DQ<7:0> is stored in the buffer memory in the input/output control circuit I/O as user data Dat, and transferred to the cache memory CM via the bus wiring DB. Moreover, when user data Dat or status data Stt is outputted, the controller die CD switches input signals of the external control terminals/RE, RE, for example. As a result, the eight bits of outputted user data Dat or status data Stt are outputted to the data signal input/output terminals DQ<7:0>. In addition, output signals of the data strobe signal input/output terminals DQS, /DQS are switched. This operation is executable both in the period MSel and in the period S_In. Moreover, when the memory die MD is set to a standby state, the controller die CD inputs “H” to the external control terminal /CE, for example. Moreover, when the memory die MD is set to a bus idle state, the controller die CD inputs “H” to the external control terminal /WE, for example. [Examples of Signal Input/Output in Each Mode] FIGS.14and15are schematic waveform diagrams for explaining operation of the memory die MD according to the first embodiment. FIG.14shows waveforms when command data Cmd and address data Add are inputted in operating mode MODEa. In the example ofFIG.14, at timing t101, the controller die CD is inputting the memory die MD with command data Cmd. Moreover, at timing t102, the controller die CD is inputting the memory die MD with address data Add. Note that although in the example illustrated, five cycles of the 8-bit data configuring the address data Add are being inputted from timing t102to timing t103, the number of cycles may be fewer than or more than five. In addition, at timing t103, the controller die CD is inputting the memory die MD with command data Cmd. Moreover, at timing t104, an operation such as a read operation is started, and voltage of the terminal RY//BY falls from “H” to “L”. FIG.15shows waveforms when command data Cmd and address data Add are inputted in operating mode MODEb. In the example ofFIG.15, the external control terminal /WE is inputted with “L” and “H” at substantially a constant pace. 
Moreover, a period from when, at a certain timing, an input signal of the external control terminal /WE once falls until when it once again falls is indicated as the above-mentioned period MSel. Moreover, a period from when the input signal of the external control terminal /WE falls upon completion of the period MSel until when the input signal of the external control terminal /WE has fallen a further four times is indicated as the above-mentioned period S_In. In the example ofFIG.15, in period MSel from timing t201to timing t202, the controller die CD is inputting the memory die MD with the input/output data select signal specifying input of command data Cmd. Moreover, in period S_In from timing t202to timing t203, the controller die CD is inputting the memory die MD with the command data Cmd. Now, in the example ofFIG.15, in the period S_In, the controller die CD is inputting the memory die MD with 8-bit command data Cmd two bits at a time divided into four cycles. For example, the 8-bit command data Cmd is assumed to be bits “7” to “0”. First, in a first cycle of data input, the external control terminal /WE is raised from “L” to “H” in a state where voltages of the external control terminals CLE, ALE have been set to “H” or “L” depending on the bits “7” and “6”. Similarly, in second through fourth cycles of data input too, the external control terminal /WE is raised from “L” to “H” in a state where voltages of the external control terminals CLE, ALE have been set to “H” or “L” depending on, respectively, the bits “5” and “4”, the bits “3” and “2”, and the bits “1” and “0”. Moreover, in period MSel from timing t203to timing t204, the controller die CD is inputting the memory die MD with the input/output data select signal specifying input of address data Add. Moreover, in period S_In from timing t204to timing t205, the controller die CD is inputting the memory die MD with the address data Add. Now, in the example ofFIG.15, in the period S_In, the controller die CD is inputting the memory die MD with 8-bit data configuring the address data Add two bits at a time divided into four cycles. Note that similarly, from timing t205to timing t206too, data configuring address data Add is inputted two bits at a time, although illustration of this is omitted. Moreover, in period MSel from timing t206to timing t207, similarly to from timing t201to timing t202, the input/output data select signal specifying input of command data Cmd is being inputted. Moreover, in period S_In from timing t207to timing t208, the controller die CD is inputting the memory die MD with the command data Cmd. Moreover, at the timing t208, an operation such as a read operation is started, and voltage of the terminal RY//BY falls from “H” to “L”. [Operation] Next, operation of the memory die MD will be described. The memory die MD is configured to execute a read operation. The read operation is an operation in which user data Dat is read from the memory cell array MCA by the sense amplifier module SAM, and the read user data Dat is transferred to the latch circuit XDL. In the read operation, the user data Dat that has been read from the memory cell array MCA is transferred to the latch circuit XDL via the bit lines BL and the sense amplifier module SAM. Moreover, the memory die MD is configured to execute a data-out operation. The data-out operation is an operation in which user data Dat included in the latch circuit XDL is outputted to the controller die CD. 
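The two-bits-per-cycle transfer described above with reference toFIG.15, with bits “7” and “6” in the first cycle down to bits “1” and “0” in the fourth, corresponds to the following Python sketch. The assignment of the higher bit of each pair to CLE and the lower bit to ALE is an assumption made for illustration.

    def split_for_mode_b(byte):
        cycles = []
        for shift in (6, 4, 2, 0):                    # bits 7..0, two at a time
            two = (byte >> shift) & 0b11
            cycles.append(((two >> 1) & 1, two & 1))  # (CLE, ALE) per /WE rise
        return cycles

    def reassemble(cycles):                           # inverse, as the deserializer does
        byte = 0
        for cle, ale in cycles:
            byte = (byte << 2) | (cle << 1) | ale
        return byte

    assert reassemble(split_for_mode_b(0x05)) == 0x05  # command data "05h" survives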
In the data-out operation, the user data Dat included in the latch circuit XDL is outputted to the controller die CD via the column decoder COLD, multiplexer MPX, bus wiring DB, and input/output control circuit I/O described with reference toFIG.7. Moreover, the memory die MD is configured to execute a status-read. The status-read is an operation in which status data Stt included in the status register STR is outputted to the controller die CD. In the status-read, the status data Stt included in the status register STR is outputted to the controller die CD via the input/output control circuit I/O or the logic circuit CTR. [Read Operation and Data-Out Operation in Operating Mode MODEa] FIG.16is a schematic timing chart showing a situation when the read operation and the data-out operation are executed in operating mode MODEa. In the example ofFIG.16, the memory die MD is set to operating mode MODEa. In the example ofFIG.16, first, command data “00h”, address data Add, and command data “30h” are sequentially inputted via the data signal input/output terminals DQ<7:0>. The command data “00h” is command data Cmd inputted at the start of a command set instructing the read operation. The command data “30h” is command data Cmd inputted at the end of a command set instructing the read operation. Due to input of the command data “00h”, the address data Add, and the command data “30h”, the read operation is started, and voltage of the terminal RY//BY falls from “H” to “L”. In addition, user data Dat is transferred to the latch circuit XDL. Moreover, at a timing when the read operation has ended, the voltage of the terminal RY//BY rises from “L” to “H”. Next, command data “05h”, address data Add, and command data “E0h” are sequentially inputted via the data signal input/output terminals DQ<7:0>. The command data “05h” is command data Cmd inputted at the start of a command set instructing the data-out operation. The command data “E0h” is command data Cmd inputted at the end of a command set instructing the data-out operation. Due to input of the command data “05h”, the address data Add, and the command data “E0h”, the controller die CD switches (toggles) input signals of the external control terminals/RE, RE after a certain standby time. As a result, the data-out operation is started, and the user data Dat is outputted via the data signal input/output terminals DQ. FIG.17is a schematic timing chart showing another situation when the read operation and the data-out operation are executed in operating mode MODEa. In the example ofFIG.17, the memory die MD is set to operating mode MODEa. In the example ofFIG.17, first, command data “00h”, address data Add, and command data “30h” are sequentially inputted via the data signal input/output terminals DQ<7:0>. The address data Add included in this command set includes information of the plane PLN0(FIG.4) to be targeted for the read operation, as the above-described plane address. Due to input of the command data “00h”, the address data Add, and the command data “30h”, the read operation is started on the plane PLN0, and the user data Dat is transferred to the latch circuit XDL0. Next, command data “00h”, address data Add, and command data “30h” are sequentially inputted via the data signal input/output terminals DQ<7:0>. The address data Add included in this command set includes information of the plane PLN1(FIG.4) to be targeted for the read operation, as the above-described plane address. 
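The command sequencing ofFIG.16, a read followed by a data-out in operating mode MODEa, can be summarized from the controller side with the following Python sketch. The phase names and the five-cycle address format are illustrative assumptions only.

    # Builds the bus transaction list for a read followed by a data-out.
    def read_then_data_out(address_cycles, length):
        ops = [("CMD", 0x00)]                       # start of the read command set
        ops += [("ADDR", a) for a in address_cycles]
        ops += [("CMD", 0x30)]                      # read starts; RY//BY falls to "L"
        ops += [("WAIT_READY", None)]               # wait until RY//BY returns to "H"
        ops += [("CMD", 0x05)]                      # start of the data-out command set
        ops += [("ADDR", a) for a in address_cycles]
        ops += [("CMD", 0xE0)]                      # end of the data-out command set
        ops += [("RE_TOGGLE", None)] * length       # one byte out per /RE, RE toggle
        return ops

    for phase, value in read_then_data_out([0x00] * 5, 4):
        print(phase, value)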
Due to input of the command data “00h”, the address data Add, and the command data “30h”, the read operation is started on the plane PLN1, and the user data Dat is transferred to the latch circuit XDL1. Next, command data “70h” is inputted via the data signal input/output terminals DQ<7:0>. The command data “70h” is command data instructing the status-read. Due to input of the command data “70h”, the status-read is performed, and status data Stt is outputted via the data signal input/output terminals DQ<7:0>. Next, command data “05h”, address data Add, and command data “E0h” are sequentially inputted via the data signal input/output terminals DQ<7:0>. The address data Add included in this command set includes information of the plane PLN0(FIG.4) to be targeted for the data-out operation, as the above-described plane address. Due to input of the command data “05h”, the address data Add, and the command data “E0h”, the controller die CD switches (toggles) input signals of the external control terminals/RE, RE after a certain standby time. As a result, the data-out operation is started on the plane PLN0, and user data “DataOut” is outputted via the data signal input/output terminals DQ<7:0>. After completion of the data-out operation on the plane PLN0, command data “70h” is inputted via the data signal input/output terminals DQ<7:0>. Due to input of the command data “70h”, the status-read is performed again, and status data Stt is outputted via the data signal input/output terminals DQ<7:0>. Next, similarly to in the data-out operation on the plane PLN0, command data “05h”, address data Add, and command data “E0h” are sequentially inputted via the data signal input/output terminals DQ<7:0>. The address data Add included in this command set includes information of the plane PLN1(FIG.4) to be targeted for the data-out operation, as the above-described plane address. After a certain time has elapsed, the controller die CD switches (toggles) input signals of the external control terminals/RE, RE. As a result, the data-out operation is started on the plane PLN1, and user data “DataOut” is outputted via the data signal input/output terminals DQ<7:0>. [Read Operation and Data-Out Operation in Operating Mode MODEb] FIG.18is a schematic timing chart showing a situation when the read operation and the data-out operation are executed in operating mode MODEb. In the example ofFIG.18, the memory die MD is set to operating mode MODEb. In the example ofFIG.18, first, a command set including the command data “00h” is inputted via the external control terminals CLE, ALE. Next, a command set including the command data “05h” is inputted via the external control terminals CLE, ALE. Note that in operating mode MODEb, input/output of data via the data signal input/output terminals DQ<7:0> and input/output of data via the external control terminals CLE, ALE are executable at independent timings. For example, in the example ofFIG.18, input of these command sets is performed during execution of the data-out operation (during a period when input signals of the external control terminals/RE, RE are toggled). FIG.19is a schematic timing chart showing another situation when the read operation and the data-out operation are executed in operating mode MODEb. In the example ofFIG.19, the memory die MD is set to operating mode MODEb. In the example ofFIG.19, first, command data “00h”, address data Add, and command data “30h” are sequentially inputted via the external control terminals CLE, ALE. 
The address data Add included in this command set includes information of the plane PLN0(FIG.4) to be targeted for the read operation, as the above-described plane address. Next, command data “00h”, address data Add, and command data “30h” are sequentially inputted via the external control terminals CLE, ALE. The address data Add included in this command set includes information of the plane PLN1(FIG.4) to be targeted for the read operation, as the above-described plane address. Next, command data “70h” is inputted via the external control terminals CLE, ALE. Due to input of the command data “70h”, the status-read is performed, and status data Stt is outputted via the data signal input/output terminals DQ<7:0>. Next, command data “05h”, address data Add, and command data “E0h” are sequentially inputted via the external control terminals CLE, ALE. This address data Add includes information of the plane PLN0(FIG.4) to be targeted for the data-out operation, as the above-described plane address. After a certain standby time, the data-out operation is started on the plane PLN0, and user data “DataOut” is outputted via the data signal input/output terminals DQ<7:0>. Moreover, in the example ofFIG.19, command data “70h” is inputted via the external control terminals CLE, ALE while the data-out operation on the plane PLN0is being performed. Due to input of the command data “70h”, the status-read is performed. After completion of the data-out operation on the plane PLN0, status data Stt is outputted via the data signal input/output terminals DQ<7:0>. Next, command data “05h”, address data Add, and command data “E0h” are sequentially inputted via the external control terminals CLE, ALE. This address data Add includes the likes of an address of the plane PLN1(FIG.4) to be targeted for the data-out operation, as the above-described plane address. Now, in operating mode MODEb, unlike in operating mode MODEa, the data output timing adjustment unit TCT (FIG.4) adjusts the timing of start of the data-out operation on the plane PLN1. After completion of the data-out operation on the plane PLN0, the data-out operation is started on the plane PLN1and user data “DataOut” is outputted via the data signal input/output terminals DQ<7:0>, in response to an internal signal generated by the data output timing adjustment unit TCT. Advantages The semiconductor memory device according to the present embodiment is capable of being operated in operating mode MODEb. In operating mode MODEb, as mentioned above, input of command data Cmd and address data Add can be performed via the external control terminals CLE, ALE even while the data-out operation via the data signal input/output terminals DQ<7:0> is being performed. Hence, time required for input of the command set to the memory die MD can be significantly reduced. As a result, speeding-up of operation of the semiconductor memory device can be realized. [Circuits Applicable to Memory Die MD in First Embodiment] In the memory die MD according to the first embodiment, functions of the data signal input/output terminals DQ<7:0>, the external control terminals CLE, ALE, and so on, change according to which of operating modes MODEa, MODEb is selected. Such functions may be realized by circuits of the kinds shown inFIGS.20,22, and23, for example.FIGS.20,22, and23are schematic circuit diagrams showing configuration examples of parts of the memory die MD.FIG.21is a schematic waveform diagram for explaining an operating method of the circuit shown inFIG.20.
FIG.20illustrates: the data signal input/output terminals DQ<7:0>; the external control terminals CLE, ALE, /WE; and a circuit unit200connected to these data signal input/output terminals DQ<7:0> and external control terminals CLE, ALE, /WE. The circuit unit200includes a latch circuit210, multiplexers220,230, and a deserializer300, for example. The latch circuit210is a latch circuit included in the command register CMR or address register ADR. InFIG.20, a latch circuit210corresponding to command data “05h” is exemplified as the latch circuit210. In the example illustrated, the latch circuit210stores 1-bit data correspondingly to inputted command data Cmd. The latch circuit210has its data input terminal connected to output terminals DINh<7:0>, CLEh, ALEh of the multiplexer220via a logic circuit, and has its clock input terminal connected to an output terminal /WEh′ of the multiplexer230. Select control terminals of each of the multiplexers220,230are inputted with a select signal SerialCA. The select signal SerialCA will be in a “0” state when operating mode MODEa is selected, and will be in a “1” state when operating mode MODEb is selected. The multiplexer220comprises the 10 output terminals DINh<7:0>, CLEh, ALEh. Of these 10 output terminals, the eight output terminals DINh<7:0> correspond to data configuring command data Cmd or address data Add. Moreover, the remaining two output terminals CLEh, ALEh correspond to input signals of the external control terminals CLE, ALE. In addition, the multiplexer220comprises: 10 input terminals selected when the select signal SerialCA is in a “0” state; and 10 input terminals selected when the select signal SerialCA is in a “1” state. Eight of the 10 input terminals corresponding to a “0” state are connected to the data signal input/output terminals DQ<7:0>. The remaining two are connected to the external control terminals CLE, ALE. The 10 input terminals corresponding to a “1” state are connected to output terminals of the deserializer300. The multiplexer230comprises the one output terminal /WEh′. In addition, the multiplexer230comprises: one input terminal /WEh selected when the select signal SerialCA is in a “1” state; and one input terminal selected when the select signal SerialCA is in a “0” state. The input terminal /WEh corresponding to a “1” state is connected to an output terminal of the deserializer300. The input terminal corresponding to a “0” state is connected to the external control terminal /WE. The deserializer300comprises 10 output terminals connected to the multiplexer220. The deserializer300converts data that has been inputted two bits at a time over four cycles from the external control terminals CLE, ALE into 8-bit data, adds two bits of data indicating whether this 8-bit data is command data Cmd or is address data Add, and thereby generates 10-bit data. Moreover, the deserializer300outputs this 10-bit data to the multiplexer220via the 10 output terminals. This 10-bit data may be switched at a timing of start of period MSel, for example. In addition, the deserializer300comprises one output terminal connected to the multiplexer230. The deserializer300outputs “L” to the input terminal /WEh of the multiplexer230during a period from when a first cycle of data, of five cycles of data inputted from the external control terminal /WE, is inputted until the second cycle of data of those five cycles of data is inputted (during period MSel).
Moreover, in a period other than this period (during period S_In), the deserializer300outputs “H” to the input terminal /WEh of the multiplexer230. In operating mode MODEa, 8-bit data that has been inputted via the data signal input/output terminals DQ<7:0> is inputted to the logic circuit via the output terminals DINh<7:0> of the multiplexer220. Moreover, enable signals that have been inputted via the external control terminals CLE, ALE are inputted to the logic circuit via the output terminals CLEh, ALEh of the multiplexer220. In the example illustrated, in the case that the 8-bit data that has been inputted via the data signal input/output terminals DQ<7:0> is command data “05h” and the input signals of the external control terminals CLE, ALE are “H, L”, the output signal of the logic circuit will be “H”. In other cases, the output signal of the logic circuit will be “L”. Moreover, in operating mode MODEa, a signal that has been inputted from the external control terminal /WE is outputted from the output terminal /WEh′ of the multiplexer230, and inputted to the clock input terminal of the latch circuit210. In operating mode MODEb, four cycles of two-bit data that have been inputted via the external control terminals CLE, ALE and an enable signal that has been inputted via the external control terminal /WE are converted to an 8-bit data signal and enable signals by the deserializer300and inputted to the input terminals of the multiplexer220. These data and signals are inputted to the logic circuit via the output terminals DINh<7:0>, CLEh, ALEh of the multiplexer220. In the example illustrated, in the case that “H, L” have been inputted from the external control terminals CLE, ALE in the period MSel and command data “05h” has been inputted from the external control terminals CLE, ALE in the period S_In, the output signal of the logic circuit will be “H”. In other cases, the output signal of the logic circuit will be “L”. Moreover, in operating mode MODEb, a signal that has been inputted to the input terminal /WEh of the multiplexer230is outputted from the output terminal /WEh′ of the multiplexer230, and inputted to the clock input terminal of the latch circuit210. FIGS.22and23are schematic circuit diagrams showing parts of the deserializer300. The deserializer300includes: a circuit unit310of the kind shown inFIG.22; and a circuit unit320of the kind shown inFIG.23. The circuit unit310comprises: five D flip-flops311; and one D latch circuit312. An output terminal of the first D flip-flop311is connected to a data input terminal of the second D flip-flop311. Similarly, output terminals of the second through fourth D flip-flops311are connected to data input terminals of the third through fifth D flip-flops311. An output terminal of the fifth D flip-flop311is connected to a data input terminal of the D latch circuit312. An output terminal of the D latch circuit312is connected to a data input terminal of the first D flip-flop311. Moreover, clock input terminals of these five D flip-flops311and one D latch circuit312are connected to the external control terminal /WE. In addition, the circuit unit310comprises: five D latch circuits313; and five AND circuits314. Data input terminals of the five D latch circuits313are respectively connected to the output terminals of the five D flip-flops311. Moreover, clock input terminals of the five D latch circuits313are inputted with an inverted signal of the external control terminal /WE.
Ones of pairs of input terminals of the five AND circuits314are respectively connected to output terminals of the five D latch circuits313. The others of pairs of input terminals of the five AND circuits314are each connected to the external control terminal /WE. Note that in the example ofFIG.22, output terminals of four of these five AND circuits314are indicated as output terminals WE1-WE4. The remaining one output terminal is connected to the input terminal /WEh of the above-described multiplexer230(FIG.20). Now, initial values of data stored in the five D flip-flops311are assumed to be 0, and an initial value of data stored in the D latch circuit312is assumed to be 1. In such a case, if the external control terminal /WE is inputted with “L” and “H” at substantially a constant pace, then due to a first cycle of input of the external control terminal /WE, a signal of the output terminal WE1will attain an “H” state, and signals of the output terminals WE2, WE3, WE4will attain an “L” state. Moreover, due to a second cycle of input, the signal of the output terminal WE2will attain an “H” state, and the signals of the output terminals WE1, WE3, WE4will attain an “L” state. Moreover, due to a third cycle of input, the signal of the output terminal WE3will attain an “H” state, and the signals of the output terminals WE1, WE2, WE4will attain an “L” state. Moreover, due to a fourth cycle of input, the signal of the output terminal WE4will attain an “H” state, and the signals of the output terminals WE1, WE2, WE3will attain an “L” state. The circuit unit320comprises two each of D latch circuits321-324. Data input terminals of ones of these twos of D latch circuits321-324are connected to the external control terminal CLE. Data input terminals of the others of these twos of D latch circuits321-324are connected to the external control terminal ALE. Moreover, clock input terminals of the two D latch circuits321are connected to the output terminal WE1of the AND circuit314. Similarly, clock input terminals of the D latch circuits322,323,324are respectively connected to the output terminals WE2, WE3, WE4of the AND circuit314. A first cycle of data of the external control terminals CLE, ALE is stored in the two D latch circuits321. A second cycle of data of the external control terminals CLE, ALE is stored in the two D latch circuits322. A third cycle of data of the external control terminals CLE, ALE is stored in the two D latch circuits323. A fourth cycle of data of the external control terminals CLE, ALE is stored in the two D latch circuits324. Second Embodiment Next, a semiconductor memory device according to a second embodiment will be described with reference toFIGS.24and25.FIG.24is a schematic block diagram showing a configuration of a memory die MD2according to the second embodiment.FIG.25is a schematic circuit diagram showing a configuration of part of the memory die MD2. For convenience of description, some configurations are omitted inFIGS.24and25. As shown inFIGS.24and25, the semiconductor memory device according to the present embodiment is basically configured similarly to the semiconductor memory device according to the first embodiment. However, the semiconductor memory device according to the second embodiment is capable of outputting status data Stt via the external control terminals CLE, ALE. As shown inFIG.25, the external control terminals CLE, ALE according to the second embodiment are connected to the input circuit201and the output circuit202. 
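Returning briefly to the deserializer300of the first embodiment, one plausible behavioral reading of the one-hot ring in circuit unit310is the following Python sketch, in which a single “1” circulates through the five stages on successive /WE cycles and asserts WE1-WE4 in turn. The transparent-latch timing of the D latch circuit312is simplified to a registered update here.

    # Behavioral sketch of circuit unit310 (simplified; not the actual circuit).
    def ring_states(cycles):
        ff = [0, 0, 0, 0, 0]                 # five D flip-flops 311, initially 0
        dl = 1                               # D latch circuit 312, initially 1
        states = []
        for _ in range(cycles):
            ff = [dl] + ff[:-1]              # shift one stage on the /WE clock
            dl = ff[-1]                      # fifth stage feeds the D latch back
            states.append(tuple(ff[:4]))     # outputs WE1-WE4 (one-hot)
        return states

    # Four cycles assert WE1, WE2, WE3, WE4 one after another:
    print(ring_states(4))   # [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]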
In addition, the semiconductor memory device according to the present embodiment is capable of being operated in operating mode MODEc, as well as in operating mode MODEa and operating mode MODEb. FIGS.26and27are truth tables for explaining roles of the external terminals in operating mode MODEc.FIG.26shows roles of the external control terminals in period MSel.FIG.27shows roles of the external control terminals in periods S_In, S_Out. Operation of the memory die MD2in operating mode MODEc is basically similar to operation of the memory die MD in operating mode MODEb. However, in operating mode MODEc, not only can address data Add and command data Cmd be inputted via the external control terminals CLE, ALE, but it is also possible for status data Stt to be outputted via the external control terminals CLE, ALE. As shown inFIG.26, in the period MSel of operating mode MODEc, when an input/output data select signal to the effect that status data Stt is to be outputted is inputted, the controller die CD raises the external control terminal /WE from “L” to “H” in a state where the external control terminal CLE has been inputted with “L”, and the external control terminal ALE has been inputted with “L”, for example. Moreover, as shown inFIG.27, in period S_Out of operating mode MODEc, when status data Stt is outputted, the controller die CD lowers an input signal of the external control terminal /WE, for example. As a result, two bits of the status data Stt are outputted from the external control terminals CLE, ALE to the controller die CD by the output circuit202. FIG.28is a schematic waveform diagram for explaining operation of the memory die MD2according to the second embodiment.FIG.28shows waveforms when command data Cmd and address data Add are inputted in operating mode MODEc. In the example ofFIG.28, the external control terminal /WE is inputted with “L” and “H” at substantially a constant pace. Moreover, a period from when, at a certain timing, an input signal of the external control terminal /WE once falls until when it once again falls is indicated as the above-mentioned period MSel. Moreover, a period from when the input signal of the external control terminal /WE falls upon completion of the period MSel until when the input signal of the external control terminal /WE has fallen four times is indicated as the period S_In or period S_Out. In operating mode MODEc, when, in the period MSel, the controller die CD has inputted the memory die MD2with an input/output data select signal specifying input of command data Cmd or address data Add, a period immediately thereafter will be period S_In. On the other hand, when, in the period MSel, the controller die CD has inputted the memory die MD2with an input/output data select signal to the effect that status data Stt is to be outputted, a period immediately thereafter will be period S_Out. Status data Stt outputted in the period S_Out may be 8-bit data similar to the status data Stt outputted when the status-read is executed in operating mode MODEa or operating mode MODEb, for example. In such a case, the status data Stt may be outputted two bits at a time divided into four cycles. FIG.29is a schematic timing chart showing a situation when the read operation and the data-out operation are executed in operating mode MODEc. In the example ofFIG.29, the memory die MD2is set to operating mode MODEc. Operation exemplified inFIG.29is basically similar to operation described with reference toFIG.19.
However, in the example ofFIG.29, the external control terminals CLE, ALE are inputted with “L, L” during execution of the status-read. Moreover, the status data Stt is outputted from the external control terminals CLE, ALE, not the data signal input/output terminals DQ<7:0>. Moreover, while the data-out operation on the plane PLN0is being performed, the status-read and output of the status data Stt are performed, and, furthermore, input of a command set to the effect that the data-out operation on the plane PLN1is to be executed, is started. Note that the output circuits202(FIG.25) connected to the external control terminals CLE, ALE are driven in operating mode MODEc. These output circuits202need not be driven in operating modes MODEa, MODEb. [Circuit Applicable to Memory Die MD2in Second Embodiment] In the memory die MD2according to the second embodiment, when operating mode MODEc has been selected, 8-bit status data Stt is outputted converted into four cycles of 2-bit data. Such a function may be realized by a circuit of the kind shown inFIG.30, for example.FIG.30is a schematic circuit diagram showing a configuration example of part of the memory die MD2. The circuit shown inFIG.30comprises: a serializer331; and two switch circuits332. The serializer331comprises: eight first input terminals; and one second input terminal. The first input terminals are each inputted with one bit of 8-bit data FDATA<7:0> configuring the 8-bit status data Stt. The second input terminal is connected to the external control terminal /WE. The serializer331converts the 8-bit data FDATA<7:0> into 2-bit data FDATA2<1:0> and sequentially outputs the converted data over four cycles, in response to input signals of the external control terminal /WE. The two switch circuits332are respectively provided correspondingly to the external control terminals CLE, ALE. Output terminals of the switch circuits332are connected to the external control terminal CLE or the external control terminal ALE. Input terminals of the switch circuits332are connected to an output terminal of the serializer331. The switch circuit332outputs an input signal in response to input of a gate signal S332. The gate signal S332may be in an “H” state when, for example, the external control terminal /WE is in an “L” state, it is the first cycle of period S_Out (FIG.28), the external control terminals CLE, ALE are inputted with “L, L” in period MSel, operating mode MODEc is selected, and the memory die MD2is selected. Third Embodiment Next, a semiconductor memory device according to a third embodiment will be described with reference toFIG.31.FIG.31is a schematic block diagram showing a configuration of a memory die MD3according to the third embodiment. For convenience of description, some configurations are omitted inFIG.31. As shown inFIG.31, the semiconductor memory device according to the present embodiment is basically configured similarly to the semiconductor memory device according to the second embodiment. However, as shown inFIG.31, the input/output control circuit I/O according to the present embodiment comprises a compression-decompression circuit C10. The compression-decompression circuit C10extracts required information from status data Stt in the status register STR, and outputs the extracted information. In the third embodiment, status data Stt outputted in period S_Out of operating mode MODEc differs from status data Stt outputted when the status-read has been executed in operating mode MODEa or operating mode MODEb.
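Returning to the serializer331of the second embodiment described above, its conversion of the 8-bit status data into four 2-bit cycles corresponds to the following Python sketch. The most-significant-bits-first ordering is an assumption made for illustration.

    # Sketch of serializer331: FDATA<7:0> is emitted as FDATA2<1:0> over four
    # cycles, paced by /WE; circuits332 then drive the pair onto CLE and ALE.
    def serialize(fdata):
        for shift in (6, 4, 2, 0):
            yield (fdata >> shift) & 0b11

    print([f"{v:02b}" for v in serialize(0xC5)])   # ['11', '00', '01', '01']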
For example, in the present embodiment, status data Stt outputted during period S_Out may be 2-bit data indicating ready/busy states of the two planes PLN0, PLN1included in a selected memory die MD3, as shown inFIG.32, for example. Moreover, when, for example, the memory die MD3includes four or more planes PLN, the status data Stt may be multiple-bit data indicating ready/busy states of the plurality of planes PLN included in the memory die MD3. In such a case, the status data Stt may be outputted two bits at a time divided into multiple cycles. Moreover, in the present embodiment, status data Stt outputted during period S_Out may be multiple-bit data indicating ready/busy states of all of the memory dies MD3controlled by the controller die CD, for example. When, as exemplified inFIGS.2and3, for example, the pluralities of pad electrodes P of eight memory dies MD3are respectively connected to each other via bonding wires B and the eight memory dies MD3are controlled by the controller die CD, the status data Stt may be 8-bit data indicating the ready/busy states of these eight memory dies MD3, for example. In such a case, the status data Stt of each of the memory dies MD3may be outputted two bits at a time divided into four cycles in an order depending on chip address. More specifically, for example, in the first cycle, the first memory die MD3outputs its ready/busy state from the external control terminal CLE, and the second memory die MD3outputs its ready/busy state from the external control terminal ALE. Similarly, in the second cycle, the third memory die MD3outputs its ready/busy state from the external control terminal CLE, and the fourth memory die MD3outputs its ready/busy state from the external control terminal ALE. In the third cycle, the fifth memory die MD3outputs its ready/busy state from the external control terminal CLE, and the sixth memory die MD3outputs its ready/busy state from the external control terminal ALE. In the fourth cycle, the seventh memory die MD3outputs its ready/busy state from the external control terminal CLE, and the eighth memory die MD3outputs its ready/busy state from the external control terminal ALE. Moreover, in this case, the memory dies MD3are each set to a state where their external control terminal ALE or CLE not outputting status data Stt does not accept a signal from outside. Fourth Embodiment Next, a semiconductor memory device according to a fourth embodiment will be described with reference toFIGS.33and34.FIGS.33and34are schematic block diagrams showing configurations of a memory die MD4according to the fourth embodiment. For convenience of description, some configurations are omitted inFIGS.33and34. As shown inFIG.33, the semiconductor memory device according to the present embodiment is basically configured similarly to the semiconductor memory device according to any of the first through third embodiments. However, as shown inFIG.33, the logic circuit CTR according to the fourth embodiment comprises an internal address switching circuit C20. As shown inFIG.34, for example, the internal address switching circuit C20transfers address data Add stored in a region RADR1in the address register ADR to a region RADR2in the address register ADR, in accordance with input of a trigger signal TGR1or a trigger signal TGR2. Note that the region RADR2may be a region storing address data Add corresponding to data inputted/outputted via the data signal input/output terminals DQ<7:0>, for example. 
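As a tabulation of the die-by-die status output just described for the third embodiment, the following Python sketch assigns each memory die MD3to a (cycle, terminal) slot. The die numbering and the polarity of the ready/busy bit are assumptions made for illustration.

    # ready_bits: eight values, one per die, in chip-address order (1 = ready, say).
    def status_schedule(ready_bits):
        return [
            {"cycle": i + 1,
             "CLE": ready_bits[2 * i],        # odd-numbered die (1st, 3rd, 5th, 7th)
             "ALE": ready_bits[2 * i + 1]}    # even-numbered die (2nd, 4th, 6th, 8th)
            for i in range(4)
        ]

    for entry in status_schedule([1, 1, 0, 1, 1, 0, 1, 1]):
        print(entry)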
Moreover, the region RADR1may be a region storing address data Add corresponding to data inputted/outputted via the external control terminals CLE, ALE, for example. Moreover, the semiconductor memory device according to the present embodiment is capable of being operated in operating mode MODEd, as well as in operating modes MODEa, MODEb, MODEc. Sometimes, for example, pluralities of the pad electrodes P of the plurality of memory dies MD are respectively connected to each other via the bonding wires B, as shown inFIGS.2and3. Sometimes, for example, during execution of the data-out operation on one of a plurality of the memory dies MD4whose pad electrodes P have been connected to each other in this way, input of the command set to another memory die MD4is performed. In such a case, if address data is reflected at a timing when input of the command data has ended, there is a risk that the address data will be switched during execution of the data-out operation, and that it will be impossible for the user data Dat to be suitably outputted. Accordingly, in the semiconductor memory device according to the fourth embodiment, the controller die CD detects completion of the data-out operation, and inputs the memory die MD4with the above-described trigger signals TGR1, TGR2. FIG.35is a truth table for explaining roles of the external terminals in operating mode MODEd.FIG.35shows roles of the external control terminals in period MSel. Operation of the memory die MD4in operating mode MODEd is basically similar to the operation of the memory dies MD2, MD3in operating mode MODEc. However, in operating mode MODEd, it is possible for the trigger signals TGR1, TGR2to be inputted in period MSel. When the trigger signal TGR1is inputted, the controller die CD raises the external control terminal /WE from “L” to “H” in a state where the external control terminals CLE, ALE have been inputted with “H”, for example. When the trigger signal TGR2is inputted, the controller die CD lowers the external control terminal /CE from “H” to “L” in a state where the external control terminal /WE has been inputted with “H”, for example. FIGS.36and37are schematic timing charts showing situations when the data-out operation is executed in operating mode MODEd. In the examples ofFIGS.36and37, the memory die MD4is set to operating mode MODEd. In the examples ofFIGS.36and37, first, command data “78h” and address data Add are inputted via the external control terminals CLE, ALE. The command data “78h” is command data instructing the status-read. The address data Add included in this command set includes information of a memory die MD4(LUN0) to be targeted for the data-out operation, as the above-described chip address. Due to input of the command data “78h”, the status-read is performed, and status data Stt is outputted via the external control terminals CLE, ALE. Next, command data “05h”, address data Add, and command data “E0h” are sequentially inputted via the external control terminals CLE, ALE. This address data Add includes information of the memory die MD4(LUN0) to be targeted for the data-out operation, as the above-described chip address. After a certain standby time, the controller die CD switches (toggles) input signals of the external control terminals/RE, RE. As a result, the data-out operation is started on the memory die MD4(LUN0), and user data “DataOut” is outputted via the data signal input/output terminals DQ<7:0>. 
Moreover, in the example ofFIG.36, while the data-out operation on the memory die MD4(LUN0) is being performed, command data “78h” and address data Add are inputted via the external control terminals CLE, ALE. This address data Add includes information of a memory die MD4(LUN1) to be targeted for the data-out operation, as the above-described chip address. Due to input of the command data “78h”, the status-read is performed, and status data Stt is outputted via the external control terminals CLE, ALE. Next, command data “05h”, address data Add, and command data “E0h” are sequentially inputted via the external control terminals CLE, ALE. This address data Add includes the likes of an address of the memory die MD4(LUN1) to be targeted for the data-out operation, as the above-described chip address. Input of these data is also performed while the data-out operation on the memory die MD4(LUN0) is being performed, that is, in a period when the controller die CD switches (toggles) the input signals of the external control terminals/RE, RE. Now, in the case where the pluralities of pad electrodes P of the memory die MD4(LUN0) and the memory die MD4(LUN1) are respectively connected to each other via the bonding wires B as described above, the external control terminals/RE, RE too are respectively connected. Hence, if the input signals of the external control terminals/RE, RE of the memory die MD4(LUN0) are switched (toggled) while the data-out operation on the memory die MD4(LUN0) is being performed, it will result in the input signals of the external control terminals/RE, RE of the memory die MD4(LUN1) also being switched (toggled) while the data-out operation on the memory die MD4(LUN0) is being performed. However, as shown inFIG.34, the internal address switching circuit C20of the logic circuit CTR according to the fourth embodiment will not transfer address data Add stored in the region RADR1in the address register ADR to the region RADR2in the address register ADR unless the trigger signal TGR1or the trigger signal TGR2is inputted. Hence, even after the memory die MD4(LUN1) has been sequentially inputted with command data “05h”, address data Add, and command data “E0h” via the external control terminals CLE, ALE, it will not output user data from the data signal input/output terminals DQ<7:0> unless the trigger signal TGR1or the trigger signal TGR2is inputted, even if the input signals of the external control terminals/RE, RE are switched (toggled). Hence, it can be avoided that user data is simultaneously outputted from the data signal input/output terminals DQ<7:0> of the memory die MD4(LUN0) and the data signal input/output terminals DQ<7:0> of the memory die MD4(LUN1). Next, after completion of the data-out operation on the memory die MD4(LUN0), either of the above-described trigger signals TGR1, TGR2is inputted. Then, the controller die CD switches (toggles) the input signals of the external control terminals/RE, RE. As a result, the data-out operation is started on the memory die MD4(LUN1), and user data “DataOut” is outputted via the data signal input/output terminals DQ<7:0>. Note that in the fourth embodiment, it is possible for the status-read to be executed by a variety of methods. For example, in the fourth embodiment, the status-read may be executed by similar methods to in the memory dies according to any of the first through third embodiments. Moreover, in the fourth embodiment, data S00, S01, S10, S11, S20, S21, S30, S31may be outputted by the status-read, as inFIG.38. 
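The queue-then-commit behavior of the internal address switching circuit C20described above can be modeled loosely as follows. The method names and the string-valued addresses are hypothetical and only illustrate the gating of the data-out operation by the trigger signals TGR1, TGR2.

    # Loose model of address register regions RADR1/RADR2 gated by a trigger.
    class AddressRegister:
        def __init__(self):
            self.radr1 = None                 # region RADR1 (CLE/ALE side)
            self.radr2 = None                 # region RADR2 (DQ<7:0> side)

        def receive(self, add):
            self.radr1 = add                  # queued, not yet effective

        def trigger(self):                    # TGR1 or TGR2 inputted
            self.radr2 = self.radr1           # the address becomes effective now

        def data_out_allowed(self, add):
            return self.radr2 == add          # /RE, RE toggles are ignored otherwise

    adr = AddressRegister()
    adr.receive("LUN1")                       # command set arrives during LUN0's data-out
    assert not adr.data_out_allowed("LUN1")   # still gated; no bus contention
    adr.trigger()                             # after LUN0's data-out completes
    assert adr.data_out_allowed("LUN1")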
These data S00, S01, S10, S11, S20, S21, S30, S31may respectively indicate a ready/busy state of the plane PLN0of a first memory die MD4, a ready/busy state of the plane PLN1of the first memory die MD4, a ready/busy state of the plane PLN0of a second memory die MD4, a ready/busy state of the plane PLN1of the second memory die MD4, a ready/busy state of the plane PLN0of a third memory die MD4, a ready/busy state of the plane PLN1of the third memory die MD4, a ready/busy state of the plane PLN0of a fourth memory die MD4, and a ready/busy state of the plane PLN1of the fourth memory die MD4. Other Embodiments That concludes description of the semiconductor memory devices according to the first through fourth embodiments. However, the above description is merely exemplary, and specific configurations, operations, and so on, may be appropriately adjusted. For example, it is possible too for the configurations, operations, and so on, described above to be used appropriately combined. For example, it is possible too for the memory die to be operated selecting operating mode MODEa, the memory die to be operated selecting operating mode MODEc, and the memory die to be further operated reselecting operating mode MODEa, and so on, as exemplified inFIG.39. Moreover, a configuration may be adopted whereby, for example, the operating mode of the memory die is set to MODEa after power-on, and the operating mode is switched in response to acceptance of the command set, and so on. Moreover, in the description above, in operating modes MODEb, MODEc, MODEd, input/output of 2-bit data utilizing the external control terminals CLE, ALE was performed. However, such a method is merely an exemplification, and a specific method may be appropriately adjusted. For example, in operating modes MODEb, MODEc, MODEd, another terminal (for example, the external control terminal /WP, and so on, described with reference to the likes ofFIG.4), or the like, may be utilized to perform input/output of 3 or more-bit data. Moreover, one or two terminals from among terminals including the external control terminals CLE, ALE may be selected to perform input/output of 1-bit or 2-bit data. Others While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms: furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. | 89,799 |
11861227 | DETAILED DESCRIPTION OF THE EMBODIMENTS Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. Like reference symbols in the drawings may denote like elements, and to the extent that a description of an element has been omitted, it may be understood that the element is at least similar to corresponding elements that are described elsewhere in the specification. FIG.1is a block diagram showing a host-storage system according to some embodiments. A host-storage system10may include a host100and a storage device200. The storage device200may include a storage controller210and a non-volatile memory (NVM)220. The host100may include a host controller110and a host memory120. The host memory120may function as a buffer memory for temporarily storing data to be transmitted to the storage device200or data that is transmitted from the storage device200. The storage device200may include a storage medium for storing data in response to a request from the host100. For example, the storage device200may include at least one of an SSD (Solid State Drive), an embedded memory, and a detachable external memory. If the storage device200is an SSD, the storage device200may be a device that complies with an NVMe (non-volatile memory express) standard. If the storage device200is an embedded memory or an external memory, the storage device200may be a device that complies with a UFS (universal flash storage) or an eMMC (embedded multi-media card) standard. The host100and the storage device200may each generate and transmit packets according to the adopted standard protocol. When the non-volatile memory220of the storage device200includes a flash memory, the flash memory may include a 2D NAND memory array or a 3D (or vertical) NAND (VNAND) memory array. As another example, the storage device200may also include other various types of non-volatile memories. For example, the storage device200may include an MRAM (Magnetic RAM), an STT-MRAM (Spin-Transfer Torque MRAM), a conductive bridging RAM (CBRAM), a FeRAM (Ferroelectric RAM), a PRAM (Phase RAM), a resistive memory (Resistive RAM), and/or other various types of memories. In some embodiments, the host controller110and the host memory120may be implemented as different semiconductor chips. In some embodiments, the host controller110and the host memory120may be integrated on the same semiconductor chip. As an example, the host controller110may be one of a plurality of modules provided in an application processor, and such an application processor may be implemented as a system on chip (SoC). Further, the host memory120may be an embedded memory provided in the application processor, or a non-volatile memory, or a memory module placed outside of the application processor. The host controller110may manage operations including storing the data (e.g., write data) of a buffer region in the non-volatile memory220and/or storing the data (e.g., read data) of the non-volatile memory220in the buffer region. The storage controller210may include a host interface211, a memory interface212, and a multi-core processor213. The storage controller210may further include a flash translation layer (FTL)214, a packet manager215, a buffer memory216, an ECC (error correction code) engine217, an AES (advanced encryption standard) engine218, and a task scheduler219.
The storage controller210may further include a working memory into which the flash translation layer (FTL)214is loaded, and when the multi-core processor213executes the flash translation layer214, the data write and read operations on the non-volatile memory may be controlled. The host interface211may transmit and receive packets to and from the host100. Packets transmitted from the host100to the host interface211may include commands and/or data to be written to the non-volatile memory220, and packets transmitted from the host interface211to the host100may include responses to the commands, and/or data that is read from the non-volatile memory220, and the like. The memory interface212may transmit the data to be written on the non-volatile memory220to the non-volatile memory220, and/or receive the read data from the non-volatile memory220. Such memory interface212may be implemented to comply with standard conventions such as Toggle or ONFI. The flash translation layer214may perform various functions such as address mapping, wear-leveling, and garbage collection. The address mapping operation may include changing a logical address received from a host into a physical address which is used for actually storing the data in the non-volatile memory220. The wear-leveling may include ensuring that blocks in the non-volatile memory220are used uniformly to prevent an excessive degradation of a particular block, and may be implemented, for example, through a firmware capable of balancing the erasure counts of the physical blocks. The garbage collection may include ensuring an available capacity in the non-volatile memory220through a method of copying the valid data of the block to a new block and then erasing the existing block. The packet manager215may generate a packet according to the protocol of the interface agreed upon with the host100, and/or may parse various types of information from the packet received from the host100. Further, in some embodiments, the packet manager215may manage a packet provided from the task scheduler219to the multi-core processor213or a packet provided from the multi-core processor213to the task scheduler219. However, the embodiments are not necessarily limited thereto. The buffer memory216may temporarily store data to be recorded on the non-volatile memory220, or data to be read from the non-volatile memory220. The buffer memory216may be provided inside of the storage controller210, or may be placed outside of the storage controller210. The ECC engine217may perform error detection and correction functions of the read data from the non-volatile memory220. More specifically, the ECC engine217may generate parity bits on the write data to be written to the non-volatile memory220, and the parity bits generated in this way may be stored in the non-volatile memory220together with the write data. When reading the data from the non-volatile memory220, the ECC engine217may correct an error of the read data and output the read data with a corrected error, using the parity bits that are read from the non-volatile memory220together with the read data. The AES engine218may perform encryption and/or decryption operations of the data which are input to the storage controller210by, for example, using a symmetric-key algorithm. FIG.2is a diagram in which the storage controller, the memory interface, and the non-volatile memory ofFIG.1are reconfigured. The memory interface212ofFIG.1may include a controller interface circuit212aand a memory interface circuit212bofFIG.2.
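The flash translation layer's address mapping described above reduces to a table lookup. A minimal C sketch of the logical-to-physical (L2P) translation follows; the flat table layout, sizes, and names are chosen here purely for illustration and are not taken from the patent.

#include <stdint.h>

#define NUM_LOGICAL_PAGES 1024u
#define INVALID_PA        0xFFFFFFFFu

/* Flat logical-to-physical (L2P) table: one physical address per
 * logical address. A real FTL layers wear-leveling and garbage
 * collection on top of this basic translation. */
static uint32_t l2p[NUM_LOGICAL_PAGES];

/* Map a host logical address to the physical address actually used
 * to store the data in the non-volatile memory. */
uint32_t ftl_translate(uint32_t la)
{
    return (la < NUM_LOGICAL_PAGES) ? l2p[la] : INVALID_PA;
}

/* On a write, data goes to a freshly allocated physical page and the
 * table entry is redirected; the old page becomes garbage to collect. */
void ftl_remap(uint32_t la, uint32_t new_pa)
{
    if (la < NUM_LOGICAL_PAGES)
        l2p[la] = new_pa;
}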
The non-volatile memory220may include first to eight pins P11to P18, a memory interface circuit212b, a control logic circuit510, and a memory cell array520. The memory interface circuit212bmay receive a chip enable signal nCE from the storage controller210through the first pin P11. The memory interface circuit212bmay transmit and receive signals to and from the storage controller210through second to eighth pins P12to P18based at least in part on the chip enable signal nCE. For example, when the chip enable signal nCE is in an enable state (e.g., a low level), the memory interface circuit212bmay transmit and receive signals to and from the storage controller210through second to eighth pins P12to P18. The memory interface circuit212bmay receive a command latch enable signal CLE, an address latch enable signal ALE, and a write enable signal nWE from the storage controller210through second to fourth pins P12to P14, respectively. The memory interface circuit212bmay receive a data signal DQ from the storage controller210or transmit the data signal DQ to the storage controller210through a seventh pin P17. The command CMD, the address ADDR, and the data DATA may be transferred through the data signal DQ. For example, the data signal DQ may be transferred through a plurality of data signal lines. In this case, the seventh pin P17may include a plurality of pins corresponding to the plurality of data signals. The memory interface circuit212bmay acquire the command CMD from the data signal DQ received in an enable section (e.g., a high level state) of the command latch enable signal CLE based on the toggle timings of the write enable signal nWE. The memory interface circuit212bmay acquire the address ADDR from the data signal DQ received in the enable section (e.g., a high level state) of the address latch enable signal ALE based on the toggle timings of the write enable signal nWE. In some embodiments, the write enable signal nWE may hold a static state (e.g., a high level or a low level) and then may be toggled between the high level and the low level. For example, the write enable signal nWE may be toggled at a section in which the command CMD or the address ADDR is transmitted. Accordingly, the memory interface circuit212bmay acquire the command CMD or the address ADDR based on the toggle timings of the write enable signal nWE. The memory interface circuit212bmay receive a read enable signal nRE from the storage controller210through a fifth pin P15. The memory interface circuit212bmay receive the data strobe signal DQS from the storage controller210through a sixth pin P16, or may transmit the data strobe signal DQS to the storage controller210. In the data DATA output operation of the non-volatile memory220, the memory interface circuit212bmay receive the toggled read enable signal nRE through the fifth pin P15before outputting the data DATA. The memory interface circuit212bmay generate the toggled data strobe signal DQS based on the toggling of the read enable signal nRE. For example, the memory interface circuit212bmay generate the data strobe signal DQS that starts toggling after a predetermined delay (e.g., tDQSRE) based on the toggling start time of the read enable signal nRE. The memory interface circuit212bmay transmit a data signal DQ including the data DATA based on the toggle timing of the data strobe signal DQS. As a result, the data DATA may be aligned with the toggle timing of the data strobe signal DQS and transmitted to the storage controller210. 
In the data DATA input operation of the non-volatile memory220, if the data signal DQ including the data DATA is received from the storage controller210, the memory interface circuit212bmay receive the toggled data strobe signal DQS together with the data DATA from the storage controller210. The memory interface circuit212bmay acquire the data DATA from the data signal DQ based on the toggle timing of the data strobe signal DQS. For example, the memory interface circuit212bmay acquire the data DATA by sampling the data signal DQ at a rising edge and a falling edge of the data strobe signal DQS. The memory interface circuit212bmay transmit a ready/busy output signal nR/B to the storage controller210through an eighth pin P18. The memory interface circuit212bmay transmit the state information of the non-volatile memory220to the storage controller210through the ready/busy output signal nR/B. If the non-volatile memory220is in the busy state (for example, when the internal operations of the non-volatile memory220are being performed), the memory interface circuit212bmay transmit the ready/busy output signal nR/B indicating the busy state to the storage controller210. If the non-volatile memory220is in the ready state (for example, the internal operations of the non-volatile memory220are not performed or are completed), the memory interface circuit212bmay transmit the ready/busy output signal nR/B indicating the ready state to the storage controller210. For example, while the non-volatile memory220reads the data DATA from the memory cell array520in response to a page read command, the memory interface circuit212bmay transmit the ready/busy output signal nR/B indicating the busy state (e.g., a low level) to the storage controller210. For example, while the non-volatile memory220programs the data DATA into the memory cell array520in response to the program instruction, the memory interface circuit212bmay transmit the ready/busy output signal nR/B indicating the busy state to the storage controller210. The control logic circuit510may control various operations of the non-volatile memory220. The control logic circuit510may receive the command/address CMD/ADDR acquired from the memory interface circuit212b. The control logic circuit510may generate control signals for controlling other components of the non-volatile memory220based at least in part on the received command/address CMD/ADDR. For example, the control logic circuit510may generate various control signals for programing the data DATA in the memory cell array520or reading the data DATA from the memory cell array520. The memory cell array520may store the data DATA acquired from the memory interface circuit212bbased at least in part on the control of the control logic circuit510. The memory cell array520may output the stored data DATA to the memory interface circuit212bbased at least in part on the control of the control logic circuit510. The memory cell array520may include a plurality of memory cells. In one embodiment, a plurality of memory cells may be flash memory cells. However, the present disclosure is not necessarily limited thereto, and the memory cells may be a RRAM (Resistive Random Access Memory) cell, a FRAM (Ferroelectric Random Access Memory) cell, a PRAM (Phase Change Random Access Memory) cell, a TRAM (Thyristor Random Access Memory) cell, and/or a MRAM (Magnetic Random Access Memory) cell. 
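The double-edge strobe capture described above, in which the data DATA is acquired by sampling the data signal DQ at a rising edge and a falling edge of the data strobe signal DQS, can be modeled with a short sketch. The following C fragment is illustrative only; the struct and function names are assumptions, not the circuit's actual implementation.

#include <stdint.h>

/* Model of double-data-rate capture on DQ<7:0>: one byte is latched
 * on every transition (rising or falling edge) of the strobe DQS. */
typedef struct {
    int      prev_dqs;   /* last observed DQS level (0 or 1) */
    uint8_t  buf[4096];  /* captured bytes                   */
    int      count;
} dq_capture_t;

/* Called on each observation of the bus; dqs is the current strobe
 * level and dq the current value of DQ<7:0>. */
void dq_sample(dq_capture_t *c, int dqs, uint8_t dq)
{
    if (dqs != c->prev_dqs && c->count < (int)sizeof c->buf)
        c->buf[c->count++] = dq;   /* either edge: latch the byte */
    c->prev_dqs = dqs;
}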
Hereinafter, embodiments of the present disclosure will be described according to an example in which the memory cells are NAND flash memory cells. The storage controller210may include first to eighth pins P21to P28, and a controller interface circuit212a. The first to eighth pins P21to P28may correspond to the first to eighth pins P11to P18of the non-volatile memory220. For example, the first to eighth pins P21to P28of the storage controller210may be respectively connected to the first to eighth pins P11to P18of the non-volatile memory220. The controller interface circuit212amay transmit the chip enable signal nCE to the non-volatile memory220through a first pin P21. The controller interface circuit212amay transmit and receive signals to and from the non-volatile memory220, which is selected through the chip enable signal nCE, through the second to eighth pins P22to P28. The controller interface circuit212amay transmit the command latch enable signal CLE, the address latch enable signal ALE, and the write enable signal nWE to the non-volatile memory220through the second to fourth pins P22to P24, respectively. The controller interface circuit212amay transmit the data signal DQ to the non-volatile memory220or receive the data signal DQ from the non-volatile memory220through a seventh pin P27. The controller interface circuit212amay transmit the data signal DQ including the command CMD or the address ADDR to the non-volatile memory220along with the toggled write enable signal nWE. The controller interface circuit212amay transmit the data signal DQ including the command CMD to the non-volatile memory220by transmitting the command latch enable signal CLE in the enable state, and may transmit the data signal DQ including the address ADDR to the non-volatile memory220by transmitting the address latch enable signal ALE in the enable state. The controller interface circuit212amay transmit the read enable signal nRE to the non-volatile memory220through the fifth pin P25. The controller interface circuit212amay receive the data strobe signal DQS from the non-volatile memory220through the sixth pin P26, and/or may transmit the data strobe signal DQS to the non-volatile memory220. In the data DATA output operation of the non-volatile memory220, the controller interface circuit212amay generate a toggling read enable signal nRE and transmit the read enable signal nRE to the non-volatile memory220. For example, the controller interface circuit212amay generate the read enable signal nRE that changes from the static state (e.g., a high level or a low level) to the toggle state before the data DATA is output. As a result, the toggled data strobe signal DQS may be generated in the non-volatile memory220based on the read enable signal nRE. The controller interface circuit212amay receive the data signal DQ including the data DATA along with the toggled data strobe signal DQS from the non-volatile memory220. The controller interface circuit212amay acquire the data DATA from the data signal DQ based on the toggle timing of the data strobe signal DQS. In the data DATA input operation of the non-volatile memory220, the controller interface circuit212amay generate a toggled data strobe signal DQS. For example, the controller interface circuit212amay generate the data strobe signal DQS that changes from the static state (e.g., a high level or a low level) to the toggle state before transmitting the data DATA.
The controller interface circuit212amay transmit the data signal DQ including the data DATA to the non-volatile memory220based on the toggle timings of the data strobe signal DQS. The controller interface circuit212amay receive a ready/busy output signal nR/B from the non-volatile memory220through an eighth pin P28. The controller interface circuit212amay determine the state information of the non-volatile memory220based on the ready/busy output signal nR/B. FIG.3is a diagram showing the multi-core processor ofFIG.1.FIG.4is a diagram showing a task list ofFIG.1.FIG.5is a diagram showing an ETS (Execution Time Stamp) table ofFIG.1.FIG.6is a diagram showing a TDT (Task Duration Time) table ofFIG.1. Referring toFIGS.1and3, the multi-core processor213may include n cores, where n is a natural number greater than or equal to 2. Each core may include a calculation unit2131a, an SRAM2131b, and an ITCM (Instruction Tightly Coupled Memory)2131c. In some embodiments, each core may further include a code loader2131d, a packet receiver2131e, and a task requester2131f. The calculation unit2131amay perform calculation based at least in part on code loaded into the ITCM2131c. The SRAM2131bmay store the data required to perform such a calculation. The code loader2131dmay receive the code of the task to be performed by the core from an external source and load the code into the ITCM2131c. The packet receiver2131emay receive packets related to the operation of the core from an external source. The task requester2131fmay transmit a signal requesting a subsequent task, when the processing of the task issued to the core is completed. AlthoughFIG.3shows that each core includes the code loader2131d, the packet receiver2131e, and the task requester2131f, the disclosure is not necessarily limited thereto. In some embodiments, the code loader2131d, the packet receiver2131e, and the task requester2131fmay be implemented, for example, in software to control each core. Also, in some embodiments, the code loader2131d, the packet receiver2131eand the task requester2131fmay be implemented separately from the core. Referring toFIG.1again, the task scheduler219may generate tasks required to generate the commands to be provided to the non-volatile memory220in response to the commands received through the host interface211, and may operate such that the generated tasks are processed through the multi-core processor213. Specifically, the task scheduler219may generate a plurality of tasks from commands received through the host interface211, and select a core for processing a plurality of tasks by the use of an execution time stamp table219band a task execution time table219c, thereby allowing the tasks to be processed through the multi-core processor213. The commands generated by processing a plurality of tasks may be provided to the non-volatile memory220in a manner as described above referring toFIG.2. The task scheduler219may operate using a task list219a, an execution time stamp (ETS) table219b, and a task execution time (TDT) table219c. Referring toFIGS.1and4, the task list219amay include m tasks, where m is a natural number, required to generate commands to be provided to the non-volatile memory220in response to commands received from the host100through the host interface211. Here, the respective tasks may be distinguished from each other through an ID (TID), and may be dependent on each other.
For example, task2(TID2) is a task that may be processed when task1(TID1) is completed, and task4(TID4) may be a task that may be processed when all tasks1to3(TID1to TID3) are completed. Referring toFIGS.1and5, the ETS table219bmay include the execution times E1to EN of each core included in the multi-core processor213at a specific time point. The task scheduler219may check the execution times E1to EN of each core included in the multi-core processor213at a specific time point, by referring to the ETS table219b. Referring toFIGS.1and6, the TDT table219cmay include the time required for completion for each of the tasks TID1to TIDM stored in the task list219a. InFIG.6, task1(TID1) indicates that T1time is required to complete the processing, and task2(TID2) requires T2time to complete the processing. In some embodiments, the task scheduler219may be implemented, for example, in software to schedule tasks in each core included in the multi-core processor213. In this case, the task list219a, the ETS table219b, and the TDT table219cdescribed above may be stored in the buffer memory216shown inFIG.1, an internal memory of the storage controller210, or an external memory of the storage controller210. Hereinafter, the operation of the storage device according to some embodiments will be described referring toFIGS.7to15. FIG.7is a flowchart showing the operation of the storage device according to some embodiments.FIGS.8to15are diagrams for explaining the operation of the storage device according to some embodiments. Referring toFIG.7, the memory command is provided to the storage controller from the host (S100). Specifically, referring toFIGS.1and8, the host interface211may receive a memory command that requires execution of the memory operation on the non-volatile memory220. For example, the memory operation may include an operation of reading the data stored in the non-volatile memory cell of the non-volatile memory220addressed to a logical address, a write operation of writing the data to a non-volatile memory cell of the non-volatile memory220addressed to the logical address, and/or an erase operation of erasing a specific block of the non-volatile memory220addressed to the logical address. However, the embodiments are not necessarily limited thereto, and the examples of the memory operation of the non-volatile memory220may be modified in various ways. Hereinafter, a read command for requesting to read the data stored in the non-volatile memory cell of the non-volatile memory220addressed to the logical address will be explained as an example. However, it will be appreciated that the embodiments are not limited to this example. Next, tasks corresponding to the provided read commands are generated (S110). Specifically, referring toFIGS.1and8, the task scheduler219of the storage controller210may generate the tasks required to generate the read command to be transmitted to the non-volatile memory220in response to the read command provided from the host100. As an example, the first task (Task1) may include an address mapping (L2P) task of mapping a logical address LA, which is provided from the host100, to a physical address (PA) used in the non-volatile memory220, and the second task (Task2) may include determining the level of read voltage applied to the non-volatile memory cell of the non-volatile memory220.
This is only one example, however, and the task scheduler219may generate Q (Q is a natural number) tasks required for generating the read command to be transmitted to the non-volatile memory220in response to the read command provided from the host100. Next, referring toFIG.7, the task request is received (S120). Specifically, referring toFIG.1, the task scheduler219of the storage controller210may receive a task request requesting assignment of a task from the core of the multi-core processor213in which the processing of the assigned task is completed. Next, referring toFIG.7, the task execution time is searched (S130). Specifically, referring toFIGS.1and9, the task scheduler219of the storage controller210may extract tasks that need to be assigned from the task list (219aofFIG.4) in consideration of the dependency between the respective tasks. Hereinafter, a case where the task scheduler219extracts task1(TID1) and task2(TID2) will be described as an example. However, the embodiments are not necessarily limited thereto, and various tasks may be extracted according to different embodiments and/or operations. The task scheduler219checks the execution time of task1(TID1) and task2(TID2) in the TDT table219c. In the example shown inFIG.9, the task scheduler219checks that the execution times of task1(TID1) and task2(TID2) are 40. Next, referring toFIG.7, the core is selected in consideration of the task execution time table and the execution time stamp table (S140). Specifically, referring toFIGS.1and10, the task scheduler219checks the ETS table219band determines that the execution time of core1is the shortest at the present time point. If, for example, task2(TID2) has a dependency on task1(TID1), the task scheduler219may assign both task1(TID1) and task2(TID2) to core1having the shortest execution time at the present time point. If, for example, there is no dependency between task2(TID2) and task1(TID1), the task scheduler219recalculates the execution time for each core, assuming that task1(TID1) is assigned to core1having the shortest execution time at the present time point. Task2(TID2) may also be assigned to the core having the shortest execution time. For example, Task2may be assigned to core1, or to another core other than core1. In this example, after task1(TID1) is assigned to core1, the execution time of core1becomes 30, which is still the lowest value among the execution times of the cores. Therefore, the task scheduler219may assign both task1(TID1) and task2(TID2) to core1. Next, referring toFIG.7, the ETS table is updated (S150). It will be appreciated that various components described herein, such as the task scheduler219, the packet manager215, packet receiver2131e, task requester2131f, and code loader2131dmay be implemented as individual circuits. Additionally or alternatively, one or more of the components may be implemented within one or more processors configured to execute their described functionality. Specifically, referring toFIGS.1and11, the task scheduler219updates the execution time of core1, to which task1(TID1) and task2(TID2) are assigned, to 60. Next, referring toFIG.7, task1and task2are issued to core1(S160). Specifically, referring toFIGS.1and12, the task scheduler219provides task1(TID1) and task2(TID2) to the packet receiver2131eof core1to issue task1(TID1) and task2(TID2) to core1. Next, referring toFIG.7, the code required to process task1(TID1) is loaded (S170).
Specifically, referring toFIGS.1and13, the code loader2131dof core1receives the code of task1(TID1) to be processed by core1from an external component, and may load the code into the ITCM2131c. In some embodiments, although the codes required to process the task may be stored, for example, in the buffer memory216described above, the embodiments are not necessarily limited thereto. When the code of task1(TID1) is loaded into the ITCM2131c, core1processes task1(TID1), using the calculation unit2131aand the SRAM2131b. Next, referring toFIG.7, when the processing of task1(TID1) is completed, a request for a subsequent task is transmitted (S180). Further, the code required to process task2(TID2) is loaded (S190). Specifically, referring toFIGS.1and14, when processing of task1(TID1) is completed, the task requester2131fof core1requests the task scheduler219to assign the subsequent task (for example, a yet-to-be-determined task other than task1or task2). Further, while the task requester2131frequests the task scheduler219to assign the subsequent task, for example, the code loader2131dof core1receives the code of task2(TID2) to be processed by core1from the buffer memory216, and loads the code into the ITCM2131c. For example, the code loader2131dof core1does not wait for the timing when a new task is issued to load the code required to perform that task, but rather loads the code of the task scheduled for processing while the task requester2131frequests the task scheduler219to assign the subsequent task. Accordingly, the operation speed of the storage device may be increased. Next, referring toFIG.7, task2(TID2) is processed (S200). Specifically, referring toFIGS.1and14, when the code of task2(TID2) is loaded into the ITCM2131c, core1processes task2(TID2), using the calculation unit2131aand SRAM2131b. Next, referring toFIG.7, a read command to be provided to the non-volatile memory is generated based on the processing results of the tasks (S210). Specifically, referring toFIGS.1and15, the storage controller210may check the physical address corresponding to the logical address provided from the host100based on the processing result of task1(TID1), determine the read voltage level based on the processing result of task2(TID2), and generate a read command to be provided to the non-volatile memory220. At this time, the read command provided from the storage controller210to the non-volatile memory220may include a physical address and read voltage level information, unlike the read command provided from the host100to the storage controller210. The above description is directed towards one example embodiment, however, and the storage controller210may generate a read command provided from the storage controller210to the non-volatile memory220based on the processing results of Q tasks. Next, referring toFIG.7, the generated read command is transmitted to the non-volatile memory (S220). The storage controller210may provide the generated read command to the non-volatile memory220in a manner described above with reference toFIG.2. Specifically, the storage controller210may transmit the data signal (DQ ofFIG.2) including the generated read command and the generated physical address along with the toggled write enable signal (nWE ofFIG.2) to the non-volatile memory220.
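The scheduling steps S130 to S150 walked through above reduce to a greedy rule: look up the task's duration in the TDT table, pick the core with the smallest ETS entry, and add the duration to that entry. A minimal C sketch under assumed table sizes and names; the dependent-pair helper mirrors the FIG.10 example in which task1 and task2 are both issued to core1.

#include <stdint.h>

#define NUM_CORES 4

/* ETS table: execution time currently committed to each core. */
static uint32_t ets[NUM_CORES];

/* Pick the core with the shortest committed execution time (S140). */
static int min_ets_core(void)
{
    int best = 0;
    for (int i = 1; i < NUM_CORES; i++)
        if (ets[i] < ets[best])
            best = i;
    return best;
}

/* Assign one task, whose duration comes from the TDT table (S130),
 * to the least-loaded core and update the ETS table (S150). */
int schedule_task(uint32_t tdt_duration)
{
    int core = min_ets_core();
    ets[core] += tdt_duration;
    return core;
}

/* Dependent tasks (e.g., TID2 depending on TID1) are issued to the
 * same core so that they run back-to-back. */
int schedule_dependent_pair(uint32_t d1, uint32_t d2)
{
    int core = min_ets_core();
    ets[core] += d1 + d2;
    return core;
}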
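Similarly, the S180/S190 overlap, in which a core requests its next task while loading the code of the task it already holds into the ITCM instead of idling, can be sketched as below; the functions are hypothetical stand-ins for the task requester2131fand the code loader2131d, not the patent's implementation.

#include <stdio.h>

static void request_next_task(int core_id)      /* S180 */
{
    printf("core %d: request subsequent task\n", core_id);
}

static void load_itcm(int core_id, int tid)     /* S190 */
{
    printf("core %d: load code of task %d into ITCM\n", core_id, tid);
}

static void run_task(int core_id, int tid)      /* S200 */
{
    printf("core %d: process task %d\n", core_id, tid);
}

int main(void)
{
    int core = 1, next_tid = 2;
    request_next_task(core);    /* fire-and-forget to the scheduler  */
    load_itcm(core, next_tid);  /* overlapped with the request above */
    run_task(core, next_tid);
    return 0;
}

The point of the overlap is latency hiding: the code load and the scheduler round trip proceed concurrently, so the core begins task2 as soon as its code is resident rather than waiting for the next issue.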
In the storage device200according to the present embodiment described above, each time the task scheduler219assigns the task, the execution time of each core is accounted for, and the task is assigned to the core capable of most efficiently processing the task. Also, each core does not maintain an idle state after processing the issued task until the next task is issued, but rather the code loading of the task scheduled to process is performed together at that core while requesting the task scheduler219to perform the subsequent tasks. Therefore, the usage efficiency of the core may be significantly increased, and the operating speed of the storage device may be increased. Although embodiments according to the inventive concepts of the present disclosure have been explained describing the operation of scheduling m tasks required for the storage controller210to generate a command to be provided to the non-volatile memory220in response to the command received from the host100as an example, the embodiments are not limited to these examples. In some embodiments, the storage controller210may utilize the method described above when scheduling k tasks corresponding to the plurality of commands received from the host100. For example, when the storage controller210schedules the first task for the first read command received from the host100and the second task for the second read command received from the host100, it is possible to schedule the first and second tasks, using the methods described above. FIG.16is a diagram showing a TDT (Task Duration Time) table according to some embodiments. Referring toFIGS.1,2and16, the first task T1is for processing the first read command received from the host100, the second task T2is for processing the second read command received from the host100, and a kth task Tk is for processing a kth read command received from the host100. The time required for completing the processing of the first task T1is T1, the time required for completing the processing of the second task T2is T2, and the time required for completing the processing of the kth task Tk is Tk. When the first read command received from the host100is for reading the data stored in the first memory cell of the memory cell array520, and the second read command received from the host100is for reading the data stored in the second memory cell of the memory cell array520, T2, which is the time required for completing the processing of the second task T2, may be changed depending on the positions of the first memory cell and the second memory cell. For example, due to locality characteristics, the time required for completing the processing of the second task T2when the first memory cell and the second memory cell are adjacent to each other in the memory cell array520may be smaller than the time required for completing the processing of the second task T2when the first memory cell and the second memory cell are not adjacent to each other. For example, the processing completion times T1to Tk of the tasks included in the task execution time table219dmay be changed in consideration of whether to perform the internal operation of the non-volatile memory220, the state of the non-volatile memory220, the state of the memory interface212, a relationship between the commands received from the host100described above, and the like.
Further, although the tasks generated from different commands received from the host100have been described as an example, the processing completion times T1to Tm of the tasks generated from one command described above referring toFIG.6may also be changed in consideration of whether to execute the internal operation of the non-volatile memory220, the state of the non-volatile memory220, the state of the memory interface212, or other considerations. Accordingly, a memory device and a method of operating the same in accordance with the present disclosure may operate with increased speed and efficiency as a result of the increased utilization of multi-core processors connected thereto. The processes described above are some examples wherein embodiments may efficiently use the multi-core processors in accordance with the inventive concepts. In concluding the detailed description, those skilled in the art will appreciate that variations and modifications may be made to the disclosed embodiments without substantially departing from the principles of the present disclosure. Therefore, the disclosed embodiments are used in a generic and descriptive sense only and not for purposes of limitation. | 35,565 |
11861228 | DETAILED DESCRIPTION Aspects of the present disclosure are directed to aggregating memory operation status commands in a memory subsystem. A memory subsystem can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction withFIG.1. In general, a host system can utilize a memory subsystem that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem. A memory device can be a non-volatile memory device. A non-volatile memory device is a package of one or more dice. One example of non-volatile memory devices is a negative-and (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction withFIG.1. The dice in the packages can be assigned to one or more channels for communicating with a memory subsystem controller. Each die can consist of one or more planes. Planes can be grouped into logic units (LUN). For some types of non-volatile memory devices (e.g., NAND memory devices), each plane consists of a set of physical blocks, which are groups of memory cells to store data. A cell is an electronic circuit that stores information. Depending on the cell type, a cell can store one or more bits of binary information, and has various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as “0” and “1”, or combinations of such values. There are various types of cells, such as single-level cells (SLCs), multi-level cells (MLCs), triple-level cells (TLCs), and quad-level cells (QLCs). For example, an SLC can store one bit of information and has two logic states. Memory subsystems are increasing in density, with a greater number of memory dice per subsystem, and complexity, with a greater number of independent portions (groups of independent word lines, planes, etc.) within each die. While these increases allow for more storage, improved random reads, and write independence, they also result in an increasing number of endpoints for which a memory subsystem controller tracks the status of memory operations. For example, a sixteen-terabyte memory subsystem can have an eight-channel controller with sixteen die per channel. In managing such a system, the controller sends out sixteen separate operation status commands (e.g., read status commands) to poll the status of the memory components. If each die supports four independent word line groups, the number of operation status commands increases to sixty-four. With sixty-four endpoints to poll, memory interface bandwidth can be consumed by status polling and, as a result, performance and Quality of Service (QoS) suffer. Aspects of the present disclosure address the above and other deficiencies by aggregating operation status commands and sending them in parallel as an aggregate status command. Instead of issuing a separate status command for each die/independent portion of memory and receiving separate responses, a single aggregate status command results in the simultaneous return of status messages from multiple dice/independent portions of memory. For example, if a memory interface channel is implemented as an eight-bit bus, up to eight status commands can be sent in parallel and up to eight status responses can be returned in parallel.
As a result of the implementation of aggregate status commands, and the corresponding aggregated responses, the memory subsystem performance and QoS improve. FIG.1illustrates an example computing system100that includes a memory subsystem110in accordance with some embodiments of the present disclosure. The memory subsystem110can include media, such as one or more volatile memory devices (e.g., memory device140), one or more non-volatile memory devices (e.g., memory device130), or a combination of such. A memory subsystem110can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory module (NVDIMM). The computing system100can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device. The computing system100can include a host system120that is coupled to one or more memory subsystems110. In some embodiments, the host system120is coupled to different types of memory subsystems110.FIG.1illustrates one example of a host system120coupled to one memory subsystem110. As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. The host system120can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system120uses the memory subsystem110, for example, to write data to the memory subsystem110and read data from the memory subsystem110. The host system120can be coupled to the memory subsystem110via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), Small Computer System Interface (SCSI), a double data rate (DDR) memory bus, a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), Open NAND Flash Interface (ONFI), Double Data Rate (DDR), Low Power Double Data Rate (LPDDR), or any other interface. The physical host interface can be used to transmit data between the host system120and the memory subsystem110. The host system120can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices130) when the memory subsystem110is coupled with the host system120by the PCIe interface. 
The physical host interface can provide an interface for passing control, address, data, and other signals between the memory subsystem110and the host system120.FIG.1illustrates a memory subsystem110as an example. In general, the host system120can access multiple memory subsystems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. The memory devices130,140can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device130) include negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Although non-volatile memory devices such as NAND type memory (e.g., 2D NAND, 3D NAND) and 3D cross-point array of non-volatile memory cells are described, the memory device130can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM). A memory subsystem controller115(or controller115for simplicity) can communicate with the memory devices130to perform operations such as reading data, writing data, or erasing data at the memory devices130and other such operations (e.g., in response to commands scheduled on a command bus by controller115). The memory subsystem controller115can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory subsystem controller115can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The memory subsystem controller115can include a processing device117(processor) configured to execute instructions stored in a local memory119.
In the illustrated example, the local memory119of the memory subsystem controller115includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem110, including handling communications between the memory subsystem110and the host system120. In some embodiments, the local memory119can include memory registers storing memory pointers, fetched data, etc. The local memory119can also include read-only memory (ROM) for storing micro-code. While the example memory subsystem110inFIG.1has been illustrated as including the memory subsystem controller115, in another embodiment of the present disclosure, a memory subsystem110does not include a memory subsystem controller115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem110). In general, the memory subsystem controller115can receive commands or operations from the host system120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices130and/or the memory device140. The memory subsystem controller115can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., logical block address (LBA), namespace) and a physical address (e.g., physical block address) that are associated with the memory devices130. The memory subsystem controller115can further include host interface circuitry to communicate with the host system120via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices130and/or the memory device140as well as convert responses associated with the memory devices130and/or the memory device140into information for the host system120. The memory subsystem110can also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem110can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory subsystem controller115and decode the address to access the memory devices130. In some embodiments, the memory devices130include local media controllers135that operate in conjunction with memory subsystem controller115to execute operations on one or more memory cells of the memory devices130. An external controller (e.g., memory subsystem controller115) can externally manage the memory device130(e.g., perform media management operations on the memory device130). In some embodiments, a memory device130is a managed memory device, which is a raw memory device combined with a local controller (e.g., local controller135) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. The memory subsystem110includes a status request aggregator113that can aggregate memory status commands. In some embodiments, the controller115includes at least a portion of the status request aggregator113. For example, the controller115can include a processor117(processing device) configured to execute instructions stored in local memory119for performing the operations described herein.
In some embodiments, a status request aggregator113is part of the host system120, an application, or an operating system. The status request aggregator113can aggregate operation status commands and send them in parallel as an aggregate status command. Instead of issuing a separate status command for each die/independent portion of memory and receiving separate responses, a single aggregate status command results in the simultaneous return of status messages from multiple dice/independent portions of memory. Further details with regard to the operations of the status request aggregator113are described below. FIG.2is a flow diagram of an example method of memory status command aggregation in accordance with some embodiments of the present disclosure. The method200can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method200is performed by the status request aggregator113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation205, the processing device issues memory commands. For example, in response to host requests to read and/or write data from/to memory, the processing device determines memory components targeted by the requests based upon an addressing scheme and issues read and/or write operation commands to the corresponding memory components. These requests can be received from one or more host systems and/or generated by a process within the memory subsystem110. The processing device can receive memory operation requests asynchronously, continuously, in batches, etc. In one embodiment, the memory subsystem110receives operation requests from one or more host systems120and stores those requests in a command queue. At operation210, the processing device aggregates memory status commands. For example, a memory status command can request a status value (i.e., status information) for the last memory operation issued, such as one or more memory commands described above with reference to operation205. The status value can indicate if the last memory operation failed or succeeded. Additionally, a memory status command can request other status values to indicate, e.g., whether or not an array operation is in progress, if the memory component is in a ready state or not, if the memory component is write protected, etc. In one embodiment, the processing device aggregates memory status commands by storing them in a buffer in volatile media (e.g., local memory119or a memory device140). In one embodiment, the processing device aggregates memory status commands per channel.
For example, if the memory subsystem110includes multiple channels, the processing device temporarily stores memory status commands in groups per channel or otherwise stores them in a manner that allows for tracking and retrieval per channel. At operation215, the processing device determines if an aggregation threshold has been satisfied. For example, the processing device can determine if a number of aggregated memory status commands matches or exceeds the bandwidth (e.g., number of bits) of the memory interface bus used to issue the memory status commands. For example, the processing device can increment a counter for each memory status command buffered per channel and determine if the threshold is satisfied by comparing the counter to the channel bandwidth value. In one embodiment, the processing device determines if an aggregation threshold has been reached per channel for multiple channels. In some embodiments, the processing device can determine if an aggregation threshold has been satisfied based on an amount of time that has elapsed. For example, the processing device can stop aggregating memory status commands prior to reaching the bandwidth capacity of a channel upon the expiration of an amount of time. A time-based threshold can be based on a timestamp of a memory command issued, a time elapsed since the last aggregate status command, etc. If the processing device determines that an aggregation threshold has not been satisfied, method200returns to operation205(or, alternatively, to operation210) and proceeds as described above. If the processing device determines that an aggregation threshold has been satisfied, method200proceeds to operation220. In an embodiment with multiple channels, method200returns to operation205for any channels that do not satisfy the aggregation threshold and proceeds to operation220for any channels that do satisfy the aggregation threshold. At operation220, the processing device assigns each of one or more memory status commands to a corresponding bit on the memory interface bus. For example, the processing device can aggregate and select memory status commands in a first-in-first-out (FIFO) manner. In some embodiments, the processing device can prioritize memory status commands based upon memory status command type, to group memory status commands directed to the same memory component (e.g., the same memory die), etc. If a memory channel can transmit eight bits, the processing device selects eight memory status commands and assigns one to each of the bits of the memory channel. In an embodiment in which multiple memory dice are coupled to the same memory channel, the processing device can divide the bandwidth of the memory interface bus amongst the different memory dice. For example, if eight memory dice are coupled to an eight-bit memory channel, the processing device can assign a memory status command for each die to a different bit of the memory channel. In one embodiment, more than one memory status command in the aggregate status command is directed to the same memory component (e.g., memory die or other independent portion of memory). For example, the processing device can send two different memory status command types to the same memory component. Additionally, the processing device can send memory status commands to multiple different independent portions of a larger component (e.g., multiple different independent word line groups/planes within the same memory die).
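Operations215and220described above amount to a per-channel counter check followed by a FIFO-to-bit assignment. A minimal C sketch follows, with the eight-bit channel width and the time-based fallback as stated assumptions; all names are illustrative, not taken from the patent.

#include <stdbool.h>
#include <stdint.h>

#define CHANNEL_BITS   8u    /* bandwidth of the memory interface bus */
#define MAX_WAIT_TICKS 100u  /* time-based fallback threshold         */

typedef struct {
    uint32_t pending;     /* status commands buffered on this channel */
    uint32_t first_tick;  /* arrival time of the oldest buffered one  */
} chan_agg_t;

/* Aggregation threshold (operation215): dispatch either when enough
 * commands are buffered to fill every bit of the channel, or when the
 * oldest buffered command has waited too long. */
bool threshold_satisfied(const chan_agg_t *c, uint32_t now)
{
    if (c->pending >= CHANNEL_BITS)
        return true;
    return c->pending > 0 && (now - c->first_tick) >= MAX_WAIT_TICKS;
}

/* Bit assignment (operation220): pop buffered status commands in FIFO
 * order and place one on each bit of the bus. Returns the number of
 * bits populated; responses later return on the same bits. */
int build_aggregate(const uint8_t *fifo, int fifo_len,
                    uint8_t slot_cmd[CHANNEL_BITS])
{
    int n = fifo_len < (int)CHANNEL_BITS ? fifo_len : (int)CHANNEL_BITS;
    for (int bit = 0; bit < n; bit++)
        slot_cmd[bit] = fifo[bit];   /* bit k of the bus carries cmd k */
    return n;
}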
At operation225, the processing device issues or otherwise sends the selected memory status commands in parallel, as an aggregated status command, via the memory interface bus. In one embodiment, the processing device sends an indication of which memory component (e.g., memory die or independent portion of memory) is the target of each bit of the aggregated status command. For example, prior to or in combination with sending the memory status commands, the processing device instructs the memory components coupled to the channel which memory component is to receive and respond to each individual memory status command on which bit of the memory channel. In another embodiment, each memory component is preassigned a bit on the memory interface/channel for receiving and responding to memory status commands. At operation230, the processing device receives memory status messages in parallel (e.g., an aggregated memory status response) via the memory interface bus. For example, each memory component targeted by the aggregated status command can use the expected memory subsystem timing to respond to its corresponding memory status command(s) via the same bit(s) on the memory channel the processing device used to target that memory component. The processing device can track which memory status message is expected on each bit of the channel and manage each of the parallel messages accordingly. Additionally, the processing device can receive multiple consecutive aggregated memory status responses. For example, each memory component targeted by the aggregated status command can respond to its corresponding memory status command(s) via the same bit(s) on the memory channel in a sequence of responses, resulting in a sequence of multiple aggregated memory status responses. FIG.3is a flow diagram of another example method300of memory status command aggregation in accordance with some embodiments of the present disclosure. The method300can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method300is performed by the status request aggregator113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation305, the processing device aggregates memory status commands. For example, the processing device aggregates memory status commands per channel as described above with reference to operation210. At operation310, the processing device assigns each of multiple memory status commands to a bit on a memory interface bus. For example, the processing device assigns memory status commands to bits on a memory channel as described above with reference to operation220. At operation315, the processing device sends the multiple memory status commands in parallel, as an aggregated status command, to multiple independent portions of memory via the memory interface bus.
For example, the processing device issues the aggregated status command as described above with reference to operation225. FIG.4illustrates an example machine of a computer system400within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system400can correspond to a host system (e.g., the host system120ofFIG.1) that includes, is coupled to, or utilizes a memory subsystem (e.g., the memory subsystem110ofFIG.1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the status request aggregator113ofFIG.1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system400includes a processing device402, a main memory404(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory406(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system418, which communicate with each other via a bus430. Processing device402represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device402can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device402is configured to execute instructions426for performing the operations and steps discussed herein. The computer system400can further include a network interface device408to communicate over the network420. The data storage system418can include a machine-readable storage medium424(also known as a computer-readable medium) on which is stored one or more sets of instructions426or software embodying any one or more of the methodologies or functions described herein.
The instructions426can also reside, completely or at least partially, within the main memory404and/or within the processing device402during execution thereof by the computer system400, the main memory404and the processing device402also constituting machine-readable storage media. The machine-readable storage medium424, data storage system418, and/or main memory404can correspond to the memory subsystem110ofFIG.1. In one embodiment, the instructions426include instructions to implement functionality corresponding to a status request aggregator (e.g., the status request aggregator113ofFIG.1). While the machine-readable storage medium424is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. For example, a computer system or other data processing system, such as the controller115, may carry out the computer-implemented methods200and300in response to its processor executing a computer program (e.g., a sequence of instructions) contained in a memory or other non-transitory machine-readable storage medium. 
Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 33,073 |
11861229 | DETAILED DESCRIPTION In the following description, numerous specific details are set forth to provide a more thorough understanding of the various embodiments. However, it will be apparent to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details. System Overview FIG.1is a block diagram of a computer system100configured to implement one or more aspects of the various embodiments. As shown, computer system100includes, without limitation, a central processing unit (CPU)102and a system memory104coupled to a parallel processing subsystem112via a memory bridge105and a communication path113. Memory bridge105is coupled to system memory104via a system memory controller130. Memory bridge105is further coupled to an I/O (input/output) bridge107via a communication path106, and I/O bridge107is, in turn, coupled to a switch116. Parallel processing subsystem112is coupled to parallel processing memory134via a parallel processing subsystem (PPS) memory controller132. In operation, I/O bridge107is configured to receive user input information from input devices108, such as a keyboard or a mouse, and forward the input information to CPU102for processing via communication path106and memory bridge105. Switch116is configured to provide connections between I/O bridge107and other components of the computer system100, such as a network adapter118and various add-in cards120and121. As also shown, I/O bridge107is coupled to a system disk114that may be configured to store content and applications and data for use by CPU102and parallel processing subsystem112. As a general matter, system disk114provides non-volatile storage for applications and data and may include fixed or removable hard disk drives, flash memory devices, and CD-ROM (compact disc read-only-memory), DVD-ROM (digital versatile disc-ROM), Blu-ray, HD-DVD (high-definition DVD), or other magnetic, optical, or solid-state storage devices. Finally, although not explicitly shown, other components, such as universal serial bus or other port connections, compact disc drives, digital versatile disc drives, film recording devices, and the like, may be connected to I/O bridge107as well. In various embodiments, memory bridge105may be a Northbridge chip, and I/O bridge107may be a Southbridge chip. In addition, communication paths106and113, as well as other communication paths within computer system100, may be implemented using any technically suitable protocols, including, without limitation, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol known in the art. In some embodiments, parallel processing subsystem112comprises a graphics subsystem that delivers pixels to a display device110that may be any conventional cathode ray tube, liquid crystal display, light-emitting diode display, and/or the like. In such embodiments, parallel processing subsystem112incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry. Such circuitry may be incorporated across one or more parallel processing units (PPUs) included within parallel processing subsystem112. In some embodiments, each PPU comprises a graphics processing unit (GPU) that may be configured to implement a graphics rendering pipeline to perform various operations related to generating pixel data based on graphics data supplied by CPU102and/or system memory104.
Each PPU may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or memory devices, or in any other technically feasible fashion. In some embodiments, parallel processing subsystem112incorporates circuitry optimized for general purpose and/or compute processing. Again, such circuitry may be incorporated across one or more PPUs included within parallel processing subsystem112that are configured to perform such general purpose and/or compute operations. In yet other embodiments, the one or more PPUs included within parallel processing subsystem112may be configured to perform graphics processing, general purpose processing, and compute processing operations. System memory104includes at least one device driver103configured to manage the processing operations of the one or more PPUs within parallel processing subsystem112. In various embodiments, parallel processing subsystem112may be integrated with one or more other elements ofFIG.1to form a single system. For example, parallel processing subsystem112may be integrated with CPU102and other connection circuitry on a single chip to form a system on chip (SoC). In operation, CPU102is the master processor of computer system100, controlling and coordinating operations of other system components. In particular, CPU102issues commands that control the operation of PPUs within parallel processing subsystem112. In some embodiments, CPU102writes a stream of commands for PPUs within parallel processing subsystem112to a data structure (not explicitly shown inFIG.1) that may be located in system memory104, PP memory134, or another storage location accessible to both CPU102and PPUs. A pointer to the data structure is written to a pushbuffer to initiate processing of the stream of commands in the data structure. The PPU reads command streams from the pushbuffer and then executes commands asynchronously relative to the operation of CPU102. In embodiments where multiple pushbuffers are generated, execution priorities may be specified for each pushbuffer by an application program via device driver103to control scheduling of the different pushbuffers. Each PPU includes an I/O (input/output) unit that communicates with the rest of computer system100via the communication path113and memory bridge105. This I/O unit generates packets (or other signals) for transmission on communication path113and also receives all incoming packets (or other signals) from communication path113, directing the incoming packets to appropriate components of the PPU. The connection of PPUs to the rest of computer system100may be varied. In some embodiments, parallel processing subsystem112, which includes at least one PPU, is implemented as an add-in card that can be inserted into an expansion slot of computer system100. In other embodiments, the PPUs can be integrated on a single chip with a bus bridge, such as memory bridge105or I/O bridge107. Again, in still other embodiments, some or all of the elements of the PPUs may be included along with CPU102in a single integrated circuit or system on chip (SoC). CPU102and PPUs within parallel processing subsystem112access system memory via a system memory controller130. System memory controller130transmits signals to the memory devices included in system memory104to initiate the memory devices, transmit commands to the memory devices, write data to the memory devices, read data from the memory devices, and/or the like.
One example memory device employed in system memory104is double-data rate SDRAM (DDR SDRAM or, more succinctly, DDR). DDR memory devices perform memory write and read operations at twice the data rate of previous generation single data rate (SDR) memory devices. In addition, PPUs and/or other components within parallel processing subsystem112access PP memory134via a parallel processing subsystem (PPS) memory controller132. PPS memory controller132transmits signals to the memory devices included in PP memory134to initiate the memory devices, transmit commands to the memory devices, write data to the memory devices, read data from the memory devices, and/or the like. One example memory device employed in PP memory134is synchronous graphics random access memory (SGRAM), which is a specialized form of SDRAM for computer graphics applications. One particular type of SGRAM is graphics double-data rate SGRAM (GDDR SGRAM or, more succinctly, GDDR). Compared with DDR memory devices, GDDR memory devices are configured with a wider data bus, in order to transfer more data bits with each memory write and read operation. By employing double data rate technology and a wider data bus, GDDR memory devices are able to achieve the high data transfer rates typically needed by PPUs. It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs102, and the number of parallel processing subsystems112, may be modified as desired. For example, in some embodiments, system memory104could be connected to CPU102directly rather than through memory bridge105, and other devices would communicate with system memory104via memory bridge105and CPU102. In other alternative topologies, parallel processing subsystem112may be connected to I/O bridge107or directly to CPU102, rather than to memory bridge105. In still other embodiments, I/O bridge107and memory bridge105may be integrated into a single chip instead of existing as one or more discrete devices. Lastly, in certain embodiments, one or more components shown inFIG.1may not be present. For example, switch116could be eliminated, and network adapter118and add-in cards120,121would connect directly to I/O bridge107. It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Among other things, the computer system100ofFIG.1may include any number of CPUs102, parallel processing subsystems112, or memory systems, such as system memory104and parallel processing memory134, within the scope of the disclosed embodiments. Further, as used herein, references to shared memory may include any one or more technically feasible memories, including, without limitation, a local memory shared by one or more PPUs within parallel processing subsystem112, memory shared between multiple parallel processing subsystems112, a cache memory, parallel processing memory134, and/or system memory104. Please also note, as used herein, references to cache memory may include any one or more technically feasible memories, including, without limitation, an L1 cache, an L1.5 cache, and L2 caches. In view of the foregoing, persons of ordinary skill in the art will appreciate that the architecture described in FIG.1in no way limits the scope of the various embodiments of the present disclosure.
Transferring Commands and Data to and from a DRAM via a Single Clock Signal Various embodiments include an improved DRAM that uses a single clock to transfer both commands and data to and from the DRAM. The single command/data clock in the DRAM can be selected to operate at speeds similar to or higher than the high-speed clock of a conventional multiple clock signal high-speed DRAM. With the disclosed techniques, the bits of the commands are serialized by a memory controller and transmitted to the DRAM over a small number of connections to the DRAM command (CA) I/O pins. In some examples, the bits of the commands are transmitted over a single connection to a single DRAM CA I/O pin using the single data/command clock of the DRAM. To initialize the DRAM to receive one or more commands, the memory controller transmits a synchronization command to the DRAM. The synchronization command establishes the clock edges that correspond to the start of each command, referred to as command start points. The synchronization command may be in the form of a synchronization signal applied to one or more I/O pins of the DRAM. Thereafter, the memory controller transmits subsequent commands to the DRAM according to a predetermined command length. The predetermined command length is based on the number of clock cycles needed to transfer each command to the DRAM. Stated another way, a time period between a first command start point and a second consecutive command start point is based on a command length that specifies a total number of portions of a command transferred over consecutive clock cycles. Adjacent command start points are separated from one another by the predetermined command length. In some examples, the memory controller transmits commands to the DRAM over five I/O pins, labeled CA[4:0]. The memory controller transmits each command over four clock cycles of the high-speed clock signal, where one fourth of the command is transmitted per clock cycle. As a result, the complete command includes up to 24 bits. In this manner, the DRAM avoids the need for a second lower speed clock signal for transferring commands to the DRAM. FIG.2is a block diagram of a clocking architecture200for a memory device included in system memory104and/or parallel processing memory134of the computer system100ofFIG.1, according to various embodiments. As shown, the clocking architecture200for the memory device includes a single clock signal WCK202that synchronizes various commands transferred to the memory device. In particular, the WCK202clock signal is received from the memory controller by the memory device via a WCK receiver220and then transmitted to various synchronizing registers to capture commands and data being transferred to and from the memory device. In that regard, synchronizing register240captures the data presented on command (CA) pins204via receiver222at clock edges of the WCK202clock signal. After synchronization by the synchronizing register240, the synchronized CA bits are stored in a command DRAM core260. Similarly, the single clock signal WCK202synchronizes various data transferred to the memory device. In that regard, synchronizing register242captures main data and extended data (DQ/DQX) bits206via receiver224at clock edges of the WCK202clock signal. After synchronization by the synchronizing register242, the synchronized DQ/DQX bits206are stored in a data DRAM core262.
Likewise, synchronizing register246captures error detection and correction data (EDC) bits208via receiver228at clock edges of the WCK202clock signal. After synchronization by the synchronizing register246, the synchronized EDC bits208are stored in the data DRAM core262. The single clock signal WCK202of the clocking architecture200for the memory device also synchronizes various data transferred from the memory device to other devices. In that regard, synchronizing register244captures main data and extended data (DQ/DQX) read from the data DRAM core262at clock edges of the WCK202clock signal. After synchronization by the synchronizing register244, the synchronized DQ/DQX bits206are transmitted via transmitter226to the other device. Likewise, synchronizing register248captures error detection and correction data (EDC) bits208read from the data DRAM core262at clock edges of the WCK202clock signal. After synchronization by the synchronizing register248, the synchronized EDC bits208are transmitted via transmitter230to the other device. During read operations of DQ/DQX bits206and/or EDC bits208, the memory device may transmit a read clock (RCK) signal210that is synchronous with the DQ/DQX bits206and/or EDC bits208transmitted by the memory device. In such cases, synchronizing register250synchronizes a read clock (RCK) generated by a read clock (RCK) generator264to be synchronous with WCK202. Transmitter232transmits the synchronized RCK signal210to the memory controller. As a result, the RCK signal210is synchronous with the DQ/DQX bits206synchronized by synchronizing register244and/or with the EDC bits208synchronized by synchronizing register248. FIG.3is a more detailed block diagram of the command address clocking architecture300for the memory device included in system memory104and/or parallel processing memory134of the computer system100ofFIG.1, according to various embodiments. As shown, command address clocking architecture300includes unsynchronized state detection logic306. Unsynchronized state detection logic306detects, based on various conditions, whether the command pin (CA) interface is synchronized or unsynchronized. In some examples, unsynchronized state detection logic306includes asynchronous logic circuits that do not receive a clock signal. Additionally or alternatively, unsynchronized state detection logic306includes synchronous logic circuits that receive a clock signal, such as a version of the WCK202clock signal. Unsynchronized state detection logic306detects when the memory device attempts to exit from a low power, reset, or CA training state. In response, unsynchronized state detection logic306enables command start point detection logic308. Upon receipt of a synchronization command or a command start point command, the memory device synchronizes the synchronized command decode314and/or the clock logic312based on the phase of WCK202that received the synchronization command. This condition completes the synchronization procedure of the CA interface, at which point the memory device is ready to accept regular synchronous commands from the memory controller. In some examples, unsynchronized state detection logic306detects that the CA interface is unsynchronized. Unsynchronized state detection logic306detects this state when the memory device is initially powered on, such as by a full power down and power up of VPP, VDD, VDDQ, and/or the like.
In some examples, unsynchronized state detection logic306detects an assertion followed by a deassertion of the reset (RST) input signal302. When unsynchronized state detection logic306detects these conditions, unsynchronized state detection logic306determines that the CA interface is unsynchronized. In addition, the memory controller initiates a CA training procedure in order to train the unsynchronized CA interface, as described herein. In general, unsynchronized state detection logic306does not determine when CA training procedures are needed. Instead, the memory controller determines when CA training procedures are needed. After the CA training procedure completes, unsynchronized state detection logic306transmits a signal to command start point detection logic308to indicate that the CA interface is now synchronized. In some examples, unsynchronized state detection logic306detects that the memory device is recovering from a low-power state, such as a power down state, a self-refresh state, and/or the like, without undergoing a reset302or a full power down and power up of VPP, VDD, and/or VDDQ. In general, when the memory device is in a low-power state, the memory device powers down one or more receivers that receive external inputs and enters an asynchronous state. In such cases, the CA interface may lose synchronization with the memory controller. CA training procedures are optional when the memory device exits from a low-power state, a power down state, a self-refresh state, and/or the like. The memory controller may reestablish synchronization via an asynchronous procedure without assertion of a reset302or a full power down and power up of VPP, VDD, and/or VDDQ. With this asynchronous procedure, the memory device may remove power from receivers and transmitters of all I/O pins, including WCK202, except for a receiver for one or more I/O pins of the memory device involved in the asynchronous procedure. When recovering from the power down state or self-refresh state, the memory controller applies, and unsynchronized state detection logic306searches for, a particular value on the one or more I/O pins of the memory device with an active receiver. For example, the memory device may keep the receiver for one of the CA204command I/O pins active during power down or self-refresh states. When recovering from the power down or self-refresh state, the memory controller may apply, and unsynchronized state detection logic306may detect, a low value on the CA204command I/O pin over four successive clock cycles of WCK202. In response, the memory device begins a synchronization phase and waits to receive a synchronization command from the memory controller to establish a new first command start point. The synchronization command may be in the form of a synchronization signal applied to one or more I/O pins of the memory device. Advantageously, this asynchronous procedure allows the memory controller to reestablish synchronization with the CA interface without incurring the latency and penalty of performing another CA training procedure and/or other signal training procedures. Instead, the memory device resumes synchronous operation with the memory controller quickly when recovering from a low-power state, such as a power down state, a self-refresh state, and/or the like. After the asynchronous procedure completes, unsynchronized state detection logic306transmits a signal to command start point detection logic308to indicate that the CA interface is now synchronized. 
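The low-power exit detection just described lends itself to a short illustration. The following C sketch is hypothetical and assumes two helpers that are not part of the disclosure: sample_ca0 (the one receiver left powered, sampled once per WCK202cycle) and enter_sync_phase (which waits for the synchronization command that establishes the new command start point).

    extern int  sample_ca0(void);        /* hypothetical: CA pin sample, one per WCK cycle */
    extern void enter_sync_phase(void);  /* hypothetical: await the synchronization command */

    /* Detect the exit pattern described above: a low value on the monitored
     * CA command I/O pin over four successive clock cycles begins the
     * synchronization phase. */
    void poll_low_power_exit(void)
    {
        int consecutive_low = 0;
        while (consecutive_low < 4) {
            if (sample_ca0() == 0)
                consecutive_low++;       /* another low sample in the run */
            else
                consecutive_low = 0;     /* a high sample restarts the count */
        }
        enter_sync_phase();              /* a new command start point follows */
    }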
Command start point detection logic308receives a notification from unsynchronized state detection logic306when the CA interface is unsynchronized. Command start point detection logic308receives the notification when the memory device exits from a self-refresh state, a power down state, a CA training operation, a reset, and/or the like. In response, command start point detection logic308begins detecting specific command start point commands received via CA204command I/O pins. After command start point detection logic308receives a command start point, and the command start point is aligned with the memory controller, command start point detection logic308determines that the CA interface is synchronized. Command start point detection logic308transmits signals to command start point generation logic310to begin the process of generating command start points, as described herein. Command start point generation logic310generates signals, referred to as command start points, that indicate the start of each command received via CA204command I/O pins. Command start point generation logic310enables capture of synchronous multi-cycle commands. Command start point generation logic310generates command start points via various techniques. In some examples, command start point generation logic310includes counter-based logic that counts a number ‘n’ of phases or cycles of WCK202, where n is the number of partial command words in each full command word. Command start point generation logic310generates a command start point every n cycles. Additionally or alternatively, command start point generation logic310may include other counter-based logic, clock divider circuitry, clock detection logic, and/or the like. In some examples, where each command includes four partial command words (n=4), command start point generation logic310generates a signal when the first partial command word is present on CA204command I/O pins. Command start point generation logic310does not generate a signal when the second, third, and fourth partial command words are present on CA204command I/O pins. Command start point generation logic310again generates a signal when the first partial command word of the subsequent command is present on CA204command I/O pins. Command start point generation logic310transmits the generated command start points to clock logic312and synchronized command decode logic314. Clock logic312receives the WCK clock signal202via receiver220and also receives command start points from command start point generation logic310. In some examples, clock logic312generates synchronized and divided phases of WCK202to transmit to synchronizing register240, so that synchronizing register240accurately captures the partial command words received via CA204command I/O pins. In various examples, clock logic312may or may not employ the command start point indication received from command start point generation logic310. In some examples, the memory device captures the state of the CA204command I/O pins on certain rising and/or falling edges of WCK202. In such examples, clock logic312does not need to use the command start points to determine when to sample the CA204command I/O pins. Instead, only the command deserialization logic and/or synchronized command decode logic314determine the command start points. The command start points may be determined via a counter that is initially synchronized using the command start point. Once synchronized, the counter is free running and remains in synchronization with the memory controller.
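The counter-based technique just described can be summarized in a few lines of C. This is a hedged sketch of one possible form, with hypothetical names, assuming n=4 partial command words per full command word as in the example above:

    #define N_PHASES 4u                  /* partial command words per command word (assumed) */

    static unsigned phase;               /* free-running phase counter, 0..3 */

    /* Initial alignment: a detected command start point resets the counter. */
    void on_command_start_point(void) { phase = 0; }

    /* Called once per WCK cycle; returns 1 on the cycles that begin a new
     * command word (phase 0) and 0 on the cycles carrying phases 1, 2, 3. */
    int on_wck_cycle(void)
    {
        int is_start_point = (phase == 0);
        phase = (phase + 1) % N_PHASES;
        return is_start_point;
    }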
Additionally or alternatively, clock logic312receives a single command start point to set the phase of the divided clock signals. Clock logic312synchronizes an internal clock divider to the single command start point. From that point on, clock logic312generates divided clock signals that continue to remain in synchronization with the original command start point(s). Synchronized command decode logic314receives signals from command start point generation logic310to identify the start point of each command received via CA204command I/O pins. Synchronized command decode logic314is enabled after command start point detection is complete, indicating that the CA interface is synchronized. After the CA interface is synchronized, synchronized command decode logic314can decode synchronous commands received via CA204command I/O pins, including read commands, write commands, activate commands, and/or the like. Additionally or alternatively, after the CA interface is synchronized, synchronized command decode logic314can decode asynchronous commands received via CA204command I/O pins, including commands that do not have a command start point. Synchronized command decode logic314transmits decoded commands to command DRAM core260. FIG.4is a timing diagram400illustrating the initialization of the memory device included in system memory104and/or parallel processing memory134of the computer system ofFIG.1to receive commands, according to various embodiments. The memory device employs a single clock signal scheme that captures both command and data. The rate of the clock signal is determined by the transfer rate of the highest speed interface of the memory device. Typically, the data interface transfers data at a higher rate than the command interface. However, in some embodiments, the command interface may transfer data at a higher rate than the data interface. The rate of the clock signal is set at the transfer rate of the highest speed interface, such as the data interface. This clock signal is employed to transfer data to and from the memory device, typically at a rate of one data transfer per clock cycle. This clock signal is further employed to transfer commands, at a lower transfer rate, to the memory device. More specifically, commands are transferred to the memory device over multiple clock cycles of the high-speed clock signal, such as over four clock cycles. The high-speed clock signal is labeled WCK406and illustrates the timing of the WCK202I/O pin ofFIG.2. The command interface includes any number of I/O pins for transferring the command to the memory device, including the CA I/O pins204ofFIG.2. In some embodiments, the command interface includes five I/O pins, labeled CA[4:0], shown separately as CA[4:1]408and CA[0]410command I/O pins. In some embodiments, each command is transferred over four clock cycles of the WCK406. The references to 0, 1, 2, and 3 represent the four phases of a command word412. A full command word412is transferred to the memory device over four cycles of WCK406, over a consecutive series of clock cycles 0, 1, 2, and 3. Therefore, a complete command includes up to 4 clock cycles×6 bits per clock cycle=24 bits. Each full command word412represents a command to be performed by the memory device, such as a write operation, a read operation, an activate operation, and/or the like.
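To make the multi-cycle transfer concrete, the following C sketch shows a command word being serialized by the memory controller into four partial words and reassembled on the device side. It is an illustration only, not the disclosed implementation: drive_ca is a hypothetical helper, the most-significant-first phase ordering is an assumption not stated in the disclosure, and CA_BITS_PER_CYCLE follows the 24-bit example above (a five-pin CA[4:0] interface would carry five bits per cycle instead).

    #include <stdint.h>

    #define PHASES            4      /* clock cycles per full command word */
    #define CA_BITS_PER_CYCLE 6      /* per the 24-bit example above (assumed) */

    extern void drive_ca(uint8_t partial_word);   /* hypothetical: present one partial
                                                     word on the CA pins for one WCK cycle */

    /* Controller side: transmit one full command word as four partial words
     * on consecutive WCK cycles, starting at a command start point. */
    void send_command(uint32_t command_word)
    {
        for (int phase = 0; phase < PHASES; phase++) {
            int shift = (PHASES - 1 - phase) * CA_BITS_PER_CYCLE;
            drive_ca((command_word >> shift) & ((1u << CA_BITS_PER_CYCLE) - 1));
        }
    }

    /* Device side: reassemble the partial words captured by the synchronizing
     * register on the same four consecutive WCK cycles. */
    uint32_t assemble_command(const uint8_t partial_words[PHASES])
    {
        uint32_t word = 0;
        for (int phase = 0; phase < PHASES; phase++)
            word = (word << CA_BITS_PER_CYCLE) | partial_words[phase];
        return word;
    }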
In order to synchronize transfer of commands to the memory device, the memory controller, such as system memory controller130or parallel processing subsystem (PPS) memory controller132, transmits a synchronization (sync) command418to the memory device prior to transferring commands to the memory device. As shown, the synchronization command418is in the form of a synchronization pulse signal received on the CA[0]410command I/O pin of the memory device. Additionally or alternatively, the synchronization command418may be in the form of a synchronization pulse signal received on any other technically feasible input/output pin of the memory device, such as one of the CA[4:1]408command I/O pins. Additionally or alternatively, the synchronization command418may be in the form of a synchronization signal received on any technically feasible combination of input/output pins of the memory device, such as two or more of the CA[4:1]408and/or CA[0]410command I/O pins. Additionally or alternatively, the synchronization command418may be any signal and/or other indication that the memory device employs to identify the phase of WCK406that sets the command start point414. As shown, the memory device receives the first command start point414, indicating the phase 0 of the first command, from the memory controller at four phases of WCK406after receiving the synchronization command418. Additionally or alternatively, the memory device may receive the first command start point414at any technically feasible number of phases WCK406after receiving the synchronization command418, such as a multiple of four phases, a non-multiple of four phases, and/or fewer than four phases. The synchronization command418indicates the valid command start points414for transferring commands, that is, which clock edge corresponds to the first portion of the multi-cycle command. At certain times, the memory device loses synchronization and does not know which clock cycles are valid command start points414. For example, the memory device loses synchronization when powered up, when recovering from a reset, when recovering from a low-power state, such as a power down state or a self-refresh state, and/or the like. In such cases, the memory controller transmits a synchronization command418to the memory device that enforces a new command start point414and synchronizes the memory device with the memory controller. Once synchronized, the memory device may begin accepting commands from the memory controller. More specifically, the memory device may power up when VPP, VDD, and VDDQ402are applied to the memory device, where VPP is the pump voltage, VDD is the main power supply voltage, and VDDQ is the I/O voltage. The memory controller applies a low voltage to the reset404input of the memory device, placing the memory device in a reset state. Subsequently, the memory controller applies a high voltage to the reset404input of the memory device in order to bring the memory device out of the reset state. Prior to applying the high voltage to the reset404input, the memory controller may apply a fixed bit pattern to the CA[4:1]408and CA[0]410command I/O pins of the memory device. This fixed bit pattern is referred to herein as “straps.” The memory device samples the state of the straps on the rising edge of reset404to determine the value of the fixed bit pattern. 
Based on the fixed bit pattern, the memory device may undergo certain startup procedures, such as an optional command pin (CA) training416procedure to command the memory device to determine the skew between WCK406and the CA[4:1]408and CA[0]410command I/O pins. The memory controller completes the startup procedures, such as the optional CA training416procedure, via an asynchronous communication sequence with the memory device. The optional CA training416procedure determines an optimal skew of the CA[4:1]408and CA[0]410command I/O pins with respect to WCK406to ensure that setup and hold time requirements are met for the CA[4:1]408and CA[0]410command I/O pins. The optional CA training416procedure further detects and corrects any multiple cycle skewing between any two or more command I/O pins to ensure that all command I/O pins are capturing command bits for the same command word412on the same rising or falling edge of WCK406. After completion of the optional CA training416procedure, the memory device is in a state where commands may be received synchronously with respect to rising edges and/or falling edges of WCK406. Alternatively, if the memory controller and memory device did not perform the optional CA training416procedure, then the memory device is ready to receive commands synchronously any time after the rising edge of reset404. In either case, the memory controller transmits a synchronization command418to the memory device prior to transferring commands to the memory device on one of the command I/O pins, shown inFIG.4as the CA[0]410command I/O pin. When the memory device receives the synchronization command418, the memory device counts a number of rising edges or falling edges of WCK406from either the leading edge or the trailing edge of the synchronization command418. In some examples, the memory device counts four rising edges of WCK406after the trailing edge of the synchronization command418to determine the first command start point414. The memory controller, in turn, applies phase 0 of the first command word412to the CA[4:1]408and CA[0]410command I/O pins. The memory controller applies phase 0 of the first command word412so as to be valid at the fourth rising edge of WCK406after the trailing edge of the synchronization command418. The memory controller applies phases 1, 2, and 3 of the first command word412so as to be valid at the consecutive rising edges of WCK406. The memory device samples the four phases of the first command word412on the CA[4:1]408and CA[0]410on these same four rising edges of WCK406. The first rising edge of WCK406after phase 3 of the first command word412represents a second command start point414. The memory controller applies, and the memory device transfers, the four phases 0, 1, 2, 3 of the second command word412on four successive rising edges of WCK406starting with the second command start point414. The first rising edge of WCK406after phase 3 of the second command word412represents a third command start point414, and so on. In some embodiments, the memory device may recover from a power down state, a self-refresh state, and/or the like without undergoing a reset404or a full power down and power up of VPP, VDD, VDDQ402. In such cases, the memory device may lose synchronization with the memory controller. In such cases, the memory controller may reestablish synchronization via an asynchronous procedure without assertion of a reset404or a full power down and power up of VPP, VDD, VDDQ402.
With this asynchronous procedure, the memory device may remove power from receivers and transmitters of all I/O pins, including WCK406, except for a receiver for one or more I/O pins of the memory device involved in the asynchronous procedure. When recovering from the power down state or self-refresh state, the memory controller applies, and the memory device searches for, a particular value on the one or more I/O pins of the memory device with an active receiver. For example, the memory device may keep the receiver for the CA[0]410command I/O pin active during power down or self-refresh states. When recovering from the power down or self-refresh state, the memory controller may apply, and the memory device may detect, a low value on the CA[0]410command I/O pin over four successive clock cycles of WCK406. In response, the memory device begins a synchronization phase and waits to receive a synchronization command418from the memory controller to establish a new first command start point414. The synchronization command418may be in the form of a synchronization signal applied to one or more I/O pins of the memory device. Advantageously, this asynchronous procedure allows the memory controller to reestablish synchronization with the memory device without incurring the latency and penalty of performing another optional CA training416procedure and/or other signal training procedures. Instead, the memory device resumes synchronous operation with the memory controller quickly when recovering from a low-power state, such as a power down state, a self-refresh state, and/or the like. FIG.5is a timing diagram500illustrating the transfer of successive commands to a memory device included in system memory104and/or parallel processing memory134of the computer system ofFIG.1, according to various embodiments. As shown, the high-speed clock signal is a single clock signal for commands and data, labeled WCK406, and illustrates the timing of the WCK202I/O pin ofFIG.2. The command interface includes any number of I/O pins for transferring the command to the memory device, including the CA I/O pins204ofFIG.2. In some embodiments, the command interface includes five I/O pins, labeled CA[4:0]502, and are the same command I/O pins shown separately as the CA[4:1]408and CA[0]410command I/O pins ofFIG.4. In some embodiments, the command bits CA[4:0]502may be encoded via a non-return to zero (NRZ) data signaling mode. Five command start points414are shown inFIG.5, where each command start point414is coincident with a rising edge of WCK406coincident with phase 0 of a four-phase command. Three successive phases 1, 2, 3 of a command are coincident with three successive rising edges of WCK406. The rising clock edge of WCK406following phase 3 of a command is followed by a command start point414for phase 0 of the following command. Data transferred to and from the memory device may include main data bits (DQ), extended data bits (DQX), and error detection bits (EDC). The error detection bits are used to detect and/or correct bit errors in the main data bits and/or extended data bits via any technically feasible error detection and correction code, such as a cyclic redundancy check (CRC) code. The memory device may employ multiple data signaling modes based on different data transfer modes. For example, DQ and EDC data bits may employ a redundant data strobe (RDQS) data transfer mode, as shown in the DQ/EDC504timing diagram. In such cases, the DQ and EDC data bits may be encoded via an NRZ data signaling mode. 
In RDQS data transfer mode, data is transmitted to and from the memory device as one-bit symbols captured at twice the rate of command phases, on every rising edge and every falling edge of WCK406. Therefore, each DQ and EDC symbol includes one bit of data. Additionally or alternatively, the data transmitted to and from the memory device may employ a data transfer mode that transfers symbols that include two or more bits of data. In one example, the DQ, DQX, and EDC data bits may be encoded via a high-speed multilevel mode with symbols that carry more than one bit of data. One such data transfer mode is the 4-level pulse amplitude modulation (PAM4) data transfer mode that employs two-bit symbols, as shown in the DQ/DQX/EDC506timing diagram. In PAM4 mode, data is transmitted to and from the memory device as two-bit symbols captured at twice the rate of command phases, on every rising edge and every falling edge of WCK406. The PAM4 data transfer mode allows each data I/O pin to carry two bits of data that are captured on every rising edge and every falling edge of WCK406. Therefore, in PAM4 data transfer mode, the data transfer rate is four times the command transfer rate. Whether the memory device operates in RDQS mode, PAM4 mode, or any other data transfer mode, the same clock signal WCK406captures both the command bits and the data bits. It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. Among other things, a single command word may include multiple groups of four phases. In some examples, a single command word may include a multiple of four phases, such as eight phases, twelve phases, and/or the like. In such examples, each command is transmitted over multiple four-phase commands via the CA[4:0] I/O pins. For a single command that includes eight phases, the command is transmitted as two successive four-phase commands. As the memory controller transmits the first four-phase command to the memory device, the memory device recognizes that the command is an eight-phase command. The memory device receives the first four phases of the command starting with a certain command start point414and receives the second four phases of the command starting with the next consecutive command start point414. Similarly, for a single command that includes twelve phases, the command is transmitted as three successive four-phase commands. As the memory controller transmits the first four-phase command to the memory device, the memory device recognizes that the command is a twelve-phase command. The memory device receives the first four phases of the command starting with a certain command start point414and receives the second four phases and the third four phases of the command starting with the next two consecutive command start points414, and so on. In another example, the commands transferred by the memory controller to the memory device are described as up to 24 command bits transmitted as four phases of five bits. However, the number of phases may be more than four phases or fewer than four phases, within the scope of the disclosed embodiments. Further, the number of command bits may be more than five bits or fewer than five bits, within the scope of the disclosed embodiments. In yet another example, the signals disclosed herein are described in terms of rising and/or falling edges, high or low levels, and/or the like.
However, rising edges and falling edges may be interchanged, high levels and low levels may be interchanged, and any other technically feasible changes may be made with respect to signal edges and levels within the scope of the disclosed embodiments. FIG.6is a flow diagram of method steps for transferring commands to a memory device included in system memory104and/or parallel processing memory134of the computer system ofFIG.1, according to various embodiments. Although the method steps are described in conjunction with the systems ofFIGS.1-4, persons of ordinary skill in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present disclosure. As shown, a method600begins at step602, where a memory device receives a synchronization command418on an input of the memory device. In order to synchronize transfer of commands to the memory device, a memory controller, such as system memory controller130or parallel processing subsystem (PPS) memory controller132, transmits a synchronization command418to the memory device prior to transferring commands to the memory device. The synchronization command may be in the form of a synchronization signal applied to one or more I/O pins of the DRAM. The synchronization command418indicates the valid command start points414for transferring commands, that is, which clock edge corresponds to the first portion of the multi-cycle command. At certain times, the memory device loses synchronization and does not know which clock cycles are valid command start points414. For example, the memory device loses synchronization when powered up, when recovering from a reset, when recovering from a low-power state, such as a power down state or a self-refresh state, and/or the like. In such cases, the memory controller transmits a synchronization command418to the memory device that enforces a new command start point414and synchronizes the memory device with the memory controller. Once synchronized, the memory device may begin accepting commands from the memory controller. More specifically, the memory device may power up when VPP, VDD, and VDDQ402are applied to the memory device, where VPP is the pump voltage, VDD is the main power supply voltage, and VDDQ is the I/O voltage. The memory controller applies a low voltage to the reset404input of the memory device, placing the memory device in a reset state. Subsequently, the memory controller applies a high voltage to the reset404input of the memory device in order to bring the memory device out of the reset state. At step604, the memory device synchronizes to a clock edge based on the synchronization command418. When the memory device receives the synchronization command418, the memory device counts a number of rising edges or falling edges of a high-speed clock WCK406from either the leading edge or the trailing edge of the synchronization command418. The high-speed clock WCK406is the same clock used by the memory device to receive and transmit data. In some examples, the memory device counts four rising edges of WCK406after the trailing edge of the synchronization command418to determine the first command start point414. At step606, the memory device receives a first portion, phase 0, of the command on the WCK406clock edge determined at step604. The memory controller, in turn, applies phase 0 of the first command word412to the CA[4:1]408and CA[0]410command I/O pins. 
The memory controller applies phase 0 of the first command word412so as to be valid at the fourth rising edge of WCK406after the trailing edge of the synchronization command418. At step608, the memory device receives additional portions, phase 1, 2, and 3, of the command on successive WCK406clock edges after the clock edge determined at step604. The memory controller applies phases 1, 2, and 3 of the first command word412so as to be valid at the consecutive rising edges of WCK406. The memory device samples the four phases of the first command word412on the CA[4:1]408and CA[0]410on these same four rising edges of WCK406. At step610, the memory device receives portions of additional commands on successive WCK406clock edges after the clock edge of phase 3 of the first command. The first rising edge of WCK406after phase 3 of the first command word412represents a second command start point414. The memory controller applies, and the memory device transfers, the four phases 0, 1, 2, 3 of the second command word412on four successive rising edges of WCK406starting with the second command start point414. The first rising edge of WCK406after phase 3 of the second command word412represents a third command start point414, and so on. The method600then terminates. Alternatively, the method600proceeds to step610to transfer additional commands to the memory device. Thus, by repeatedly transferring commands to the memory device in the described manner, commands and data may be transferred to and from the memory device via a single high-speed clock signal. If the memory device subsequently loses synchronization, such as when powered up, when recovering from a reset, when recovering from a low-power state, such as a power down state or a self-refresh state, and/or the like, then the method600proceeds to step602to begin synchronization again. In sum, various embodiments include an improved DRAM that uses a single clock to transfer both commands and data to and from the DRAM. The single command/data clock in the DRAM can be selected to operate at speeds similar to or higher than the high-speed clock of a conventional multiple clock signal high-speed DRAM. With the disclosed techniques, the bits of the commands are serialized by a memory controller and transmitted to the DRAM over a small number of connections to the DRAM command (CA) I/O pins. In some examples, the bits of the commands are transmitted over a single connection to a single DRAM CA I/O pin using the single data/command clock of the DRAM. To initialize the DRAM to receive one or more commands, the memory controller transmits a synchronization command to the DRAM. The synchronization command establishes the clock edges that correspond to the start of each command, referred to as command start points. The synchronization command may be in the form of a synchronization signal applied to one or more I/O pins of the DRAM. Thereafter, the memory controller transmits subsequent commands to the DRAM according to a predetermined command length. The predetermined command length is based on the number of clock cycles needed to transfer each command to the DRAM. Adjacent command start points are separated from one another by the predetermined command length. In some examples, the memory controller transmits commands to the DRAM over five I/O pins, labeled CA[4:0]. The memory controller transmits each command over four clock cycles of the high-speed clock signal, where one fourth of the command is transmitted per clock cycle. 
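A minimal sketch of the receive path of steps606through610, assuming five CA bits per phase and four phases per command as in the example above; the accumulator and its names are illustrative, not the device's actual logic.

    #include <stdint.h>

    #define CA_BITS 5 /* CA[4:0] pins sampled per phase */
    #define PHASES  4 /* phases 0..3 per four-phase command */

    struct cmd_assembler {
        uint32_t word;  /* command bits accumulated so far */
        int      phase; /* current phase, 0..PHASES-1 */
    };

    /* Sample one phase: 'ca' holds the five CA pins for this rising
     * edge. Returns 1 and stores the assembled command word in *out when
     * phase 3 completes; the very next edge is then a command start
     * point for the following command. */
    static int sample_phase(struct cmd_assembler *a, uint8_t ca, uint32_t *out)
    {
        a->word |= (uint32_t)(ca & 0x1F) << (a->phase * CA_BITS);
        if (++a->phase == PHASES) {
            *out = a->word;
            a->word = 0;
            a->phase = 0;
            return 1;
        }
        return 0;
    }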
As a result, the complete command includes up to 24-bits. In this manner, the DRAM avoids the need for a second lower speed clock signal for transferring commands to the DRAM. At least one technical advantage of the disclosed techniques relative to the prior art is that, with the disclosed techniques, commands and data are received by a memory device at different transfer rates via a single clock signal. As a result, the memory device does not need internal synchronizing and training circuitry to account for possible skew between multiple clock signals. An additional advantage of the disclosed techniques is that only one receiver and I/O pin are needed to receive the clock signal rather than two receivers and I/O pins. As a result, the complexity of the internal circuitry, the surface area, and power consumption of the DRAM die may be reduced relative to approaches involving multiple clock signals. Further, the I/O pin previously employed to receive the second clock signal is available for another function, such as an additional command bit, data bit, or control signal. These advantages represent one or more technological improvements over prior art approaches. Any and all combinations of any of the claim elements recited in any of the claims and/or any elements described in this application, in any fashion, fall within the contemplated scope of the present disclosure and protection. The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Aspects of the present embodiments may be embodied as a system, method, or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. | 54,518 |
11861230 | DETAILED DESCRIPTION Hereinafter, various embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings. In the following description, it is to be noted that only parts necessary for understanding the operation according to the present disclosure will be described, and the description of the other parts will be omitted so as not to obscure the subject matter of the present disclosure. FIG.1is a diagram schematically illustrating an example of a data processing system100including a memory system110in accordance with an embodiment of the present disclosure. Referring toFIG.1, the data processing system100may include a host102operatively coupled to a memory system110. The host102may include any of various portable electronic devices such as a mobile phone, MP3 player and laptop computer, or any of various non-portable electronic devices such as a desktop computer, a game machine, a television (TV), and a projector. The host102may include at least one operating system (OS), which may manage and control overall functions and operations of the host102, and provide one or more operations between the host102and a user using the data processing system100or the memory system110. The OS may support functions and operations corresponding to the use, purpose, and usage of a user. For example, the OS may be divided into a general OS and a mobile OS, depending on the mobility of the host102. The general OS may be divided into a personal OS and an enterprise OS, depending on the environment of a user. The memory system110may operate to store data for the host102in response to a request of the host102. Non-limiting examples of the memory system110may include a solid state drive (SSD), a multi-media card (MMC), a secure digital (SD) card, a universal serial bus (USB) device, a universal flash storage (UFS) device, a compact flash (CF) card, a smart media card (SMC), a personal computer memory card international association (PCMCIA) card and a memory stick. The MMC may include an embedded MMC (eMMC), reduced size MMC (RS-MMC) and micro-MMC, and the like. The SD card may include a mini-SD card and a micro-SD card. The memory system110may be embodied by various types of storage devices. Examples of such storage devices may include, but are not limited to, volatile memory devices such as a dynamic random access memory (DRAM) and a static RAM (SRAM), and nonvolatile memory devices such as a read only memory (ROM), a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a ferroelectric RAM (FRAM), a phase-change RAM (PRAM), a magneto-resistive RAM (MRAM), a resistive RAM (RRAM or ReRAM) and a flash memory. The flash memory may have a 3-dimensional (3D) stack structure. The memory system110may include a controller130and a memory device150. The memory device150may store data for the host102, and the controller130may control data storage into the memory device150. The controller130and the memory device150may be integrated into a single semiconductor device. For example, the controller130and the memory device150may be integrated as one semiconductor device to constitute a solid state drive (SSD). When the memory system110is used as an SSD, the operating speed of the host102connected to the memory system110can be improved. In addition, the controller130and the memory device150may be integrated as one semiconductor device to constitute a memory card. 
For example, the controller130and the memory device150may constitute a memory card such as a personal computer memory card international association (PCMCIA) card, a compact flash (CF) card, a smart media (SM) card, a memory stick, a multimedia card (MMC) including reduced size MMC (RS-MMC) and micro-MMC, a secure digital (SD) card including mini-SD card, micro-SD card and SDHC card, or a universal flash storage (UFS) device. Non-limiting application examples of the memory system110may include a computer, an Ultra Mobile PC (UMPC), a workstation, a net-book, a Personal Digital Assistant (PDA), a portable computer, a web tablet, a tablet computer, a wireless phone, a mobile phone, a smart phone, an e-book, a Portable Multimedia Player (PMP), a portable game machine, a navigation system, a black box, a digital camera, a Digital Multimedia Broadcasting (DMB) player, a 3-dimensional television, a smart television, a digital audio recorder, a digital audio player, a digital picture recorder, a digital picture player, a digital video recorder, a digital video player, a storage device constituting a data center, a device capable of transmitting/receiving information in a wireless environment, one of various electronic devices constituting a home network, one of various electronic devices constituting a computer network, one of various electronic devices constituting a telematics network, a Radio Frequency Identification (RFID) device, or one of various components constituting a computing system. The memory device150may be a nonvolatile memory device and may retain data stored therein even though power is not supplied. The memory device150may store data provided from the host102through a program operation, and provide data stored therein to the host102through a read operation. The memory device150may include a plurality of memory blocks, each of which may include a plurality of pages, and each of the pages may include a plurality of memory cells coupled to a word line. In an embodiment, the memory device150may be a flash memory. The flash memory may have a 3-dimensional (3D) stack structure. The memory device150may include a plurality of memory blocks including a single level cell (SLC) memory block for storing 1-bit data and a multi-level cell (MLC) memory block for storing multi-bit data. The SLC memory block may include a plurality of pages implemented as memory cells each storing one bit data therein. The SLC memory block may have high durability and fast data operation performance. On the other hand, the MLC memory block may include a plurality of pages implemented as memory cells each storing multi-bit data, such as two or more bits, therein. The MLC memory block may have a larger data storage space than the SLC memory block. That is, the MLC memory block may be highly integrated. The controller130may control the memory device150in response to a request of the host102. For example, the controller130may provide the host102with data read from the memory device150, and store data provided from the host102in the memory device150. For this operation, the controller130may control read, program and erase operations of the memory device150. The controller130may include a plurality of operation modules for controlling the memory device150. The plurality of operation modules may include a plurality of processing cores and a memory. The controller130may include a clock generator140for providing each of the plurality of operation modules with one or more clock signals. 
The plurality of operation modules may operate based on the clock signals provided from the clock generator140. The memory system110may be required to operate with a performance equal to or greater than a predetermined required performance, according to a host request pattern. The host request pattern may indicate a command type and a command pattern of each of the commands provided from the host102to the memory system110. For example, the command type may include a read type and a write type, and depending on implementation, the write type may include an SLC write type and an MLC write type. The command pattern may include a sequential pattern and a random pattern. The host request pattern may include an SLC sequential write pattern, an MLC sequential write pattern, an SLC random write pattern, an MLC random write pattern, a sequential read pattern, a random read pattern and a mixed pattern. The mixed pattern may indicate a workload pattern in which commands having different command types or command patterns are mixed and received from the host102. The required performance may indicate a performance required to be provided by the memory system110to a user. The required performance may be predetermined in a specification of the memory system110or the like. The required performance may vary depending on the host request pattern. For example, the specification of the memory system110may require the memory system110to satisfy different required performance such that a required performance for a sequential read pattern is different from a required performance for a random read pattern. Maximum clock frequency values of clock signals provided to each of the operation modules of the controller130may be predetermined. The maximum clock frequency values may be determined as frequency values that allow the memory system110to satisfy the required performance in all the host request patterns. When all of the operation modules operate according to a clock signal having the maximum frequency value, the required performance of the memory system110may be satisfied regardless of a current host request pattern. However, in this situation, power consumption and heat generation of the controller130may be maximized. Depending on a workload for each operation module, even though at least some of the operation modules operate according to a clock signal having a frequency value lower than the maximum frequency value, the required performance of the memory system110may be satisfied. The workload may include a foreground workload required to process commands received from the host102and a background workload required to manage the memory system110. The foreground workload for each operation module may vary according to the host request pattern that changes in real time. The background workload for each operation module may also vary in real time. Accordingly, an optimal clock frequency, which is a clock frequency capable of minimizing power consumption while satisfying the required performance of the memory system110, may vary in real time for each operation module. The controller130according to an embodiment may adaptively determine the optimal clock frequency for each operation module, on the basis of changes in the host request pattern and background workload. According to an embodiment, the controller130may determine a target performance of the memory system110, and determine an optimal clock frequency set on the basis of the target performance. 
In addition, the controller130may change the target performance and the optimal clock frequency set when detecting a change in the host request pattern or a change in the background workload. The target performance may indicate the performance of the memory system110when operation modules of the controller130operate according to the clock signal having the maximum frequency value. A clock frequency set may refer to a set of frequencies of the clock signals provided to the plurality of operation modules. The optimal clock frequency set may refer to a set of clock frequencies in which a current performance (or actual performance) of the memory system110may satisfy the target performance and power consumption may be minimized. The term “current performance” herein and below represents actual performance of the memory system110(i.e., the controller130and the memory device150). The controller130may detect a change in the background workload by detecting a change in the current performance. For example, when the current performance decreases even though the clock frequency set corresponding to the clock signals provided to the operation modules and the host request pattern do not change, the controller130may detect that the background workload increases. The controller130may initialize the clock frequency set when detecting the changes in the host request pattern and current performance. For example, the controller130may initialize the clock frequency set by determining the clock frequency set as the maximum clock frequencies determined for the operation modules. The controller130may determine the current performance given after the clock frequency set is initialized, and determine the current performance as the target performance. The controller130may repeatedly perform an operation of changing at least one clock frequency included in the clock frequency set and an operation of monitoring the current performance given after the clock frequency is changed. For example, the controller130may repeatedly perform the operation of changing the clock frequency and the operation of monitoring the current performance until the monitored current performance becomes lower than the target performance. The controller130may determine, as the optimal clock frequency set, a clock frequency set of a repeating operation immediately before a last repeating operation in which the current performance becomes lower than the target performance. The controller130may control the clock generator140to provide the plurality of operation modules with the clock signals according to the determined optimal clock frequency set. According to an embodiment, the controller130may minimize power consumption and heat generation of the memory system110while satisfying the required performance of the memory system110. FIG.2is a diagram specifically illustrating the controller130described above with reference toFIG.1in accordance with an embodiment of the present disclosure. Referring toFIG.2, the controller130may include a host interface (I/F)132, a processor134, a clock generator140, a memory interface142and a memory144, all operatively coupled via an internal bus. The clock generator140may correspond to that described with reference toFIG.1. The host interface132, the processor134, the memory interface142and the memory144may correspond to the plurality of operation modules described with reference toFIG.1. 
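The background-workload inference described above (performance drops while both the clock frequency set and the host request pattern are unchanged) can be expressed as a simple predicate. This sketch is illustrative; the sample fields are assumptions, not the controller130's actual state.

    #include <stdbool.h>

    struct sample {
        unsigned clock_set_id;    /* identifies the active clock frequency set */
        unsigned request_pattern; /* e.g., SLC sequential write, random read */
        double   performance;     /* measured throughput */
    };

    /* If neither the clock frequency set nor the host request pattern
     * changed but the measured performance dropped, infer that the
     * background workload increased. */
    static bool background_workload_increased(const struct sample *prev,
                                              const struct sample *cur)
    {
        return cur->clock_set_id == prev->clock_set_id &&
               cur->request_pattern == prev->request_pattern &&
               cur->performance < prev->performance;
    }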
The host interface132, which is a region for exchanging data with the host102, may drive firmware referred to as a host interface layer (HIL). Depending on implementation, the host interface132may be a command queue interface that supports a protocol such as Non-Volatile Memory express (NVMe). The command queue interface may support interfacing between the host102and the memory system110on the basis of a queue pair including a submission queue for inputting a requested command and a completion queue for recording a processing result of a corresponding command. The queue pair may be included in the host interface132. The host102may determine the number of queue pairs and a queue depth for each queue pair. The queue depth may refer to the number of commands that can be simultaneously queued in each queue. The required performance of the memory system110may vary according to information on the number of queue pairs and information on the queue depth for each queue pair. For example, the host102may increase the number of commands that can be simultaneously queued in the memory system110by increasing the number of queue pairs and the queue depth for each queue pair. As the number of commands that can be simultaneously queued in the memory system110increases, the memory system110may process commands with higher performance, and a higher required performance may be specified. According to the present embodiment, when the controller130detects a change in a command queue state, a change in the current performance of the memory system110or a change in the host request pattern, the controller130may initialize the clock frequency set, and then change the optimal clock frequency set. The command queue state may indicate the number of queue pairs and the queue depth for each queue pair. The controller130may flexibly respond to the required performance changed by the host102by changing the optimal clock frequency set on the basis of the change in the command queue state. The host interface132may include a direct memory access (DMA) that controls data transmission/reception between the host102and the memory144. Depending on implementation, the host interface132may monitor the current performance of the memory system110by monitoring the amount of data transmitted and received through the DMA. The memory interface142may serve as a memory/storage interface for interfacing the controller130and the memory device150such that the controller130controls the memory device150in response to a request from the host102. When the memory device150is a flash memory or specifically a NAND flash memory, the memory interface142may generate a control signal for the memory device150and process data to be provided to the memory device150under the control of the processor134. The memory interface142may work as an interface (e.g., a NAND flash interface) for processing a command and data between the controller130and the memory device150. Specifically, the memory interface142may support data transfer between the controller130and the memory device150. The memory interface142may drive firmware referred to as a flash interface layer (FIL). The memory interface142may include an error correction code (ECC) component. The ECC component may detect and correct an error contained in the data read from the memory device150. In other words, the ECC component may perform an error correction decoding process on the data read from the memory device150by using an ECC value generated during an ECC encoding process.
According to a result of the error correction decoding process, the ECC component may output a signal, for example, an error correction success/fail signal. When the number of error bits is more than a threshold value of correctable error bits, the ECC component may not correct the error bits, and may output an error correction fail signal. The ECC component may perform error correction through a coded modulation such as Low Density Parity Check (LDPC) code, Bose-Chaudhuri-Hocquenghem (BCH) code, turbo code, Reed-Solomon code, convolution code, Recursive Systematic Code (RSC), Trellis-Coded Modulation (TCM) and Block coded modulation (BCM). However, the ECC component is not limited to any specific structure. The ECC component may include all circuits, modules, systems or devices for error correction. The memory144may store clock frequency information referenced to determine the clock frequency set of the controller130. The clock frequency information may include maximum clock frequencies determined for operation modules. The processor134may control overall operation of the memory system110. The processor134may drive firmware to control the overall operation of the memory system110. The firmware may be referred to as a flash translation layer (FTL). In addition, the processor134may be implemented as a microprocessor or a central processing unit (CPU) including one or more processing cores. The processor134may drive a flash translation layer (FTL) and perform a data input and output (IO) operation corresponding to a request received from the host102. For example, the processor134may control a write operation of the memory device150in response to a write request from the host102, and control a read operation of the memory device150in response to a read request from the host102. The controller130may perform various internal tasks for managing the memory device150through the processor134. For example, the internal tasks may include a background task such as a garbage collection (GC) operation, a wear leveling (WL) operation, a map flush operation and a bad block management operation. The processor134may determine the host request pattern on the basis of the command type and command pattern of one or more commands provided from the host102. The processor134may determine the clock frequency set on the basis of current performance information from the host interface132and the host request pattern. The clock generator140may provide the host interface132, the processor134, the memory interface142and the memory144with one or more clock signals on the basis of the clock frequency set determined by the processor134. A method of determining the clock frequency set by the processor134is described in detail with reference toFIG.3. FIG.3is a diagram illustrating the controller130in accordance with an embodiment of the present disclosure. The controller130ofFIG.3may correspond to the controller130described with reference toFIG.2.FIG.3illustrates some of the components that may be included in the controller130. Referring toFIG.3, the host interface132may include a performance monitor222. The performance monitor222may monitor the current performance of the memory system110by monitoring the amount of data exchanged between the controller130and the host102. In a first example, the performance monitor222may monitor the current performance of the memory system110by counting the number of data blocks transmitted/received through the DMA per unit time.
In a second example, the host interface132may receive command wrappers, each including information on a command from the host102. The host interface132may monitor the current performance of the memory system110on the basis of information on sizes of data included in the command wrappers received per unit time. The performance monitor222may be implemented as hardware on a controller chip, but the present disclosure is not limited thereto. The processor134may include a request (Req.) pattern monitor242, a queue monitor244and a clock determiner246. The request pattern monitor242may determine the host request pattern in real time by monitoring the command type and command pattern of one or more commands received from the host102. Known methods may be used for the request pattern monitor242to determine the host request pattern. The queue monitor244may monitor command queue state information including the information on the number of queue pairs and the information on the queue depth for each queue pair. The command queue state information may be obtained from the host interface132. The clock determiner246may change the clock frequency set on the basis of the current performance information from the performance monitor222, the host request pattern information from the request pattern monitor242and the command queue state information from the queue monitor244. The determining of the clock frequency set may refer to determining a clock frequency for each operation module of the controller130. The clock determiner246may determine the target performance by monitoring the current performance given when clock frequency values included in the clock frequency set are initialized to maximum values. The maximum clock frequency values may be determined as frequency values that allow the memory system110to satisfy the required performance in all of the host request patterns. Accordingly, the target performance may be equal to or greater than the required performance of the memory system110under a current host request pattern and command queue state. The clock determiner246may determine an optimal clock frequency set capable of minimizing power consumption while satisfying the target performance. The clock determiner246may provide the clock generator140with the determined optimal clock frequency set. The clock generator140may provide each of the plurality of operation modules with a clock signal according to the clock frequency set. The request pattern monitor242and the clock determiner246may be implemented as firmware loaded into the memory144and driven in the processor134, but the present disclosure is not limited thereto. The memory144may store a clock level table262as an example of the clock frequency data, which may be referenced by the clock determiner246to determine the clock frequency set. The clock level table262may include a plurality of clock frequency values, each of which may correspond to each of the operation modules. A clock level may be assigned to each of the plurality of clock frequency values. The clock determiner246may determine the clock frequency set of the operation modules by determining the clock level for each operation module with reference to the clock level table262. A method of determining the clock frequency set of the operation modules by the clock determiner246is described in detail with reference toFIGS.4and5. FIG.4is a diagram illustrating the clock level table262in accordance with an embodiment of the present disclosure. 
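A sketch of the performance monitor222's first example above, counting DMA block transfers per unit time; the hook names and time units are assumptions made for illustration.

    #include <stdint.h>

    struct perf_monitor {
        uint64_t blocks;         /* DMA blocks seen in the current window */
        uint64_t window_start;   /* start timestamp of the window */
        double   blocks_per_sec; /* last computed current performance */
    };

    /* Invoked for every data block moved through the DMA. */
    static void perf_on_dma_block(struct perf_monitor *m) { m->blocks++; }

    /* Invoked once per monitoring window, e.g., from a periodic timer. */
    static void perf_window_close(struct perf_monitor *m, uint64_t now,
                                  uint64_t ticks_per_sec)
    {
        uint64_t elapsed = now - m->window_start;
        if (elapsed > 0)
            m->blocks_per_sec = (double)m->blocks * ticks_per_sec / elapsed;
        m->blocks = 0;
        m->window_start = now;
    }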
The clock level table262may include clock frequency values according to a plurality of clock levels for each operation module.FIG.4illustrates the host interface132, the processor134, the memory interface142and the memory144as examples of the operation modules. In an example ofFIG.4, the clock level table262may include 10 clock frequency values for each operation module, and the clock levels from level “1” to level “10” may be assigned to the clock frequency values, respectively. The clock frequency values corresponding to each of the clock levels may be determined in advance. The clock frequency value corresponding to level “10” of each operation module may correspond to a maximum clock frequency value. The clock determiner246may determine to change the clock frequency set when the host request pattern, the command queue state or the current performance changes. The clock determiner246may initialize the clock levels of the operation modules included in a clock level set to level “10” which is a default level. The clock determiner246may determine the target performance of the memory system110on the basis of the current performance given after the clock level set is initialized. The clock determiner246may change the clock level set by changing at least one clock level in the clock level set. Further, the clock determiner246may repeatedly perform an operation of determining whether the current performance is maintained greater than or equal to the target performance when the operation modules operate based on the changed clock level set. Thereby the clock level set can be determined such that power consumption can be reduced while maintaining the target performance. FIG.5is a flowchart illustrating an operation of the controller130in accordance with an embodiment of the present disclosure. Referring toFIG.5, in operation S502, the clock determiner246may detect a change in a host request pattern, a change in a command queue state or a change in a current performance. The clock determiner246may detect the changes in the host request pattern, the command queue state and/or the current performance on the basis of the host request pattern obtained from the request pattern monitor242, the command queue state obtained from the queue monitor244and/or the current performance obtained from the performance monitor222, respectively. When the clock determiner246detects the changes in the host request pattern, the command queue state or the current performance, the clock determiner246may determine to change a clock level set by performing operations S504, S506, S508, S510and S512. In a first example, the request pattern monitor242may determine the host request pattern on the basis of the type and pattern information of commands received from the host interface132. For example, when the commands received from the host102are changed from SLC sequential write commands to MLC sequential write commands, the request pattern monitor242may change the host request pattern from SLC sequential write to MLC sequential write. When the host request pattern is changed from the SLC sequential write to the MLC sequential write, a required performance of the memory system110may decrease. When the controller130operates based on a current clock level set even though the required performance of the memory system110decreases, power consumption of the memory system110may be wasted. Accordingly, the clock determiner246may determine to change the clock level set when detecting that the host request pattern is changed. 
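The clock level table262can be sketched as a constant lookup array. Levels 8 through 10 below use the frequency values quoted in this description; the level 7 entry and any lower levels are placeholders assumed for illustration.

    enum module { HOST_IF, PROCESSOR, MEM_IF, MEMORY, MODULE_COUNT };

    #define MAX_LEVEL 10

    /* clk_mhz[level - 1][module]; levels 8-10 use the values quoted in
     * the description of FIG.4, level 7 is an assumed placeholder, and
     * levels 1-6 are omitted (zero) in this sketch. */
    static const unsigned clk_mhz[MAX_LEVEL][MODULE_COUNT] = {
        [6] = {  880, 700, 760,  860 }, /* level 7 (assumed)   */
        [7] = {  920, 750, 820,  920 }, /* level 8 (from text) */
        [8] = {  950, 780, 880,  970 }, /* level 9 (from text) */
        [9] = { 1000, 800, 900, 1000 }, /* level 10 = maximum  */
    };

    static unsigned clk_for(enum module m, unsigned level)
    {
        return clk_mhz[level - 1][m]; /* level in 1..MAX_LEVEL */
    }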
In a second example, the queue monitor244may determine the command queue state on the basis of information on the number of queue pairs and information on a queue depth for each queue pair obtained from the host interface132. For example, when the command queue depth increases, command queue state information may be changed. When the command queue depth increases, the required performance of the memory system110may increase. When the controller130operates based on the current clock level set even though the required performance of the memory system110increases, it may be difficult to satisfy the required performance of the memory system110. Accordingly, the clock determiner246may determine to change the clock level set when detecting that the command queue state is changed. In a third example, the performance monitor222may determine the current performance of the memory system110on the basis of the amount of data transmitted and received from the host interface132. For example, when a background workload of the controller130increases, resources for performing a foreground operation become insufficient, and thus the current performance may decrease. When the controller130operates based on the current clock level set even though the current performance decreases, it may be difficult to satisfy the required performance of the memory system110. Accordingly, the clock determiner246may determine to change the clock level set when detecting that the current performance is changed. In operation S504, the clock determiner246may initialize a clock frequency set to maximum clock frequencies. According to an embodiment, the clock determiner246may initialize the clock frequency set by determining the clock level set as a default level set. In the example ofFIG.4, the clock determiner246may determine clock levels of the host interface132, the processor134, the memory interface142and the memory144as “10”, thereby initializing clock frequencies to “1000 MHz”, “800 MHz”, “900 MHz” and “1000 MHz”, respectively. Hereinafter, the clock level set may be denoted as [a, b, c, d], and the clock frequency set may be denoted as [A, B, C, D]. “a”, “b”, “c” and “d” may indicate the clock levels of the host interface132, the processor134, the memory interface142and the memory144, respectively. “A”, “B”, “C” and “D” may indicate the clock frequencies of the host interface132, the processor134, the memory interface142and the memory144, respectively. In operation S506, the clock determiner246may determine a target performance of the memory system110. The clock determiner246may obtain current performance information from the performance monitor222after initializing the clock frequency set to the maximum clock frequencies. The current performance given after the clock frequency set is initialized to the maximum clock frequencies may vary depending on the host request pattern, the command queue state and/or the background workload. However, the maximum clock frequencies of the operation modules may be designed to satisfy the required performance of the memory system110irrespective of the host request pattern. Accordingly, the current performance given after the clock frequency set is initialized may satisfy the required performance according to the current host request pattern and command queue state. 
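The three triggers of operation S502can be combined into one illustrative check; the struct fields and the exact-inequality comparison are assumptions (a real implementation would likely compare performance against a tolerance).

    #include <stdbool.h>

    struct observed_state {
        unsigned host_pattern; /* e.g., SLC sequential write */
        unsigned nr_qpairs;    /* number of queue pairs */
        unsigned queue_depth;  /* queue depth per pair */
        double   performance;  /* current performance */
    };

    /* Operation S502: returns true when any of the three monitored
     * quantities changed and the clock level set should be re-determined. */
    static bool should_retune(const struct observed_state *prev,
                              const struct observed_state *cur)
    {
        return cur->host_pattern != prev->host_pattern ||
               cur->nr_qpairs    != prev->nr_qpairs    ||
               cur->queue_depth  != prev->queue_depth  ||
               cur->performance  != prev->performance;
    }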
The clock determiner246may determine, as the target performance, the current performance given after the clock frequency set is initialized, in order to determine an optimal clock frequency set that can satisfy the required performance. In operation S508, the clock determiner246may change at least one clock frequency included in the clock frequency set. In the illustrated example ofFIG.4, the clock determiner246may change the clock frequency set to [950 MHz, 780 MHz, 880 MHz, 970 MHz] by changing the clock level set [10, 10, 10, 10] to [9, 9, 9, 9]. In addition, the clock determiner246may provide the clock generator140with the changed clock frequency set, thereby controlling the host interface132, the processor134, the memory interface142and the memory144to operate based on the clock frequencies of "950 MHz", "780 MHz", "880 MHz" and "970 MHz", respectively. In operation S510, the clock determiner246may determine whether the current performance is maintained greater than or equal to the target performance given after the clock frequencies are changed. When the current performance is greater than or equal to the target performance (that is, "YES" in operation S510), the clock determiner246may repeat operations S508and S510once more. When the current performance is lower than the target performance (that is, "NO" in operation S510), the clock determiner246may determine the optimal clock frequency set in which the current performance can be maintained as the target performance, in operation S512. For example, the clock determiner246may determine, as the optimal clock frequency set, the clock frequency set determined in a repeating operation immediately before a last repeating operation of operations S508and S510. For example, the clock determiner246may sequentially lower the clock level for each operation module by one level while repeating operations S508and S510. As illustrated inFIG.4, the clock level set may be changed in the order of [10, 10, 10, 10]→[9, 9, 9, 9]→[8, 8, 8, 8]→[7, 7, 7, 7]→[6, 6, 6, 6]. When the current performance becomes lower than the target performance as a result of changing the clock level set to [7, 7, 7, 7] by the clock determiner246, the clock determiner246may determine the optimal clock level set as [8, 8, 8, 8], which is the clock level set of a repeating operation immediately before a last repeating operation. Referring to the clock level table262, the optimal clock frequency set may be determined as [920 MHz, 750 MHz, 820 MHz, 920 MHz]. In operation S514, the clock determiner246may provide the clock generator140with the optimal clock frequency set so that the clock generator140provides the operation modules with clock signals on the basis of the optimal clock frequency set. The clock generator140may provide the host interface132, the processor134, the memory interface142and the memory144with the clock signals on the basis of the optimal clock frequency set. Even after the optimal clock frequency set is determined, the clock determiner246may obtain the host request pattern from the request pattern monitor242in real time, obtain the command queue state from the queue monitor244, and obtain the current performance of the memory system110from the performance monitor222. When any of the host request pattern, the command queue state and the current performance is changed, the clock determiner246may change the optimal clock frequency set by performing the operations ofFIG.5again, starting from operation S502.
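Putting operations S504through S514together, the following C sketch searches for the optimal clock level set by lowering every module's level in lock step, as in the [10, 10, 10, 10]→[9, 9, 9, 9]→[8, 8, 8, 8] example above. set_clock_levels() and measure_performance() are assumed hooks standing in for the clock generator140and the performance monitor222; this is an illustration, not the patent's implementation.

    #include <string.h>

    #define MODULES   4
    #define MAX_LEVEL 10
    #define MIN_LEVEL 1

    /* Assumed hooks standing in for the clock generator and the
     * performance monitor. */
    extern void   set_clock_levels(const int levels[MODULES]);
    extern double measure_performance(void);

    static void find_optimal_levels(int out[MODULES])
    {
        int cur[MODULES], prev[MODULES];

        for (int i = 0; i < MODULES; i++) /* S504: initialize to maximum */
            cur[i] = MAX_LEVEL;
        set_clock_levels(cur);
        double target = measure_performance(); /* S506: target performance */

        for (;;) {
            memcpy(prev, cur, sizeof prev);
            int lowered = 0;
            for (int i = 0; i < MODULES; i++) /* S508: lower each level */
                if (cur[i] > MIN_LEVEL) { cur[i]--; lowered = 1; }
            if (!lowered)
                break; /* already at the minimum level */
            set_clock_levels(cur);
            if (measure_performance() < target) { /* S510 */
                memcpy(cur, prev, sizeof cur); /* S512: previous set is optimal */
                break;
            }
        }
        set_clock_levels(cur); /* S514: apply the optimal set */
        memcpy(out, cur, sizeof cur);
    }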
A method of changing the clock level set in order to find the optimal clock frequency set by the clock determiner246is not limited to an example of lowering the clock level for each operation module by one level. In a first example, the clock determiner246may lower the clock levels of some operation modules in one repeating operation of operations S508and S510. In a second example, the clock determiner246may lower corresponding clock levels by two or more levels according to the host request pattern or the operation module. According to the present embodiment, the controller130may reduce power consumption while satisfying the required performance of the memory system110despite the host request pattern, command queue state and/or current performance that are changed in real time. In a first example, when the host request pattern is changed from SLC sequential write to MLC sequential write, the clock determiner246may decrease the optimal clock frequency set, thereby satisfying the required performance and reducing the power consumption. In a second example, when the command queue depth increases, the clock determiner246may increase the optimal clock frequency set, thereby satisfying the increasing required performance. In a third example, when the background workload of the controller130increases, the clock determiner246may increase the optimal clock frequency set, thereby maintaining the required performance even while performing the background operation. An example of the clock generator140that provides the operation modules with the clock signals in response to the control of the clock determiner246is described below with reference toFIG.6. FIG.6is a diagram illustrating the clock generator140in accordance with an embodiment of the present disclosure. Referring toFIG.6, the clock generator140may include a plurality of phase-locked loops (PLLs). The plurality of phase-locked loops, e.g., 3 phase-locked loops PLL1, PLL2and PLL3, may include oscillators having different clock frequencies and dividers for dividing the clock frequency. The clock generator140may generate clock signals having various clock frequencies by setting division ratios of the plurality of phase-locked loops PLL1, PLL2and PLL3. The clock generator140may obtain a clock frequency set from an internal register of the clock determiner246. Further, the clock generator140may generate clock signals to be provided to the host interface132, the processor134and the memory interface142by using the plurality of phase-locked loops PLL1, PLL2and PLL3. In addition, the clock generator140may provide the host interface132, the processor134and the memory interface142with the generated clock signals. According to an embodiment, when the controller130detects a change in actual performance of the memory system110, a change in a host request pattern or a change in a command queue state, the controller130may determine an optimal clock frequency which can minimize power consumption while satisfying the required performance of the memory system110. The controller130may flexibly respond to changes in the current performance, the host request pattern and the command queue state by changing the optimal clock frequency set in response to the current performance, host request pattern and command queue state that are changed in real time, respectively. 
For example, even when the host request pattern corresponds to a mixed pattern in which the required performance is not specified in the specification in advance, the controller130may determine a target performance, and determine the optimal clock frequency set on the basis of the target performance. In addition, when the host102changes the command queue state, the controller130may detect that the required performance is improved, change the target performance, and determine the optimal clock frequency set on the basis of the changed target performance. In addition, the controller130may detect that the background workload increases by detecting that the current performance is changed, and satisfy the required performance by changing the optimal clock frequency set. Accordingly, the controller130may minimize power consumption and heat generation while satisfying the required performance of the memory system110in real time. According to the embodiments of the present disclosure, it is possible to provide a controller and an operating method thereof capable of reducing power consumption while satisfying a required performance. Although a controller and an operating method thereof have been described with reference to the specific embodiments, these are merely examples, and the present disclosure is not limited thereto, and should be interpreted to have the widest scope according to the basic idea disclosed in the present specification. Those skilled in the art may carry out unspecified embodiments by combining and substituting the disclosed embodiments, which also do not depart from the scope of the present disclosure. In addition, those skilled in the art may easily change or modify the embodiments disclosed based on the present specification, and it is apparent that such changes or modifications also fall within the scope of the present disclosure and the following claims. Furthermore, the embodiments may be combined to form additional embodiments. | 40,275 |
11861231 | DETAILED DESCRIPTION As a preliminary note, the terms "component", "module", "system," and the like as used herein are intended to refer to a computer-related entity, either a software-executing general-purpose processor, hardware, firmware, or a combination thereof. For example, a component may be, but is not limited to being, a process running on a hardware processor, a hardware processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. Also, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal). Computer executable components can be stored, for example, at non-transitory, computer readable media including, but not limited to, an ASIC (application specific integrated circuit), CD (compact disc), DVD (digital video disk), ROM (read only memory), floppy disk, hard disk, storage class memory, solid state drive, EEPROM (electrically erasable programmable read only memory), memory stick or any other storage device type, in accordance with the claimed subject matter. In one aspect, innovative technology is provided for high capacity (e.g., in peta-bytes ("PB")) storage devices that can be scaled up or down based on storage needs, independent of compute/memory that may be used for executing a storage operating system.FIG.1Ashows an example of a system10with compute nodes12A/12B that access scalable storage devices14A-14C (may also be referred to as storage device14or storage devices14as well as PB SSD14or PB SSDs14), collectively shown as24. It is noteworthy that although three storage devices are shown in system10, the adaptive aspects of the present disclosure are not limited to 3 devices and instead may have "N" devices, hence the term storage devices14A-14N, as used herein. Each compute/memory node12A/12B can be scaled up or down based on computing needs. Storage capacity can be added or reduced by adding or removing one or more storage devices14. As an example, the storage devices14include zoned namespace solid state drives ("ZNS SSDs"). In one aspect, ZNS SSDs comply with the NVMe (Non-Volatile Memory Host Controller Interface) zoned namespace (ZNS) specification defined by the NVM Express® (NVMe®) standard organization. A "zone" as defined by the NVMe ZNS standard is a sequence of blocks that are written in a sequential fashion and are overwritten by performing a "Zone Erase" or "Zone Reset" operation per the NVMe specification. Storage space at each ZNS SSD is exposed as zones, e.g., physical zones ("PZones") and RAID zones ("RZones"), each RAID zone having a plurality of PZones. The RZones are presented to software layers that interface with a file system to process read and write requests. Conventional SSD systems face various challenges when it comes to shared SSD storage.
For example, in a cluster-based storage system with multiple cluster storage nodes that provide access to storage, managing shared free space across clusters or shared file system metadata can be difficult, especially for a single multi core system. It is also difficult to implement distributed RAID on shared SSDs because it can be difficult to coordinate background RAID processing between multiple cluster nodes, as well as determining which node will respond to errors. In one aspect, as described below in detail, the technology disclosed herein solves various technical challenges that face conventional storage operating systems. FIG.1Bshows an example of storage device14A, according to one aspect of the present disclosure. The storage device14A is accessible via a network connection (e.g., Ethernet)18and an NVMeoF (NVMe over Fabric) controller16. The NVMeoF protocol is an extension of the NVMe protocol that uses network protocols, e.g., Ethernet and Fibre Channel, for delivering faster and more efficient connectivity between storage devices and servers. In one aspect, the storage space at multiple PB SSDs14A-14N can be presented as a PB scale single namespace15. In NVMe® technology, a namespace is a collection of logical block addresses (LBA) accessible to a software layer, e.g., a storage operating system instance. A namespace identifier ("NSID" or "NS") is an identifier used by an NVMe controller (e.g.,16) to provide access to a namespace. A namespace is typically not a physical isolation of blocks, but rather involves isolation of addressable logical blocks. The innovative technology disclosed herein uses conventional namespace (referred to as "CNS" in the specification and some of the Figures) to provide exclusive access to one storage operating system instance, and ZNS19(e.g., having zone1-zone20,000) to provide shared access to multiple storage operating system instances, as described below in detail. CNS in this context, as used herein, refers to a contiguous range of blocks which are randomly read/writable, whereas ZNS is a collection of zones where a zone is a range of blocks that can be randomly read, but written sequentially per the NVMe ZNS standard. FIG.1Bfurther shows a logical configuration of storage device14A for reducing the overall cost of storage and efficiently adding or decreasing storage capacity, as needed, according to one aspect of the present disclosure. As an example, the storage device14A may include different storage media types, e.g., a non-volatile, dynamic random-access memory (NVRAM)26, high endurance flash (referred to as "HFE," e.g., triple-layer-cell SSDs (TLC)) or SCM (storage class memory)27and low endurance flash (referred to as "LFE," e.g., quad-layer cell (QLC) SSDs)29(also referred to as PB scale SSDs). The various storage devices enable a storage operating system to configure and manage storage at a giga-byte (GB) level, tera-byte (TB) level and PB (peta-byte) level using the different types of storage media. For example, if a system needs more PB scale storage, then an LFE (e.g., QLC type SSD) is simply added to provide PB scale storage. If the system needs more NVRAM or HFE (e.g., TLC type SSDs) to store hot data, then TLC type storage can be added to the storage device14A. The storage scaling up or down is independent of compute/memory nodes12A/12B. It is noteworthy that although the description below refers to SCM, TLC27and QLC29as examples of HFE and LFE, the various aspects of the present disclosure are not limited to SCM, TLC and/or QLC type storage.
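The zone behavior that underlies the CNS/ZNS distinction above (random reads, strictly sequential writes, reset to rewrite) can be modeled in a few lines. The structure below is an illustrative simplification of the NVMe ZNS semantics summarized earlier, not a driver implementation.

    #include <stdbool.h>
    #include <stdint.h>

    struct zone {
        uint64_t start_lba; /* first logical block of the zone */
        uint64_t capacity;  /* number of blocks in the zone */
        uint64_t write_ptr; /* next writable offset within the zone */
    };

    /* Sequential-write rule: only the block at the write pointer may be
     * written; reads (not shown) may target any block below write_ptr. */
    static bool zone_write(struct zone *z, uint64_t lba)
    {
        if (z->write_ptr >= z->capacity ||
            lba != z->start_lba + z->write_ptr)
            return false; /* zone full or out-of-order write: rejected */
        z->write_ptr++;
        return true;
    }

    /* Zone reset: rewinds the write pointer so the zone can be rewritten. */
    static void zone_reset(struct zone *z) { z->write_ptr = 0; }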
Storage space at various media types can be accessed via multiple namespaces shown as NSID1-NSID7. NSIDs1-6are configured to access the NVRAM26and HFE27type storage. NSIDs1-6provide exclusive access to NVRAM26and HFE27to various storage operating system instances, as described below in detail. NSID7provides shared access to LFE, i.e., PB scale storage29, also described below in detail. Multiple NVMeoF controllers16A-16B can read and write data via an interconnect22for requests received via network connections18A/18B. As an example, interconnect22is an abstraction that represents any one or more separate physical buses and/or point-to-point connections, connected by appropriate bridges, adapters and/or controllers. Interconnect22, therefore, may include, for example, a system bus, a Peripheral Component Interconnect (PCI) Express (PCIe) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (sometimes referred to as "Firewire") or any other interconnect type. FIG.1Cshows another example of storage device14A with built-in redundancy to handle one or more failure domains. The storage device14A in this example includes redundant components, e.g., multiple network links18A-18D, NVMeoF controllers16A/16B, and multiple flash controllers30A-30D that access NVRAM26, HFE27and LFE29type storage using NSIDs1-12, where NSID3and NSID12are used for shared access using ZNS while NSIDs1-2and NSIDs4-11are used for exclusive access at NVRAM26and HFE27type storage. Reference number28A refers to a redundant fabric, while reference number28B shows a pair of NVMeoF controllers. As an example, data is stored redundantly across failure domains such that a single failure (e.g.,32) will not cause loss of data access because spare storage capacity, shown as34, can be used to store data from the failed domain. If a network link (e.g.,18A) fails, then another network link (e.g.,18B) can be used to access storage. If one of the NVMeoF controllers (e.g.,16A) fails, then the other controller (e.g.,16B) can be used to access the underlying storage using the assigned namespaces. FIG.1Dshows an example configuration of using one or more storage devices14A-14N by a plurality of storage operating system instances36A-36N (may also be referred to as storage operating system instance36or storage operating system instances36). A storage operating system instance36in this context means a virtual machine executing an instance of a storage operating system, a cloud-based container or a micro-service executing an instance of the storage operating system. As an example, each storage operating system instance36may include several modules, or "layers". These layers include a file system (may also be referred to as file system manager)42A-42N (may also be referred to as file system42) that keeps track of a directory structure (hierarchy) of the data stored in storage devices14and manages read/write operations, i.e., executes read/write operations on storage devices14in response to read/write requests. The file system42uses logical storage objects (e.g., a storage volume, a logical unit number (LUN) or any other logical object) to store information and retrieve information. The storage space at the storage devices (e.g., HFE27and LFE29) is represented by one or more "aggregates," and within each aggregate one or more storage volumes/LUNs are created.
Each storage system instance has access to one or more aggregates to store and retrieve information, i.e., the storage system instance owns the "storage." To store and retrieve information, a computing device typically issues write and/or read requests. Based on the request type (i.e., write or read request), the storage operating system instance36stores information at the storage space within one or more aggregates or retrieves information. The file system42logically organizes stored information as a hierarchical structure for stored files/directories/objects. Each "on-disk" file may be implemented as a set of data blocks configured to store information, such as text, whereas a directory may be implemented as a specially formatted file in which other files and directories are stored. The data blocks are organized within a volume block number (VBN) space that is maintained by the file system. The file system may also assign each data block in the file a corresponding "file offset" or file block number (FBN). The file system typically assigns sequences of FBNs on a per-file basis, whereas VBNs are assigned over a larger volume address space. The file system organizes the data blocks within the VBN space as a logical volume. The file system typically consists of a contiguous range of VBNs from zero to n, for a file system of size n+1 blocks. As an example, the file system uses an inode, a data structure, to store information, such as metadata, about a file, whereas the data blocks are structures used to store the actual data for the file. The information in an inode may include, e.g., ownership of the file, file modification time, access permission for the file, size of the file, file type and references to locations of the data blocks for the file. The references to the locations of the file data are provided by pointers, which may further reference indirect blocks (e.g., L1 blocks,FIG.2B) that, in turn, reference the data blocks (e.g., L0 blocks,FIG.2B), depending upon the amount of data in the file. Each storage operating system instance36may also include a protocol layer and an associated network access layer, to enable communication over a network with other systems. The protocol layer may implement one or more of various higher-level network protocols, such as NFS (Network File System) (44A-44N), CIFS (Common Internet File System) (46A-46N), S3 (48A-48N), Hypertext Transfer Protocol (HTTP), TCP/IP and others. The S3 protocol uses an HTTP REST (Representational State Transfer) API (Application Programming Interface) that utilizes HTTP requests, e.g., "get," "put," "post," and "delete," for reading, storing and deleting data. The S348interface is used to store and retrieve storage objects stored at cloud storage, as described below. The network access layer may also include one or more drivers, which implement one or more lower-level protocols to communicate over the network, such as Ethernet. Each operating system instance36may also include a storage access layer and an associated storage driver layer to communicate with the storage devices. The storage access layer may implement a higher-level disk storage protocol, such as a RAID layer and a zone translation layer (ZTL), while the storage driver layer may implement a lower-level storage device access protocol, such as the NVMe protocol.
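As an illustrative aside, the inode-to-data-block mapping described above (an inode whose pointers reference L1 indirect blocks, which in turn reference L0 data blocks) can be sketched as follows. This is a hedged Python sketch; the field names and the fixed fan-out are assumptions for illustration, not structures defined by the present disclosure.

```python
# Illustrative sketch of the inode layout described above: an inode stores
# file metadata plus pointers to L1 indirect blocks, and each L1 block
# stores pointers to the L0 data blocks holding the file's contents.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataBlock:                 # an L0 block
    data: bytes

@dataclass
class IndirectBlock:             # an L1 block of pointers to L0 blocks
    pointers: List[Optional[DataBlock]] = field(default_factory=list)

@dataclass
class Inode:
    owner: str
    size: int
    mtime: float                 # file modification time
    permissions: int
    l1_blocks: List[IndirectBlock] = field(default_factory=list)

    def read_fbn(self, fbn: int, fanout: int) -> bytes:
        # Map a file block number (FBN) to its L1 block and slot.
        l1 = self.l1_blocks[fbn // fanout]
        return l1.pointers[fbn % fanout].data
```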
Each operating system instance36executes an exclusive interface (may also be referred to as exclusive RAID CNS)38A-38N and a shared interface (may also be referred to as shared RAID ZNS)40A-40N. The exclusive interface38provides access to exclusive, private HFE27for hot data and metadata using an exclusive namespace, while the shared interface40provides access to globally shared LFE29using a shared namespace. The globally shared LFE29may also be used to store hot read-only data56that is accessible to any of the storage operating system instances36. This allows a system to serve read data that becomes hot while it is still stored at a capacity tier (i.e., LFE29). This configuration provides globally shared LFE29with "read anywhere" capability. TheFIG.1Dconfiguration enables data tiering along with dis-aggregated shared storage. Furthermore, this configuration resolves distributed RAID challenges because each storage device14A-14N internally implements redundancy for each zone (seeFIG.1C) across failure domains, thus relieving the storage OS instances36from implementing RAID across failure domains inside the PB SSD14(as would be the case if the PB SSD were implemented as a collection of distinct SSDs visible to each storage OS instance). Furthermore, one of the storage operating system instances36can be responsible for responding to errors using both shared and exclusive storage. TheFIG.1Dconfiguration further alleviates the shared storage problem of conventional systems, according to one aspect of the present disclosure. TheFIG.1Dconfiguration is used to divide storage space into exclusive and shared storage pools using NVMe namespaces. The metadata and mutating data are stored in HFE27. Immutable data is efficiently stored by the storage operating system instances36in shared LFE29. Immutable data from the LFE can be accessed by multiple storage operating system instances36, without having to promote the cold data to hot data storage tiers. This improves the processing of read requests and reduces the overall cost of storing data, since LFE is cheaper than HFE. FIG.1Eshows an example of using scalable flash tiering, according to one aspect of the present disclosure. InFIG.1Ethe storage operating system instance36includes a network module63that executes the network protocol layers to interface with client systems. A storage abstraction layer ("SAL")64stores information regarding various storage resources used and available for different client systems. SAL64maintains a "storage footprint" or storage layout for different storage resources (for example, storage systems including storage devices). S3 BIN-166and S3 BIN-268are software layers that interface with a capacity tier storage operating system instance37or an object storage bucket69in the cloud. The capacity tier storage (i.e., LFE29) may be managed by the storage operating system instance37with a storage module70that interacts with the LFE capacity tier storage29. Data at the capacity tier29is accessed directly through shared interface40via read path67A, while exclusive interface38accesses data at HFE27. When data at HFE27becomes immutable, it is tiered down as immutable data67B to LFE29. Cold data67C can also be tiered out to cloud storage69via interface68. In one aspect, using a dedicated capacity storage operating system instance37to manage LFE29is advantageous because the objects written to LFE29can be efficiently checked for duplicate blocks by the storage operating system instance37, thus providing global dedupe across multiple instance objects. In one aspect, the various namespaces (e.g., NSID1-NSID12,FIG.1C) are enabled by a processor executable configuration process. The process is executed before the storage devices14are initialized.
During configuration, the ZNS and CNS are first determined based on the number of storage operating system instances and a number of failure domains that are advertised by the storage devices14. For example, if the number of failure domains is 4 (as shown inFIG.1C), then the configuration process creates at least 1 CNS and 1 ZNS per failure domain. The total storage capacity and type of SSD (i.e., LFE or HFE) assigned to each namespace is based on the size determined by the configuration process. In general, the ZNS (e.g.,19,FIG.1B) is used for LFE29(e.g., QLC) and consumes the majority of the storage capacity of each domain. The CNS size (e.g., NVRAM26and HFE27) is based on the amount of metadata and the expected amount of hot & mutable data. As an example, CNS can be in the range of 5%-10% of the size of the ZNS. It is noteworthy that although storage namespaces are shown as distinct namespaces, i.e., CNS and ZNS, the adaptive aspects of the present disclosure are not limited to different namespaces. CNS is simply shown as a private namespace for HFE (e.g., TLC), while ZNS is shown as a shared namespace for LFE (e.g., QLC). The configuration process starts the storage operating system instances36to discover the various namespaces. Once the namespaces are visible to each storage operating system instance36, the ownership of each namespace is assigned. The ownership information regarding each namespace is maintained as specific block offsets at a storage location. The configuration process next configures RAID or other redundancy schemes over the namespaces. The specific configuration of the redundancy scheme depends on whether a single appliance with multiple storage devices is being configured or a collection of appliances is being used. An example configuration for a single appliance could be RAID1 across failure domains. After RAID or other redundancy schemes have been configured, the storage system instances36create aggregates and volumes on the namespaces owned by each. The ZNS may be assigned ownership, i.e., full read/write access, by special storage system instances36that serve as shared cold data repositories to the other storage system instances36, but read-only access is granted to the ZNS from non-owner instances. Ownership and shared access may be asserted using NVMe protocol reservation on the namespaces during system operation. FIG.2Ashows an example of implementing the different namespaces (e.g., as shown inFIG.1E) in storage devices14A-14N having HFE27and LFE29, according to one aspect of the present disclosure. The storage operating system instance36A executes an exclusive RAID interface38A that owns/manages (or is assigned) a higher endurance namespace such as NS1(Namespace1) to access hot and mutable data stored in HFE27. The storage operating system instance36B executes an exclusive RAID interface38B to access hot and mutable data stored in HFE27using NS2(Namespace2). LFE namespaces NS4and NS5are owned/managed (or assigned) by capacity tier instances37A and37B, respectively. The shared RAID interface40B is used by the storage operating system instance36B to access data from LFE29using the shared or ZNS namespaces NS4and NS5(e.g., using the read only path67). In this example, the storage operating system instance36B can also write to the shared LFE29. Data can be written via the S3 interface66B and capacity tier instances37A and/or37B using the namespace NS4.
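The namespace planning described earlier in this section (at least one CNS and one ZNS per advertised failure domain, with the CNS at roughly 5%-10% of the size of the ZNS) could be sketched as follows. This is a minimal Python sketch; the 7% ratio, function name and dictionary layout are assumptions chosen for illustration, not values from the disclosure.

```python
# Hedged sketch of the configuration step: one CNS and one ZNS per failure
# domain, with the CNS sized as an assumed fraction of the ZNS so that the
# ZNS consumes the majority of each domain's capacity.
def plan_namespaces(num_failure_domains: int,
                    domain_capacity_bytes: int,
                    cns_fraction: float = 0.07):   # assumed, within 5%-10%
    plan = []
    for domain in range(num_failure_domains):
        zns_bytes = int(domain_capacity_bytes / (1 + cns_fraction))
        cns_bytes = domain_capacity_bytes - zns_bytes
        plan.append({"domain": domain,
                     "cns": {"media": "HFE", "bytes": cns_bytes},
                     "zns": {"media": "LFE", "bytes": zns_bytes}})
    return plan
```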
FIG.2Bshows an example of tiering down data from HFE27to LFE29. A data volume (or a logical unit (LUN))74A of an aggregate72A is managed by the storage operating system instance36A. The data volume74A may be configured to store data files (or data containers or data objects), scripts, word processing documents, executable programs, and any other type of structured or unstructured data. From the perspective of a client system, each volume can appear to be a single storage drive. However, each volume can represent namespaces from one or more storage devices14A-14N. In the capacity tier (e.g., LFE29), aggregate72B includes one or more capacity volumes74B to store immutable data or readable hot data. The immutable data may be compressed and de-duplicated. In the example ofFIG.2B, hot and mutable data are shown as files F1and F2. Each file has indirect blocks L1 that store pointers to data blocks L0 (e.g.,78). When file F2becomes immutable (or cold), then the S3 interface66A uses an S3 put operation to place the file in capacity tier29as object X (76A). To improve storage efficiency, object X is compressed and de-duped, and stored as object Y (76B) by the capacity tier instance37. FIG.2Cshows the metadata update after file F2is moved to the capacity tier, LFE29, as object X, according to one aspect of the present disclosure. The metadata at TLC27is updated with new pointers82(shown in block80) pointing to the capacity tier storage location of object X76A/Y76B. This enables the storage operating system instance36A to access data directly from capacity volume74B using the pointers82. FIG.2Dshows an example of multiple storage operating system instances36A-36C sharing LFE29of storage devices14A-14N for storing immutable data54and storing metadata and buffered data52A-52C at HFE27A-27C/NVRAM26A-26C of storage devices14A-14N. As an example, the storage operating system instance36A uses an exclusive namespace NS1to access HFE27A/NVRAM26A, the storage operating system instance36B uses an exclusive namespace NS2to access HFE27B/NVRAM26B, and the storage operating system instance36C uses an exclusive namespace NS3to access HFE27C/NVRAM26C. Immutable data can be read by any storage operating system instance36A-36C using the shared namespace NS4. FIG.2Eshows another aspect of the present disclosure without a dedicated capacity tier storage operating system instance, and instead using storage operating system instances36A-36C. In the configuration ofFIG.2E, different zones54B-54I at LFE29have different permissions. For example, zone54B is writable by storage operating system instance36A using namespace NS1, zone54C is writable by storage operating system instance36B using namespace NS2, zone54D is writable by storage operating system instance36C using namespace NS3, and zones54E-54I are readable by all storage operating system instances via shared namespace NS4. It is noteworthy that each storage operating system instance36A-36C can access the read-only zones using metadata stored at HFE27A-27C and NVRAM26A-26C. In one aspect, to implement the configuration ofFIG.2E, a shared data structure (not shown) stores information regarding each zone in LFE29. This data structure can be replicated via multiple CNS, namely, NS1for the storage operating system instance36A, NS2for the storage operating system instance36B, and NS3for the storage operating system instance36C. Each zone may have the following states: "Free," "Full," "Readable by any," or "Writable-by-owner."
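The per-zone states just listed can be checked as in the following minimal Python sketch. The precise semantics of each state are assumptions for illustration (for instance, a "Full" zone is assumed to remain readable by any instance); the function names and dictionary layout are not from the disclosure.

```python
# Minimal sketch of per-zone access checks for the states listed above.
def can_write(zone: dict, instance_id: str) -> bool:
    # Only the owner may write, and only while the zone is writable.
    return (zone["state"] == "Writable-by-owner"
            and zone["owner"] == instance_id)

def can_read(zone: dict, instance_id: str) -> bool:
    if zone["state"] in ("Readable by any", "Full"):  # "Full" assumed readable
        return True
    # The owner is assumed to retain read access to its own writable zone.
    return (zone["state"] == "Writable-by-owner"
            and zone["owner"] == instance_id)
```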
Whenever a storage operating system instance wants to modify the shared data structure to change the state of any zone, it atomically obtains a lock on a page storing the zone state. After obtaining the lock, the update to the state change is written to all replicas. The update is successful if a write quorum number of replicas were successfully updated; if not, the update is rolled back, and the lock is released. Other data structures for tracking shared zone information, for example, reference counts on data blocks in zones, can be managed in a similar way. The reference counts are updated whenever a file is deleted or overwritten, releasing blocks within a zone. Process Flows:FIGS.2F-2Jshow various process flows for using the innovative architecture described above. In one aspect, the various namespaces (e.g., NSID1-NSID12,FIG.1C) are enabled by a processor executable configuration process. The process is executed before the storage devices14are initialized. During configuration, the ZNS and CNS are first determined based on the number of storage operating system instances and a number of failure domains that are advertised by the storage devices14. For example, if the number of failure domains is 4 (as shown inFIG.1C), then the configuration process creates at least 1 CNS and 1 ZNS per failure domain. The total storage capacity and type of SSD assigned to each namespace is based on the size determined by the configuration process. In general, the ZNS is used for LFE29(e.g., QLC) and consumes the majority of the storage capacity of each domain. The CNS size (e.g., NVRAM26and HFE27) is based on the amount of metadata and the expected amount of hot & mutable data. As an example, CNS can be in the range of 5%-10% of the size of the ZNS. It is noteworthy that although storage namespaces are shown as distinct namespaces, i.e., CNS and ZNS, the adaptive aspects of the present disclosure are not limited to different namespaces. CNS is simply shown as a private namespace for HFE (e.g., TLC), while ZNS is shown as a shared namespace for LFE (e.g., QLC). The configuration process then starts the storage operating system instances to discover the various namespaces. Once the namespaces are visible to each instance, the ownership of each namespace is assigned. The ownership information regarding each namespace is maintained as specific block offsets. The configuration process next configures RAID or other redundancy schemes over the namespaces. The specific configuration of the redundancy scheme depends on whether a single appliance is being configured or a collection of appliances is being used. An example configuration for a single appliance could be RAID1 across failure domains. After RAID or other redundancy schemes have been configured, the storage system instances36create aggregates and volumes on the namespaces owned by each. The ZNS19may be assigned ownership, i.e., full read/write access, by special storage system instances that serve as shared cold data repositories to the other storage system instances, but read-only access is granted to the ZNS from non-owner instances. Ownership and shared access may be asserted using NVMe protocol reservation on the namespaces during system operation.
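Before turning to the individual process flows, the lock-and-quorum zone state change described above can be illustrated with a short sketch. The following Python sketch assumes that each replica exposes lock, write and rollback primitives; these primitives and names are illustrative assumptions, not an API from the present disclosure.

```python
# Hedged sketch of the quorum update described above: lock the page that
# stores the zone state, write the change to all replicas, commit only if
# a write quorum succeeded, and roll back otherwise.
def change_zone_state(zone_id, new_state, replicas, quorum: int) -> bool:
    page_lock = replicas[0].lock_zone_page(zone_id)   # atomic page lock
    try:
        updated = [r for r in replicas
                   if r.write_state(zone_id, new_state)]
        if len(updated) >= quorum:
            return True                               # quorum reached
        for r in updated:                             # otherwise roll back
            r.rollback(zone_id)
        return False
    finally:
        page_lock.release()                           # lock always released
```

Reference counts on data blocks in zones could be maintained with the same lock-write-quorum pattern, as the passage above notes.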
FIG.2Fshows a configuration process201, according to one aspect of the present disclosure. Process201begins in block B203, before the storage devices14are deployed. In block B205, the process determines the number of storage operating system instances (36A-36N) and the number of failure domains for the storage devices14. Based on that, in block B207, exclusive namespaces (e.g., NS1and NS2,FIG.2A) and shared ZNS (e.g., NS4and NS5,FIG.2A) are assigned to each failure domain. For example, if there are 4 failure domains, then the process creates at least one exclusive and one shared namespace per failure domain. Thereafter, in block B209, storage space at HFE27and LFE29is assigned to the exclusive and shared namespaces. In block B211, each storage operating system instance36A-36N is initialized and discovers the assigned exclusive namespaces (e.g., NS1and NS2,FIG.2A) and shared namespaces (e.g., NS4and NS5,FIG.2A). Ownership is assigned to each storage operating system instance36and a RAID redundancy scheme is configured. Thereafter, in block B213, aggregates and volumes are created that can be accessed via the exclusive (e.g., NS1and NS2,FIG.2A) and shared namespaces (e.g., NS4and NS5,FIG.2A), as described above in detail. In one aspect,FIG.2Gshows another process200that enables multiple storage operating system instances36A-36N to access read only data from shared LFE29, while using HFE27for reading, writing and storing metadata (seeFIG.1D). Process200begins in block B202, when storage devices14are configured with HFE27, LFE29and NVRAM26(seeFIG.1B). In block B204, an exclusive namespace (e.g., NS1and NS2,FIG.2A) is assigned to each storage operating system instance36A-36N, as described above with respect toFIG.2F. The exclusive namespace (e.g., NS1and NS2,FIG.2A) is used by each storage operating system instance36to read and write information at HFE27, including metadata associated with the stored data. In block B206, a shared namespace (e.g., NS4and NS5,FIG.2A) is assigned to each storage operating system instance36A-36N, as described above with respect toFIG.2F. This enables read access to data stored in LFE29. In one aspect, in block B208, a portion of the LFE29(e.g., shown as56inFIG.1D) is configured to store hot read only data, without having to promote the hot read only data to HFE27. The hot data in this context means data that is being read frequently by the storage operating system instances36using the shared namespace. In block B210, the storage operating system instances36A-36N directly access data from portion56using the shared namespace, while continuing to use HFE27for read and write access. FIG.2Hshows another process212to configure storage devices14, according to one aspect of the present disclosure. Process212begins in block B214, when one or more storage devices14are configured with a first portion, e.g., HFE27, a second portion, NVRAM26(seeFIG.1B), and a third portion, LFE29, for use by one or more storage operating system instances36A-36N. In block B216, one or more storage devices14are logically separated into the three portions, to use the storage space at HFE27, LFE29and NVRAM26. In block B218, an exclusive namespace (e.g., NS1and NS2,FIG.2A) is assigned to each storage system instance36A-36N, as described above with respect toFIGS.2A and2F. This enables read and write access for each storage system instance36A-36N to HFE27. As an example, HFE27is configured to store metadata and hot data. In block B220, a shared namespace (e.g., NS4and NS5,FIG.2A) is assigned to the storage operating system instances36A-36N for read access to LFE29, as described above with respect toFIGS.2A and2F. In block B222, the storage operating system instances36read data from LFE29using the shared namespace. To read and write data from HFE27, the exclusive namespace of each storage operating system instance36is used.
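Blocks B208-B210 above describe serving frequently read ("hot") data directly from the shared LFE portion56rather than promoting it to HFE27. A minimal Python sketch of that idea follows; the read counter and threshold are assumptions for illustration, since the disclosure does not specify how read frequency is tracked.

```python
# Hedged sketch of blocks B208-B210: reads are counted per block, and even
# blocks that become hot continue to be served from shared LFE via the
# shared namespace, with no promotion to HFE.
from collections import Counter

class SharedReadTier:
    def __init__(self, lfe_read, hot_threshold: int = 100):  # assumed threshold
        self.lfe_read = lfe_read          # callable that reads LFE via NS4
        self.read_counts = Counter()
        self.hot_threshold = hot_threshold

    def read(self, block_id):
        self.read_counts[block_id] += 1
        return self.lfe_read(block_id)    # hot or cold, served from LFE

    def is_hot(self, block_id) -> bool:
        return self.read_counts[block_id] >= self.hot_threshold
```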
FIG.2Ishows a process224that enables a storage operating system instance36to transfer data from HFE27to the shared LFE29(seeFIGS.2C/2D), according to one aspect of the present disclosure. Process224begins in block B226, when storage devices14are configured with HFE27, LFE29and NVRAM26. In block B228, an exclusive namespace (e.g., NS1and NS2,FIG.2A) is assigned (as described above with respect toFIGS.2A and2F) to at least one of the storage operating system instances36A-36N, which may also be referred to as a first storage operating system instance (e.g.,36A,FIG.2C). The exclusive namespace is used by the first storage operating system instance36A to read from and write information to HFE27, including metadata associated with the data stored at LFE29. Furthermore, a shared namespace (e.g., NS4and NS5,FIG.2A) is assigned (as described above with respect toFIGS.2A and2F) to multiple storage operating system instances36to enable shared read access at LFE29. In one aspect, in block B230, the first storage system instance36A identifies data that may have become cold or immutable (e.g., file F2,FIG.2B). The first storage operating system instance36A tracks when data is stored, modified and accessed. Based on that, the first storage operating system instance36A determines when data becomes cold or immutable. In block B232, the S3 BIN interface66A of the first storage operating system instance36A requests (e.g., S3 PUT,FIG.2B) the capacity tier instance37to transfer the file F2from HFE27to LFE29. In block B234, the capacity tier instance37transfers the file F2as object X76A and stores the object X76A at the LFE29. It is noteworthy that the object X76A may also be stored at a cloud-based storage69, as shown inFIG.1E. In another aspect, the cold data is only transferred to the cloud-based storage69. The metadata for the file F2is updated with new pointers (e.g.,82,FIG.2C) that point to the storage location where object X76A (or object Y76B) is stored at LFE29. When the storage operating system instance36A receives a read request to read file F2, in block B236, the updated metadata, i.e., the direct block pointers82(FIG.2C), are used to access the data stored at LFE29for the file F2.
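Block B230 above relies on tracking when data is stored, modified and accessed in order to decide that a file has become cold or immutable. The following is a minimal Python sketch of one such test; the metadata keys and age thresholds are assumptions, since the disclosure does not specify them.

```python
# Illustrative sketch of block B230: a file is treated as cold/immutable
# when it has not been modified or read within assumed age thresholds.
import time

def is_cold(meta: dict,
            modify_age_s: float = 30 * 24 * 3600,          # assumed: 30 days
            access_age_s: float = 7 * 24 * 3600) -> bool:  # assumed: 7 days
    now = time.time()
    return (now - meta["last_modified"] > modify_age_s
            and now - meta["last_accessed"] > access_age_s)
```

A file flagged by such a test would then be handed to the capacity tier instance via the S3 PUT of block B232.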
In one aspect, a method for using the HFE27and LFE29is provided. The method includes assigning (e.g., B228,FIG.2I) a first namespace (e.g., NS4,FIG.2A) to a first instance (e.g.,36B,FIG.2A) of a storage operating system and a second instance (e.g.,37A,FIG.2A) of the storage operating system for enabling read access to a first portion (e.g., LFE29) of a flash storage system by the first instance, and read and write access to the second instance; allocating (e.g., B228,FIG.2I) a second namespace (e.g., NS2,FIG.2A) to the first instance for exclusive read and write access within a second portion (e.g., HFE27,FIG.2A) of the flash storage system; generating (e.g., B232,FIG.2I), by the first instance, a request for the second instance to transfer a data object (e.g.,76A,FIG.2B) from the second portion owned by the first instance to the first portion; storing (e.g., B234,FIG.2I), by the second instance, the data object at the first portion; and updating (e.g., B234,FIG.2I) metadata of the data object at the second portion, the metadata (e.g.,80,FIG.2C) indicating a storage location (e.g.,82,FIG.2C) at the first portion where the data object is stored. The method further includes utilizing (e.g., B236,FIG.2I), by the first instance, metadata at the second portion to retrieve the data object from the first portion, in response to a read request for the data object received by the first instance. The method also includes identifying (e.g., B230,FIG.2I), by the first instance, that the data object has become cold and in response, transmitting the request to the second instance. In one aspect, updating the metadata of the data object at the second portion includes storing a pointer (e.g.,82,FIG.2C) at the second portion owned by the first instance, the pointer pointing to the storage location of the data object at the first portion. In one aspect, the first portion includes a first type of solid-state drive (e.g., QLC) and the second portion includes a second type (e.g., TLC) of solid-state drive, where the first type is a capacity tier with storage performance lower than the second type. Furthermore, the first namespace is a zoned namespace (e.g., ZNS19) for providing shared read access to the first and second instances and write access to the second instance. FIG.2Jshows a process240using the architecture ofFIG.2E, described above in detail, according to one aspect of the present disclosure. Process240begins in block B242, when storage devices14are configured with HFE27A-27C, LFE29and NVRAM26A-26C (seeFIG.2E), where the HFE27A-27C is referred to as a first portion, NVRAM26A-26C is referred to as a second portion and LFE29is referred to as a third portion. In block B244, an exclusive namespace (e.g., NS1, NS2and NS3,FIG.2E) is assigned to each storage operating system instance36A-36C to enable access to HFE27A-27C. Each exclusive namespace is used by a corresponding storage operating system instance36A-36C to read and write information at HFE27, including metadata associated with the stored data. In block B246, a shared namespace (e.g., NS4) is assigned to the multiple storage operating system instances36A-36C to enable read access at LFE29. The various zones in LFE29are configured such that some portions are writable by the storage operating system instances36A-36C. For example, zone54B is writable by the storage operating system instance36A using namespace NS1, zone54C is writable by the storage operating system instance36B using namespace NS2, and zone54D is writable by the storage operating system instance36C using namespace NS3. Zones54E,54F,54G,54H and54I are readable by any storage operating system instance36A-36C using the shared namespace, NS4. HFE27A-27C and NVRAM26A-26C are used for storing metadata and buffered data. In block B248, the read only and writable zones of LFE29are used by the storage operating system instances36A-36C. The metadata can be used by each storage operating system instance36A-36C to access data from the shared zones of LFE29using the shared namespace NS4. The metadata at HFE27is maintained using the exclusive namespaces NS1-NS3by the storage operating system instances36A-36C, respectively. In one aspect, process240can be implemented by a shared data structure (not shown) that stores zone information in LFE29. This data structure can be replicated via multiple CNS to HFE27(and/or NVRAM26). Each zone may have the following states: "Free," "Full," "Readable by any," or "Writable-by-owner." Whenever a storage operating system instance36wants to modify the shared data structure to change the state of any zone, it atomically obtains a lock on a page storing the zone state.
After obtaining the lock, the update to the state change is written to all replicas. The update is successful if a write quorum number of replicas were successfully updated; if not, the update is rolled back, and the lock is released. Other data structures for tracking shared zone information, for example, reference counts on data blocks in zones, can be managed in a similar way. The reference counts are updated whenever a file is deleted or overwritten, releasing blocks within a zone. In one aspect, methods and systems are provided for using the configuration ofFIG.2Eand the process ofFIG.2J. One method includes assigning (e.g., B246,FIG.2J) a first shared namespace (e.g., NS4,FIG.2E) to a first instance (e.g.,36A,FIG.2E) and a second instance (e.g.,36B,FIG.2E) of a storage operating system for enabling write access to the first instance to a first zone (e.g.,54B,FIG.2E) of a first portion (e.g., LFE29) of a flash storage system (e.g.,14A-14N), and write access to the second instance to a second zone (e.g.,54C,FIG.2E) of the first portion; using (e.g., B248,FIG.2J) a first exclusive namespace (e.g., NS1,FIG.2E) by the first instance to store metadata at a first segment (e.g.,27A,FIG.2E) of a second portion (e.g.,27A-27C,FIG.2E) of the flash storage system; using (e.g., B248,FIG.2J) a second exclusive namespace (e.g., NS2,FIG.2E) by the second instance to store metadata at a second segment (e.g.,27B,FIG.2E) of the second portion of the flash storage system; and providing (e.g., B248,FIG.2J) read only access to the first instance and the second instance to a second zone of the first portion using the first namespace. The method further includes utilizing (e.g., B248,FIG.2J), by the first instance, metadata at the first segment of the second portion to retrieve a data object from the second zone of the first portion, in response to a read request for the data object received by the first instance; and utilizing (e.g., B248,FIG.2J), by the second instance, metadata at the second segment of the second portion to retrieve the data object from the second zone of the first portion, in response to a read request for the data object received by the second instance. System100:FIG.2Kshows an example of a networked operating environment100(also referred to as system100) used according to one aspect of the present disclosure. As an example, system100may include a plurality of storage systems120A-120N (may also be referred to as storage server/storage servers/storage controller/storage controllers120, and also referred to as an "on-premises" storage system120) executing a storage operating system124A-124N (may also be referred to as storage operating system124or storage operating systems124, similar to the storage operating system instances36A-36C described above), a plurality of computing systems102A-102N (shown as host102,102A-102N and may also be referred to as a "host system102", "host systems102", "server102" or "servers102") and user systems108A-108N (may also be referred to as "user system108," "user systems108," "client system108" or "client systems108") that may access storage space provided by a storage-subsystem116managed by the storage systems120via a connection system118such as a local area network (LAN), wide area network (WAN), the Internet and others. The storage-subsystem116includes a plurality of storage devices114A-114N (may also be referred to as storage device/storage devices/disk/disks114).
In one aspect, storage devices114are similar to storage devices14A-14N with LFE29and HFE27, described above in detail. It is noteworthy that the term "disk" as used herein is intended to mean any storage device/space and not to limit the adaptive aspects to any particular type of storage device, for example, hard disks. In one aspect, the storage system120uses the storage operating system124to store and retrieve data from the storage sub-system116by accessing the storage devices114via storage device controllers103A-103N (similar to the NVMeoF controller16(FIG.1B) described above) (may also be referred to as disk controller/disk controllers103). Data is stored and accessed using read and write requests that are also referred to as input/output (I/O) requests. The storage devices114may be organized as one or more RAID groups. The various aspects disclosed herein are not limited to any storage device type or storage device configuration. In one aspect, system100also includes a cloud layer136having a cloud storage manager (may also be referred to as "cloud manager")122, and a cloud storage operating system (may also be referred to as "Cloud Storage OS")140(similar to storage operating system instances36,FIG.1E) having access to cloud storage128(similar to69,FIG.1E). The cloud storage manager122enables configuration and management of storage resources. As an example, a cloud provider104provides access to the cloud layer136and its components via a communication interface112. A non-limiting example of the cloud layer136is a cloud platform, e.g., Amazon Web Services ("AWS") provided by Amazon Inc., Azure provided by Microsoft Corporation, Google Cloud Platform provided by Alphabet Inc. (without derogation of any trademark rights of Amazon Inc., Microsoft Corporation or Alphabet Inc.), or any other cloud platform. In one aspect, communication interface112includes hardware, circuitry, logic and firmware to receive and transmit information using one or more protocols. As an example, the cloud layer136can be configured as a virtual private cloud (VPC), a logically isolated section of a cloud infrastructure that simulates an on-premises data center with the on-premises storage system120. In one aspect, the cloud manager122is provided as a software application running on a computing device or within a VM for configuring, protecting and managing storage objects. In one aspect, the cloud manager122enables access to a storage service (e.g., backup, restore, cloning or any other storage related service) from a "micro-service" made available from the cloud layer136. In one aspect, the cloud manager122stores user information including a user identifier, a network domain for a user device, a user account identifier, or any other information to enable access to storage from the cloud layer136. Software applications for cloud-based systems are typically built using "containers," which may also be referred to as micro-services. Kubernetes is an open-source software platform for deploying, managing and scaling containers, including the cloud storage OS140and the cloud manager122. Azure is a cloud computing platform provided by Microsoft Corporation (without derogation of any third-party trademark rights) for building, testing, deploying, and managing applications and services, including the cloud storage OS140and the cloud manager122. Azure Kubernetes Service enables deployment of a production ready Kubernetes cluster in the Azure cloud for executing the cloud storage OS140and the cloud manager122.
It is noteworthy that the adaptive aspects of the present disclosure are not limited to any specific cloud platform. The term micro-service as used herein denotes computing technology for providing a specific functionality in system100via the cloud layer136. As an example, the cloud storage OS140and the cloud manager122are micro-services, deployed as containers (e.g., "Docker" containers), stateless in nature, may be exposed as a REST (representational state transfer) application programming interface (API) and are discoverable by other services. Docker is a software framework for building and running micro-services using the Linux operating system kernel (without derogation of any third-party trademark rights). As an example, when implemented as docker containers, docker micro-service code for the cloud storage OS140and the cloud manager122is packaged as a "Docker image file". A Docker container for the cloud storage OS140and the cloud manager122is initialized using an associated image file. A Docker container is an active or running instantiation of a Docker image. Each Docker container provides isolation and resembles a lightweight virtual machine. It is noteworthy that many Docker containers can run simultaneously in the same Linux based computing system. It is noteworthy that although a single block is shown for the cloud manager122and the cloud storage OS140, multiple instances of each micro-service (i.e., the cloud manager122and the cloud storage OS140) can be executed at any given time to accommodate multiple user systems108. In one aspect, the cloud manager122and the cloud storage OS140can be deployed from an elastic container registry (ECR). As an example, ECR is provided by AWS (without derogation of any third-party trademark rights) and is a managed container registry that stores, manages, and deploys container images. The various aspects described herein are not limited to the Linux kernel or using the Docker container framework. An example of the cloud storage OS140includes the "CLOUD VOLUMES ONTAP" provided by NetApp Inc., the assignee of this application (without derogation of any trademark rights). The cloud storage OS140is a software defined version of a storage operating system124executed within the cloud layer136or accessible to the cloud layer136to provide storage and storage management options that are available via the storage system120. The cloud storage OS140has access to cloud storage128, which may include block-based, persistent storage that is local to the cloud storage OS140and object-based storage that may be remote to the cloud storage OS140. In another aspect, in addition to cloud storage OS140, a cloud-based storage service is made available from the cloud layer136to present storage volumes (shown as cloud volume142). An example of the cloud-based storage service is the "Cloud Volume Service," provided by NetApp Inc. (without derogation of any trademark rights). The term volume or cloud volume (used interchangeably throughout this specification) means a logical object, also referred to as a storage object, configured to store data files (or data containers or data objects), scripts, word processing documents, executable programs, and any other type of structured or unstructured data. From the perspective of a user system108, each cloud volume can appear to be a single storage drive.
However, each cloud volume can represent the storage space in one storage device, an aggregate of some or all the storage space in multiple storage devices, a RAID group, or any other suitable set of storage space. The various aspects of the present disclosure may include both the Cloud storage OS140and the cloud volume service or either one of them. As an example, user systems108are computing devices that can access storage space at the storage system120via the connection system118or from the cloud layer136presented by the cloud provider104or any other entity. The user systems108can also access computing resources, as a virtual machine (“VM”) (e.g., compute VM110) via the cloud layer136. A user may be the entire system of a company, a department, a project unit or any other entity. Each user system is uniquely identified and optionally, may be a part of a logical structure called a storage tenant (not shown). The storage tenant represents a set of users (may also be referred to as storage consumers) for the cloud provider104that provides access to cloud-based storage and/or compute resources (e.g.,110) via the cloud layer136and/or storage managed by the storage system120. In one aspect, host systems102are configured to execute a plurality of processor-executable applications126A-126N (may also be referred to as “application126” or “applications126”), for example, a database application, an email server, and others. These applications may be executed in different operating environments, for example, a virtual machine environment, Windows, Solaris, Unix (without derogation of any third-party rights) and others. The applications126use storage system120or cloud storage128to store information at storage devices. Although hosts102are shown as stand-alone computing devices, they may be made available from the cloud layer136as compute nodes executing applications126within VMs (shown as compute VM110). Each host system102interfaces with a management module134of a management system132for managing backups, restore, cloning and other operations for the storage system120. The management module134is used for managing and configuring various elements of system100. Management system132may include one or more computing systems for managing and configuring the various elements. Although the management system132with the management module134is shown as a stand-alone module, it may be implemented with other applications, for example, within a virtual machine environment. Furthermore, the management system132and the management module134may also be referred to interchangeably throughout this specification. In one aspect, the storage system120provides a set of storage volumes directly to host systems102via the connection system118. In another aspect, the storage volumes are presented by the cloud storage OS140, and in that context a storage volume is referred to as a cloud volume (e.g.,142). The storage operating system124/cloud storage OS140present or export data stored at storage devices114/cloud storage128as a volume (or a logical unit number (LUN) for storage area network (“SAN”) based storage). The storage operating system124/cloud storage OS140are used to store and manage information at storage devices114/cloud storage128based on a request generated by application126, user108or any other entity. 
The request may be based on file-based access protocols, for example, the Common Internet File System (CIFS) protocol or Network File System (NFS) protocol, over the Transmission Control Protocol/Internet Protocol (TCP/IP). Alternatively, the request may use block-based access protocols for SAN storage, for example, the Small Computer Systems Interface (SCSI) protocol encapsulated over TCP (iSCSI) and SCSI encapsulated over Fibre Channel (FC), object-based protocol or any other protocol. In a typical mode of operation, one or more input/output (I/O) requests are sent over connection system118to the storage system120or the cloud storage OS140, based on the request. Storage system120/cloud storage OS140receives the I/O requests, issues one or more I/O commands to storage devices114/cloud storage128to read or write data on behalf of the host system102and issues a response containing the requested data over the network118to the respective host system102. Although storage system120is shown as a stand-alone system, i.e., a non-cluster-based system, in another aspect, storage system120may have a distributed architecture; for example, a cluster-based system that may include a separate network module and storage module. Briefly, the network module is used to communicate with host systems102, while the storage module is used to communicate with the storage devices114. Alternatively, storage system120may have an integrated architecture, where the network and data components are included within a single chassis. The storage system120further may be coupled through a switching fabric to other similar storage systems (not shown) which have their own local storage subsystems. In this way, all the storage subsystems can form a single storage pool, to which any client of any of the storage servers has access. In one aspect, the storage system120(or the cloud storage OS140) can be organized into any suitable number of virtual servers (may also be referred to as “VServers” or virtual storage machines), in which each VServer represents a single storage system namespace with separate network access. Each VServer has a specific client domain and a security domain that are separate from the client and security domains of other VServers. Moreover, each VServer can span one or more physical nodes, each of which can hold storage associated with one or more VServers. User systems108/host102can access the data on a VServer from any node of the clustered system, through the virtual interface associated with that VServer. It is noteworthy that the aspects described herein are not limited to the use of VServers. As an example, one or more of the host systems (for example,102A-102N) or a compute resource (not shown) of the cloud layer136may execute a VM environment where a physical resource is time-shared among a plurality of independently operating processor executable VMs (including compute VM110). Each VM may function as a self-contained platform, running its own operating system (OS) and computer executable, application software. The computer executable instructions running in a VM may also be collectively referred to herein as “guest software.” In addition, resources available within the VM may also be referred to herein as “guest resources.” The guest software expects to operate as if it were running on a dedicated computer rather than in a VM. 
That is, the guest software expects to control various events and have access to hardware resources on a physical computing system (may also be referred to as a host system) which may also be referred to herein as "host hardware resources". The host hardware resource may include one or more processors, resources resident on the processors (e.g., control registers, caches, and others), memory (instructions residing in memory, e.g., descriptor tables), and other resources (e.g., input/output devices, host attached storage, network attached storage or other like storage) that reside in a physical machine or are coupled to the host system. Storage Operating System:FIG.3illustrates a generic example of the storage operating system124(or storage operating system instance36) executed by storage system120, according to one aspect of the present disclosure. Storage operating system124/36interfaces with the storage sub-system116as described above in detail. As an example, operating system124/36may include several modules, or "layers". These layers include a file system301(similar to42) that keeps track of a directory structure (hierarchy) of the data stored in storage devices and manages read/write operations, i.e., executes read/write operations on storage devices in response to host system102requests. The storage operating system124/36may also include a protocol layer303and an associated network access layer305, to allow storage system120to communicate over a network with other systems, such as host system102, and management system132. Protocol layer303may implement one or more of various higher-level network protocols, such as NFS (e.g.,44,FIG.2A), CIFS (46,FIG.2A), S3 (e.g.,48,FIG.2A), Hypertext Transfer Protocol (HTTP), TCP/IP and others. Network access layer305may include one or more drivers, which implement one or more lower-level protocols to communicate over the network, such as Ethernet. Interactions between host systems102and the storage sub-system116are illustrated schematically as a path, which illustrates the flow of data through storage operating system124. The storage operating system124may also include a storage access layer307and an associated storage driver layer309to communicate with a storage device14. The storage access layer307may implement a higher-level disk storage protocol, such as a RAID layer, while the storage driver layer309may implement a lower-level storage device access protocol, such as the NVMe protocol. It should be noted that the software "path" through the operating system layers described above needed to perform data storage access for a client request may alternatively be implemented in hardware. That is, in an alternate aspect of the disclosure, the storage access request data path may be implemented as logic circuitry embodied within a field programmable gate array (FPGA) or an ASIC. This type of hardware implementation increases the performance of the file service provided by storage system120. In addition, it will be understood by those skilled in the art that the invention described herein may apply to any type of special-purpose (e.g., file server, filer or storage serving appliance) or general-purpose computer, including a standalone computer or portion thereof, embodied as or including a storage system.
Moreover, the teachings of this disclosure can be adapted to a variety of storage system architectures including, but not limited to, a network-attached storage environment, a storage area network and a disk assembly directly attached to a client or host computer. The term "storage system" should therefore be taken broadly to include such arrangements in addition to any subsystems configured to perform a storage function and associated with other equipment or systems. Processing System:FIG.4is a high-level block diagram showing an example of the architecture of a processing system, at a high level, in which executable instructions as described above can be implemented. The processing system400can represent a compute node12A/12B, the storage system120, the management system132, host systems102, and others. Note that certain standard and well-known components which are not germane to the present invention are not shown inFIG.4. The processing system400includes one or more processors402and memory404, coupled to a bus system405. The bus system405shown inFIG.4is an abstraction that represents any one or more separate physical buses and/or point-to-point connections, connected by appropriate bridges, adapters and/or controllers. The bus system405, therefore, may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (sometimes referred to as "Firewire"). The processors402are the central processing units (CPUs) of the processing system400and, thus, control its overall operation. In certain aspects, the processors402accomplish this by executing programmable instructions stored in memory404. A processor402may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices. Memory404represents any form of random-access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices. Memory404includes the main memory of the processing system400. Instructions406, which implement the techniques introduced above, may reside in and may be executed (by processors402) from memory404. For example, instructions406may include code for executing the process blocks ofFIGS.2F-2Jfor using the systems disclosed inFIGS.1A-2E. Also connected to the processors402through the bus system405are one or more internal mass storage devices410, and a network adapter412. Internal mass storage devices410may be or may include any conventional medium for storing large volumes of data in a non-volatile manner, such as one or more magnetic or optical based disks. The network adapter412provides the processing system400with the ability to communicate with remote devices (e.g., storage servers) over a network and may be, for example, an Ethernet adapter, a FC adapter, or the like. The processing system400also includes one or more input/output (I/O) devices408coupled to the bus system405. The I/O devices408may include, for example, a display device, a keyboard, a mouse, etc.
Cloud Computing: The system and techniques described above are applicable and especially useful in the cloud computing environment where storage is presented and shared across different platforms. Cloud computing means computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that may be rapidly provisioned and released with minimal management effort or service provider interaction. The term “cloud” is intended to refer to a network, for example, the Internet and cloud computing allows shared resources, for example, software and information to be available, on-demand, like a public utility. Typical cloud computing providers deliver common business applications online which are accessed from another web service or software like a web browser, while the software and data are stored remotely on servers. The cloud computing architecture uses a layered approach for providing application services. A first layer is an application layer that is executed at client computers. In this example, the application allows a client to access storage via a cloud. After the application layer is a cloud platform and cloud infrastructure, followed by a “server” layer that includes hardware and computer software designed for cloud specific services. The storage systems described above may be a part of the server layer for providing storage services. Details regarding these layers are not germane to the inventive aspects. Thus, methods and apparatus for scalable storage appliance have been described. Note that references throughout this specification to “one aspect” or “an aspect” mean that a particular feature, structure or characteristic described in connection with the aspect is included in at least one aspect of the present invention. Therefore, it is emphasized and should be appreciated that two or more references to “an aspect” or “one aspect” or “an alternative aspect” in various portions of this specification are not necessarily all referring to the same aspect. Furthermore, the particular features, structures or characteristics being referred to may be combined as suitable in one or more aspects of the present disclosure, as will be recognized by those of ordinary skill in the art. While the present disclosure is described above with respect to what is currently considered its preferred aspects, it is to be understood that the disclosure is not limited to that described above. To the contrary, the disclosure is intended to cover various modifications and equivalent arrangements within the spirit and scope of the appended claims. | 65,135 |
11861232 | DETAILED DESCRIPTION To resolve the issues mentioned in the background, a solution is required to reduce power consumption in writing data to a storage system. Analysis shows that the interface specifications for storage devices provide a write data copy mode. In this mode, if multiple groups of data to be written to a storage device are the same, only one group of data in the multiple groups of data is transmitted to an interface of a memory array of the storage device. A transmission path for transmitting another group of data in the multiple groups of data to another interface of the memory array is still activated. In addition, a special instruction is sent to the memory array to indicate a copy of written data. After receiving the special instruction, the memory array copies the group of normally transmitted data to the another interface that is of the memory array and that corresponds to the another group of data so as to write complete data to the memory array. However, the transmission path for transmitting the another group of data to the another interface of the memory array is still activated. This still causes power consumption in data writing. The embodiments of the present disclosure provide a storage system and a data writing method thereof. In a write data copy mode, if it finds that multiple groups of data to be written to the memory array include groups of the same data, the storage system activates a transmission path for one group of the same data to be written to the storage system, but disconnects a transmission path for another group of the same data to be written to the storage system. In this way, no additional power consumption is required on the transmission path for the another group of the same data during the data writing process. Therefore, power consumption in writing data to the storage system can be reduced. The embodiments of the present disclosure are described in detail below with reference to the drawings. Those skilled in the art should understand that many technical details are proposed in the embodiments of the present disclosure to make the present disclosure better understood. However, even without these technical details, and with various changes and modifications based on the following embodiments, the technical solutions claimed in the embodiments of the present disclosure may still be realized. A first embodiment of the present disclosure provides a storage system. The storage system provided by the embodiment of the present disclosure is described in detail below with reference to the drawings.FIG.1is a first schematic diagram of a functional structure of a storage system according to an embodiment of the present disclosure.FIG.2is a second schematic diagram of a functional structure of a storage system according to an embodiment of the present disclosure.FIG.3is a third schematic diagram of a functional structure of a storage system according to an embodiment of the present disclosure.FIG.4is a fourth schematic diagram of a functional structure of a storage system according to an embodiment of the present disclosure.FIG.5is a fifth schematic diagram of a functional structure of a storage system according to an embodiment of the present disclosure.FIG.6is a sixth schematic diagram of a functional structure of a storage system according to an embodiment of the present disclosure.
In the embodiments of the present disclosure, referring toFIG.1, a storage system100is configured to: enter a write data copy mode in response to a write-copy enable signal101a; if at least two groups of data in multiple groups of data exported from multiple data ports102are the same in the write data copy mode, define the at least two groups of data as a category; generate an identification signal101bthat is used to indicate a data copy; transmit one group of data in the category to an interface of a memory array103; and disconnect a transmission path between a data port102corresponding to another group of data in the category and another interface of the memory array103. The memory array103, in response to the write-copy enable signal101aand the identification signal101b, copies the one group of data in the category to the other interface of the memory array103, that is, the interface corresponding to the other group of data in the category. In some embodiments, still referring toFIG.1, the data ports102may export the following eight groups of data: D0<7:0>, D1<7:0>, D2<7:0>, D3<7:0>, D4<7:0>, D5<7:0>, D6<7:0>, and D7<7:0>. Each of the eight groups of data includes an eight-bit unsigned number. If any two groups of data are the same, the eight-bit unsigned numbers in the two groups of data have the same sequence. It should be noted that the eight groups of data exported from the data ports102inFIG.1, each including an eight-bit unsigned number, are merely an example. In actual application, the number of groups of data exported from the data ports102and the number of bits for an unsigned number in each group of data are not limited. If at least two groups of data, such as D0<7:0> and D1<7:0>, in the eight groups of data are the same, the storage system can define D0<7:0> and D1<7:0> as a category, export an identification signal101bthat is used to indicate a data copy, transmit D0<7:0> to a corresponding interface of the memory array103, and disconnect a transmission path for transmitting D1<7:0> to a corresponding interface of the memory array103. In this way, no additional power consumption is caused on the transmission path for transmitting D1<7:0> to the corresponding interface of the memory array103. Therefore, power consumption in writing data to the memory array103can be reduced. The following two types of embodiments describe how data is written to the memory array103. In some embodiments, the storage system100may be further configured to export the identification signal101bif all of the multiple groups of data are the same. For example, D0<7:0>, D1<7:0>, D2<7:0>, D3<7:0>, D4<7:0>, D5<7:0>, D6<7:0>, and D7<7:0> are the same. Therefore, the storage system100can activate a transmission path for transmitting any group of data in the eight groups of data to the corresponding interface of the memory array103, but disconnect the transmission paths for transmitting the other seven groups of data to the corresponding interfaces of the memory array103. For example, if the group of data selected for transmission is D0<7:0>, the other seven groups of data are D1<7:0>, D2<7:0>, D3<7:0>, D4<7:0>, D5<7:0>, D6<7:0>, and D7<7:0>. This prevents unnecessary power consumption on the transmission paths corresponding to the other seven groups of data when the eight groups of data are the same. Therefore, power consumption in writing data to the memory array103can be reduced.
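As a concrete illustration of the categorization just described, the following is a minimal Python sketch, not the disclosed circuit, of how eight groups of data exported from the data ports might be classified into categories, with one representative transmission path kept active per category; the name plan_write_copy and the returned lists are hypothetical.

    from collections import defaultdict

    def plan_write_copy(groups):
        # Ports exporting equal groups of data belong to one category.
        categories = defaultdict(list)
        for port, value in enumerate(groups):
            categories[value].append(port)
        active, disconnected, copy_map = [], [], {}
        for ports in categories.values():
            representative = ports[0]              # only this path stays activated
            active.append(representative)
            disconnected.extend(ports[1:])         # these paths are disconnected
            copy_map[representative] = ports[1:]   # conveyed by the identification signal
        return active, disconnected, copy_map

    # All eight groups the same: one path active, seven disconnected.
    active, disconnected, copy_map = plan_write_copy([0x5A] * 8)
    print(active)        # [0]
    print(disconnected)  # [1, 2, 3, 4, 5, 6, 7]
    print(copy_map)      # {0: [1, 2, 3, 4, 5, 6, 7]}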
It should be noted that, in actual application, N groups of data may be exported from the data ports102and all of the N groups of data may be the same. In this case, unnecessary power consumption on N−1 transmission paths corresponding to N−1 groups of data can be prevented during the transmission of the N groups of data. Therefore, power consumption in writing data to the memory array103is reduced. N is an integer greater than or equal to 2. In some other embodiments, the storage system100may be further configured to: if the multiple groups of data have at least one group of data whose data is different from that in the category, transmit the at least one group of data to a corresponding interface of the memory array103. For example, in D0<7:0>, D1<7:0>, D2<7:0>, D3<7:0>, D4<7:0>, D5<7:0>, D6<7:0>, and D7<7:0>, the first six groups of data are the same but different from D6<7:0> and D7<7:0>, and D7<7:0> is also different from D6<7:0>. In other words, the eight groups of data are classified into three different categories. In this case, the storage system100transmits the data in D0<7:0>, D6<7:0>, and D7<7:0> to the interfaces of the memory array103that correspond to D0<7:0>, D6<7:0>, and D7<7:0>, respectively. Therefore, three transmission paths are activated to transmit the data in D0<7:0>, D6<7:0>, and D7<7:0> so as to write the data in D0<7:0>, D6<7:0>, and D7<7:0> to the memory array103. Meanwhile, the other five transmission paths are disconnected so as to prevent unnecessary power consumption in writing data to the memory array103. In addition, D0<7:0>, D1<7:0>, D2<7:0>, D3<7:0>, D4<7:0>, and D5<7:0> are the same. Therefore, the activation of a transmission path for transmitting any of these six groups of data to a corresponding interface of the memory array103can meet the requirements in actual application. For ease of description, the transmission path for transmitting D0<7:0> to the interface of the memory array103that corresponds to D0<7:0> is activated. It should be noted that the foregoing example is used only for easy description. In actual application, data transmission can be implemented based on the foregoing solution if the multiple groups of data have at least one group of data whose data is different from that in the category. For example, in D0<7:0>, D1<7:0>, D2<7:0>, D3<7:0>, D4<7:0>, D5<7:0>, D6<7:0>, and D7<7:0>, the first four groups of data are the same and the last four groups of data are the same. D0<7:0> is different from D4<7:0>. In other words, the eight groups of data are classified into two different categories. In this case, the storage system100can transmit the data in D0<7:0> to the interface of the memory array103that corresponds to D0<7:0> and the data in D4<7:0> to the interface of the memory array103that corresponds to D4<7:0> based on the identification signal101b. Therefore, the storage system100can disconnect the remaining six transmission paths based on the identification signal101bso as to prevent unnecessary power consumption in writing data to the memory array103. In addition, D0<7:0>, D1<7:0>, D2<7:0>, and D3<7:0> are the same. Therefore, the activation of a transmission path for transmitting any of these groups of data to a corresponding interface of the memory array103can meet the requirements in actual application. For ease of description, the transmission path for transmitting D0<7:0> to the interface of the memory array103that corresponds to D0<7:0> is activated.
D4<7:0>, D5<7:0>, D6<7:0>, and D7<7:0> are the same. Therefore, the activation of a transmission path for transmitting any of these groups of data to a corresponding interface of the memory array103can meet the requirements in actual application. For ease of description, the transmission path for transmitting D4<7:0> to the interface of the memory array103that corresponds to D4<7:0> is activated. It should be noted that the foregoing example is used only for easy description. In this example, the number of categories is 2, and the groups of data D0<7:0>, D1<7:0>, D2<7:0>, and D3<7:0> are the same. In actual application, other groups of data may be the same, and the same groups of data may include multiple consecutive groups of data or non-consecutive groups of data. This is not limited in the embodiments of the present disclosure. In addition, the number of categories may be greater than 2. Still referring toFIG.1, the storage system100in the embodiments of the present disclosure includes a processing component101and multiple data channels104. The processing component101is configured to: in response to the write-copy enable signal101aand the multiple groups of data, generate a first drive signal101c, a second drive signal101d, and the identification signal101b, and send the identification signal101bto the memory array103. Each of the multiple data channels104is a transmission path for transmitting a group of data between an interface of the memory array103and a data port102. The data channel104corresponding to one group of data in the category is activated in response to the first drive signal101c. The data channel104corresponding to the other group of data in the category is disconnected in response to the second drive signal101d. A data channel104for any group of data in the category is selected and activated by receiving the first drive signal101c, and the data channel104for the other group of data in the category is disconnected by receiving the second drive signal101d. In this way, when multiple groups of identical data are transmitted, the first drive signal101ccan be used to control the activation of only one data channel104, and the second drive signal101dcan be used to control the disconnection of the data channel104corresponding to the other group of data in the multiple groups of identical data. This prevents unnecessary power consumption on the transmission path corresponding to the other group of data in the multiple groups of identical data. Therefore, power consumption in writing data to the memory array103is further reduced. In some embodiments, referring toFIG.2, the data channel104may include multiple data transmission circuits114that are serially connected, and each stage of the multiple data transmission circuits114transmits data based on the first drive signal101cor is disconnected based on the second drive signal101d. When data is transmitted from the data port102to the interface of the memory array103, data distortion is prone to occur because the data channel104is long. In other words, the data exported from the data port102may have changed by the time it arrives at the interface of the memory array103. As a result, an error may occur when data is written to the memory array103. Therefore, the data channel104includes multiple data transmission circuits114that are serially connected. This helps ensure that data to be transmitted on the data channel104is processed by each data transmission circuit114and is transmitted without distortion.
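A rough software analogy for the drive signals and staged data channels just described follows; drive_signals and DataChannel are assumed names for illustration and do not model actual circuit timing, latching, or power behavior.

    def drive_signals(active_ports, num_ports=8):
        # True stands in for the first drive signal (activate the channel);
        # False stands in for the second drive signal (disconnect it).
        active = set(active_ports)
        return [port in active for port in range(num_ports)]

    class DataChannel:
        # Toy model of a data channel built from serially connected data
        # transmission circuits; each stage re-drives and latches the data.
        def __init__(self, stages=2):
            self.stages = stages

        def transmit(self, data, driven):
            if not driven:
                return None              # disconnected channel: nothing toggles
            for _ in range(self.stages):
                latched = data           # one part of the data is latched here
                data = latched           # the other part feeds the next stage
            return data

    signals = drive_signals(active_ports=[0])
    received = [DataChannel().transmit(0x5A, s) for s in signals]
    print(received)  # 0x5A arrives only on channel 0; the other channels stay silent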
Referring toFIG.3, the processing component101may further include multiple identification signal transmission circuits105that are serially connected. Each identification signal transmission circuit105has an input terminal configured to receive the identification signal101band an output terminal configured to export the identification signal101b. The last identification signal transmission circuit105is configured to transmit the identification signal101bto the memory array103. When the identification signal101bis transmitted from the processing component101to the memory array103, data distortion is also prone to occur because the transmission path is long. In other words, the identification signal101bexported from the processing component101may have changed by the time it is transmitted to the memory array103. As a result, the identification signal101bmay be misread by the memory array103. Consequently, the memory array103may have difficulty in determining, based on the identification signal101b, whether a data copy occurs among the multiple data groups that are written to the memory array103and which groups of data are the same. Therefore, the processing component101includes multiple identification signal transmission circuits105that are serially connected. This helps ensure that the identification signal101bexported from the processing component101is processed by each identification signal transmission circuit105and is transmitted to the memory array103without distortion. In some embodiments, referring toFIG.4, each identification signal transmission circuit105may include an even number of serially connected inverters115. The inverter115features relatively high noise tolerance, extremely high input resistance, and extremely low static power consumption, and is insensitive to noise and interference. In addition, an even number of serially connected inverters115ensures that the identification signal101bthat is finally transmitted to the memory array103is not inverted. Therefore, the identification signal101bcan be less affected when being transmitted from the processing component101to the memory array103. This further ensures that the identification signal101bis transmitted to the memory array103without distortion. In some embodiments, still referring toFIG.4, the data transmission circuit114may include a driver124and a latch134. The driver124activates the data transmission circuit114in response to the first drive signal101cor disconnects the data transmission circuit114in response to the second drive signal101d. In this way, after the driver124activates the data transmission circuit114, data transmitted on the data transmission circuit114is divided into two parts. One part is transmitted to the latch134and is latched. The other part is transmitted to the next data transmission circuit114. It should be noted that, inFIG.2toFIG.4, two data transmission circuits114that are serially connected are provided on the data channel104. This is merely an example. In actual application, the number of data transmission circuits114on the data channel104is not limited. InFIG.3andFIG.4, two identification signal transmission circuits105that are serially connected are provided. In actual application, the number of identification signal transmission circuits105is not limited. In some embodiments, referring toFIG.5, the processing component101may further include multiple signal generation units106.
Each signal generation unit106corresponds to one identification signal transmission circuit105and one data transmission circuit114and is configured to: in response to the write-copy enable signal101aand the identification signal101breceived by an input terminal of the identification signal transmission circuit105, provide the first drive signal101cand the second drive signal101dto the corresponding data transmission circuit114. The identification signal101bcan be used to indicate a data copy, that is, to indicate which groups of data are the same. Therefore, the signal generation unit106can generate, based on the identification signal101breceived by the input terminal of the identification signal transmission circuit105, a new first drive signal101cfor the next data transmission circuit114that needs to transmit data, and a new second drive signal101dfor the next data transmission circuit114on a data channel104that is disconnected. This ensures that data to be transmitted on the data channel104is processed by each data transmission circuit114and is transmitted without distortion. This also ensures that each data transmission circuit114on the data channel104corresponding to the other group of data in the category is disconnected. Therefore, no interference is caused to data transmission, and power consumption in writing data to the memory array103is reduced. In some embodiments, referring toFIG.6, the storage system100may further include multiple input buffer circuits107. Each input buffer circuit107is located between a data channel104and a data port102, and the input buffer circuit107corresponding to the other group of data in the category is disconnected in response to a disconnect enable signal101ethat is exported from the processing component101. In this way, the input buffer circuit107is disconnected when receiving the disconnect enable signal101e. In other words, the input buffer circuit107corresponding to the other group of data in the category is not driven, and data exported from the data port102will not be transmitted on the data channel104. The data channel104corresponding to the disconnected input buffer circuit107is also disconnected based on the second drive signal101d. In this way, no additional power consumption is caused on the transmission path for transmitting the other group of data in the category from the input buffer circuit107to the corresponding interface of the memory array103. Therefore, power consumption in writing data to the memory array103can be reduced. Still referring toFIG.6, the processing component101may further include an instruction decode unit108. The instruction decode unit108is connected to each input buffer circuit107and is configured to: export the disconnect enable signal101eto the input buffer circuit107corresponding to the other group of data in the category based on an instruction signal101f. In this way, the instruction decode unit108can export the disconnect enable signal101eto the input buffer circuit107corresponding to the other group of data in the category based on the instruction signal101fso that this input buffer circuit107is disconnected based on the disconnect enable signal101e. In some embodiments, still referring toFIG.6, the storage system100may further include an instruction generation unit109.
The instruction generation unit109is configured to receive the multiple groups of data and export the instruction signal101fif the multiple groups of data include the category. It should be noted that the instruction generation unit109can be used to analyze the multiple groups of data that need to be transmitted from the data ports102. In some examples, the instruction signal101fgenerated based on the multiple groups of data can be used to indicate that the multiple groups of data include the category and indicate which group of data in the category can be transmitted. In addition, the instruction signal101fcan also be used to indicate a position of an input buffer circuit107corresponding to each group of data. In this way, the instruction decode unit108can subsequently transmit, based on the instruction signal101f, the disconnect enable signal101eto the input buffer circuit107corresponding to the other group of data in the category. The input buffer circuit107corresponding to the other group of data in the category can be disconnected based on the disconnect enable signal101e, and the transmission path between the input buffer circuit107and the corresponding interface of the memory array103can also be disconnected. In other examples, if the number of categories is greater than or equal to 2, the instruction signal101fgenerated based on the multiple groups of data can be used to indicate that specific groups of data exported from specific data ports102belong to the same category, and indicate a position of an input buffer circuit107corresponding to each group of data. In this way, the instruction decode unit108can subsequently transmit, based on the instruction signal101f, the disconnect enable signal101eto an input buffer circuit107corresponding to a group of data that is in any of the categories and that does not need to be transmitted. This prevents the disconnect enable signal101efrom being misread by input buffer circuits107in different categories. Therefore, the instruction generation unit109can be used to analyze the multiple groups of data to be transmitted by the data ports102, so as to learn which groups of data in the multiple groups of data are the same and to obtain the instruction signal101fthat indicates a position of an input buffer circuit107corresponding to each group of data. In this way, an input buffer circuit107corresponding to another group of data in a category is disconnected, and the disconnect enable signal101eis not misread by input buffer circuits107in different categories. It should be noted that, in actual application, the instruction generation unit109can be a subunit of the processing component101, a component in parallel with the processing component101, or an external unit outside the storage system100. In some embodiments, the processing component101may be further configured to export a third drive signal (not shown in the figure) if the multiple groups of data include at least one group of data whose data differs from that in the category, wherein the data channel104corresponding to that at least one group of data is activated in response to the third drive signal. In the multiple groups of data, some groups of data are the same, while other groups of data are different. The first drive signal101cand the second drive signal101dare generated based on the groups of data that are the same, and the third drive signal is generated based on the other groups of data that are different.
This ensures that only one data channel104in the multiple data channels104corresponding to the same groups of data is activated to reduce power consumption in data transmission. This also ensures that the data channels104corresponding to the other groups of data that are different are activated based on the third drive signal. Therefore, the integrity of data finally written to the memory array103can be ensured. It should be noted that the data channel104corresponding to the at least one group of data whose data differs from that in the category can also be activated in response to the first drive signal101cin actual application. To sum up, in the write data copy mode, if the storage system100finds that the multiple groups of data to be written to the memory array103include identical data, it activates a transmission path that is used to transmit one group of data in the category to the memory array103but disconnects a transmission path that is used to transmit another group of data in the category to the memory array103. In this way, no additional power consumption is caused on the transmission path corresponding to the other group of data in the category during the data writing process. Therefore, power consumption in writing data to the memory array103can be reduced. Another embodiment of the present disclosure further provides a data writing method of a storage system, which is applicable to the storage system provided in the foregoing embodiments. The data writing method of the storage system provided by this further embodiment of the present disclosure is described in detail below with reference to the drawings. In the embodiments of the present disclosure, the data writing method of the storage system includes the following steps: Enter a write data copy mode in response to a write-copy enable signal. If at least two groups of data in multiple groups of data exported from multiple data ports are the same in the write data copy mode, define the at least two groups of data as a category. Generate an identification signal that is used to indicate a data copy. Transmit one group of data in the category to an interface of a memory array. Disconnect a transmission path between a data port corresponding to another group of data in the category and another interface of the memory array. The memory array, in response to the write-copy enable signal and the identification signal, copies the one group of data in the category to the other interface of the memory array, that is, the interface corresponding to the other group of data in the category. In this way, no additional power consumption is caused on the transmission path for transmitting the other group of data in the category to the corresponding interface of the memory array. Therefore, power consumption in writing data to the memory array can be reduced. In some embodiments, the step of generating an identification signal that is used to indicate a data copy may further include the following substeps: Determine whether all of the multiple groups of data are the same. If all of the multiple groups of data are the same, generate the identification signal. In this way, if all of N groups of data are the same, the identification signal can be used to disconnect transmission paths corresponding to N−1 groups of data during the transmission of the N groups of data. This prevents unnecessary power consumption on the N−1 transmission paths corresponding to the N−1 groups of data. Therefore, power consumption in writing data to the memory array is reduced.
N is an integer greater than or equal to 2. In some embodiments, the step of transmitting one group of data in the category to an interface of a memory array and disconnecting a transmission path between a data port corresponding to another group of data in the category and another interface of the memory array may include the following substeps: In response to the write-copy enable signal and the multiple groups of data, generate a first drive signal and a second drive signal. Transmit, by a data channel corresponding to the one group of data in the category, the one group of data to the corresponding interface of the memory array in response to the first drive signal. Disconnect a data channel corresponding to the other group of data in the category in response to the second drive signal. In this way, when multiple groups of identical data are transmitted, the first drive signal can be used to control the activation of only one data channel in the data channels corresponding to the multiple groups of data, and the second drive signal can be used to control the disconnection of the data channel corresponding to the other group of data in the multiple groups of data. This can prevent unnecessary power consumption on the transmission path corresponding to the other group of data in the multiple groups of data. Therefore, power consumption in writing data to the memory array is further reduced. Subsequently, the identification signal can be used to copy the data that is transmitted on the activated data channel to the interface of the memory array that corresponds to the other group of data in the multiple groups of data. In some embodiments, the step of transmitting one group of data in the category to an interface of a memory array and disconnecting a transmission path between a data port corresponding to another group of data in the category and another interface of the memory array may further include the following substep: Disconnect an input buffer circuit between the data port and the data channel that correspond to the other group of data in the category. The disconnection of the input buffer circuit means that the input buffer circuit corresponding to the other group of data in the category is not driven, so data exported from the data port cannot be transmitted to the corresponding data channel. Therefore, power consumption in writing data to the memory array can be reduced. In some embodiments, the multiple groups of data to be transmitted from the data ports102can be analyzed so as to generate an instruction signal and a disconnect enable signal. Then, the disconnect enable signal can be exported to the input buffer circuit corresponding to the other group of data in the category based on the instruction signal. In some examples, the instruction signal can be used to indicate that the multiple groups of data include the category and indicate which group of data in the category can be normally transmitted. In this way, the disconnect enable signal can be subsequently exported to the input buffer circuit corresponding to the other group of data in the category based on the instruction signal. Then, the input buffer circuit corresponding to the other group of data in the category can be disconnected based on the disconnect enable signal, and the transmission path between the input buffer circuit and the corresponding interface of the memory array can also be disconnected.
In other examples, if the number of categories is greater than or equal to 2, the instruction signal generated based on the multiple groups of data can be used to indicate that specific groups of data exported from specific data ports belong to the same category, and indicate a position of an input buffer circuit corresponding to each group of data. In this way, an instruction decode unit can subsequently transmit, based on the instruction signal, the disconnect enable signal to an input buffer circuit corresponding to a group of data that is in any of the categories and that does not need to be transmitted. This prevents the disconnect enable signal from being misread by input buffer circuits in different categories. In some embodiments, the identification signal is transmitted to the memory array by using multiple identification signal transmission circuits that are serially connected, and a data channel includes multiple data transmission circuits that are serially connected. Each data transmission circuit corresponds to an identification signal transmission circuit. The step of transmitting one group of data in the category to an interface of a memory array and disconnecting a transmission path between a data port corresponding to another group of data in the category and another interface of the memory array may further include the following substep: In response to the write-copy enable signal and the identification signal that is received by an input terminal of a current identification signal transmission circuit, generate the first drive signal that is used to drive at least a current data transmission circuit and the second drive signal that is used to disconnect the current data transmission circuit. This ensures that each data transmission circuit114on the data channel104that needs to transmit data is activated based on the first drive signal. Therefore, data is transmitted without distortion. This also ensures that each data transmission circuit114on the data channel104corresponding to the other group of data in the category is disconnected. Therefore, no interference is caused to data transmission, and power consumption in writing data to the memory array103is reduced. To sum up, after the storage system enters the write data copy mode in response to a write-copy enable signal, if it finds that the multiple groups of data to be written to the memory array include identical data, the storage system activates a transmission path that is used to transmit one group of data in the category to the memory array but disconnects a transmission path that is used to transmit another group of data in the category to the memory array. In this way, no additional power consumption is caused on the transmission path corresponding to the other group of data in the category during the data writing process. Therefore, power consumption in writing data to the memory array can be reduced. Those skilled in the art can understand that the above implementations are specific embodiments for implementing the present disclosure. In practical applications, various changes may be made to the above embodiments in terms of form and details without departing from the spirit and scope of the embodiments of the present disclosure. Any person skilled in the art may make changes and modifications to the embodiments without departing from the spirit and scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the scope defined by the claims. | 34,843 |
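For illustration only, the data writing method summarized above can be condensed into the following hypothetical Python sketch; write_with_copy_mode merely models which values end up at each interface of the memory array and is not the claimed implementation.

    def write_with_copy_mode(groups, write_copy_enable=True):
        # Models which value ends up at each interface of the memory array.
        interfaces = [None] * len(groups)
        if not write_copy_enable:
            return list(groups)              # normal mode: every path is driven
        representative = {}                  # data value -> port that transmits it
        for port, value in enumerate(groups):
            if value not in representative:
                representative[value] = port
                interfaces[port] = value     # transmitted over an activated path
            # duplicate groups: their input buffers and channels stay disconnected
        for port, value in enumerate(groups):
            if interfaces[port] is None:     # copied inside the memory array in
                rep = representative[value]  # response to the identification signal
                interfaces[port] = interfaces[rep]
        return interfaces

    # Two categories and two unique groups: four paths driven, four copied.
    assert write_with_copy_mode([7, 7, 7, 7, 9, 9, 1, 2]) == [7, 7, 7, 7, 9, 9, 1, 2]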
11861233 | DETAILED DESCRIPTION Aspects of the present disclosure are directed to using duplicate data to improve error correction capability in memory devices. A memory sub-system can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction withFIG.1. In general, a host system can utilize a memory sub-system that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. A memory sub-system can include high density non-volatile memory devices where retention of data is desired when no power is supplied to the memory device. One example of non-volatile memory devices is a negative-and (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction withFIG.1. A non-volatile memory device is a package of one or more dies. Each die can consist of one or more planes. For some types of non-volatile memory devices (e.g., NAND devices), each plane consists of a set of physical blocks. Each block consists of a set of pages. Each page consists of a set of memory cells (“cells”). A cell is an electronic circuit that stores information. Depending on the cell type, a cell can store one or more bits of binary information, and has various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as “0” and “1”, or combinations of such values. For example, a single level cell (SLC) can store one bit of information and has two logic states. The various logic states have corresponding threshold voltage (Vt) levels. A threshold voltage (Vt) is the voltage applied to the cell circuitry (e.g., control gate at which a transistor becomes conductive) to set the state of the cell. A cell is set to one of its logic states based on the Vtthat is applied to the cell. For example, if a high Vtis applied to an SLC, a charge will be present in the cell, setting the SLC to store a binary logical state of 0. If a low Vtis applied to the SLC, charge will be absent in the cell, setting the SLC to store a binary logical state of 1. A memory device can be made up of cells arranged in a two-dimensional grid. Memory cells are etched onto a silicon wafer in an array of columns connected by conductive lines (also referred to as bitlines) and rows connected by conductive lines (also referred to as wordlines). A wordline can refer to a conductive line that connects control gates of a set (e.g., a row) of memory cells of a memory device that are used with one or more bitlines to generate the address of each of the memory cells. The intersection of a bitline and wordline constitutes the address of the memory cell. A block refers to a unit of the memory device used to store data and can include a group of memory cells, a wordline group, a wordline, or individual memory cells. One or more blocks can be grouped together to form a plane of the memory device in order to allow concurrent operations to take place on each plane. The memory device can include circuitry that performs concurrent memory page accesses of two or more memory planes. 
For example, the memory device can include a respective access line driver circuit and power circuit for each plane of the memory device to facilitate concurrent access of pages of two or more memory planes, including different page types. A memory cell can be programmed (written to) by applying a certain voltage to the memory cell, which results in an electric charge being held by the memory cell, thus allowing modulation of the voltage distributions produced by the memory cell. Precisely controlling the amount of the electric charge stored by the memory cell allows establishing multiple logical levels. A read operation can be performed by comparing the measured threshold voltage (Vt) exhibited by the memory cell to one or more reference voltage levels in order to distinguish between two logical levels for single-level cells (SLCs) and between multiple logical levels for multi-level cells. Accordingly, certain non-volatile memory devices can use a demarcation voltage (read reference voltage) to read data stored at memory cells. For example, a demarcation voltage can be applied to the memory cells and if a threshold voltage of a particular memory cell is identified as being below the demarcation voltage that is applied to the particular memory cell, then the data stored at the particular memory cell can be read as a particular value (e.g., a logical ‘1’) or determined to be in a particular state (e.g., a set state). If the threshold voltage of the particular memory cell is identified as being above the demarcation voltage, then the data stored at the particular memory cell can be read as another value (e.g., a logical ‘0’) or determined to be in another state (e.g., a reset state). Thus, the demarcation voltage can be applied to memory cells to determine values stored at the memory cells. Such threshold voltage can be within a range of threshold voltages or a normal distribution of threshold voltages. A memory device can experience varied workloads, which can impact the threshold voltage distributions and cause them to shift to higher or lower values. Therefore, the threshold voltage of a memory cell or the threshold voltage distribution of all the memory cells in a memory sub-system can shift or change over time. In order to distinguish between adjacent distributions (corresponding to two different logical levels), the read threshold voltage levels can be defined such that any measured voltage that falls below a read threshold level is associated with one distribution of the pair of adjacent program distributions (e.g., a distribution corresponding to the logical state of ‘1’), while any measured voltage that is greater than or equal to the read threshold level is associated with another distribution of the pair of neighboring distributions (e.g., a distribution corresponding to the logical state of ‘0’). However, shifts of the distributions can cause them to overlap and make it challenging to distinguish the distribution to which a threshold voltage within the overlapping range of voltages belongs. For example, a threshold voltage distribution of memory cells storing a logical state of ‘1’ or a threshold voltage distribution of memory cells storing a logical state of ‘0’ can drift over time and, consequently, shift the respective threshold voltage distribution to overlap with the other one. When the threshold voltage of a memory cell changes, the application of the demarcation voltage can yield an incorrect result and cause errors in the overlap region.
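A small numeric illustration of the demarcation-voltage read just described follows; the voltage values are assumptions chosen for illustration, not device specifications.

    def read_slc(cell_vt, demarcation_v=2.0):
        # Below the demarcation voltage -> logical '1' (set state, no charge);
        # at or above it -> logical '0' (reset state, charge present).
        return 1 if cell_vt < demarcation_v else 0

    print(read_slc(1.2))  # 1: safely inside the '1' distribution
    print(read_slc(3.1))  # 0: safely inside the '0' distribution
    # A cell programmed to '0' whose threshold voltage drifted down into the
    # overlap region around the demarcation voltage is misread:
    print(read_slc(1.9))  # 1, even though '0' was programmed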
Thus, when the threshold voltage distribution of memory cells storing a logical state of ‘1’ and the threshold voltage distribution of memory cells storing a logical state of ‘0’ on a memory device shift in a manner such that a portion of one of the distributions overlaps with a portion of the other distribution, bit errors can occur when attempts are made to read the data on the cell by applying a read reference voltage within the range of voltages where the distributions overlap. The shift in voltage distributions can affect other endurance-related characteristics of a memory component. When data is written to and/or erased from a memory cell of a memory device, the memory cell can be damaged. As the number of write operations and/or erase operations performed on a memory cell increases and the memory cell is increasingly damaged, the probability of the data stored at the memory cell including an error increases. A characteristic associated with the endurance of the memory component is the number of write operations or a number of program/erase (P/E) cycles performed on a memory cell of the memory component. An increasing number of read and write operations or P/E cycles can result in a higher error rate of the data stored at the memory cell. This can increase the use of an error detection and correction operation (e.g., an error control operation) for subsequent data operations (e.g., read and/or write) performed on the memory cell. The increased use of the error control operation can result in increased latency and a consequent reduction of the performance of the memory device. In addition, as the error rate for a memory cell or data block continues to increase, it may even surpass the error correction capabilities of the memory sub-system, leading to an irreparable loss of the data. Furthermore, as more resources of the memory sub-system are used to perform the error control operation, fewer resources are available to perform other read operations or write operations. Therefore, upon a threshold number of read operations being performed on the data block, the memory sub-system can perform a data integrity check (also referred to herein as a “scan”) to verify that the data stored at the data block does not include any errors. During the scan, one or more reliability statistics are determined for data stored at the data block. One example of a reliability statistic is a raw bit error rate (RBER). The RBER corresponds to a number of bit errors per unit of time that the data stored at the data block experiences and can be understood as the ratio of the number of erroneous bits to the number of all data bits stored in a certain portion of the memory device (e.g., in a specified data block). In some implementations, read operations can be performed in order to determine the RBER and the log likelihood ratio (LLR) of data being correctly read so that the errors can be remedied by an error correction code (ECC). However, increased use of such scans and iterative determinations of RBER and LLR can also lead to increased latency, reduced performance, and fewer resources being available to perform other operations in a memory sub-system. Moreover, additional P/E cycles caused by errors and by attempts to correct them often further decrease device endurance and reliability. Aspects of the present disclosure address the above and other deficiencies by having a memory sub-system that uses duplicate data to reduce errors and extend the sub-system's endurance.
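The RBER defined above reduces to a simple ratio; the following sketch computes it for a hypothetical 4096-bit block with two bit flips (all numbers are illustrative):

    def raw_bit_error_rate(read_bits, expected_bits):
        # RBER: erroneous bits divided by all data bits in the scanned portion.
        errors = sum(r != e for r, e in zip(read_bits, expected_bits))
        return errors / len(expected_bits)

    expected = [1, 0, 1, 1, 0, 0, 1, 0] * 512       # a 4096-bit data block
    observed = list(expected)
    observed[7] ^= 1                                # two bit flips caused by
    observed[4093] ^= 1                             # drifted threshold voltages
    print(raw_bit_error_rate(observed, expected))   # 2/4096, about 0.00049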
Embodiments of the present disclosure can make use of some of the otherwise unused capacity or excess capacity (e.g., capacity allotted for overprovisioning) of a memory device to store duplicate copies of data. Accordingly, for at least some of the data stored on a memory device, two or more copies of the data can be stored in different locations of the memory device. In some embodiments, two identical copies of the data can be stored in two locations, while in other embodiments an inverse copy of the data can be stored in one of the two locations. The memory cells in each of the respective locations on the memory device where the two copies of the data are stored can, consequently, have their own respective distributions of threshold voltages corresponding to the different programming states of the respective cells. As is described in more detail with reference toFIGS.2A-2CandFIG.3, due to the effects of continued use and degradation of the cells, the distributions of threshold voltages for each respective programming state of the cells can drift. Accordingly, there can be a range of threshold voltages where the distribution of threshold voltages corresponding to one particular programming state overlaps with the distribution of threshold voltages corresponding to another programming state. In some embodiments of the present disclosure, when the data needs to be read from the device, the data can initially be read from the first location. Then, a determination can be made whether the threshold voltage of the set of one or more memory cells at the first location is within the range of threshold voltages where the threshold voltage distributions for different programming states overlap. Throughout this description, reference may be made to a set of one or more memory cells (each of which can correspond to a page, block, array or other subdivision of a memory device containing one or more memory cells). For example, in the context of a read operation a set of memory cells may refer to a page, while in the context of a write operation a set of memory cells may refer to a block. When reference is made to a set of memory cells, the characteristics or behavior of one memory cell of that set can be described, and it can be assumed that other cells in the set have similar characteristics or behave in a similar manner. If the threshold voltage of the set of one or more memory cells in the first location is determined to be within the overlapping range of threshold voltage distributions, an attempt can be made to read the data from the other location. Then, a determination can be made whether the threshold voltage of the set of one or more memory cells at the other location is within the range of threshold voltages where the threshold voltage distributions for different programming states overlap. If the threshold voltage of the cell at the second location is outside the overlapping range, then the bit in the cell can be determined to be programmed to the programming state corresponding to the distribution within which the threshold voltage of the cell is found. In this case, the data read from the second location can be used, significantly decreasing the likelihood of reading the data incorrectly from the first location and either reducing the need to use the ECC or providing more reliable bits to the ECC. However, if the threshold voltage of the cell at the second location is also determined to be within the overlapping range, another read operation can be performed at each location.
This subsequent read operation can be a “strobed” read operation that applies multiple read strobes to read data at a location as described in more detail below with reference toFIG.3. A “read strobe” herein refers to the application of a read voltage level to a wordline to determine whether a memory cell has a threshold voltage below or above the applied read voltage level. Thus, a read operation may include one or more read strobes. Accordingly, the second read operation can apply one strobe at a voltage that is positively offset from the initial read voltage level and another strobe that is negatively offset from the initial read voltage level. Using the strobed read, a measure of confidence or reliability (e.g., an LLR) that the data is correctly recorded at each respective location can be determined based on how far the threshold voltage is from the center of an overlapping range of threshold voltages. Accordingly, the data read from the location with the higher measure of confidence can be used, similarly either reducing the need to use the ECC or providing more reliable bits to the ECC. As data is written to the memory device, the proportion of the data that is recorded in duplicate copies can be adjusted. The adjustment can be made so as to keep the RBER below a desired threshold level. The higher the proportion of the data recorded in duplicate copies, the lower the RBER can be, due to the increased reliability of the data being recorded without errors. Advantages of the present disclosure include, but are not limited to, improving the reliability of data storage in the memory device. By storing duplicate copies of the data, the likelihood that the data stored in at least one of the locations can be read without producing errors is increased. This reduces the potential number of P/E cycles and the amount of error correction that needs to be performed. Furthermore, this can decrease the latency of read operations on the memory device and increase its endurance. The embodiments of the present disclosure permit a longer usable lifetime within which a memory device can operate within a given reliability margin and allow flexible control of the proportion of memory cells that operate with a desired level of reliability.
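The duplicate-copy read flow described over the last few paragraphs can be sketched as follows; the overlap window, read level, strobe-based confidence measure, and the assumption of two identical (non-inverted) copies are all illustrative choices, and a real device would feed LLRs to the ECC rather than return a hard bit.

    OVERLAP_LO, OVERLAP_HI = 1.8, 2.2  # assumed overlap of the two distributions (V)
    READ_LEVEL = 2.0                   # assumed nominal read voltage level (V)

    def in_overlap(vt):
        return OVERLAP_LO <= vt <= OVERLAP_HI

    def hard_read(vt):
        return 1 if vt < READ_LEVEL else 0

    def strobe_confidence(vt):
        # Stand-in for strobes at READ_LEVEL +/- offset: the farther the cell's
        # threshold voltage sits from the center of the overlap, the more
        # reliable the hard read (a crude proxy for an LLR).
        return abs(vt - READ_LEVEL)

    def read_duplicated_bit(vt_first, vt_second):
        if not in_overlap(vt_first):       # first copy is unambiguous: use it
            return hard_read(vt_first)
        if not in_overlap(vt_second):      # second copy rescues the bit
            return hard_read(vt_second)
        # Both copies ambiguous: strobed reads at both locations, then trust
        # the location with the higher confidence.
        if strobe_confidence(vt_first) >= strobe_confidence(vt_second):
            return hard_read(vt_first)
        return hard_read(vt_second)

    print(read_duplicated_bit(1.95, 2.9))  # first copy ambiguous; second reads 0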
The computing system100can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device. The computing system100can include a host system120that is coupled to one or more memory sub-systems110. In some embodiments, the host system120is coupled to multiple memory sub-systems110of different types.FIG.1illustrates one example of a host system120coupled to one memory sub-system110. As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. The host system120can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system120uses the memory sub-system110, for example, to write data to the memory sub-system110and read data from the memory sub-system110. The host system120can be coupled to the memory sub-system110via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system120and the memory sub-system110. The host system120can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices130) when the memory sub-system110is coupled with the host system120by the physical host interface (e.g., PCIe bus). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system110and the host system120.FIG.1illustrates a memory sub-system110as an example. In general, the host system120can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. The memory devices130,140can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device130) include a negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. 
A cross-point array of non-volatile memory cells can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Each of the memory devices130can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC) can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs) can store multiple bits per cell. In some embodiments, each of the memory devices130can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs or any combination of such. In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices130can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks. Although non-volatile memory components such as a 3D cross-point array of non-volatile memory cells and NAND type flash memory (e.g., 2D NAND, 3D NAND) are described, the memory device130can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, or electrically erasable programmable read-only memory (EEPROM). A memory sub-system controller115(or controller115for simplicity) can communicate with the memory devices130to perform operations such as reading data, writing data, or erasing data at the memory devices130and other such operations. The memory sub-system controller115can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include a digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller115can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The memory sub-system controller115can include a processing device, which includes one or more processors (e.g., processor117), configured to execute instructions stored in a local memory119. In the illustrated example, the local memory119of the memory sub-system controller115includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system110, including handling communications between the memory sub-system110and the host system120. 
In some embodiments, the local memory119can include memory registers storing memory pointers, fetched data, etc. The local memory119can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system110inFIG.1has been illustrated as including the memory sub-system controller115, in another embodiment of the present disclosure, a memory sub-system110does not include a memory sub-system controller115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the memory sub-system controller115can receive commands or operations from the host system120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices130. The memory sub-system controller115can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., physical block address) that are associated with the memory devices130. The memory sub-system controller115can further include host interface circuitry to communicate with the host system120via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices130as well as convert responses associated with the memory devices130into information for the host system120. The memory sub-system110can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system110can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller115and decode the address to access the memory devices130. In some embodiments, the memory devices130include local media controllers135that operate in conjunction with memory sub-system controller115to execute operations on one or more memory cells of the memory devices130. An external controller (e.g., memory sub-system controller115) can externally manage the memory device130(e.g., perform media management operations on the memory device130). In some embodiments, memory sub-system110is a managed memory device, which is a raw memory device130having control logic (e.g., local media controller135) on the die and a controller (e.g., memory sub-system controller115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. The memory sub-system110includes a duplicate copy adjustment (DCA) component113that can adjust the number of copies of data made to store the data on the memory device130and thereby adjust the effective number of bits that are stored per cell in the memory device130. For example, if storing one bit of data in one cell at one location results in an effective one bit per cell being stored, then having two copies of the data stored in two cells in different locations results in an effective 0.5 bits per cell being stored. In some embodiments, the memory sub-system controller115includes at least a portion of the DCA component113.
In some embodiments, the DCA component113is part of the host system120, memory sub-system110, an application, or an operating system. In other embodiments, local media controller135includes at least a portion of DCA component113and is configured to perform the functionality described herein. The DCA component113can receive data from the host system120or other components of the memory sub-system110to be stored on the memory device130. In some embodiments, the DCA component113can be directly connected to memory device130or can be connected to the memory device130through memory sub-system controller115. The DCA component113can write data on the memory device130and read data from the memory device130. In an embodiment, for an amount of data intended to be saved on the memory device130, the data can be divided into portions that can each be saved in the respective sets of memory cells of the memory device130. The DCA component113can store multiple copies of a data portion in different sets of memory cells in the memory device130. For example, the DCA component113can store one copy of the data portion in one set of memory cells of the memory device and store another copy of the data portion in a second set of memory cells of the memory device. The memory cells where the copies of the data are stored can respectively be in different locations on the memory device130. For example, among other possibilities, the respective copies of the data can be stored (a) in separate pages, (b) in different locations within the same page, (c) in adjacent sub-blocks, (d) in memory cells connected to adjacent word lines, or (e) in separate planes. It should be understood that more than two copies of the data can be made and saved in more than two different locations on the memory device130. Furthermore, in some embodiments, such as the one depicted inFIGS.2A-2C, one of the copies of the data portion can be an inverse of the first copy of the data portion (i.e., the memory cell storing a copy of a data bit being programmed to a logical state of ‘1’ in one location and the memory cell storing a copy of the data bit programmed to a logical state of ‘0’ in the other location). Each ofFIG.2A,FIG.2B, andFIG.2Cdepicts two graphs of threshold voltages of respective memory cells in two different locations on a memory device130shown within threshold voltage distributions of different programming states. Each ofFIG.2A,FIG.2B, andFIG.2Cshows two copies of recorded data, where one of the copies of the data is the inverse of the other copy of the data. InFIG.2A, the copy of the data bit at the first location201ais programmed in a memory cell with a threshold voltage represented by position215a. Position215arepresents a threshold voltage that is unambiguously within the distribution212aof threshold voltages of memory cells at location201aprogrammed to a logical state of ‘1’. Position215ais outside of the range of voltages within distribution213aof threshold voltages of memory cells at location201aprogrammed to a logical state of ‘0’ and outside the range of voltages214awhere distribution212aand distribution213aoverlap. The copy of the data bit at the second location202acan be programmed in another memory cell with a threshold voltage represented by position225a, position227a, or position229a. The potential threshold voltages represented by position225aand position227aare unambiguously within the range of voltages within distribution223aof threshold voltages of memory cells at location202aprogrammed to a logical state of ‘0’.
The potential threshold voltage represented by position229a, however, is within a range of voltage levels224awhere distribution222aof threshold voltages of memory cells at location202aprogrammed to a logical state of ‘1’ overlaps with distribution223a. Position229ais not exclusively within only one of distribution222aor distribution223a. InFIG.2B, the copy of the data bit at the first location201bis programmed in a memory cell with a threshold voltage represented by position215b. Position215brepresents a threshold voltage that is unambiguously within the distribution212bof threshold voltages of memory cells at location201bprogrammed to a logical state of ‘1’. Position215bis outside of the range of voltages within distribution213bof threshold voltages of memory cells at location201bprogrammed to a logical state of ‘0’ and outside the range of voltages214bwhere distribution212band distribution213boverlap. The copy of the data bit at the second location202bcan be programmed in another memory cell with a threshold voltage represented by position225b, position227b, or position229b. The potential threshold voltages represented by position225band position227bare unambiguously within the range of voltages within distribution223bof threshold voltages of memory cells at location202bprogrammed to a logical state of ‘0’. The potential threshold voltage represented by position229b, however, is within a range of voltage levels224bwhere distribution222bof threshold voltages of memory cells at location202bprogrammed to a logical state of ‘1’ overlaps with distribution223b. Position229bis not exclusively within only one of distribution222bor distribution223b. InFIG.2C, the copy of the data bit at the first location201cis programmed in a memory cell with a threshold voltage represented by position215c. Position215crepresents a threshold voltage that is not unambiguously within the distribution212cof threshold voltages of memory cells at location201cprogrammed to a logical state of ‘1’ because it is also within the distribution213cof threshold voltages of memory cells at location201cprogrammed to a logical state of ‘0’. Thus, position215cis within the range of voltages214cwhere distribution212cand distribution213coverlap. The copy of the data bit at the second location202ccan be programmed in another memory cell with a threshold voltage represented by position225c, position227c, or position229c. The potential threshold voltages represented by position225cand position227care unambiguously within the range of voltages within distribution223cof threshold voltages of memory cells at location202cprogrammed to a logical state of ‘0’. The potential threshold voltage represented by position229c, however, is within a range of voltage levels224cwhere distribution222cof threshold voltages of memory cells at location202cprogrammed to a logical state of ‘1’ overlaps with distribution223c. Position229cis not exclusively within only one of distribution222cor distribution223c. The DCA component113can receive instructions from the host system120or other components of the memory sub-system110to retrieve (i.e., read) data from the memory device130. Accordingly, the DCA component113can read a copy of the data from one location on the memory device130.
The DCA component113can read the data by applying a read reference voltage level to the set of one or more memory cells storing the data on the memory device130and determining whether the threshold voltages of the set of one or more memory cells were higher or lower than the applied read reference voltage. Considering that the threshold voltage distributions of the cells on the memory device for each of the respective programming states can overlap, there may be ranges of threshold voltages within which the threshold voltage of a memory cell can be said to represent a programming state of a bit with a high confidence, and ranges of threshold voltages within which the threshold voltage of a memory cell can be said to represent a programming state of a bit with a low confidence. For example, if a threshold voltage of a memory cell is within the range of threshold voltages where the voltage distribution of cells programmed to a logical state of ‘1’ overlaps with the voltage distribution of cells programmed to a logical state of ‘0’, then the determination of the programming state of that cell by the read operation (i.e., through the application of the read reference voltage level to the cell) can be deemed to be a low confidence determination. Accordingly, the DCA component113can determine whether a threshold voltage of the memory cell is within an overlapping range of one threshold voltage distribution and another threshold voltage distribution where each distribution represents a respective binary logical state of the memory cell. For example, the threshold voltage of the memory cell at location201ccan be determined to be at position215c, which is within the overlapping range214cof distribution212cand distribution213c. Naturally, if the threshold voltage of the cell is determined not to be within the overlapping range, it falls within a range of threshold voltages that is deemed to represent a programming state of the bit with high confidence. For example, the threshold voltage of the memory cell at location201acan be determined to be at position215athat is exclusively within distribution212aand can therefore be deemed to represent a programming state of ‘1’ with high confidence. Similarly, the threshold voltage of the memory cell at location201bcan be determined to be at position215bthat is exclusively within distribution212band can therefore be deemed to represent a programming state of ‘1’ with high confidence. If the DCA component113determines that the threshold voltage is not within the overlapping range, the DCA component113can use this copy of the data bit (i.e., use the programming state of the memory cell of the first location as representative of the value of that bit) for error correction or further operation of the memory device130. In this case, reading or referring to data stored at another (i.e., second) location may not be necessary. For example, if the threshold voltage of the memory cell at location201acan be determined to be at position215a, it may not be necessary to read the data stored at location202a. Similarly, if the threshold voltage of the memory cell at location201bcan be determined to be at position215b, it may not be necessary to read the data stored at location202b.
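The first-copy check just described, together with the fallback to the second copy discussed in the following paragraphs, can be sketched in Python as follows. The function names, the tuple layout, and the convention that threshold voltages above the read reference represent a ‘0’ are illustrative assumptions, not the disclosed implementation:

    def within_overlap(vt, overlap_lo, overlap_hi):
        # True if the threshold voltage falls in the ambiguous range where the
        # distributions for the two logical states overlap.
        return overlap_lo <= vt <= overlap_hi

    def read_bit_high_confidence(vt, read_ref, overlap_lo, overlap_hi):
        # Return the bit value when it can be read with high confidence,
        # or None when the read is ambiguous.
        if within_overlap(vt, overlap_lo, overlap_hi):
            return None
        return 0 if vt > read_ref else 1  # '0' above the reference, '1' below

    def resolve_bit(first_cell, second_cell, second_is_inverse=True):
        # Each *_cell argument is a (vt, read_ref, overlap_lo, overlap_hi) tuple.
        bit = read_bit_high_confidence(*first_cell)
        if bit is not None:
            return bit  # first copy suffices; no need to read the second copy
        bit = read_bit_high_confidence(*second_cell)
        if bit is not None:
            # Undo the inversion when the second copy stores the complement.
            return bit ^ 1 if second_is_inverse else bit
        return None  # both reads ambiguous: fall back to a strobed read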
However, if the DCA component113determines that the threshold voltage of the first memory cell is within the overlapping range, in response to that determination, the DCA component113can read the other copy of the data that was stored in another cell in the other location and similarly determine whether the threshold voltage of the other memory cell is within a range of voltages where two voltage distributions, each distribution representative of a different respective binary logical state, overlap. For example, the threshold voltage of the memory cell at location201ccan be determined to be at position215c, which is within the overlapping range214c. Notably, if a threshold voltage of a memory cell is within the overlapping range, then the confidence that the bit is being read correctly (i.e., the programming state of the bit being correctly determined by the read operation) is low, and if the threshold voltage of a memory cell is outside the overlapping range, then the confidence that the bit is being read correctly is high since, by extension, the threshold voltage of the cell will be clearly within a range of threshold distributions that unambiguously corresponds to one of the possible programming states of the memory cell. Accordingly, if the DCA component113determines that the threshold voltage of the second memory cell is outside the second overlapping range, this indicates with high confidence that the bit in the second memory cell (i.e., the memory cell in the other location where the data bit was stored) was read correctly. For example, if the threshold voltage of the memory cell at location202cis determined to be in either one of position225cor227c, it would indicate with high confidence that the data bit in the memory cell at location202cis being read correctly. Consequently, the DCA component113can use the second copy of the data bit (i.e., use the programming state of the memory cell of the second location as representative of the value of that bit) for error correction or further operation of the memory device130. For example, if the threshold voltage of the memory cell at location202cis determined to be in either one of position225cor227c, then the data bit value stored at location202ccan be used instead of that stored at location201c. However, if the DCA component113determines that the threshold voltage of the second memory cell is within the second overlapping range, then the programming state of the memory cell in the second location remains ambiguous. For example, if the threshold voltage of the memory cell at location202cis determined to be at position229c, it is within the overlapping range224c. If a threshold voltage of a memory cell is unambiguously (i.e., exclusively) within a voltage distribution representative of a particular programming state (i.e., a ‘0’ or a ‘1’), then, as used herein, the data bit in that memory cell can be deemed to have been “read correctly”. Accordingly, to further resolve the ambiguity of the programming state of the data bit, the DCA component113can then determine a measure of confidence that the data bit is correctly read from the first memory cell and determine a measure of confidence that the data bit is correctly read from the second memory cell. In some embodiments, the difference between the threshold voltage of a memory cell and the center of the overlapping range of voltage distributions can serve as a measure of confidence that the bit is read correctly.
For example, the larger the difference between the threshold voltage of the memory cell and the voltage at the center of the overlapping range, the higher the confidence that the bit is being read correctly (i.e., that the programming state of the memory cell is being properly determined). In some embodiments, the DCA component113can determine the log likelihood ratio (LLR) that the bit is correctly read at the memory cell at each respective location and use the LLR as a proxy (i.e., indirect indication) of a value representing a difference between the threshold voltage of the memory cell and the voltage at the center of the overlapping range. In other embodiments, described in more detail below and shown inFIG.3, the DCA component113can perform a strobed read operation on each of the memory cells to divide the overlapping range of threshold voltages into bins of voltages relative to the voltage at the center of the overlapping distribution. Consequently, the DCA component113can use the copy of the data bit stored in the memory cell having the higher measure of confidence (i.e., use the programming state of the memory cell with the higher measure of confidence as representative of the value of that bit) for error correction or further operation of the memory device130. If a range of threshold voltages is unambiguously (i.e., exclusively) within a voltage distribution representative of a particular programming state (i.e., a ‘0’ or a ‘1’), then, as used herein, that range of voltages can be deemed to be a “high reliability” range of threshold voltages. Thus, in some embodiments, the DCA component113can read a first copy of data in a first memory cell of a memory device130and determine whether the threshold voltage of the first memory cell is within the high reliability range of threshold voltages. If the DCA component113determines that the threshold voltage of the first memory cell is not within the high reliability range, then, in response, the DCA component113can read the second copy of the data in the second memory cell of the memory device and determine whether the threshold voltage of the second memory cell is within the high reliability range of threshold voltages. In some embodiments, if the DCA component113determines that the threshold voltage of the second memory cell is within the high reliability range, then the DCA component113can use the second copy of the data (i.e., use the programming state of the memory cell of the second location as representative of the value of that bit) for error correction or further operation of the memory device130. However, if the DCA component113determines that the threshold voltage of the second memory cell is not within the high reliability range, the DCA component113can perform a strobed read operation on each memory cell. FIG.3depicts a diagram300of a strobed read operation being performed on a memory cell whose threshold voltage can be within an overlapping range of threshold voltage distributions in accordance with some embodiments of the present disclosure. Threshold voltage distribution302represents the distribution of threshold voltages of memory cells programmed to a logical state of ‘1’, while threshold voltage distribution301represents the distribution of threshold voltages of memory cells programmed to a logical state of ‘0’. The distributions overlap in the range of voltage levels330.
The ability of the ECC to correct errors depends on strategies that make use of estimations of the exact values of the voltages at the potential positions of the threshold voltages in a given cell such as, for example, position303, position305, position307, and position309. Such estimations can be referred to herein as “soft information” and the aforementioned strobed read can be referred to herein as a “soft read”. As noted earlier, when the voltage distributions overlap as shown with reference to the diagram300ofFIG.3, errors arise. In some embodiments, the DCA component113can read all the values to the right of the reference voltage311as ‘0’ and all the values to the left of the reference voltage311as ‘1’. Thus, in the depicted situation, the read errors will come from cells whose threshold voltages fall within the overlap region330. However, it should be understood from the potential of the threshold voltages being at position303, position305, position307, and position309that the error positions may vary in magnitude. The farther away (in terms of voltage) the error positions are from the reference voltage311, the more probable it is that the memory cell contains the value that was stored. For example, position307is slightly to the right of the reference voltage311VRwhile position309is farther away from the reference voltage311VR. As such, it is more likely that position307carries the greater error because correct values should not be close to the reference voltage. Alternatively, position309can be considered to carry less error than position307and is more likely to be read correctly. Similarly, position305is slightly to the left of the reference voltage311VRwhile position303is farther away to the left of the reference voltage311VR. As such, it is more likely that position305carries the greater error because correct values should not be close to the reference voltage. In some embodiments, by exploiting the exact or estimated values of position303and position305or of position307and position309, differentiation can be used between the two points and better information can then be provided to the ECC, resulting in improved decoding performance of the ECC in correcting the error. In some embodiments, the soft information estimating the exact values of position303and position305or of position307and position309can be expressed by a log likelihood ratio (LLR). Thus, in some cases, error position307could be presented to the ECC as a value of ‘0’ and assigned a low magnitude LLR (i.e., probability) due to its close proximity to the reference voltage311, whereas error position309could be presented to the ECC as a value of ‘0’ and assigned a moderate magnitude LLR (probability) due to its greater distance from the reference voltage311. In some embodiments, the ECC can address and correct errors using the soft information provided by the LLRs. The LLR attributed to a bit can be representative of the probability that the voltage value read corresponds to a ‘0’ or a ‘1’ (i.e., probability that the bit in the memory cell was read correctly). In memory devices with few defects, a corresponding low raw bit error rate (RBER) will exist and most LLRs will have a large magnitude, while only a few LLRs will have a small magnitude. To perform the strobed read operation on each of the first memory cell and second memory cell, the DCA component113can apply two or more additional read reference voltages312,314to the respective cell.
Each additional read reference voltage can be incrementally offset from the voltage level applied during the first read operation (i.e., the initial read reference voltage level311). For example, if an initial read operation applied a read reference voltage311VR, then two additional read reference strobes can be applied to the memory cell, one at a voltage level314of VR+i (i.e., positively offset by i volts from VR) and another at a voltage level312of VR−i (i.e., negatively offset by i volts from VR), where VRcan correspond to the center of the overlapping range330of threshold voltages. The reference voltage of each strobe of the strobed read operation can define a boundary of a bin of threshold voltages (i.e., a bin can have a maximum and minimum voltage defining the range of voltages within that bin). There may be multiple threshold voltage bins within the overlapping range of the threshold voltage distributions. For example, one bin322can be defined to include the range of voltages from VRto VR+i and another bin324can be defined to include the range of voltages from VRto VR−i. The same reference voltages of the strobes can also define other bins such as bin326that includes the range of voltages from VR+i to ∞ and bin328that includes the range of voltages from VR−i to 0. In some embodiments, the DCA component113can perform additional strobed read operations by applying multiple read reference voltages that can define more bins on each side of the center of the overlapping range. In some embodiments, the DCA component113can determine which of the first memory cell and the second memory cell has its threshold voltage within a bin farthest from a center of the respective overlapping range of voltage distribution. For example, the threshold voltage in one memory cell at the first location can be determined to be at position305within bin324while the threshold voltage in the other memory cell at the second location can be determined to be at position309within bin326. Alternatively, the threshold voltage in one memory cell at the first location can be determined to be at position303within bin328while the threshold voltage in the other memory cell at the second location can be determined to be at position307within bin322. The distance (in terms of voltage difference) of the bin away from the center of the overlapping range can serve as a measure of confidence that the bit of the corresponding memory cell is being read correctly (i.e., that the programming state of the bit is being correctly determined by the read operation). Accordingly, the farther (in terms of voltage difference) a bin is from the center of the overlapping range of voltage distribution, the higher the confidence that the bit is read correctly. For example, if the threshold voltage of a memory cell is at position303, then there is a higher likelihood that it is being read correctly than if the threshold voltage of the memory cell were at position309. Consequently, the DCA component113can use the copy of the data bit recorded in a memory cell having the threshold voltage within the bin farthest from a center of the overlapping range (i.e., use the programming state of the memory cell with the threshold voltage in the bin farthest from the center of the range as representative of the value of that bit) for error correction or further operation of the memory device130.
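A minimal sketch of the binning and selection steps, assuming a single strobe offset i around a center voltage VR (the bin names only loosely mirror bins322-328 ofFIG.3and are illustrative):

    def strobe_bin(vt, v_r, i):
        # Assign a threshold voltage to one of the four strobe-defined bins.
        if vt > v_r + i:
            return "outer-high"   # (VR + i, infinity), cf. bin326
        if vt > v_r:
            return "inner-high"   # (VR, VR + i], cf. bin322
        if vt >= v_r - i:
            return "inner-low"    # [VR - i, VR], cf. bin324
        return "outer-low"        # [0, VR - i), cf. bin328

    def pick_more_confident_copy(vt_first, vt_second, v_r):
        # Prefer the copy whose threshold voltage lies farther from the center
        # of the overlapping range, i.e., in a bin farther from VR.
        return "first" if abs(vt_first - v_r) >= abs(vt_second - v_r) else "second"

    # With VR = 0.5 and i = 0.05, a voltage of 0.62 lands beyond VR + i.
    assert strobe_bin(0.62, 0.5, 0.05) == "outer-high"
    assert pick_more_confident_copy(0.52, 0.62, 0.5) == "second"

With finer strobes, more bins can be demarcated on each side of VR in the same way; the distance-based comparison is unchanged.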
Thus, in some embodiments, the DCA component113can send the data bit determined to be read with higher confidence to the ECC to increase the likelihood that the ECC will succeed in correcting the error and determine the correct programming state of the data bit. In other embodiments, the DCA component113can provide a sum of the measures of confidence for each of the respective memory cells to the ECC to increase the likelihood that the ECC will succeed in correcting the error and determine the correct programming state of the data bit. Furthermore, in some embodiments, the DCA component113can control the percentage or proportion of data portions that are recorded in multiple copies on the memory device130to increase the bits-per-cell (BPC) ratio. For example, instead of storing all of the data in 2 copies, which would result in a BPC of 0.5, the DCA component113can store a third of the data in 2 copies with the remaining data stored in only 1 copy, so that each stored bit occupies 4/3 cells on average, resulting in a BPC of 0.75. Accordingly, the DCA component113can adjust the proportion of data portions that are recorded in multiple copies to achieve a desired BPC or maintain a desired level of RBER. Further details with regards to the operations of the DCA component113are described with reference toFIG.4andFIG.5below. FIG.4is a flow diagram of an example method400for using duplicate data to reduce a raw bit error rate (RBER) in accordance with some embodiments of the present disclosure. The method400can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method400is performed by the DCA component113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation402, the processing logic can receive data from the host system120or other components of the memory sub-system110to be stored on the memory device130. The processing logic can divide the received data into portions to be stored in the sets of one or more memory cells of the memory device130. Having received or otherwise obtained data to be stored, the processing logic can, at operation404, store a first copy of the data in a memory cell at a first location on the memory device130. At operation406, the processing logic can store a second copy of the data in another memory cell at another location on the memory device130. In some embodiments, the second copy of the data can be an inverse of the first copy of the data. The different copies of the data can be stored (a) on separate pages, (b) in different locations within the same page, (c) in adjacent sub-blocks, (d) in memory cells of adjacent word lines, or (e) in separate planes of the memory device130.
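Operations 402-406 can be sketched as follows, modeling each storage location as a mutable list of cells (a modeling convenience, not the actual NAND programming interface; the inverse-copy convention follows FIGS. 2A-2C, and all names are hypothetical):

    def store_duplicate_copies(data_bits, first_location, second_location,
                               invert_second=True):
        # Operation 404: program the data bits at the first location.
        # Operation 406: program a (possibly inverted) copy at the second location.
        for offset, bit in enumerate(data_bits):
            first_location[offset] = bit
            second_location[offset] = bit ^ 1 if invert_second else bit

    page_a = [0] * 8
    page_b = [0] * 8
    store_duplicate_copies([1, 0, 1, 1, 0, 0, 1, 0], page_a, page_b)
    assert page_b == [bit ^ 1 for bit in page_a]  # second copy is the inverse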
The processing logic can receive instructions from the host system120or other components of the memory sub-system110to retrieve (i.e., read) data from the memory device130. Accordingly, at operation408, the processing logic can read the copy of the data from the memory cell at the first location on the memory device130. Then, at operation410, the processing logic can determine whether the threshold voltage of the memory cell is within an overlapping range of one threshold voltage distribution and another threshold voltage distribution where each distribution represents a different respective binary logical state of the memory cell. In some embodiments, the processing logic can determine, at operation410, whether the threshold voltage of the memory cell is within a high reliability range of threshold voltages. In some embodiments, if it is determined, at operation410, that the threshold voltage of the memory cell is not within an overlapping range, the processing logic can, at operation411, use the copy of the data stored in the memory cell at the first location of memory device130. If it is determined, at operation410, that the threshold voltage of the memory cell is within an overlapping range (i.e., not within a high reliability range of threshold voltages), the processing logic can, at operation412, read the other copy of the data stored in the other memory cell at the other location on the memory device130. At operation414, the processing logic can determine whether the threshold voltage of the cell at the second location is within an overlapping range of voltage distributions that each respectively represent different programming states of the memory cell. In some embodiments, the processing logic can determine, at operation414, whether the threshold voltage of the memory cell at the second location is within a high reliability range of threshold voltages. If it is determined, at operation414, that the threshold voltage of the memory cell at the second location is not within an overlapping range (i.e., is within a high reliability range of threshold voltages), the processing logic can, at operation415, use the copy of the data stored in the memory cell at the second location of memory device130. However, if it is determined, at operation414, that the threshold voltage of the memory cell at the second location is within the overlapping range of threshold voltage distributions, the processing logic can, at operation416, perform a strobed read operation, which is described in more detail with reference toFIG.5. FIG.5is a flow diagram of an example method500for using a strobed read operation to reduce a raw bit error rate (RBER) in accordance with some embodiments of the present disclosure. The method500can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method500is performed by the DCA component113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
In some embodiments, in response to determining, at operation414, that the threshold voltage of the second memory cell is within the overlapping range, the processing logic can, for the memory cells at each of the two locations, at operation502, apply a strobe read voltage that is positively offset from the initial read reference voltage applied during the initial read operation. Then, at operation504, the processing logic can, for the memory cells at each of the two locations, apply a strobe read voltage that is negatively offset from the initial read reference voltage applied during the initial read operation. For example, if the processing logic applied a read reference voltage of VRduring the initial read operation, then two additional read reference strobes can be applied to the memory cell, one at a voltage level of VR+i (i.e., positively offset by i volts from VR) and another at a voltage level of VR−i (i.e., negatively offset by i volts from VR), where VRcan correspond to the center of the overlapping range of threshold voltages. At operation506, the processing logic can determine a measure of confidence that the data bit is correctly read from the first memory cell and determine a measure of confidence that the data bit is correctly read from the second memory cell. Determining the confidence measure at operation506can include demarcating bins, at operation507, within the overlapping range of threshold voltages and categorizing, at operation508, the threshold voltage of the cell into the bins. The reference voltage of each strobe of the strobed read operation can define a boundary of a bin of threshold voltages within the overlapping range of the threshold voltage distributions. For example, one bin can be defined to include the range of voltages from VRto VR+i and another bin can be defined to include the range of voltages from VRto VR−i. The same reference voltages of the strobes can also define other bins, such as a bin that includes the range of voltages from VR+i to ∞ and a bin that includes the range of voltages from VR−i to 0. Accordingly, at operation507, the processing logic can demarcate the bins with the boundaries of the bins being defined by the voltage levels applied for each of the additional read strobes. Then, at operation508, the processing logic can determine the bin of threshold voltages within which the threshold voltage of the cell is found and categorize the threshold voltage of the cell accordingly. In some embodiments, the processing logic can perform additional strobed read operations by applying multiple read reference voltages that can define more bins on each side of the center of the overlapping range. Thus, the processing logic can determine a measure of confidence that the data bit in the cell is being read correctly. In some embodiments, the difference between the threshold voltage of a memory cell and the center of the overlapping range of voltage distributions can serve as a measure of confidence that the bit is read correctly. For example, the larger the difference between the threshold voltage of the memory cell and the voltage at the center of the overlapping range, the higher the confidence that the bit is being read correctly (i.e., that the programming state of the memory cell is being properly determined).
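The distance-based confidence measure above can be sketched as a signed soft value, anticipating the LLR proxy discussed next. The linear scaling is an illustrative assumption; real LLR tables are calibrated per device:

    def soft_confidence(vt, v_r, scale=8.0):
        # Signed soft value: the sign encodes the read bit ('1' below VR,
        # '0' above VR) and the magnitude grows with the distance between
        # the threshold voltage and the center of the overlapping range.
        return scale * (v_r - vt)

    # A threshold voltage just above VR reads as a weak '0', while one far
    # above VR reads as a stronger, more trustworthy '0'.
    assert abs(soft_confidence(0.51, 0.50)) < abs(soft_confidence(0.60, 0.50))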
In some embodiments, the processing logic can determine the log likelihood ratio (LLR) that the data bit is correctly read at the memory cell at each respective location and use the LLR as a proxy (i.e., indirect indication) of a value representing a difference between the threshold voltage of the memory cell and the voltage at the center of the overlapping range. In other embodiments, the processing logic can determine the distance (in terms of voltage difference) of the bin away from the center of the overlapping range and use it as a measure of confidence that the bit of the corresponding memory cell is being read correctly. Then, at operation510, the processing logic can use the copy of the data bit stored in the memory cell having the higher measure of confidence. FIG.6illustrates an example machine of a computer system600within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system600can correspond to a host system (e.g., the host system120ofFIG.1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system110ofFIG.1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the DCA component113ofFIG.1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system600includes a processing device602, a main memory604(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or RDRAM, etc.), a static memory606(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system618, which communicate with each other via a bus630. Processing device602represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device602can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like.
The processing device602is configured to execute instructions626for performing the operations and steps discussed herein. The computer system600can further include a network interface device608to communicate over the network620. The data storage system618can include a machine-readable storage medium624(also known as a computer-readable medium) on which is stored one or more sets of instructions626or software embodying any one or more of the methodologies or functions described herein including method400and method500. The instructions626can also reside, completely or at least partially, within the main memory604and/or within the processing device602during execution thereof by the computer system600, the main memory604and the processing device602also constituting machine-readable storage media. The machine-readable storage medium624, data storage system618, and/or main memory604can correspond to the memory sub-system110ofFIG.1. In one embodiment, the instructions626include instructions to implement functionality corresponding to a DCA component (e.g., the DCA component113ofFIG.1). While the machine-readable storage medium624is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. 
This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 70,767 |
11861234 | DETAILED DESCRIPTION Aspects of the present disclosure are directed to dynamic adjustment of data storage for enhanced data retention. A memory sub-system can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction withFIG.1. In general, a host system can utilize a memory sub-system that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. A memory sub-system can include high density non-volatile memory devices where retention of data is desired when no power is supplied to the memory device. One example of non-volatile memory devices is a negative-and (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction withFIG.1. A non-volatile memory device is a package of one or more dies. Each die can consist of one or more planes. For some types of non-volatile memory devices (e.g., NAND devices), each plane consists of a set of physical blocks. Each block consists of a set of pages. Each page consists of a set of memory cells (“cells”). A cell is an electronic circuit that stores information. Depending on the cell type, a cell can store one or more bits of binary information, and has various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as “0” and “1”, or combinations of such values. A memory device can include multiple memory cells arranged in a two-dimensional grid. Memory cells are formed onto a silicon wafer in an array of columns (also hereinafter referred to as bitlines) and rows (also hereinafter referred to as wordlines). A wordline can refer to one or more rows of memory cells of a memory device that are used with one or more bitlines to generate the address of each of the memory cells. The intersection of a bitline and wordline constitutes the address of the memory cell. A block hereinafter refers to a unit of the memory device used to store data and can include a group of memory cells, a wordline group, a wordline, or individual memory cells. One or more blocks can be grouped together to form a plane of the memory device in order to allow concurrent operations to take place on each plane. The memory device can include circuitry that performs concurrent memory page accesses of two or more memory planes. For example, the memory device can include a respective access line driver circuit and power circuit for each plane of the memory device to facilitate concurrent access of pages of two or more memory planes, including different page types. A memory cell can be programmed (written to) by applying a certain voltage to the memory cell, which results in an electric charge being held by the memory cell. For example, a voltage signal VCGcan be applied to a control electrode of the cell to open the cell to the flow of electric current across the cell, between a source electrode and a drain electrode. More specifically, for each individual memory cell (having a charge Q stored thereon) there can be a threshold control gate voltage VT(herein also referred to as the “threshold voltage” or simply as “threshold”) such that the source-drain electric current is low for the control gate voltage (VCG) being below the threshold voltage, VCG<VT.
The current increases substantially once the control gate voltage has exceeded the threshold voltage, VCG>VT. Because the actual geometry of the electrodes and gates varies from cell to cell, the threshold voltages can be different even for cells implemented on the same die. The memory cells can, therefore, be characterized by a distribution P of the threshold voltages, P(Q, VT)=dW/dVT, where dW represents the probability that any given cell has its threshold voltage within the interval [VT, VT+dVT] when charge Q is placed on the cell. A memory device can have distributions P(Q, VT) that are narrow compared with the working range of control voltages tolerated by the cells of the device. Accordingly, multiple non-overlapping distributions P(Qk, VT) (“valleys”) can be fit into the working range allowing for storage and reliable detection of multiple values of the charge Qk, k=1, 2, 3 . . . . The distributions (valleys) are interspersed with voltage intervals (“valley margins”) where none (or very few) of the memory cells of the device have their threshold voltages. Such valley margins can, therefore, be used to separate various charge states Qk: the logical state of the cell can be determined by detecting, during a read operation, between which two valley margins the respective threshold voltage VTof the cell resides. This effectively allows a single memory cell to store multiple bits of information: a memory cell operated with 2^N−1 well-defined valley margins and 2^N valleys is capable of reliably storing N bits of information. Specifically, the read operation can be performed by comparing the measured threshold voltage VTexhibited by the memory cell to one or more reference voltage levels corresponding to known valley margins (e.g., centers of the margins) of the memory device. One type of memory cell (“cell”) is a single level cell (SLC), which stores 1 bit per cell and defines 2 logical states (“states”) (“1” or “L0” and “0” or “L1”), each corresponding to a respective VTlevel. For example, the “1” state can be an erased state and the “0” state can be a programmed state (L1). Another type of cell is a multi-level cell (MLC), which stores 2 bits per cell and defines 4 states (“11” or “L0”, “10” or “L1”, “01” or “L2” and “00” or “L3”), each corresponding to a respective VTlevel. For example, the “11” state can be an erased state and the “01”, “10” and “00” states can each be a respective programmed state. Another type of cell is a triple level cell (TLC), which stores 3 bits per cell and defines 8 states (“111” or “L0”, “110” or “L1”, “101” or “L2”, “100” or “L3”, “011” or “L4”, “010” or “L5”, “001” or “L6”, and “000” or “L7”), each corresponding to a respective VTlevel. For example, the “111” state can be an erased state and each of the other states can be a respective programmed state. Another type of cell is a quad-level cell (QLC), which stores 4 bits per cell and defines 16 states L0-L15, where L0 corresponds to “1111” and L15 corresponds to “0000”. Another type of cell is a penta-level cell (PLC), which stores 5 bits per cell and defines 32 states. Other types of cells are also contemplated. Thus, an n-bit cell can use 2^n levels of charge to store n bits. A memory device can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs, etc. or any combination of such. For example, a memory device can include an SLC portion, and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells.
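The comparison of a measured threshold voltage against the reference levels placed in the valley margins can be sketched as follows (the reference voltages are illustrative placeholders, not device parameters from this disclosure):

    import bisect

    def read_level(vt, reference_levels):
        # Map a threshold voltage to a programming level by locating it among
        # the 2^n - 1 sorted reference voltages placed in the valley margins.
        return bisect.bisect_left(reference_levels, vt)

    # TLC example: 7 reference levels separate the 8 levels L0..L7.
    tlc_refs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
    assert read_level(0.2, tlc_refs) == 0  # L0, the erased state "111"
    assert read_level(3.8, tlc_refs) == 7  # L7, the "000" state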
Some memory sub-systems (e.g., SSDs) implement SLC caching for storing data. SLC caching utilizes SLC cache along with XLC storage. An XLC cell is a multiple level cell that stores more than one bit of state information per cell (e.g., MLC, TLC, QLC, PLC, as described above). SLC caching can be used to improve write speed since programming data on SLC cells is generally faster than programming data on XLC cells. Data written to the SLC cache can later be moved, asynchronously with respect to writing operations, from SLC cache to XLC storage to make room for future writes to the SLC cache (e.g., 1 bit in SLC cache can take up the same space as 4 bits in QLC storage). For example, the data can be moved in the background or during idle times to maintain performance. The SLC cache size can be selected in view of physical memory device constraints. For example, the SLC cache size can have a fixed size that does not exceed the available number of blocks on the memory device (e.g., NAND). The memory sub-system can utilize an SLC cache behavior profile specifying at least one of: size rules of the cache (e.g., rules for increasing or decreasing the cache), usage rules of the cache, rules specifying the location of the cache, etc. The SLC cache behavior profile may include a single configuration rule, or multiple rules. For example, an initial SLC cache behavior profile may be loaded by a manufacturer onto the memory sub-system at the time of manufacture. The SLC cache behavior profile can be a static profile that remains unchanged over time. For example, the initial SLC cache behavior profile can persist through the life of the memory sub-system. Alternatively, the SLC cache behavior profile can be a dynamic profile that can be updated or replaced with an updated SLC cache behavior profile via a communications interface. For example, device usage characteristics may change (e.g., usage behavior of the device in which the memory sub-system is installed), and thus the host may replace the SLC cache behavior profile over the communications interface. Illustratively, a smartphone may receive an over the air (OTA) update that specifies an updated SLC cache behavior profile that modifies the performance characteristics of the memory sub-system in response to a change in usage behavior of the smartphone. Data retention refers to the ability of a cell to retain its state information over a period of time in an operational state (e.g., powered on state) or a non-operational state (e.g., powered off state). For example, VTdistributions can shift due to factors such as time, temperature, program/erase cycles, etc. VTdistribution shifts can contribute to read errors, and therefore decrease memory sub-system performance. Illustratively, data retention can be challenging when a memory sub-system in a non-operational state is stored in a high temperature environment over a long period of time. For example, a memory sub-system can be stored in a high temperature warehouse after manufacture. Data retention can have a greater impact on data stored in XLC cells, as data retention for data stored on XLC cells can be shorter than data retention for data stored on SLC cells. However, typical SLC caching methods do not take into account data retention considerations, such as the storage of non-operational memory sub-systems within high temperature environments.
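Returning to the cache behavior profile described above, a minimal sketch of such a profile as a data structure (the field names and values are hypothetical; an actual profile format is not specified here):

    from dataclasses import dataclass

    @dataclass
    class SlcCacheBehaviorProfile:
        # Size rules, expressed as fractions of the memory sub-system capacity.
        static_cache_fraction: float = 0.01
        default_max_fraction: float = 0.10
        enhanced_max_fraction: float = 0.20
        # Usage rule: current operating mode ("default" or "enhanced").
        mode: str = "default"

    # A host update (e.g., delivered over the air) could replace the profile:
    updated_profile = SlcCacheBehaviorProfile(enhanced_max_fraction=0.25,
                                              mode="enhanced")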
Aspects of the present disclosure address the above and other deficiencies by providing a memory sub-system that implements dynamic adjustment of data storage for enhanced data retention. A memory sub-system described herein can include SLC cache and XLC storage. The SLC cache can include a static SLC cache having a fixed logical saturation size (“fixed size”) and a dynamic SLC cache having a dynamic (e.g., modifiable or configurable) maximum logical saturation size (“dynamic maximum size”). Logical saturation refers to a portion of logical locations (e.g., logical block addresses (LBAs)) that contain data (e.g., a ratio of the size of the logical locations that contain data to the total size of the logical locations). In contrast to logical saturation, physical saturation refers to a portion of physical locations (e.g., physical NAND locations) that contain data (e.g., a ratio of the size of the physical locations that contain data to the total size of the physical locations). The fixed size of static SLC cache can be expressed as a share of a storage capacity of the memory sub-system (“memory sub-system storage capacity”). Thus, the static SLC cache can store an amount of data having a logical saturation up to the fixed size. The dynamic SLC cache can have a default or base maximum logical saturation size (“default maximum size”). The default maximum size can be expressed as another share of the memory sub-system storage capacity. Thus, when the dynamic SLC cache size is set at the default maximum size, the dynamic SLC cache can store an amount of data having a logical saturation up to the default maximum size. An increase of the maximum size of dynamic SLC cache can be limited by a theoretical maximum logical saturation size (“theoretical maximum size”). The theoretical maximum size can be defined by the memory sub-system storage capacity and the type of XLC storage (e.g., memory sub-system storage capacity divided by bits per XLC cell). For example, if the XLC storage is QLC storage, then the theoretical maximum size can be 25% of the memory sub-system storage capacity. Thus, when the dynamic SLC cache size is set at the theoretical maximum size, the dynamic SLC cache can store an amount of data having a logical saturation up to the theoretical maximum size. These sizes can be predetermined by the manufacturer at the time of manufacture, and maintained in the SLC cache behavior profile stored in the memory sub-system. The memory sub-system can be operatively coupled to a host system. The host system can provide data for storage on the memory sub-system. A memory sub-system controller can operate in a default mode or an enhanced data retention mode. For example, metadata indicating the mode can be maintained in the SLC cache behavior profile. When operating in the default mode, the dynamic SLC cache has the default maximum size described above, and the memory sub-system controller can cause data to be moved from SLC cache to XLC storage in the background or during idle times. When operating in the enhanced data retention mode, the memory sub-system controller can increase the maximum size of the dynamic SLC cache from the default maximum size to an enhanced maximum logical saturation size (“enhanced maximum size”) to enable continued writes to SLC cache. For example, the memory sub-system can initially operate in the enhanced data retention mode (e.g., as indicated by metadata maintained in the SLC cache behavior profile).
As another example, the memory sub-system can switch from the default mode to the enhanced data retention mode upon determining that logical saturation of data stored on dynamic SLC cache is greater than the default maximum size. The enhanced maximum size is greater than the default maximum size, and less than or equal to the theoretical maximum size described above. For example, the enhanced maximum size can be selected to be less than the theoretical maximum size to maintain desired memory sub-system performance by limiting the impact of moving data from SLC cache to XLC storage. In some embodiments, the enhanced maximum size is 20% of the memory sub-system storage capacity. However, such an example should not be considered limiting. The enhanced maximum size can be predetermined by the manufacturer at the time of manufacture, and stored in the SLC cache behavior profile maintained by the memory sub-system. In the event that the amount of data being written to the memory sub-system (e.g., the number of bytes) were to exceed the maximum amount of data that can be written to the memory sub-system, the memory sub-system controller can cause certain data to be moved to XLC storage (e.g., temporary files) to make room for further writes to the SLC cache. The host system may also decide to delete existing data in the SLC cache or XLC storage so that the remaining data can be moved to and retained in SLC cache. For example, the amount of data written to the memory sub-system can be measured in terabytes written (TBW) to the memory sub-system. As mentioned above, the memory sub-system manufacturer can set the various SLC cache sizes (e.g., static SLC cache size, default maximum size, enhanced maximum size) at the time of memory sub-system manufacture within an SLC cache behavior profile maintained on the memory sub-system. By increasing the size of the dynamic SLC cache to the enhanced maximum size and keeping data stored in SLC cache while in the enhanced data retention mode, embodiments described herein can improve data retention when the memory sub-system is placed in the high temperature environment. After storing the data in SLC cache, the memory sub-system can then be placed in the high temperature environment while in a non-operational state. Since the data is maintained only in SLC cache, and not XLC storage, concerns related to the high temperature affecting XLC data retention are alleviated. After the memory sub-system is removed from the high temperature environment, the memory sub-system can be placed into an operational state (e.g., a user starts utilizing the memory sub-system). To improve memory sub-system performance, the memory sub-system can operate in the default mode while in the operational state to reduce the maximum size of the dynamic SLC cache back to the default maximum size and to move data from SLC cache to XLC storage. For example, as described above, memory sub-system performance can decrease as the maximum dynamic SLC cache size increases.
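The mode switch just described can be modeled in a few lines. This is a simplified sketch assuming the 10%/20% shares used in this section, not controller firmware:

```python
CAPACITY_GB = 512.0
DEFAULT_MAX_GB = 0.10 * CAPACITY_GB    # default maximum size of dynamic SLC cache
ENHANCED_MAX_GB = 0.20 * CAPACITY_GB   # enhanced maximum size (below the 25% theoretical maximum)

def select_dynamic_cache_limit(dynamic_cache_data_gb: float, mode: str) -> tuple:
    """Return (mode, maximum dynamic SLC cache size) after checking logical saturation."""
    if mode == "default" and dynamic_cache_data_gb > DEFAULT_MAX_GB:
        mode = "enhanced"  # saturation exceeded the default maximum size
    limit = ENHANCED_MAX_GB if mode == "enhanced" else DEFAULT_MAX_GB
    return mode, limit

print(select_dynamic_cache_limit(60.0, "default"))  # ('enhanced', 102.4)
```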
Illustratively, for a memory sub-system having QLC storage and a storage capacity or maximum logical saturation of 512 gigabytes (GB), the static SLC cache size can be about 1% of the memory sub-system storage capacity (5 GB), the default maximum size of the dynamic SLC cache can be 10% of the memory sub-system storage capacity (51 GB), the enhanced maximum size of the dynamic SLC cache can be 20% of the memory sub-system storage capacity (102 GB) and the theoretical maximum size of the dynamic SLC cache can be 25% of the memory sub-system storage capacity (128 GB). Thus, in this example, the enhanced maximum size is double the default maximum size. However, such an example should not be considered limiting. If a host system were to write 100 GB to this memory sub-system (e.g., an operating system (OS) image), the memory sub-system controller could choose to write the first 5 GB to static SLC cache, and the next 51 GB to dynamic SLC cache. When operating in the default mode, the memory sub-system controller would move the 56 GB of data from SLC cache to QLC storage during idle time or in the background. Once the SLC cache has been freed up after moving the data to XLC storage, the memory sub-system controller would then write the remaining 44 GB of data to SLC cache, and then move the 44 GB of data to QLC storage during idle time or in the background. To prevent data from being stored in QLC storage prior to placing the memory sub-system, in a non-operational state, within a high temperature environment (e.g., warehouse within a factory) and thus improve data retention, the memory sub-system controller can operate in the enhanced data retention mode. For example, if the logical saturation of the dynamic SLC cache exceeds the default maximum size of 51 GB, the memory sub-system controller can cause the size of the dynamic SLC cache to increase to 102 GB. By increasing the maximum size of dynamic SLC cache to the enhanced maximum size, the memory sub-system controller can continue to write the remaining 44 GB of data to SLC cache without moving data to QLC storage. Further details regarding the operations performed by the memory sub-system controller will be described below with reference toFIGS.1-5. Advantages of the present disclosure include, but are not limited to, improved memory device performance. For example, implementations described herein can improve data retention, and therefore decrease error rates. FIG.1illustrates an example computing system100that includes a memory sub-system110in accordance with some embodiments of the present disclosure. The memory sub-system110can include media, such as one or more volatile memory devices (e.g., memory device140), one or more non-volatile memory devices (e.g., memory device130), or a combination of such. A memory sub-system110can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory modules (NVDIMMs). 
The computing system100can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device. The computing system100can include a host system120that is coupled to one or more memory sub-systems110. In some embodiments, the host system120is coupled to multiple memory sub-systems110of different types.FIG.1illustrates one example of a host system120coupled to one memory sub-system110. As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. The host system120can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system120uses the memory sub-system110, for example, to write data to the memory sub-system110and read data from the memory sub-system110. The host system120can be coupled to the memory sub-system110via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system120and the memory sub-system110. The host system120can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices130) when the memory sub-system110is coupled with the host system120by the physical host interface (e.g., PCIe bus). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system110and the host system120.FIG.1illustrates a memory sub-system110as an example. In general, the host system120can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. The memory devices130,140can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device130) include a negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells.
A cross-point array of non-volatile memory cells can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Each of the memory devices130can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC) can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs) can store multiple bits per cell. In some embodiments, each of the memory devices130can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs or any combination of such. In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices130can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks. Although non-volatile memory components such as a 3D cross-point array of non-volatile memory cells and NAND type flash memory (e.g., 2D NAND, 3D NAND) are described, the memory device130can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, or electrically erasable programmable read-only memory (EEPROM). A memory sub-system controller115(or controller115for simplicity) can communicate with the memory devices130to perform operations such as reading data, writing data, or erasing data at the memory devices130and other such operations. The memory sub-system controller115can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller115can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The memory sub-system controller115can include a processing device, which includes one or more processors (e.g., processor117), configured to execute instructions stored in a local memory119. In the illustrated example, the local memory119of the memory sub-system controller115includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system110, including handling communications between the memory sub-system110and the host system120.
In some embodiments, the local memory119can include memory registers storing memory pointers, fetched data, etc. The local memory119can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system110inFIG.1has been illustrated as including the memory sub-system controller115, in another embodiment of the present disclosure, a memory sub-system110does not include a memory sub-system controller115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the memory sub-system controller115can receive commands or operations from the host system120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices130. The memory sub-system controller115can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., physical block address) that are associated with the memory devices130. The memory sub-system controller115can further include host interface circuitry to communicate with the host system120via the physical host interface. The host interface circuitry can convert the commands received from the host system120into command instructions to access the memory devices130as well as convert responses associated with the memory devices130into information for the host system120. The memory sub-system110can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system110can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller115and decode the address to access the memory devices130. In some embodiments, the memory devices130include local media controllers135that operate in conjunction with memory sub-system controller115to execute operations on one or more memory cells of the memory devices130. An external controller (e.g., memory sub-system controller115) can externally manage the memory device130(e.g., perform media management operations on the memory device130). In some embodiments, memory sub-system110is a managed memory device, which is a raw memory device130having control logic (e.g., local controller132) on the die and a controller (e.g., memory sub-system controller115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. The memory sub-system110includes an enhanced data retention (EDR) component113that can implement dynamic adjustment of data storage for enhanced data retention. In some embodiments, the memory sub-system controller115includes at least a portion of the EDR component113. In some embodiments, the EDR component113is part of the host system120, an application, or an operating system. In other embodiments, local media controller135includes at least a portion of EDR component113and is configured to perform the functionality described herein. For example, the memory device130can further include XLC storage137(e.g., MLC, TLC, QLC, PLC), and the memory device140can include static SLC cache142and dynamic SLC cache144. 
The static SLC cache142can have a fixed logical saturation size (“fixed size”) and the dynamic SLC cache144can have a dynamic (e.g., modifiable or configurable) maximum logical saturation size (“dynamic maximum size”). For example, the fixed size of the static SLC cache142can be less than or equal to an available number of blocks on the memory device130. The fixed size can be expressed as a share (e.g., percentage) of a storage capacity of the memory sub-system110(“memory sub-system storage capacity”). The dynamic SLC cache144can have a default or base maximum logical saturation size (“default maximum size”). The default maximum size can be expressed as a share (e.g., percentage) of the storage capacity of the memory sub-system110. The size of dynamic SLC cache144can be limited by a theoretical maximum logical saturation size (“theoretical maximum size”) determined by the memory sub-system storage capacity and the type of XLC storage137(e.g., memory sub-system storage capacity divided by bits per XLC cell). For example, if XLC storage137is QLC storage, then the theoretical maximum size can be 25% of the memory sub-system storage capacity. The EDR component113can receive data from the host system120, and write data to the memory device130. To do so, the EDR component113can operate in the default mode or the enhanced data retention mode. For example, metadata indicating the mode can be maintained in the SLC cache behavior profile. When operating in the default mode, the dynamic SLC cache of the memory device140has the default maximum size, and the EDR component113can cause data to be moved from the SLC cache of the memory device140(e.g., static SLC cache142and dynamic SLC cache144) to the XLC storage137in the background or during idle times. When operating in the enhanced data retention mode, the EDR component113can increase the maximum size of the dynamic SLC cache144from the default maximum size to an enhanced maximum logical saturation size (“enhanced maximum size”) to enable continued writes to SLC cache. For example, the memory sub-system110can initially operate in the enhanced data retention mode (e.g., as indicated by metadata maintained in the SLC cache behavior profile). As another example, the memory sub-system110can switch from the default mode to the enhanced data retention mode upon determining that logical saturation of data stored on dynamic SLC cache is greater than the default maximum size. The enhanced maximum size is greater than the default maximum size, and less than or equal to the theoretical maximum size. For example, the enhanced maximum size can be selected to be less than the theoretical maximum size to maintain desired performance of the memory sub-system110. The enhanced maximum size can be predetermined by the manufacturer at the time of manufacture, and stored in the SLC cache behavior profile maintained by the memory sub-system110. In the event that the amount of data written to the memory sub-system110(e.g., the number of bytes) were to exceed the maximum amount of data that can be written to the memory sub-system110, the EDR component113can cause data to be moved to XLC storage137(e.g., temporary files) to make room for further writes to SLC cache. The host system120may also decide to delete existing data in SLC cache (e.g., dynamic SLC cache144) or XLC storage137so that the remaining data can be moved to and retained in SLC cache. For example, the amount of data written to the memory sub-system110can be measured in terabytes written (TBW) to the memory sub-system110.
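The background movement of data from SLC cache to XLC storage in the default mode can be sketched as a simple accounting model; "folding" below is shorthand for rewriting SLC-resident data into denser XLC cells, and the names and numbers are illustrative assumptions:

```python
def fold_slc_to_xlc(slc_data_gb: float, xlc_data_gb: float, move_gb: float) -> tuple:
    """Move `move_gb` of data out of SLC cache into XLC storage.
    One bit in SLC cache occupies the physical space of several bits in XLC
    storage (e.g., 4 bits for QLC), so folding frees SLC blocks for future writes."""
    moved = min(move_gb, slc_data_gb)
    return slc_data_gb - moved, xlc_data_gb + moved

# During idle time, the 56 GB cached so far is folded out, freeing the SLC cache.
slc, xlc = fold_slc_to_xlc(slc_data_gb=56.0, xlc_data_gb=0.0, move_gb=56.0)
print(slc, xlc)  # 0.0 56.0 -- SLC cache emptied, data now resides in XLC storage
```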
The various SLC cache sizes (e.g., static SLC cache size, default maximum size, enhanced maximum size) can be set by the manufacturer at the time of manufacture of the memory sub-system110, and can be maintained within an SLC cache behavior profile stored on the memory sub-system110(e.g., as firmware within the memory sub-system controller115). By increasing the size of the dynamic SLC cache144to the enhanced maximum size to enable storage of data in SLC cache only, embodiments described herein can improve data retention when the memory sub-system110is placed in the high temperature environment. Further details regarding the operation of the EDR component113are described below with reference toFIGS.2-4. FIG.2is a flow diagram of an example method200for implementing dynamic adjustment of data storage for enhanced data retention, in accordance with some embodiments of the present disclosure. The method200can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method200is performed by the EDR component113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation210, processing logic receives data from a host system. For example, the host system can be the host system120ofFIG.1. In some embodiments, the data includes OS image data having a particular size. For example, the size of the OS image data can be 100 GB. At operation220, the processing logic writes the data to SLC cache of the memory sub-system. For example, the memory sub-system can include SLC cache and XLC storage (e.g., MLC storage, TLC storage, QLC storage, or PLC storage) and the data is written only to SLC cache. The SLC cache includes static SLC cache having a fixed logical saturation size (“fixed size”) and dynamic SLC cache having a dynamic (e.g., modifiable or configurable) maximum logical saturation size (“dynamic maximum size”) controlled by the processing logic. For example, the fixed size of the static SLC cache can be a share (e.g., percentage) of the storage capacity of the memory sub-system (“memory sub-system storage capacity”). In some embodiments, the fixed size is 1% of the memory sub-system storage capacity. Illustratively, if the memory sub-system storage capacity is 512 GB, then the fixed size can be 5 GB. The size of dynamic SLC cache can be limited by a theoretical maximum logical saturation size (“theoretical maximum size”) determined by the memory sub-system storage capacity and the type of XLC storage (e.g., memory sub-system storage capacity divided by bits per XLC cell). In some embodiments, the XLC storage is QLC storage, and the theoretical maximum size is 25% of the memory sub-system storage capacity. For example, if the memory sub-system storage capacity is 512 GB, then the theoretical maximum size is 128 GB. 
The processing logic can operate in a particular operating mode for managing data storage between SLC cache and XLC storage. For example, the operating mode can be a default mode or an enhanced data retention mode. The operating mode can be determined based on metadata indicating the mode that is maintained by the memory sub-system (e.g., in the SLC cache behavior profile). In the default mode, the dynamic SLC cache has a default maximum logical saturation size (“default maximum size”) less than the theoretical maximum size. For example, the default maximum size can be a share (e.g., percentage) of the memory sub-system storage capacity. In some embodiments, the default maximum size is 10% of the memory sub-system storage capacity. For example, if the memory sub-system storage capacity is 512 GB, then the default maximum size is 51 GB. In the enhanced data retention mode, the dynamic SLC cache has an enhanced maximum logical saturation size (“enhanced maximum size”) greater than the default maximum size, and less than or equal to the theoretical maximum size. For example, the enhanced maximum size can be a percentage of the memory sub-system storage capacity less than or equal to the theoretical maximum size. To improve memory sub-system performance, the enhanced maximum size can be less than the theoretical maximum size. In some embodiments, the enhanced maximum size is 20% of the memory sub-system storage capacity. For example, if the memory sub-system storage capacity is 512 GB, then the enhanced maximum size is 102 GB. In some embodiments, writing the data to SLC cache at operation220includes determining whether to initiate a write operation in the enhanced data retention mode. In response to determining to initiate the write operation in the enhanced data retention mode, writing the data to SLC cache includes initiating the write operation in the enhanced data retention mode to write a portion of the data to the SLC cache. Further details regarding initiating data writes to SLC cache in the enhanced data retention mode will be described below with reference toFIG.3. In some embodiments, writing the data to SLC cache at operation220includes determining whether to initiate a write operation in the default mode. In response to determining to initiate the write operation in the default mode, writing the data to SLC cache includes initiating the write operation in the default mode to write the portion of the data to the SLC cache. Further details regarding initiating data writes to SLC cache in the default mode will be described below with reference toFIG.4. After all the data is written to SLC cache, the memory sub-system can be placed in a high temperature environment (e.g., warehouse) while in a non-operational state. After some amount of time, the memory sub-system can be given to a user. Thus, at operation230, the processing logic can place the memory sub-system in the default mode when in an operational state. In the default mode, the maximum size of dynamic SLC cache is reduced back to the default maximum size and data can begin moving from SLC cache to XLC storage. FIG.3is a flow diagram of an example method300for writing data to SLC cache of a memory sub-system (e.g., operation220ofFIG.2), in accordance with some embodiments of the present disclosure.
The method300can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method300is performed by the EDR component113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation310, processing logic initiates operations while the memory sub-system is in the enhanced data retention mode. For example, the processing logic can initiate writes to SLC cache of the memory sub-system in the enhanced data retention mode. The memory sub-system can be empty (e.g., zero logical saturation). It is assumed that the processing logic received data to write to SLC cache from a host system (e.g., operation210ofFIG.2). For example, as described above with reference toFIGS.1-2, the SLC cache can include static SLC cache having a fixed size and dynamic SLC cache having a default maximum size in the default mode and an enhanced maximum size greater than the default maximum size in the enhanced data retention mode, and the memory sub-system can further include XLC storage. The enhanced maximum size can be less than or equal to a theoretical maximum size determined based on the memory sub-system storage capacity and the type of XLC storage. In some embodiments, the enhanced maximum size is less than the theoretical maximum size. At operation320, the processing logic determines whether an amount of data written to the memory sub-system satisfies a threshold condition. For example, the processing logic can determine whether the amount of data written to the memory sub-system is greater than a data threshold. The amount of data written to the memory sub-system can be a cumulative amount of data written over the lifetime of the memory sub-system. The amount of data written to the memory sub-system can be a number of bytes written to the memory sub-system. In some embodiments, the amount of data written to the memory sub-system is measured in terabytes written (TBW). The amount of data written to the memory sub-system can reflect the physical saturation of data within the memory sub-system. Illustratively, assume that the first 50 GB of data is written 10 times. Although the logical saturation is 50 GB, the physical saturation is 500 GB (0.5 TB). If the amount of data written to the memory sub-system satisfies the threshold condition (e.g., the amount of data written to the memory sub-system is greater than the data threshold), this implies that the memory sub-system is outside of the production facility (e.g., factory) and will not be subject to the high temperature environment for storage. The processing logic can switch to the default mode (as described above with reference toFIGS.1-2) to enable data movement from SLC cache to XLC storage at operation330and the process ends.
Otherwise, if the amount of data written to the memory sub-system does not satisfy the threshold condition (e.g., the amount of data written to the memory sub-system is less than or equal to the data threshold), this implies that the memory sub-system is still in the production facility and will be subject to the high temperature environment for storage. The processing logic can then determine, at operation340, whether the logical saturation satisfies a threshold condition. For example, the processing logic can determine whether the amount of data written to SLC cache is greater than the total maximum size of SLC cache. Assuming that data is written to static SLC cache before dynamic SLC cache, this is equivalent to determining whether the amount of data written to dynamic SLC cache is greater than the enhanced maximum size. If the logical saturation does not satisfy the threshold condition at operation340(e.g., the amount of data written to SLC cache is less than or equal to the total maximum size of SLC cache), this means that there is still space in dynamic SLC cache to continue writes to dynamic SLC cache. Thus, the process reverts back to operation310to continue writing to SLC cache in the enhanced data retention mode. Otherwise, if the logical saturation satisfies the threshold condition (e.g., the amount of data written to SLC cache is greater than the total maximum size of SLC cache), this means that SLC cache is filled beyond the total maximum size of SLC cache. Therefore, a portion of the data will be written to XLC storage. Illustratively, if the enhanced maximum size of dynamic SLC cache is 20% of the memory sub-system storage capacity, but the logical saturation of the data is 25% of the memory sub-system storage capacity, then the 5% difference can be written to XLC storage. To address this situation, the host system (e.g., host system120ofFIG.1) can erase or delete data (e.g., temporary files) to reduce the logical saturation to below the enhanced maximum size. Illustratively, if the logical saturation of the data is 25% of the memory sub-system storage capacity, then the remaining data after the data deletion can be 15% of the memory sub-system storage capacity. Moreover, at operation350, the processing logic causes the memory sub-system to perform operations in the default mode. For example, the processing logic can enable data movement from SLC cache to XLC storage. Operating in the default mode also reduces the size of dynamic SLC cache from the enhanced maximum size to the default maximum size. At operation360, the processing logic determines whether an amount of data written to the memory sub-system satisfies a threshold condition (similar to operation320). Thus, if the amount of data written to the memory sub-system satisfies the threshold condition (e.g., the amount of data written to the memory sub-system is greater than the data threshold), the processing logic can then enable data movement from SLC cache to XLC storage at operation330and the process ends. Otherwise, if the amount of data written to the memory sub-system does not satisfy the threshold condition (e.g., the amount of data written to the memory sub-system is less than or equal to the data threshold), the processing logic can then determine, at operation370, whether the logical saturation satisfies a threshold condition. For example, the processing logic can determine whether the amount of data written to SLC cache is greater than the total maximum size of SLC cache.
Assuming that data is written to static SLC cache before dynamic SLC cache, this is equivalent to determining whether the amount of data written to dynamic SLC cache is greater than the default maximum size. If the logical saturation does not satisfy the threshold condition at operation370(e.g., the amount of data written to SLC cache is less than or equal to the total size of SLC cache), then the process reverts back to operation310to perform operations in the enhanced data retention mode. To do so, the processing logic can cause data to be moved to SLC cache and cause the size of dynamic SLC cache to be increased to the enhanced maximum size. Otherwise, if the logical saturation satisfies the threshold condition at operation370(e.g., the amount of data written to SLC cache is greater than the total size of SLC cache), this means that SLC cache is filled to capacity. The process can then revert back to operation350to perform operations in the default mode (e.g., enable data movement from SLC cache to XLC storage and reduce the size of dynamic SLC cache to the default maximum size). FIG.4is a flow diagram of an example method400for writing data to SLC cache of a memory sub-system (e.g., operation220ofFIG.2), in accordance with some embodiments of the present disclosure. The method400can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method400is performed by the EDR component113ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation410, processing logic initiates operations while the memory sub-system is in the default mode. For example, the processing logic can initiate writes to SLC cache of the memory sub-system in the default mode. The memory sub-system can be empty (e.g., zero logical saturation). It is assumed that the processing logic received data to write to SLC cache from a host system (e.g., operation210ofFIG.2). For example, as described above with reference toFIGS.1-2, the SLC cache can include static SLC cache having a fixed size and dynamic SLC cache having a default maximum size in the default mode and an enhanced maximum size greater than the default maximum size in the enhanced data retention mode, and the memory sub-system can further include XLC storage. The enhanced maximum size can be less than or equal to a theoretical maximum size determined based on the memory sub-system storage capacity and the type of XLC storage. In some embodiments, the enhanced maximum size is less than the theoretical maximum size. At operation420, the processing logic determines whether an amount of data written to the memory sub-system satisfies a threshold condition (similar to operation320ofFIG.3).
If the amount of data written to the memory sub-system satisfies the threshold condition (e.g., the amount of data written to the memory sub-system is greater than the data threshold), then the processing logic can enable data movement from SLC cache to XLC storage at operation430and the process ends. Otherwise, if the amount of data written to the memory sub-system does not satisfy the threshold condition (e.g., the amount of data written to the memory sub-system is less than or equal to the data threshold), then the processing logic can determine, at operation440, whether the logical saturation satisfies a threshold condition. For example, the processing logic can determine whether the amount of data written to SLC cache is greater than the total maximum size of SLC cache. Assuming that data is written to static SLC cache before dynamic SLC cache, this is equivalent to determining whether the amount of data written to dynamic SLC cache is greater than the default maximum size. If the logical saturation does not satisfy the threshold condition at operation440(e.g., the amount of data written to SLC cache is less than or equal to the total size of SLC cache), then the process reverts back to operation410to continue writing to SLC cache in the default mode. Otherwise, if the logical saturation satisfies the threshold condition (e.g., the amount of data written to SLC cache is greater than the total maximum size of SLC cache), this means that SLC cache is filled to capacity. At operation450, the processing logic causes the memory sub-system to perform operations in the enhanced data retention mode. For example, the processing logic can increase the size of dynamic SLC cache from the default maximum size to the enhanced maximum size, and continue writing data to SLC cache. At operation460, the processing logic determines whether an amount of data written to the memory sub-system satisfies a threshold condition (similar to operation420). If the amount of data written to the memory sub-system satisfies the threshold condition (e.g., the amount of data written to the memory sub-system is greater than the data threshold), then the processing logic can enable data movement from SLC cache to XLC storage at operation430and the process ends. Otherwise, if the amount of data written to the memory sub-system does not satisfy the threshold condition (e.g., the amount of data written to the memory sub-system is less than or equal to the data threshold), then the processing logic can determine, at operation470, whether the logical saturation satisfies a threshold condition. For example, the processing logic can determine whether the amount of data written to SLC cache is greater than the total size of SLC cache. Assuming that data is written to static SLC cache before dynamic SLC cache, this is equivalent to determining whether the amount of data written to dynamic SLC cache is greater than the enhanced maximum size. If the logical saturation does not satisfy the threshold condition at operation470(e.g., the amount of data written to SLC cache is less than or equal to the total size of SLC cache), then the process reverts back to operation450to continue performing operations in the enhanced data retention mode. Otherwise, if the logical saturation satisfies the threshold condition at operation470(e.g., the amount of data written to SLC cache is greater than the total size of SLC cache), this means that SLC cache is filled beyond the total maximum size of SLC cache.
Therefore, a portion of the data will be written to XLC storage. To address this situation, the host system (e.g., host system120ofFIG.1) can erase or delete data (e.g., temporary files) to reduce the logical saturation to below the enhanced maximum size. Illustratively, if the logical saturation of the data is 25% of the memory sub-system storage capacity, then the remaining data after the data deletion can be 15% of the memory sub-system storage capacity. Moreover, the process can then revert back to operation410to perform operations in the default mode to enable data movement from SLC cache to XLC storage and reduce the size of dynamic SLC cache to the default maximum size. FIG.5illustrates a block/flow diagram (“diagram”)500illustrating an example implementation of dynamic adjustment of data storage for enhanced data retention. The diagram500shows a memory sub-system505including an initial SLC cache510-1and an initial XLC storage520-1. The initial SLC cache510-1and the initial XLC storage520-1represent an initially empty state (i.e., before any data writes). The initial SLC cache510-1has an initial SLC cache size and the initial XLC storage520-1has an initial XLC storage size. For example, the SLC cache510-1can include static SLC cache having a fixed size and dynamic SLC cache having a default maximum size. The memory sub-system controller can then write a certain amount of data to the memory sub-system505(e.g., an OS image installed on the memory sub-system505) at operation530-1. The memory sub-system controller at decision540can determine whether the amount of data satisfies a threshold condition. For example, the memory sub-system controller can determine whether the amount of data is less than or equal to a logical saturation threshold for the SLC cache (e.g., if the OS image is less than or equal to 100 GB). If the memory sub-system controller determines that the amount of data satisfies the threshold condition (e.g., the amount of data is less than or equal to the logical saturation threshold), then the memory sub-system controller can operate in an enhanced data retention mode to increase the size of the dynamic SLC cache from the default maximum size to the enhanced maximum size. This results in a larger sized SLC cache510-2including dynamic SLC cache having an enhanced maximum size, and XLC storage520-2. The data written to the SLC cache510-2is indicated by data525-1. The memory sub-system controller can then analyze the data525-1at operation530-3, and determine whether to erase any of the data525-1at decision550. For example, the memory sub-system controller can determine whether to erase any temporary files (e.g., from the OS image). If not, the data storage process terminates at operation530-4. If the memory sub-system controller determines that data should be erased at decision550, then the memory sub-system controller erases a portion of the data525-1to achieve data525-2at operation530-5. The memory sub-system505is reverted back to the initial state including SLC cache510-1and XLC storage520-1, and the data525-2can have about the same size as the size of the SLC cache510-1. After operation530-4or530-5, the memory sub-system505can then be placed in a high temperature environment (e.g., warehouse) for storage while in a non-operational state, as indicated by event550. At some point, the memory sub-system505is taken out of the high temperature environment and is received by a user.
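Condensing the FIG.5flow, decision540can be sketched as follows. The 100 GB threshold mirrors the OS-image example above; the names and values are illustrative assumptions:

```python
SLC_THRESHOLD_GB = 100.0  # logical saturation threshold for the SLC cache (illustrative)

def write_with_enhanced_retention(data_gb: float) -> dict:
    """Model of decision540: keep everything in SLC cache when it fits under the
    threshold; otherwise the overflow lands in XLC storage and the host may erase
    data (e.g., temporary files) to pull saturation back under the limit."""
    slc = min(data_gb, SLC_THRESHOLD_GB)
    xlc = max(0.0, data_gb - SLC_THRESHOLD_GB)
    return {"slc_gb": slc, "xlc_gb": xlc, "host_erase_needed": xlc > 0.0}

print(write_with_enhanced_retention(100.0))  # fits entirely in SLC cache; safe for storage
print(write_with_enhanced_retention(130.0))  # 30 GB overflow -> erase before non-operational storage
```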
The memory sub-system controller can then place the memory sub-system505in the default mode at operation530-6to store data525-1across the SLC cache510-1and the XLC storage520-1. Reverting back to decision540, if the memory sub-system controller determines that the amount of data does not satisfy the threshold condition (e.g., the amount of data is greater than the logical saturation threshold), then the memory sub-system controller can still operate in the enhanced data retention mode to achieve SLC cache510-2and XLC storage520-2. However, since the amount of data exceeds the logical saturation threshold for the SLC cache510-2, some data is stored in XLC storage520-2, as indicated by data525-3. The memory sub-system controller can then cause a portion of data525-3to be erased at operation530-8to achieve a state of the memory sub-system505similar to that shown after operation530-4. Shutdown processing may be needed here, with sufficient time to complete the erase operations and relocate any data from the XLC storage to the SLC cache. The memory sub-system controller can use idle time and/or shutdown processing time to complete this before indicating that the shutdown processing is complete. Further details regarding diagram500are described above with reference toFIGS.1-4. FIG.6illustrates an example machine of a computer system600within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system600can correspond to a host system (e.g., the host system120ofFIG.1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system110ofFIG.1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the EDR component113ofFIG.1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system600includes a processing device602, a main memory604(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or RDRAM, etc.), a static memory606(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system618, which communicate with each other via a bus630. Processing device602represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like.
More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device602can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device602is configured to execute instructions626for performing the operations and steps discussed herein. The computer system600can further include a network interface device608to communicate over the network620. The data storage system618can include a machine-readable storage medium624(also known as a computer-readable medium) on which is stored one or more sets of instructions626or software embodying any one or more of the methodologies or functions described herein. The instructions626can also reside, completely or at least partially, within the main memory604and/or within the processing device602during execution thereof by the computer system600, the main memory604and the processing device602also constituting machine-readable storage media. The machine-readable storage medium624, data storage system618, and/or main memory604can correspond to the memory sub-system110ofFIG.1. In one embodiment, the instructions626include instructions to implement functionality corresponding to an EDR component (e.g., the EDR component113ofFIG.1). While the machine-readable storage medium624is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 68,089 |
11861235 | DESCRIPTION OF EMBODIMENTS Example methods, apparatus, and products for coalescing write operations in a cloud-based storage system in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning withFIG.1A.FIG.1Aillustrates an example system for data storage, in accordance with some implementations. System100(also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system100may include the same, more, or fewer elements configured in the same or different manner in other implementations. System100includes a number of computing devices164A-B. Computing devices (also referred to as “client devices” herein) may be embodied, for example, as a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices164A-B may be coupled for data communications to one or more storage arrays102A-B through a storage area network (‘SAN’)158or a local area network (‘LAN’)160. The SAN158may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN158may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface (‘SAS’), or the like. Data communications protocols for use with SAN158may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, Small Computer System Interface (‘SCSI’), Internet Small Computer System Interface (‘iSCSI’), HyperSCSI, Non-Volatile Memory Express (‘NVMe’) over Fabrics, or the like. It may be noted that SAN158is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices164A-B and storage arrays102A-B. The LAN160may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN160may include Ethernet (802.3), wireless (802.11), or the like. Data communication protocols for use in LAN160may include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), or the like. Storage arrays102A-B may provide persistent data storage for the computing devices164A-B. Storage array102A may be contained in a chassis (not shown), and storage array102B may be contained in another chassis (not shown), in implementations. Storage array102A and102B may include one or more storage array controllers110A-D (also referred to as “controller” herein). A storage array controller110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers110A-D may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices164A-B to storage array102A-B, erasing data from storage array102A-B, retrieving data from storage array102A-B and providing data to computing devices164A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth.
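Of the storage tasks listed, the RAID-like redundancy operations are easy to illustrate: a parity block computed by XOR allows any single lost block to be reconstructed from the survivors. The following is a minimal sketch of the concept, not the storage array controller's actual implementation:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-sized blocks byte by byte (RAID-style parity)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_blocks(data)

# Reconstruct a lost block from the remaining blocks plus the parity block.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```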
Storage array controller110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), System-on-Chip (‘SOC’), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters. Storage array controller110A-D may include, for example, a data communications adapter configured to support communications via the SAN158or LAN160. In some implementations, storage array controller110A-D may be independently coupled to the LAN160. In implementations, storage array controller110A-D may include an I/O controller or the like that couples the storage array controller110A-D for data communications, through a midplane (not shown), to a persistent storage resource170A-B (also referred to as a “storage resource” herein). The persistent storage resource170A-B may include any number of storage drives171A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory (‘NVRAM’) devices (not shown). In some implementations, the NVRAM devices of a persistent storage resource170A-B may be configured to receive, from the storage array controller110A-D, data to be stored in the storage drives171A-F. In some examples, the data may originate from computing devices164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive171A-F. In implementations, the storage array controller110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller110A-D writes data directly to the storage drives171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as “non-volatile” because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like. In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives171A-F. In implementations, storage drive171A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive171A-F may correspond to non-disk storage media. For example, the storage drive171A-F may be one or more solid-state drives (‘SSDs’), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive171A-F may include mechanical or spinning hard disks, such as hard-disk drives (‘HDD’). In some implementations, the storage array controllers110A-D may be configured for offloading device management responsibilities from storage drive171A-F in storage array102A-B. For example, storage array controllers110A-D may manage control information that may describe the state of one or more memory blocks in the storage drives171A-F.
The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller110A-D, the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives171A-F may be stored in one or more particular memory blocks of the storage drives171A-F that are selected by the storage array controller110A-D. The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers110A-D in conjunction with storage drives171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers110A-D may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive171A-F. In implementations, storage array controllers110A-D may offload device management responsibilities from storage drives171A-F of storage array102A-B by retrieving, from the storage drives171A-F, control information describing the state of one or more memory blocks in the storage drives171A-F. Retrieving the control information from the storage drives171A-F may be carried out, for example, by the storage array controller110A-D querying the storage drives171A-F for the location of control information for a particular storage drive171A-F. The storage drives171A-F may be configured to execute instructions that enable the storage drive171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive171A-F and may cause the storage drive171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives171A-F. The storage drives171A-F may respond by sending a response message to the storage array controller110A-D that includes the location of control information for the storage drive171A-F. Responsive to receiving the response message, storage array controllers110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives171A-F. In other implementations, the storage array controllers110A-D may further offload device management responsibilities from storage drives171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive171A-F (e.g., the controller (not shown) associated with a particular storage drive171A-F). 
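The locate-and-read exchange described above can be sketched, for illustration only, in Python; the block tags, in-memory structures, and method names below are invented for the example and are not the patented mechanism.

    class Drive:
        """Invented example: blocks tagged so control information can be found."""
        def __init__(self, blocks):
            self.blocks = blocks        # block id -> (tag, contents)

        def locate_control_info(self):
            # Scan each block's tag and report the ones holding control info.
            return [bid for bid, (tag, _) in self.blocks.items() if tag == "ctrl"]

    class ArrayController:
        def read_control_info(self, drive):
            locations = drive.locate_control_info()            # query the drive
            return [drive.blocks[bid][1] for bid in locations] # then read them back

    drive = Drive({0: ("data", b"user bytes"), 7: ("ctrl", b"p/e=1500 failed=[3]")})
    print(ArrayController().read_control_info(drive))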
A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive171A-F, ensuring that data is written to memory blocks within the storage drive171A-F in such a way that adequate wear leveling is achieved, and so forth. In implementations, storage array102A-B may implement two or more storage array controllers110A-D. For example, storage array102A may include storage array controller110A and storage array controller110B. At a given instance, a single storage array controller110A-D (e.g., storage array controller110A) of a storage system100may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers110A-D (e.g., storage array controller110B) may be designated with secondary status (also referred to as “secondary controller” herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource170A-B (e.g., writing data to persistent storage resource170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller. For instance, the secondary controller may not have permission to alter data in persistent storage resource170A-B when the primary controller has the right. The status of storage array controllers110A-D may change. For example, storage array controller110A may be designated with secondary status, and storage array controller110B may be designated with primary status. In some implementations, a primary controller, such as storage array controller110A, may serve as the primary controller for one or more storage arrays102A-B, and a second controller, such as storage array controller110B, may serve as the secondary controller for the one or more storage arrays102A-B. For example, storage array controller110A may be the primary controller for storage array102A and storage array102B, and storage array controller110B may be the secondary controller for storage array102A and102B. In some implementations, storage array controllers110C and110D (also referred to as “storage processing modules”) may have neither primary nor secondary status. Storage array controllers110C and110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers110A and110B, respectively) and storage array102B. For example, storage array controller110A of storage array102A may send a write request, via SAN158, to storage array102B. The write request may be received by both storage array controllers110C and110D of storage array102B. Storage array controllers110C and110D facilitate the communication, e.g., send the write request to the appropriate storage drive171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers. In implementations, storage array controllers110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array102A-B. The storage array controllers110A-D may be coupled to the midplane via one or more data communication links and the midplane may be coupled to the storage drives171A-F and the NVRAM devices via one or more data communications links.
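Returning to the primary/secondary designation described above, the following sketch (illustrative only; all class and method names are invented) shows primary status gating write permission and a promotion swapping the roles on failover.

    class StorageArrayController:
        def __init__(self, name):
            self.name = name
            self.status = "secondary"

    class ControllerPair:
        """Invented example: only the primary may alter persistent storage."""
        def __init__(self, first, second):
            self.controllers = {c.name: c for c in (first, second)}
            self.promote(first.name)

        def promote(self, name):
            # Designate one controller primary and demote the others.
            for c in self.controllers.values():
                c.status = "primary" if c.name == name else "secondary"

        def write(self, name, resource, key, value):
            if self.controllers[name].status != "primary":
                raise PermissionError(name + " is secondary; write refused")
            resource[key] = value

    pair = ControllerPair(StorageArrayController("110A"),
                          StorageArrayController("110B"))
    storage = {}
    pair.write("110A", storage, "block-0", b"data")   # permitted
    pair.promote("110B")                              # failover: roles swap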
The data communications links described herein are collectively illustrated by data communications links108A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example. FIG.1Billustrates an example system for data storage, in accordance with some implementations. Storage array controller101illustrated inFIG.1Bmay be similar to the storage array controllers110A-D described with respect toFIG.1A. In one example, storage array controller101may be similar to storage array controller110A or storage array controller110B. Storage array controller101includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller101may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements ofFIG.1Amay be included below to help illustrate features of storage array controller101. Storage array controller101may include one or more processing devices104and random access memory (‘RAM’)111. Processing device104(or controller101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device104(or controller101) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device104(or controller101) may also be one or more special-purpose processing devices such as an application specific integrated circuit (‘ASIC’), a field programmable gate array (‘FPGA’), a digital signal processor (‘DSP’), network processor, or the like. The processing device104may be connected to the RAM111via a data communications link106, which may be embodied as a high speed memory bus such as a Double-Data Rate 4 (‘DDR4’) bus. Stored in RAM111is an operating system112. In some implementations, instructions113are stored in RAM111. Instructions113may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives. In implementations, storage array controller101includes one or more host bus adapters103A-C that are coupled to the processing device104via a data communications link105A-C. In implementations, host bus adapters103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other network and storage arrays. In some examples, host bus adapters103A-C may be a Fibre Channel adapter that enables the storage array controller101to connect to a SAN, an Ethernet adapter that enables the storage array controller101to connect to a LAN, or the like. Host bus adapters103A-C may be coupled to the processing device104via a data communications link105A-C such as, for example, a PCIe bus. In implementations, storage array controller101may include a host bus adapter114that is coupled to an expander115. The expander115may be used to attach a host system to a larger number of storage drives.
The expander115may, for example, be a SAS expander utilized to enable the host bus adapter114to attach to storage drives in an implementation where the host bus adapter114is embodied as a SAS controller. In implementations, storage array controller101may include a switch116coupled to the processing device104via a data communications link109. The switch116may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch116may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link109) and presents multiple PCIe connection points to the midplane. In implementations, storage array controller101includes a data communications link107for coupling the storage array controller101to other storage array controllers. In some examples, data communications link107may be a QuickPath Interconnect (QPI). A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed. To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes. For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives. The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system. Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process to write the first data to new locations within other allocation units, erase the second data, and mark the allocation units as being available for use for subsequent data. Thus, the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives.
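A minimal sketch of the reclamation process just described follows; it is illustrative only, with invented in-memory dictionaries standing in for allocation units and erase blocks.

    def reclaim(unit, live_keys, free_units, locations):
        """Invented example: OS-level reclamation of one allocation unit."""
        target = free_units.pop()           # destination allocation unit
        for key, value in list(unit.items()):
            if key in live_keys:            # "first data": still referenced
                target[key] = value         # rewrite to a new location
                locations[key] = target
            # "second data" (no longer used) is dropped by the erase below
        unit.clear()                        # erase the allocation unit
        free_units.append(unit)             # mark it available for reuse

    unit = {"a": b"keep", "b": b"stale"}
    free = [{}]
    locations = {"a": unit, "b": unit}
    reclaim(unit, live_keys={"a"}, free_units=free, locations=locations)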
Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system as unnecessary or redundant write operations are not being performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system. In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive. A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage, where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection. FIG.1Cillustrates a third example system117for data storage in accordance with some implementations. System117(also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system117may include the same, more, or fewer elements configured in the same or different manner in other implementations. In one embodiment, system117includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device118with separately addressable fast write storage. System117may include a storage controller119. In one embodiment, storage controller119A-D may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system117includes flash memory devices (e.g., including flash memory devices120a-n), operatively coupled to various channels of the storage device controller119. Flash memory devices120a-nmay be presented to the controller119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller119A-D to program and retrieve various aspects of the Flash. In one embodiment, storage device controller119A-D may perform operations on flash memory devices120a-nincluding storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc. In one embodiment, system117may include RAM121to store separately addressable fast-write data. In one embodiment, RAM121may be one or more separate discrete devices. In another embodiment, RAM121may be integrated into storage device controller119A-D or multiple storage device controllers. The RAM121may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller119. In one embodiment, system117may include a stored energy device122, such as a rechargeable battery or a capacitor.
Stored energy device122may store energy sufficient to power the storage device controller119, some amount of the RAM (e.g., RAM121), and some amount of Flash memory (e.g., Flash memory120a-120n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller119A-D may write the contents of RAM to Flash Memory if the storage device controller detects loss of external power. In one embodiment, system117includes two data communications links123a,123b. In one embodiment, data communications links123a,123bmay be PCI interfaces. In another embodiment, data communications links123a,123bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Data communications links123a,123bmay be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller119A-D from other components in the storage system117. It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience. System117may also include an external power source (not shown), which may be provided over one or both data communications links123a,123b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM121. The storage device controller119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM121. On power failure, the storage device controller119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory120a-n) for long-term persistent storage. In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices120a-n, where that presentation allows a storage system including a storage device118(e.g., storage system117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc. In one embodiment, the stored energy device122may be sufficient to ensure completion of in-progress operations to the Flash memory devices120a-120n; stored energy device122may power storage device controller119A-D and associated Flash memory devices (e.g.,120a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device122may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices120a-nand/or the storage device controller119.
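By way of illustration only (the names are invented and this is not the disclosed implementation), the fast-write staging and power-loss destage described above can be sketched as follows.

    from collections import deque

    class FastWriteDevice:
        """Invented example: writes land in RAM, are acknowledged immediately,
        and are destaged to Flash if external power is lost."""
        def __init__(self):
            self.ram = deque()       # separately addressable fast-write RAM
            self.flash = {}          # stand-in for the Flash memory devices

        def fast_write(self, address, data):
            self.ram.append((address, data))
            return "ack"             # acknowledged before reaching Flash

        def on_power_loss(self):
            # The stored energy device powers the controller just long enough
            # to write the contents of RAM to Flash memory.
            while self.ram:
                address, data = self.ram.popleft()
                self.flash[address] = data

    dev = FastWriteDevice()
    dev.fast_write(0, b"journaled operation")
    dev.on_power_loss()              # RAM contents persisted to Flash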
Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein. Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the storage energy device122to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy. FIG.1Dillustrates a third example system124for data storage in accordance with some implementations. In one embodiment, system124includes storage controllers125a,125b. In one embodiment, storage controllers125a,125bare operatively coupled to Dual PCI storage devices119a,119band119c,119d, respectively. Storage controllers125a,125bmay be operatively coupled (e.g., via a storage network130) to some number of host computers127a-n. In one embodiment, two storage controllers (e.g.,125aand125b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers125a,125bmay provide services through some number of network interfaces (e.g.,126a-d) to host computers127a-noutside of the storage system124. Storage controllers125a,125bmay provide integrated services or an application entirely within the storage system124, forming a converged storage and compute system. The storage controllers125a,125bmay utilize the fast write memory within or across storage devices119a-dto journal in progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system124. In one embodiment, controllers125a,125boperate as PCI masters to one or the other PCI buses128a,128b. In another embodiment,128aand128bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers125a,125bas multi-masters for both PCI buses128a,128b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller119amay be operable under direction from a storage controller125ato synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM121ofFIG.1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to ensure improved safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g.,128a,128b) from the storage controllers125a,125b. In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc.
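Such a recalculation can be sketched as below; this is illustrative only, with zlib compression and a single XOR word standing in for whatever compression and erasure code calculations a given embodiment actually applies.

    import zlib

    def recalculate(segments):
        """Invented example: combine data segments, compress the result, and
        attach simple metadata before the single transfer to storage."""
        combined = b"".join(segments)            # combine segments together
        compressed = zlib.compress(combined)     # fewer bytes moved over the bus
        parity = 0
        for b in compressed:                     # stand-in "erasure code" word
            parity ^= b
        meta = {"segments": len(segments),
                "raw_len": len(combined),
                "xor": parity}
        return meta, compressed

    meta, payload = recalculate([b"journal-entry-1", b"journal-entry-2"])
    print(meta, len(payload))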
In one embodiment, under direction from a storage controller125a,125b, a storage device controller119a,119bmay be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM121ofFIG.1C) without involvement of the storage controllers125a,125b. This operation may be used to mirror data stored in one controller125ato another controller125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface129a,129bto the PCI bus128a,128b. A storage device controller119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device118. For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly. In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself. Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices. In one embodiment, the storage controllers125a,125bmay initiate the use of erase blocks within and across storage devices (e.g.,118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers125a,125bmay initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance. In one embodiment, the storage system124may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination.
The embodiments depicted with reference toFIGS.2A-Gillustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory. Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture, described in more detail below, allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server. The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments. In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus; however, other technologies such as PCIe, InfiniBand, and others, are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node.
In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades and each blade has a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments. Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units; however, this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface. The non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2 and 32 terabytes (‘TB’) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’) that substitutes for DRAM and enables a reduced power hold-up apparatus. One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage. The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below. FIG.2Ais a perspective view of a storage cluster161, with multiple storage nodes150and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments. A network attached storage, storage area network, or a storage cluster, or other storage memory, could include one or more storage clusters161, each having one or more storage nodes150, in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby.
The storage cluster161is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory. The storage cluster161has a chassis138having multiple slots142. It should be appreciated that chassis138may be referred to as a housing, enclosure, or rack unit. In one embodiment, the chassis138has fourteen slots142, although other numbers of slots are readily devised. For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots. Each slot142can accommodate one storage node150in some embodiments. Chassis138includes flaps148that can be utilized to mount the chassis138on a rack. Fans144provide air circulation for cooling of the storage nodes150and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components. A switch fabric146couples storage nodes150within chassis138together and to a network for communication to the memory. In the embodiment depicted herein, the slots142to the left of the switch fabric146and fans144are shown occupied by storage nodes150, while the slots142to the right of the switch fabric146and fans144are empty and available for insertion of storage node150for illustrative purposes. This configuration is one example, and one or more storage nodes150could occupy the slots142in various further arrangements. The storage node arrangements need not be sequential or adjacent in some embodiments. Storage nodes150are hot pluggable, meaning that a storage node150can be inserted into a slot142in the chassis138, or removed from a slot142, without stopping or powering down the system. Upon insertion or removal of storage node150from slot142, the system automatically reconfigures in order to recognize and adapt to the change. Reconfiguration, in some embodiments, includes restoring redundancy and/or rebalancing data or load. Each storage node150can have multiple components. In the embodiment shown here, the storage node150includes a printed circuit board159populated by a CPU156, i.e., processor, a memory154coupled to the CPU156, and a non-volatile solid state storage152coupled to the CPU156, although other mountings and/or components could be used in further embodiments. The memory154has instructions which are executed by the CPU156and/or data operated on by the CPU156. As further explained below, the non-volatile solid state storage152includes flash or, in further embodiments, other types of solid-state memory. Referring toFIG.2A, storage cluster161is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes150can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments. Plug-in storage nodes150, whether installed in a chassis as delivered or later added, can have different sizes. For example, in one embodiment a storage node150can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc. In further embodiments, a storage node150could have any multiple of other storage amounts or capacities. Storage capacity of each storage node150is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage units152or storage nodes150within the chassis.
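The width selection just described can be sketched as a simple calculation; the sixteen-unit ceiling and the function name below are invented for illustration only.

    def choose_stripe_width(available_units, tolerated_failures, max_width=16):
        """Invented example: widest stripe (data plus redundancy shards) that
        still tolerates the required number of unit or node losses."""
        width = min(available_units, max_width)
        data_shards = width - tolerated_failures
        if data_shards < 1:
            raise ValueError("too few units for the redundancy requirement")
        return data_shards, tolerated_failures

    print(choose_stripe_width(available_units=10, tolerated_failures=2))  # (8, 2)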
FIG.2Bis a block diagram showing a communications interconnect173and power distribution bus172coupling multiple storage nodes150. Referring back toFIG.2A, the communications interconnect173can be included in or implemented with the switch fabric146in some embodiments. Where multiple storage clusters161occupy a rack, the communications interconnect173can be included in or implemented with a top of rack switch, in some embodiments. As illustrated inFIG.2B, storage cluster161is enclosed within a single chassis138. External port176is coupled to storage nodes150through communications interconnect173, while external port174is coupled directly to a storage node. External power port178is coupled to power distribution bus172. Storage nodes150may include varying amounts and differing capacities of non-volatile solid state storage152as described with reference toFIG.2A. In addition, one or more storage nodes150may be a compute only storage node as illustrated inFIG.2B. Authorities168are implemented on the non-volatile solid state storages152, for example as lists or other data structures stored in memory. In some embodiments the authorities are stored within the non-volatile solid state storage152and supported by software executing on a controller or other processor of the non-volatile solid state storage152. In a further embodiment, authorities168are implemented on the storage nodes150, for example as lists or other data structures stored in the memory154and supported by software executing on the CPU156of the storage node150. Authorities168control how and where data is stored in the non-volatile solid state storages152in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes150have which portions of the data. Each authority168may be assigned to a non-volatile solid state storage152. Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes150, or by the non-volatile solid state storage152, in various embodiments. Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities168. Authorities168have a relationship to storage nodes150and non-volatile solid state storage152in some embodiments. Each authority168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage152. In some embodiments the authorities168for all of such ranges are distributed over the non-volatile solid state storages152of a storage cluster. Each storage node150has a network port that provides access to the non-volatile solid state storage(s)152of that storage node150. Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities168thus establishes an indirection to data. 
Indirection may be referred to as the ability to reference data indirectly, in this case via an authority168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage152and a local identifier into the set of non-volatile solid state storage152that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage152are applied to locating data for writing to or reading from the non-volatile solid state storage152(in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage152, which may include or be different from the non-volatile solid state storage152having the authority168for a particular data segment. If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority168for that data segment should be consulted, at that non-volatile solid state storage152or storage node150having that authority168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage152having the authority168for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number, to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage152having that authority168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage152for an authority in the presence of a set of non-volatile solid state storage152that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage152that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority168may be consulted if a specific authority168is unavailable in some embodiments. With reference toFIGS.2A and2B, two of the many tasks of the CPU156on a storage node150are to break up write data and reassemble read data. When the system has determined that data is to be written, the authority168for that data is located as above. When the segment ID for data is already determined, the request to write is forwarded to the non-volatile solid state storage152currently determined to be the host of the authority168determined from the segment.
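The two-stage operation described above may be sketched as follows. This is an illustrative model rather than the patented implementation: the authority count, the use of SHA-256, and the contents of the explicit map are invented, but the sketch shows why the calculation is repeatable on any node holding the same map.

    import hashlib

    N_AUTHORITIES = 128    # invented: a fixed power-of-two set of authorities

    def authority_for(entity_id):
        # Stage one: hash the entity identifier, then mask to an authority id.
        digest = hashlib.sha256(entity_id.encode()).digest()
        return int.from_bytes(digest[:4], "big") & (N_AUTHORITIES - 1)

    # Stage two: an explicit, cluster-maintained map from authority id to the
    # non-volatile solid state storage unit currently hosting that authority.
    authority_to_unit = {a: "unit-%d" % (a % 8) for a in range(N_AUTHORITIES)}

    def locate(entity_id):
        # Repeatable: the same entity id always resolves to the same unit,
        # on whichever node performs the calculation.
        return authority_to_unit[authority_for(entity_id)]

    print(locate("inode:4711"))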
The host CPU156of the storage node150, on which the non-volatile solid state storage152and corresponding authority168reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority168for the segment ID containing the data is located as described above. The host CPU156of the storage node150on which the non-volatile solid state storage152and corresponding authority168reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU156of storage node150then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage152. In some embodiments, the segment host requests the data be sent to storage node150by requesting pages from storage and then sending the data to the storage node making the original request. In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and the authority, in turn, contains entities. A segment is a logical container of data in accordance with some embodiments. A segment is an address space between medium address space and physical flash locations; data segment numbers are in this address space. Segments may also contain meta-data, which enable data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage152coupled to the host CPUs156(SeeFIGS.2E and2G) in accordance with an erasure coding scheme. Usage of the term segments refers to the container and its place in the address space of segments in some embodiments.
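For illustration of the data-and-parity-shard protection just described, the following sketch uses a single XOR parity shard; disclosed embodiments contemplate stronger erasure coding schemes (e.g., Reed-Solomon), so this is a simplification rather than the claimed method. The set of shards produced here, together with how they are distributed, is what the next passage refers to as a stripe.

    def stripe(data, n_data):
        """Invented example: split data into n_data shards plus one XOR parity."""
        shard_len = -(-len(data) // n_data)              # ceiling division
        padded = data.ljust(shard_len * n_data, b"\0")
        shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(n_data)]
        parity = bytearray(shard_len)
        for shard in shards:
            for i, b in enumerate(shard):
                parity[i] ^= b
        return shards + [bytes(parity)]

    def rebuild(shards, lost):
        """Recover the single missing shard by XOR of all surviving shards."""
        length = len(next(s for i, s in enumerate(shards) if i != lost))
        out = bytearray(length)
        for i, shard in enumerate(shards):
            if i == lost:
                continue
            for j, b in enumerate(shard):
                out[j] ^= b
        return bytes(out)

    pieces = stripe(b"user data distributed across storage nodes", 4)
    original = pieces[2]
    pieces[2] = None                       # one storage unit becomes unreachable
    assert rebuild(pieces, lost=2) == original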
Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments. A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128 bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit152may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage152is able to allocate addresses without synchronization with other non-volatile solid state storage152. Data and metadata are stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (‘LDPC’) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout. In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change, so any subjective function may be applied in these embodiments.
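A placement in the spirit of the RUSH/CRUSH family can be sketched with rendezvous (highest-random-weight) hashing; this is an illustrative stand-in, not the scheme of the disclosure. Given the same reachable node set as input, every node computes the same ranked candidate list, and removing a node leaves the survivors' relative order unchanged, so only the removed node's authorities move.

    import hashlib

    def rank_candidates(authority_id, reachable_nodes):
        # Weight each (authority, node) pair and rank nodes by weight; the
        # top entries are the candidate authority owners.
        def weight(node):
            h = hashlib.sha256(("%d:%s" % (authority_id, node)).encode()).digest()
            return int.from_bytes(h[:8], "big")
        return sorted(reachable_nodes, key=weight, reverse=True)

    nodes = ["node-a", "node-b", "node-c", "node-d"]
    print(rank_candidates(17, nodes))
    # Removing an unreachable node preserves the survivors' relative order:
    print(rank_candidates(17, [n for n in nodes if n != "node-b"]))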
Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes. In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to compute the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines. Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss. In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or Fibre Channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the internet. Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location.
The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments. As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later NAND, while background rebalancing operations are persisted directly to NAND. Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, manufacturers, the hardware supply chain, and ongoing monitoring quality control infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades. In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments.
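As a rough sketch of the persistent message handling described above, the following illustrates routing messages to storage media by type, with latency-sensitive client requests replicated in NVRAM before being destaged to NAND and background traffic persisted directly to NAND; the MessageStore class and its fields are hypothetical names adopted only for this illustration.

    from enum import Enum

    class MsgType(Enum):
        CLIENT_REQUEST = 1  # latency-sensitive
        REBALANCE = 2       # background work

    class MessageStore:
        def __init__(self, nvram_replicas, nand_log):
            self.nvram_replicas = nvram_replicas  # replicated NVRAM regions
            self.nand_log = nand_log              # NAND-backed log

        def persist(self, msg_type, payload):
            # Persistent messages are stored prior to being transmitted;
            # ordering and durability handling depend on the message type.
            if msg_type is MsgType.CLIENT_REQUEST:
                for region in self.nvram_replicas:
                    region.append(payload)  # replicated NVRAM first
            # Destaging to NAND would normally be asynchronous for client
            # requests; it is inlined here for brevity.
            self.nand_log.append(payload)

FIG.2Cis a multiple level block diagram, showing contents of a storage node150and contents of a non-volatile solid state storage152of the storage node150. Data is communicated to and from the storage node150by a network interface controller (‘NIC’)202in some embodiments. Each storage node150has a CPU156, and one or more non-volatile solid state storage152, as discussed above. Moving down one level inFIG.2C, each non-volatile solid state storage152has a relatively fast non-volatile solid state memory, such as nonvolatile random access memory (‘NVRAM’)204, and flash memory206.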
In some embodiments, NVRAM204may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that can support being written vastly more often than the memory is read from. Moving down another level inFIG.2C, the NVRAM204is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM)216, backed up by energy reserve218. Energy reserve218provides sufficient electrical power to keep the DRAM216powered long enough for contents to be transferred to the flash memory206in the event of power failure. In some embodiments, energy reserve218is a capacitor, super-capacitor, battery, or other device, that supplies sufficient energy to enable the transfer of the contents of DRAM216to a stable storage medium in the case of power loss. The flash memory206is implemented as multiple flash dies222, which may be referred to as packages of flash dies222or an array of flash dies222. It should be appreciated that the flash dies222could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e., multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc. In the embodiment shown, the non-volatile solid state storage152has a controller212or other processor, and an input output (I/O) port210coupled to the controller212. I/O port210is coupled to the CPU156and/or the network interface controller202of the flash storage node150. Flash input output (I/O) port220is coupled to the flash dies222, and a direct memory access unit (DMA)214is coupled to the controller212, the DRAM216and the flash dies222. In the embodiment shown, the I/O port210, controller212, DMA unit214and flash I/O port220are implemented on a programmable logic device (‘PLD’)208, e.g., a field programmable gate array (FPGA). In this embodiment, each flash die222has pages, organized as sixteen kB (kilobyte) pages224, and a register226through which data can be written to or read from the flash die222. In further embodiments, other types of solid-state memory are used in place of, or in addition to, the flash memory illustrated within flash die222. Storage clusters161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes150are part of a collection that creates the storage cluster161. Each storage node150owns a slice of data and computing required to provide the data. Multiple storage nodes150cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The storage units152described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node150is shifted into a storage unit152, transforming the storage unit152into a combination of storage unit152and storage node150. Placing computing (relative to storage data) into the storage unit152places this computing closer to the data itself.
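The power-loss path described above, in which energy reserve218holds DRAM216up just long enough for its contents to reach flash memory206, may be sketched as follows; flash_dev and its program_page method are hypothetical stand-ins for the flash interface, and the listing is an illustration rather than a definitive implementation.

    PAGE_SIZE = 16 * 1024  # matches the sixteen kB page organization above

    def on_power_failure(dram_contents: bytes, flash_dev) -> None:
        # While the energy reserve keeps the DRAM powered, copy its
        # contents to flash in page-sized units.
        for offset in range(0, len(dram_contents), PAGE_SIZE):
            page = dram_contents[offset:offset + PAGE_SIZE]
            flash_dev.program_page(offset // PAGE_SIZE, page)
        # On the next power-on, these pages would be read back to
        # reconstruct the NVRAM contents.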
The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices. In a storage cluster161, as described herein, multiple controllers in multiple storage units152and/or storage nodes150cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on). FIG.2Dshows a storage server environment, which uses embodiments of the storage nodes150and storage units152ofFIGS.2A-C. In this version, each storage unit152has a processor such as controller212(seeFIG.2C), an FPGA (field programmable gate array), flash memory206, and NVRAM204(which is super-capacitor backed DRAM216, seeFIGS.2B and2C) on a PCIe (peripheral component interconnect express) board in a chassis138(seeFIG.2A). The storage unit152may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two storage units152may fail and the device will continue with no data loss. The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM204is a contiguous block of reserved memory in the storage unit152DRAM216, and is backed by NAND flash. NVRAM204is logically divided into multiple memory regions written for two as spool (e.g., spool_region). Space within the NVRAM204spools is managed by each authority168independently. Each device provides an amount of storage space to each authority168. That authority168further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a storage unit152fails, onboard super-capacitors provide a short duration of power holdup. During this holdup interval, the contents of the NVRAM204are flushed to flash memory206. On the next power-on, the contents of the NVRAM204are recovered from the flash memory206. As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities168. This distribution of logical control is shown inFIG.2Das a host controller242, mid-tier controller244and storage unit controller(s)246. Management of the control plane and the storage plane are treated independently, although parts may be physically co-located on the same blade. Each authority168effectively serves as an independent controller. Each authority168provides its own data and metadata structures, its own background workers, and maintains its own lifecycle.
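The per-authority management of NVRAM spool space described above may be sketched as follows, under the assumption that each authority is granted a fixed amount of space within which it manages lifetimes and allocations independently; the NvramSpool class and its fields are hypothetical names for illustration.

    class NvramSpool:
        # Spool space within NVRAM granted to a single authority, which
        # manages allocations in that space independently of other
        # authorities.
        def __init__(self, authority_id: int, capacity: int):
            self.authority_id = authority_id
            self.capacity = capacity
            self.used = 0
            self.entries = {}

        def allocate(self, key: str, size: int) -> bool:
            if self.used + size > self.capacity:
                return False  # the grant for this authority is exhausted
            self.entries[key] = size
            self.used += size
            return True

        def release(self, key: str) -> None:
            self.used -= self.entries.pop(key)

FIG.2Eis a blade252hardware block diagram, showing a control plane254, compute and storage planes256,258, and authorities168interacting with underlying physical resources, using embodiments of the storage nodes150and storage units152ofFIGS.2A-Cin the storage server environment ofFIG.2D. The control plane254is partitioned into a number of authorities168which can use the compute resources in the compute plane256to run on any of the blades252. The storage plane258is partitioned into a set of devices, each of which provides access to flash206and NVRAM204resources. In one embodiment, the compute plane256may perform the operations of a storage array controller, as described herein, on one or more devices of the storage plane258(e.g., a storage array).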
In the compute and storage planes256,258ofFIG.2E, the authorities168interact with the underlying physical resources (i.e., devices). From the point of view of an authority168, its resources are striped over all of the physical devices. From the point of view of a device, it provides resources to all authorities168, irrespective of where the authorities happen to run. Each authority168has allocated or has been allocated one or more partitions260of storage memory in the storage units152, e.g. partitions260in flash memory206and NVRAM204. Each authority168uses those allocated partitions260that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority168could have a larger number of partitions260or larger sized partitions260in one or more storage units152than one or more other authorities168. FIG.2Fdepicts elasticity software layers in blades252of a storage cluster, in accordance with some embodiments. In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module270runs the three identical layers of processes depicted inFIG.2F. Storage managers274execute read and write requests from other blades252for data and metadata stored in local storage unit152NVRAM204and flash206. Authorities168fulfill client requests by issuing the necessary reads and writes to the blades252on whose storage units152the corresponding data or metadata resides. Endpoints272parse client connection requests received from switch fabric146supervisory software, relay the client connection requests to the authorities168responsible for fulfillment, and relay the authorities'168responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking. Still referring toFIG.2F, authorities168running in the compute modules270of a blade252perform the internal operations required to fulfill client requests. One feature of elasticity is that authorities168are stateless, i.e., they cache active data and metadata in their own blades'252DRAMs for fast access, but the authorities store every update in their NVRAM204partitions on three separate blades252until the update has been written to flash206. All the storage system writes to NVRAM204are in triplicate to partitions on three separate blades252in some embodiments. With triple-mirrored NVRAM204and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades252with no loss of data, metadata, or access to either. Because authorities168are stateless, they can migrate between blades252. Each authority168has a unique identifier. NVRAM204and flash206partitions are associated with authorities'168identifiers, not with the blades252on which they are running, in some embodiments. Thus, when an authority168migrates, the authority168continues to manage the same storage partitions from its new location.
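A minimal sketch of the triple-mirrored NVRAM write described above follows, in which an update is placed in NVRAM partitions on three separate blades before it is acknowledged; write_update and the partition objects are hypothetical names used only for illustration.

    def write_update(update: bytes, nvram_partitions) -> None:
        # nvram_partitions: this authority's NVRAM partitions on three
        # separate blades. With all three copies in place, the system can
        # survive concurrent failure of two blades with no loss of data.
        assert len(nvram_partitions) == 3
        for partition in nvram_partitions:
            partition.append(update)
        # Acknowledge the client only after all three copies exist; the
        # update is destaged to flash asynchronously afterwards.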
When a new blade252is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's252storage for use by the system's authorities168, migrating selected authorities168to the new blade252, starting endpoints272on the new blade252and including them in the switch fabric's146client connection distribution algorithm. From their new locations, migrated authorities168persist the contents of their NVRAM204partitions on flash206, process read and write requests from other authorities168, and fulfill the client requests that endpoints272direct to them. Similarly, if a blade252fails or is removed, the system redistributes its authorities168among the system's remaining blades252. The redistributed authorities168continue to perform their original functions from their new locations. FIG.2Gdepicts authorities168and storage resources in blades252of a storage cluster, in accordance with some embodiments. Each authority168is exclusively responsible for a partition of the flash206and NVRAM204on each blade252. The authority168manages the content and integrity of its partitions independently of other authorities168. Authorities168compress incoming data and preserve it temporarily in their NVRAM204partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash206partitions. As the authorities168write data to flash206, storage managers274perform the necessary flash translation to optimize write performance and maximize media longevity. In the background, authorities168“garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities'168partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions. The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database-based system that provides authentication, directory, policy, and other services in a WINDOWS™ environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory. In some embodiments, a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network. The Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism. AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent). A RESTful API (application programming interface) breaks down a transaction to create a series of small modules.
Each module addresses a particular underlying part of the transaction. The control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’). The ACL is a list of permissions attached to an object and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. The systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router. The software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments. The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key management policies, such as encryption key rotation. In addition, the system may support dynamic root passwords or some variation of dynamically changing passwords.
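The access control list described above may be sketched as a simple mapping from principals to permitted operations attached to each object; the AccessControlList class and its method names are hypothetical and serve only as illustration.

    class AccessControlList:
        # A list of permissions attached to an object, specifying which
        # users or system processes are granted access and which
        # operations are allowed on the object.
        def __init__(self):
            self.grants = {}  # principal -> set of allowed operations

        def grant(self, principal: str, operation: str) -> None:
            self.grants.setdefault(principal, set()).add(operation)

        def is_allowed(self, principal: str, operation: str) -> bool:
            return operation in self.grants.get(principal, set())

    acl = AccessControlList()
    acl.grant("tenant-a", "read")
    assert acl.is_allowed("tenant-a", "read")
    assert not acl.is_allowed("tenant-a", "write")

FIG.3Asets forth a diagram of a storage system306that is coupled for data communications with a cloud services provider302in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Amay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2G. In some embodiments, the storage system306depicted inFIG.3Amay be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments. In the example depicted inFIG.3A, the storage system306is coupled to the cloud services provider302via a data communications link304.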
The data communications link304may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or local area network (‘LAN’), or as some other mechanism capable of transporting digital information between the storage system306and the cloud services provider302. Such a data communications link304may be fully wired, fully wireless, or some aggregation of wired and wireless data communications pathways. In such an example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using one or more data communications protocols. For example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using the handheld device transfer protocol (‘HDTP’), hypertext transfer protocol (‘HTTP’), internet protocol (‘IP’), real-time transfer protocol (‘RTP’), transmission control protocol (‘TCP’), user datagram protocol (‘UDP’), wireless application protocol (‘WAP’), or other protocol. The cloud services provider302depicted inFIG.3Amay be embodied, for example, as a system and computing environment that provides services to users of the cloud services provider302through the sharing of computing resources via the data communications link304. The cloud services provider302may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on. The shared pool of configurable resources may be rapidly provisioned and released to a user of the cloud services provider302with minimal management effort. Generally, the user of the cloud services provider302is unaware of the exact computing resources utilized by the cloud services provider302to provide the services. Although in many cases such a cloud services provider302may be accessible via the Internet, readers of skill in the art will recognize that any system that abstracts the use of shared resources to provide services to a user through any data communications link may be considered a cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be configured to provide a variety of services to the storage system306and users of the storage system306through the implementation of various service models. For example, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the implementation of an infrastructure as a service (‘IaaS’) service model where the cloud services provider302offers computing infrastructure such as virtual machines and other resources as a service to subscribers. In addition, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the implementation of a platform as a service (‘PaaS’) service model where the cloud services provider302offers a development environment to application developers. Such a development environment may include, for example, an operating system, programming-language execution environment, database, web server, or other components that may be utilized by application developers to develop and run software solutions on a cloud platform.
Furthermore, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the implementation of a software as a service (‘SaaS’) service model where the cloud services provider302offers application software, databases, as well as the platforms that are used to run the applications to the storage system306and users of the storage system306, providing the storage system306and users of the storage system306with on-demand software and eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application. The cloud services provider302may be further configured to provide services to the storage system306and users of the storage system306through the implementation of an authentication as a service (‘AaaS’) service model where the cloud services provider302offers authentication services that can be used to secure access to applications, data sources, or other resources. The cloud services provider302may also be configured to provide services to the storage system306and users of the storage system306through the implementation of a storage as a service model where the cloud services provider302offers access to its storage infrastructure for use by the storage system306and users of the storage system306. Readers will appreciate that the cloud services provider302may be configured to provide additional services to the storage system306and users of the storage system306through the implementation of additional service models, as the service models described above are included only for explanatory purposes and in no way represent a limitation of the services that may be offered by the cloud services provider302or a limitation as to the service models that may be implemented by the cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud. In an embodiment in which the cloud services provider302is embodied as a private cloud, the cloud services provider302may be dedicated to providing services to a single organization rather than providing services to multiple organizations. In an embodiment where the cloud services provider302is embodied as a public cloud, the cloud services provider302may provide services to multiple organizations. Public cloud and private cloud deployment models may differ and may come with various advantages and disadvantages. For example, because a public cloud deployment involves the sharing of a computing infrastructure across different organizations, such a deployment may not be ideal for organizations with security concerns, mission-critical workloads, uptime requirements, and so on. While a private cloud deployment can address some of these issues, a private cloud deployment may require on-premises staff to manage the private cloud. In still alternative embodiments, the cloud services provider302may be embodied as a mix of private and public cloud services in a hybrid cloud deployment. Although not explicitly depicted inFIG.3A, readers will appreciate that additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system306and users of the storage system306. For example, the storage system306may be coupled to (or even include) a cloud storage gateway.
Such a cloud storage gateway may be embodied, for example, as a hardware-based or software-based appliance that is located on premises with the storage system306. Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage array306and remote, cloud-based storage that is utilized by the storage array306. Through the use of a cloud storage gateway, organizations may move primary iSCSI or NAS to the cloud services provider302, thereby enabling the organization to save space on their on-premises storage systems. Such a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate command into REST-space protocols that facilitate communications with the cloud services provider302. In order to enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider302. In order to successfully migrate data, applications, or other elements to the cloud services provider's302environment, middleware such as a cloud migration tool may be utilized to bridge gaps between the cloud services provider's302environment and an organization's environment. Such cloud migration tools may also be configured to address potentially high network costs and long transfer times associated with migrating large volumes of data to the cloud services provider302, as well as addressing security concerns associated with transferring sensitive data to the cloud services provider302over data communications networks. In order to further enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow. Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components. The cloud orchestrator can simplify the inter-component communication and connections to ensure that links are correctly configured and maintained. In the example depicted inFIG.3A, and as described briefly above, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the usage of a SaaS service model where the cloud services provider302offers application software, databases, as well as the platforms that are used to run the applications to the storage system306and users of the storage system306, providing the storage system306and users of the storage system306with on-demand software and eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application. Such applications may take many forms in accordance with various embodiments of the present disclosure. For example, the cloud services provider302may be configured to provide access to data analytics applications to the storage system306and users of the storage system306.
Such data analytics applications may be configured, for example, to receive telemetry data phoned home by the storage system306. Such telemetry data may describe various operating characteristics of the storage system306and may be analyzed, for example, to determine the health of the storage system306, to identify workloads that are executing on the storage system306, to predict when the storage system306will run out of various resources, to recommend configuration changes, hardware or software upgrades, workflow migrations, or other actions that may improve the operation of the storage system306. The cloud services provider302may also be configured to provide access to virtualized computing environments to the storage system306and users of the storage system306. Such virtualized computing environments may be embodied, for example, as a virtual machine or other virtualized computer hardware platforms, virtual storage devices, virtualized computer network resources, and so on. Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others. For further explanation,FIG.3Bsets forth a diagram of a storage system306in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Bmay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2Gas the storage system may include many of the components described above. The storage system306depicted inFIG.3Bmay include storage resources308, which may be embodied in many forms. For example, in some embodiments the storage resources308can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate. In some embodiments, the storage resources308may include 3D crosspoint non-volatile memory in which bit storage is based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. In some embodiments, the storage resources308may include flash memory, including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, and others. In some embodiments, the storage resources308may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM, in which data is stored through the use of magnetic storage elements. In some embodiments, the example storage resources308may include non-volatile phase-change memory (‘PCM’) that may have the ability to hold multiple bits in a single cell as cells can achieve a number of distinct intermediary states. In some embodiments, the storage resources308may include quantum memory that allows for the storage and retrieval of photonic quantum information. In some embodiments, the example storage resources308may include resistive random-access memory (‘ReRAM’) in which data is stored by changing the resistance across a dielectric solid-state material. In some embodiments, the storage resources308may include storage class memory (‘SCM’) in which solid-state nonvolatile memory may be manufactured at a high density using some combination of sub-lithographic patterning techniques, multiple bits per cell, multiple layers of devices, and so on. 
Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others. The storage resources308depicted inFIG.3Bmay be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others. The storage resources308depicted inFIG.3Bmay include various forms of storage-class memory (‘SCM’). SCM may effectively treat fast, non-volatile memory (e.g., NAND flash) as an extension of DRAM such that an entire dataset may be treated as an in-memory dataset that resides entirely in DRAM. SCM may include non-volatile media such as, for example, NAND flash. Such NAND flash may be accessed utilizing NVMe that can use the PCIe bus as its transport, providing for relatively low access latencies compared to older protocols. In fact, the network protocols used for SSDs in all-flash arrays can include NVMe using Ethernet (RoCE, iWARP, NVMe TCP), Fibre Channel (NVMe FC), InfiniBand, and others that make it possible to treat fast, non-volatile memory as an extension of DRAM. In view of the fact that DRAM is often byte-addressable while fast, non-volatile memory such as NAND flash is block-addressable, a controller software/hardware stack may be needed to convert the block data to the bytes that are stored in the media. Examples of media and software that may be used as SCM can include, for example, 3D XPoint, Intel Memory Drive Technology, Samsung's Z-SSD, and others. The example storage system306depicted inFIG.3Bmay implement a variety of storage architectures. For example, storage systems in accordance with some embodiments of the present disclosure may utilize block storage where data is stored in blocks, and each block essentially acts as an individual hard drive. Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects. Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level). Storage systems in accordance with some embodiments of the present disclosure may utilize file storage in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format. The example storage system306depicted inFIG.3Bmay be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof. In a scale-up model, additional storage may be added by adding additional storage devices. In a scale-out model, however, additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on. The storage system306depicted inFIG.3Balso includes communications resources310that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306.
The communications resources310may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system. For example, the communications resources310can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over FC networks. The communications resources310can also include FC over ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks. The communications resources310can also include InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters. The communications resources310can also include NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI express (‘PCIe’) bus may be accessed. The communications resources310can also include mechanisms for accessing storage resources308within the storage system306utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources308within the storage system306to host bus adapters within the storage system306, internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources308within the storage system306, and other communications resources that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306. The storage system306depicted inFIG.3Balso includes processing resources312that may be useful in executing computer program instructions and performing other computational tasks within the storage system306. The processing resources312may include one or more application-specific integrated circuits (‘ASICs’) that are customized for some particular purpose as well as one or more central processing units (‘CPUs’). The processing resources312may also include one or more digital signal processors (‘DSPs’), one or more field-programmable gate arrays (‘FPGAs’), one or more systems on a chip (‘SoCs’), or other form of processing resources312. The storage system306may utilize the processing resources312to perform a variety of tasks including, but not limited to, supporting the execution of software resources314that will be described in greater detail below. The storage system306depicted inFIG.3Balso includes software resources314that, when executed by processing resources312within the storage system306, may perform various tasks. The software resources314may include, for example, one or more modules of computer program instructions that when executed by processing resources312within the storage system306are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems. Readers will appreciate that such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways.
Such data protection techniques can include, for example, data archiving techniques that cause data that is no longer actively used to be moved to a separate storage device or separate storage system for long-term retention, data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe with the storage system, data replication techniques through which data stored in the storage system is replicated to another storage system such that the data may be accessible via multiple storage systems, data snapshotting techniques through which the state of data within the storage system is captured at various points in time, data and database cloning techniques through which duplicate copies of data and databases may be created, and other data protection techniques. Through the use of such data protection techniques, business continuity and disaster recovery objectives may be met as a failure of the storage system may not result in the loss of data stored in the storage system. The software resources314may also include software that is useful in implementing software-defined storage (‘SDS’). In such an example, the software resources314may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware. Such software resources314may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware. The software resources314may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage resources308in the storage system306. For example, the software resources314may include software modules that carry out various data reduction techniques such as, for example, data compression, data deduplication, and others. The software resources314may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resources308, software modules that perform data migration operations to migrate data from within a storage system, as well as software modules that perform other functions. Such software resources314may be embodied as one or more software containers or in many other ways. Readers will appreciate that the presence of such software resources314may provide for an improved user experience of the storage system306, an expansion of functionality supported by the storage system306, and many other benefits. Consider the specific example of the software resources314carrying out data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe. In such an example, the systems described herein may more reliably (and with less burden placed on the user) perform backup operations relative to interactive backup management systems that require high degrees of user interactivity, offer less robust automation and feature sets, and so on.
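As a minimal sketch of the data reduction techniques described above, the following illustrates content-hash deduplication combined with compression; reduce_block and the block_store mapping are hypothetical names, and the listing illustrates the general technique rather than any particular module of the software resources314.

    import hashlib, zlib

    block_store = {}  # fingerprint -> compressed block

    def reduce_block(block: bytes) -> str:
        # Deduplicate by content fingerprint; only previously unseen
        # blocks are compressed and stored.
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in block_store:
            block_store[fingerprint] = zlib.compress(block)
        return fingerprint  # a reference replaces the duplicate data

    a = reduce_block(b"x" * 4096)
    b = reduce_block(b"x" * 4096)  # duplicate write
    assert a == b and len(block_store) == 1

For further explanation,FIG.3Csets forth an example of a cloud-based storage system318in accordance with some embodiments of the present disclosure.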
In the example depicted inFIG.3C, the cloud-based storage system318is created entirely in a cloud computing environment316such as, for example, Amazon Web Services (‘AWS’), Microsoft Azure, Google Cloud Platform, IBM Cloud, Oracle Cloud, and others. The cloud-based storage system318may be used to provide services similar to the services that may be provided by the storage systems described above. For example, the cloud-based storage system318may be used to provide block storage services to users of the cloud-based storage system318, the cloud-based storage system318may be used to provide storage services to users of the cloud-based storage system318through the use of solid-state storage, and so on. The cloud-based storage system318depicted inFIG.3Cincludes two cloud computing instances320,322that each are used to support the execution of a storage controller application324,326. The cloud computing instances320,322may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment316to support the execution of software applications such as the storage controller application324,326. In one embodiment, the cloud computing instances320,322may be embodied as Amazon Elastic Compute Cloud (‘EC2’) instances. In such an example, an Amazon Machine Image (‘AMI’) that includes the storage controller application324,326may be booted to create and configure a virtual machine that may execute the storage controller application324,326. In the example method depicted inFIG.3C, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out various storage tasks. For example, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out the same tasks as the controllers110A,110B inFIG.1Adescribed above such as writing data received from the users of the cloud-based storage system318to the cloud-based storage system318, erasing data from the cloud-based storage system318, retrieving data from the cloud-based storage system318and providing such data to users of the cloud-based storage system318, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as RAID or RAID-like data redundancy operations, compressing data, encrypting data, deduplicating data, and so forth. Readers will appreciate that because there are two cloud computing instances320,322that each include the storage controller application324,326, in some embodiments one cloud computing instance320may operate as the primary controller as described above while the other cloud computing instance322may operate as the secondary controller as described above. In such an example, in order to save costs, the cloud computing instance320that operates as the primary controller may be deployed on a relatively high-performance and relatively expensive cloud computing instance while the cloud computing instance322that operates as the secondary controller may be deployed on a relatively low-performance and relatively inexpensive cloud computing instance. Readers will appreciate that the storage controller application324,326depicted inFIG.3Cmay include identical source code that is executed within different cloud computing instances320,322. Consider an example in which the cloud computing environment316is embodied as AWS and the cloud computing instances are embodied as EC2 instances. 
In such an example, AWS offers many types of EC2 instances. For example, AWS offers a suite of general purpose EC2 instances that include varying levels of memory and processing power. In such an example, the cloud computing instance320that operates as the primary controller may be deployed on one of the instance types that has a relatively large amount of memory and processing power while the cloud computing instance322that operates as the secondary controller may be deployed on one of the instance types that has a relatively small amount of memory and processing power. In such an example, upon the occurrence of a failover event where the roles of primary and secondary are switched, a double failover may actually be carried out such that: 1) a first failover event where the cloud computing instance322that formerly operated as the secondary controller begins to operate as the primary controller, and 2) a third cloud computing instance (not shown) that is of an instance type that has a relatively large amount of memory and processing power is spun up with a copy of the storage controller application, where the third cloud computing instance begins operating as the primary controller while the cloud computing instance322that originally operated as the secondary controller begins operating as the secondary controller again. In such an example, the cloud computing instance320that formerly operated as the primary controller may be terminated. Readers will appreciate that in alternative embodiments, the cloud computing instance320that is operating as the secondary controller after the failover event may continue to operate as the secondary controller and the cloud computing instance322that operated as the primary controller after the occurrence of the failover event may be terminated once the primary role has been assumed by the third cloud computing instance (not shown). Readers will appreciate that while the embodiments described above relate to embodiments where one cloud computing instance320operates as the primary controller and the second cloud computing instance322operates as the secondary controller, other embodiments are within the scope of the present disclosure. For example, each cloud computing instance320,322may operate as a primary controller for some portion of the address space supported by the cloud-based storage system318, each cloud computing instance320,322may operate as a primary controller where the servicing of I/O operations directed to the cloud-based storage system318are divided in some other way, and so on. In fact, in other embodiments where cost savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application. In such an example, a controller failure may take more time to recover from as a new cloud computing instance that includes the storage controller application would need to be spun up rather than having an already created cloud computing instance take on the role of servicing I/O operations that would have otherwise been handled by the failed cloud computing instance.
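The double failover sequence described above may be sketched as follows, assuming a hypothetical cloud API wrapper exposing a launch_instance call; the listing illustrates the control flow only and is not a definitive implementation of any provider's interface.

    def double_failover(cloud, secondary, large_instance_type, image_id):
        # 1) The former secondary begins operating as the primary.
        secondary.role = "primary"
        # 2) A third instance with a relatively large amount of memory and
        #    processing power is spun up with a copy of the storage
        #    controller application.
        third = cloud.launch_instance(image_id, large_instance_type)
        third.role = "primary"
        # The instance that stood in as primary returns to the secondary
        # role once the third instance takes over; the original primary
        # may then be terminated.
        secondary.role = "secondary"
        return third

The cloud-based storage system318depicted inFIG.3Cincludes cloud computing instances340a,340b,340nwith local storage330,334,338. The cloud computing instances340a,340b,340ndepicted inFIG.3Cmay be embodied, for example, as instances of cloud computing resources that may be provided by the cloud computing environment316to support the execution of software applications.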
The cloud computing instances340a,340b,340nofFIG.3Cmay differ from the cloud computing instances320,322described above as the cloud computing instances340a,340b,340nofFIG.3Chave local storage330,334,338resources whereas the cloud computing instances320,322that support the execution of the storage controller application324,326need not have local storage resources. The cloud computing instances340a,340b,340nwith local storage330,334,338may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage330,334,338must be embodied as solid-state storage (e.g., SSDs) rather than storage that makes use of hard disk drives. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338can include a software daemon328,332,336that, when executed by a cloud computing instance340a,340b,340ncan present itself to the storage controller applications324,326as if the cloud computing instance340a,340b,340nwere a physical storage device (e.g., one or more SSDs). In such an example, the software daemon328,332,336may include computer program instructions similar to those that would normally be contained on a storage device such that the storage controller applications324,326can send and receive the same commands that a storage controller would send to storage devices. In such a way, the storage controller applications324,326may include code that is identical to (or substantially identical to) the code that would be executed by the controllers in the storage systems described above. In these and similar embodiments, communications between the storage controller applications324,326and the cloud computing instances340a,340b,340nwith local storage330,334,338may utilize iSCSI, NVMe over TCP, messaging, a custom protocol, or some other mechanism. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338may also be coupled to block-storage342,344,346that is offered by the cloud computing environment316. The block-storage342,344,346that is offered by the cloud computing environment316may be embodied, for example, as Amazon Elastic Block Store (‘EBS’) volumes. For example, a first EBS volume may be coupled to a first cloud computing instance340a, a second EBS volume may be coupled to a second cloud computing instance340b, and a third EBS volume may be coupled to a third cloud computing instance340n. In such an example, the block-storage342,344,346that is offered by the cloud computing environment316may be utilized in a manner that is similar to how the NVRAM devices described above are utilized, as the software daemon328,332,336(or some other module) that is executing within a particular cloud computing instance340a,340b,340nmay, upon receiving a request to write data, initiate a write of the data to its attached EBS volume as well as a write of the data to its local storage330,334,338resources. In some alternative embodiments, data may only be written to the local storage330,334,338resources within a particular cloud computing instance340a,340b,340n.
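The write path of the software daemon described above may be sketched as mirroring each write to the instance's local storage and to its attached block-storage volume, in the manner that the NVRAM devices described above are utilized; the StorageDaemon class and the device objects are hypothetical stand-ins.

    class StorageDaemon:
        def __init__(self, local_ssd, ebs_volume):
            self.local_ssd = local_ssd    # instance-local solid-state storage
            self.ebs_volume = ebs_volume  # attached block-storage volume

        def handle_write(self, offset: int, data: bytes) -> None:
            # Upon receiving a request to write data, initiate a write to
            # the attached block-storage volume as well as to the local
            # storage resources.
            self.ebs_volume.write(offset, data)
            self.local_ssd.write(offset, data)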
In an alternative embodiment, rather than using the block-storage342,344,346that is offered by the cloud computing environment316as NVRAM, actual RAM on each of the cloud computing instances340a,340b,340nwith local storage330,334,338may be used as NVRAM, thereby decreasing network utilization costs that would be associated with using an EBS volume as the NVRAM. In the example depicted inFIG.3C, the cloud computing instances340a,340b,340nwith local storage330,334,338may be utilized, by cloud computing instances320,322that support the execution of the storage controller application324,326to service I/O operations that are directed to the cloud-based storage system318. Consider an example in which a first cloud computing instance320that is executing the storage controller application324is operating as the primary controller. In such an example, the first cloud computing instance320that is executing the storage controller application324may receive (directly or indirectly via the secondary controller) requests to write data to the cloud-based storage system318from users of the cloud-based storage system318. In such an example, the first cloud computing instance320that is executing the storage controller application324may perform various tasks such as, for example, deduplicating the data contained in the request, compressing the data contained in the request, determining where to write the data contained in the request, and so on, before ultimately sending a request to write a deduplicated, encrypted, or otherwise possibly updated version of the data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. Either cloud computing instance320,322, in some embodiments, may receive a request to read data from the cloud-based storage system318and may ultimately send a request to read data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. Readers will appreciate that when a request to write data is received by a particular cloud computing instance340a,340b,340nwith local storage330,334,338, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to not only write the data to its own local storage330,334,338resources and any appropriate block-storage342,344,346that are offered by the cloud computing environment316, but the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay also be configured to write the data to cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. The cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nmay be embodied, for example, as Amazon Simple Storage Service (‘S3’) storage that is accessible by the particular cloud computing instance340a,340b,340n. In other embodiments, the cloud computing instances320,322that each include the storage controller application324,326may initiate the storage of the data in the local storage330,334,338of the cloud computing instances340a,340b,340nand the cloud-based object storage348. Readers will appreciate that, as described above, the cloud-based storage system318may be used to provide block storage services to users of the cloud-based storage system318.
While the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay support block-level access, the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nsupports only object-based access. In order to address this, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to take blocks of data, package those blocks into objects, and write the objects to the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. Consider an example in which data is written to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nin 1 MB blocks. In such an example, assume that a user of the cloud-based storage system318issues a request to write data that, after being compressed and deduplicated by the storage controller application324,326, results in the need to write 5 MB of data. In such an example, writing the data to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nis relatively straightforward as 5 blocks that are 1 MB in size are written to the local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, the software daemon328,332,336or some other module of computer program instructions that is executing on the particular cloud computing instance340a,340b,340nmay be configured to: 1) create a first object that includes the first 1 MB of data and write the first object to the cloud-based object storage348, 2) create a second object that includes the second 1 MB of data and write the second object to the cloud-based object storage348, 3) create a third object that includes the third 1 MB of data and write the third object to the cloud-based object storage348, and so on. As such, in some embodiments, each object that is written to the cloud-based object storage348may be identical (or nearly identical) in size. Readers will appreciate that in such an example, metadata that is associated with the data itself may be included in each object (e.g., the first 1 MB of the object is data and the remaining portion is metadata associated with the data). Readers will appreciate that the cloud-based object storage348may be incorporated into the cloud-based storage system318to increase the durability of the cloud-based storage system318. Continuing with the example described above where the cloud computing instances340a,340b,340nare EC2 instances, readers will understand that EC2 instances are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of the EC2 instance. As such, relying on the cloud computing instances340a,340b,340nwith local storage330,334,338as the only source of persistent data storage in the cloud-based storage system318may result in a relatively unreliable storage system. Likewise, EBS volumes are designed for 99.999% availability. As such, even relying on EBS as the persistent data store in the cloud-based storage system318may result in a storage system that is not sufficiently durable.
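The block-to-object packaging just described can be sketched in a few lines; the key layout and metadata fields below are assumptions chosen for illustration, not details from the disclosure.

```python
BLOCK = 1 << 20  # 1 MB, matching the block size used in the example above

def write_as_objects(s3, bucket: str, volume: str,
                     offset: int, data: bytes) -> None:
    # Package each 1 MB block as its own (near-identically sized) object;
    # metadata about the data rides along with every object.
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        s3.put_object(
            Bucket=bucket,
            Key=f"{volume}/{(offset + i) // BLOCK:016x}",
            Body=block,
            Metadata={"volume": volume, "length": str(len(block))},
        )
```

Durability is the reason the object store appears in this path at all: as noted above, neither the local instance store nor EBS alone may be durable enough on its own.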
Amazon S3, however, is designed to provide 99.999999999% durability, meaning that a cloud-based storage system318that can incorporate S3 into its pool of storage is substantially more durable than various other options. Readers will appreciate that while a cloud-based storage system318that can incorporate S3 into its pool of storage is substantially more durable than various other options, utilizing S3 as the primary pool of storage may result in a storage system that has relatively slow response times and relatively long I/O latencies. As such, the cloud-based storage system318depicted inFIG.3Cnot only stores data in S3 but the cloud-based storage system318also stores data in local storage330,334,338resources and block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n, such that read operations can be serviced from local storage330,334,338resources and the block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n, thereby reducing read latency when users of the cloud-based storage system318attempt to read data from the cloud-based storage system318. In some embodiments, all data that is stored by the cloud-based storage system318may be stored in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such embodiments, the local storage330,334,338resources and block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay effectively operate as a cache that generally includes all data that is also stored in S3, such that all reads of data may be serviced by the cloud computing instances340a,340b,340nwithout requiring the cloud computing instances340a,340b,340nto access the cloud-based object storage348. Readers will appreciate that in other embodiments, however, all data that is stored by the cloud-based storage system318may be stored in the cloud-based object storage348, but less than all data that is stored by the cloud-based storage system318may be stored in at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, various policies may be utilized to determine which subset of the data that is stored by the cloud-based storage system318should reside in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block-storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. As described above, when the cloud computing instances340a,340b,340nwith local storage330,334,338are embodied as EC2 instances, the cloud computing instances340a,340b,340nwith local storage330,334,338are only guaranteed to have a monthly uptime of 99.9% and data stored in the local instance store only persists during the lifetime of each cloud computing instance340a,340b,340nwith local storage330,334,338. As such, one or more modules of computer program instructions that are executing within the cloud-based storage system318(e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338.
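Before turning to failure handling, note that the cache-first read path implied by that caching policy can be sketched in a few lines; daemon.read_block and the key scheme are carried over from the earlier hypothetical sketches.

```python
def read_block(daemon: "DriveDaemon", s3, bucket: str, block_id: int) -> bytes:
    try:
        # The common case: the local cache holds everything that is in S3.
        return daemon.read_block(block_id)
    except FileNotFoundError:
        # Fallback for data not cached under a less-than-all caching policy.
        obj = s3.get_object(Bucket=bucket, Key=f"blocks/{block_id:016x}")
        return obj["Body"].read()
```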
In the failure scenario described above, the monitoring module may handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances340a,340b,340nfrom the cloud-based object storage348, and storing the data retrieved from the cloud-based object storage348in local storage on the newly created cloud computing instances. Readers will appreciate that many variants of this process may be implemented. Consider an example in which all cloud computing instances340a,340b,340nwith local storage330,334,338failed. In such an example, the monitoring module may create new cloud computing instances with local storage, where high-bandwidth instance types are selected that allow for the maximum data transfer rates between the newly created high-bandwidth cloud computing instances with local storage and the cloud-based object storage348. Readers will appreciate that instance types are selected that allow for the maximum data transfer rates between the new cloud computing instances and the cloud-based object storage348such that the new high-bandwidth cloud computing instances can be rehydrated with data from the cloud-based object storage348as quickly as possible. Once the new high-bandwidth cloud computing instances are rehydrated with data from the cloud-based object storage348, less expensive lower-bandwidth cloud computing instances may be created, data may be migrated to the less expensive lower-bandwidth cloud computing instances, and the high-bandwidth cloud computing instances may be terminated. Readers will appreciate that in some embodiments, the number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system318. The number of new cloud computing instances that are created may substantially exceed the number of cloud computing instances that are needed to locally store all of the data stored by the cloud-based storage system318in order to more rapidly pull data from the cloud-based object storage348and into the new cloud computing instances, as each new cloud computing instance can (in parallel) retrieve some portion of the data stored by the cloud-based storage system318. In such embodiments, once the data stored by the cloud-based storage system318has been pulled into the newly created cloud computing instances, the data may be consolidated within a subset of the newly created cloud computing instances and those newly created cloud computing instances that are excessive may be terminated. Consider an example in which 1,000 cloud computing instances are needed in order to locally store all valid data that users of the cloud-based storage system318have written to the cloud-based storage system318. In such an example, assume that all 1,000 cloud computing instances fail. In such an example, the monitoring module may cause 100,000 cloud computing instances to be created, where each cloud computing instance is responsible for retrieving, from the cloud-based object storage348, a distinct 1/100,000th chunk of the valid data that users of the cloud-based storage system318have written to the cloud-based storage system318and locally storing the distinct chunk of the dataset that it retrieved.
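The parallel rehydration strategy just described might look roughly like the following hedged sketch, reusing the hypothetical DriveDaemon from above. The modulo key-partitioning policy is an illustrative assumption; the boto3 paginator and get_object calls are standard.

```python
def rehydrate(s3, bucket: str, daemon: "DriveDaemon",
              instance_idx: int, n_instances: int) -> None:
    # Each of the n replacement instances pulls a distinct 1/n slice of the
    # objects, so all slices can be retrieved from S3 in parallel.
    paginator = s3.get_paginator("list_objects_v2")
    index = 0
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if index % n_instances == instance_idx:
                body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
                block_id = int(obj["Key"].rsplit("/", 1)[-1], 16)
                daemon.write_block(block_id, body)
            index += 1
```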
In such an example, because each of the 100,000 cloud computing instances can retrieve data from the cloud-based object storage348in parallel, the caching layer may be restored 100 times faster as compared to an embodiment where the monitoring module only created 1,000 replacement cloud computing instances. In such an example, over time the data that is stored locally in the 100,000 cloud computing instances could be consolidated into 1,000 cloud computing instances and the remaining 99,000 cloud computing instances could be terminated. Readers will appreciate that various performance aspects of the cloud-based storage system318may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system318can be scaled-up or scaled-out as needed. Consider an example in which the monitoring module monitors the performance of the cloud-based storage system318via communications with one or more of the cloud computing instances320,322that each are used to support the execution of a storage controller application324,326, via monitoring communications between cloud computing instances320,322,340a,340b,340n, via monitoring communications between cloud computing instances320,322,340a,340b,340nand the cloud-based object storage348, or in some other way. In such an example, assume that the monitoring module determines that the cloud computing instances320,322that are used to support the execution of a storage controller application324,326are undersized and not sufficiently servicing the I/O requests that are issued by users of the cloud-based storage system318. In such an example, the monitoring module may create a new, more powerful cloud computing instance (e.g., a cloud computing instance of a type that includes more processing power, more memory, etc.) that includes the storage controller application such that the new, more powerful cloud computing instance can begin operating as the primary controller. Likewise, if the monitoring module determines that the cloud computing instances320,322that are used to support the execution of a storage controller application324,326are oversized and that cost savings could be gained by switching to a smaller, less powerful cloud computing instance, the monitoring module may create a new, less powerful (and less expensive) cloud computing instance that includes the storage controller application such that the new, less powerful cloud computing instance can begin operating as the primary controller. Consider, as an additional example of dynamically sizing the cloud-based storage system318, an example in which the monitoring module determines that the utilization of the local storage that is collectively provided by the cloud computing instances340a,340b,340nhas reached a predetermined utilization threshold (e.g., 95%). In such an example, the monitoring module may create additional cloud computing instances with local storage to expand the pool of local storage that is offered by the cloud computing instances. Alternatively, the monitoring module may create one or more new cloud computing instances that have larger amounts of local storage than the already existing cloud computing instances340a,340b,340n, such that data stored in an already existing cloud computing instance340a,340b,340ncan be migrated to the one or more new cloud computing instances and the already existing cloud computing instance340a,340b,340ncan be terminated, thereby expanding the pool of local storage that is offered by the cloud computing instances.
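As an illustration of these dynamic-sizing decisions, the following sketch shows what the core of such a policy loop might look like. The thresholds and the provisioning helpers (launch_instance, migrate_primary, add_drive_instance) are hypothetical stand-ins for cloud-provider APIs, not part of the disclosure.

```python
def scale_controller(metrics: dict, launch_instance, migrate_primary) -> None:
    # Replace the controller instance with a larger or smaller type based on
    # observed load; the thresholds here are purely illustrative.
    if metrics["io_latency_ms"] > 10 or metrics["cpu_util"] > 0.90:
        migrate_primary(to=launch_instance(instance_type="larger"))
    elif metrics["cpu_util"] < 0.20:
        migrate_primary(to=launch_instance(instance_type="smaller"))

def scale_local_storage(utilization: float, add_drive_instance) -> None:
    if utilization >= 0.95:   # the predetermined utilization threshold
        add_drive_instance()  # expand the pool of local storage
```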
Likewise, if the pool of local storage that is offered by the cloud computing instances is unnecessarily large, data can be consolidated and some cloud computing instances can be terminated. Readers will appreciate that the cloud-based storage system318may be sized up and down automatically by a monitoring module applying a predetermined set of rules that may be relatively simple or relatively complicated. In fact, the monitoring module may not only take into account the current state of the cloud-based storage system318, but the monitoring module may also apply predictive policies that are based on, for example, observed behavior (e.g., every night from 10 PM until 6 AM usage of the storage system is relatively light), predetermined fingerprints (e.g., every time a virtual desktop infrastructure adds 100 virtual desktops, the number of IOPS directed to the storage system increases by X), and so on. In such an example, the dynamic scaling of the cloud-based storage system318may be based on current performance metrics, predicted workloads, and many other factors, including combinations thereof. Readers will further appreciate that because the cloud-based storage system318may be dynamically scaled, the cloud-based storage system318may even operate in a way that is more dynamic. Consider the example of garbage collection. In a traditional storage system, the amount of storage is fixed. As such, at some point the storage system may be forced to perform garbage collection as the amount of available storage has become so constrained that the storage system is on the verge of running out of storage. In contrast, the cloud-based storage system318described here can always ‘add’ additional storage (e.g., by adding more cloud computing instances with local storage). Because the cloud-based storage system318described here can always ‘add’ additional storage, the cloud-based storage system318can make more intelligent decisions regarding when to perform garbage collection. For example, the cloud-based storage system318may implement a policy that garbage collection only be performed when the number of IOPS being serviced by the cloud-based storage system318falls below a certain level. In some embodiments, other system-level functions (e.g., deduplication, compression) may also be turned off and on in response to system load, given that the size of the cloud-based storage system318is not constrained in the same way that traditional storage systems are constrained. Readers will appreciate that embodiments of the present disclosure resolve an issue with block-storage services offered by some cloud computing environments as some cloud computing environments only allow for one cloud computing instance to connect to a block-storage volume at a single time. For example, in Amazon AWS, only a single EC2 instance may be connected to an EBS volume. Through the use of EC2 instances with local storage, embodiments of the present disclosure can offer multi-connect capabilities where multiple EC2 instances can connect to another EC2 instance with local storage (‘a drive instance’). In such embodiments, the drive instances may include software executing within the drive instance that allows the drive instance to support I/O directed to a particular volume from each connected EC2 instance. As such, some embodiments of the present disclosure may be embodied as multi-connect block storage services that may not include all of the components depicted inFIG.3C.
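Returning to the load-aware background-work policy described above, a minimal sketch follows; the IOPS threshold and the run_gc and set_inline_dedup hooks are illustrative assumptions.

```python
GC_IOPS_THRESHOLD = 5_000  # illustrative level below which GC is allowed

def background_work_policy(current_iops: int, run_gc, set_inline_dedup) -> None:
    if current_iops < GC_IOPS_THRESHOLD:
        run_gc()                 # capacity can always be added, so GC can wait
        set_inline_dedup(True)   # re-enable heavier system-level functions
    else:
        set_inline_dedup(False)  # shed background load while servicing I/O
```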
In some embodiments, especially in embodiments where the cloud-based object storage348resources are embodied as Amazon S3, the cloud-based storage system318may include one or more modules (e.g., a module of computer program instructions executing on an EC2 instance) that are configured to ensure that when the local storage of a particular cloud computing instance is rehydrated with data from S3, the appropriate data is actually in S3. This issue arises largely because S3 implements an eventual consistency model where, when overwriting an existing object, reads of the object will eventually (but not necessarily immediately) become consistent and will eventually (but not necessarily immediately) return the overwritten version of the object. To address this issue, in some embodiments of the present disclosure, objects in S3 are never overwritten. Instead, a traditional ‘overwrite’ would result in the creation of a new object (that includes the updated version of the data) and the eventual deletion of the old object (that includes the previous version of the data). In some embodiments of the present disclosure, as part of an attempt to never (or almost never) overwrite an object, when data is written to S3 the resultant object may be tagged with a sequence number. In some embodiments, these sequence numbers may be persisted elsewhere (e.g., in a database) such that at any point in time, the sequence number associated with the most up-to-date version of some piece of data can be known. In such a way, a determination can be made as to whether S3 has the most recent version of some piece of data by merely reading the sequence number associated with an object and without actually reading the data from S3. The ability to make this determination may be particularly important when a cloud computing instance with local storage crashes, as it would be undesirable to rehydrate the local storage of a replacement cloud computing instance with out-of-date data. In fact, because the cloud-based storage system318does not need to access the data to verify its validity, the data can stay encrypted and access charges can be avoided. The storage systems described above may carry out intelligent data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe. For example, the storage systems described above may be configured to examine each backup to avoid restoring the storage system to an undesirable state. Consider an example in which malware infects the storage system. In such an example, the storage system may include software resources314that can scan each backup to identify backups that were captured before the malware infected the storage system and those backups that were captured after the malware infected the storage system. In such an example, the storage system may restore itself from a backup that does not include the malware—or at least not restore the portions of a backup that contained the malware.
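Returning to the never-overwrite, sequence-number scheme described above, one hedged realization is sketched below. The db.next_sequence, db.record, and db.latest_sequence calls are hypothetical database helpers, while put_object and head_object are standard boto3 calls; head_object checks that the newest version exists without downloading (or decrypting) the data.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def write_versioned(db, bucket: str, name: str, data: bytes) -> int:
    # Never overwrite: each write creates a brand-new object whose key
    # embeds a monotonically increasing sequence number.
    seq = db.next_sequence(name)
    s3.put_object(Bucket=bucket, Key=f"{name}.{seq}", Body=data)
    db.record(name, seq)  # persist the sequence number outside of S3
    return seq

def s3_is_current(db, bucket: str, name: str) -> bool:
    # Freshness check that never reads (or decrypts) the data itself.
    seq = db.latest_sequence(name)
    try:
        s3.head_object(Bucket=bucket, Key=f"{name}.{seq}")
        return True
    except ClientError:
        return False
```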
In the malware example described above, the storage system may include software resources314that can scan each backup to identify the presence of malware (or a virus, or some other undesirable element), for example, by identifying write operations that were serviced by the storage system and originated from a network subnet that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and originated from a user that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and examining the content of the write operation against fingerprints of the malware, and in many other ways. Readers will further appreciate that the backups (often in the form of one or more snapshots) may also be utilized to perform rapid recovery of the storage system. Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources314within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system. In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time. Readers will appreciate that the various components depicted inFIG.3Bmay be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system306while also reducing various costs associated with the establishment and operation of the storage system306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways. Readers will appreciate that the storage system306depicted inFIG.3Bmay be useful for supporting various types of software applications. For example, the storage system306may be useful in supporting artificial intelligence (‘AI’) applications, database applications, DevOps projects, electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems (‘PACS’) applications, software development applications, virtual reality applications, augmented reality applications, and many other types of applications by providing storage resources to such applications. The storage systems described above may operate to support a wide variety of applications.
In view of the fact that the storage systems include compute resources, storage resources, and a wide variety of other resources, the storage systems may be well suited to support applications that are resource intensive such as, for example, AI applications. Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. Examples of such AI applications can include IBM Watson, Microsoft Oxford, Google DeepMind, Baidu Minwa, and others. The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation. Reinforcement learning may be employed to find the best possible behavior or path that a particular software application or machine should take in a specific situation. Reinforcement learning differs from other areas of machine learning (e.g., supervised learning, unsupervised learning) in that correct input/output pairs need not be presented for reinforcement learning and sub-optimal actions need not be explicitly corrected. In addition to the resources already described, the storage systems described above may also include graphics processing units (‘GPUs’), occasionally referred to as visual processing units (‘VPUs’). Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Such GPUs may be included within any of the computing devices that are part of the storage systems described above, including as one of many individually scalable components of a storage system, where other examples of individually scalable components of such storage system can include storage components, memory components, compute components (e.g., CPUs, FPGAs, ASICs), networking components, software components, and others. In addition to GPUs, the storage systems described above may also include neural network processors (‘NNPs’) for use in various aspects of neural network processing. Such NNPs may be used in place of (or in addition to) GPUs and may also be independently scalable. As described above, the storage systems described herein may be configured to support artificial intelligence applications, machine learning applications, big data analytics applications, and many other types of applications. The rapid growth in these sorts of applications is being driven by three technologies: deep learning (DL), GPU processors, and Big Data. Deep learning is a computing model that makes use of massively parallel neural networks inspired by the human brain. Instead of experts handcrafting software, a deep learning model writes its own software by learning from lots of examples. A GPU is a modern processor with thousands of cores, well-suited to run algorithms that loosely represent the parallel nature of the human brain. Advances in deep neural networks have ignited a new wave of algorithms and tools for data scientists to tap into their data with artificial intelligence (AI).
With improved algorithms, larger data sets, and various frameworks (including open-source software libraries for machine learning across a range of tasks), data scientists are tackling new use cases like autonomous driving vehicles, natural language processing and understanding, computer vision, machine reasoning, strong AI, and many others. Applications of such techniques may include: machine and vehicular object detection, identification and avoidance; visual recognition, classification and tagging; algorithmic financial trading strategy performance management; simultaneous localization and mapping; predictive maintenance of high-value machinery; prevention against cyber security threats; expertise automation; image recognition and classification; question answering; robotics; text analytics (extraction, classification) and text generation and translation; and many others. Applications of AI techniques have materialized in a wide array of products including, for example, Amazon Echo's speech recognition technology that allows users to talk to their machines, Google Translate™, which allows for machine-based language translation, Spotify's Discover Weekly that provides recommendations on new songs and artists that a user may like based on the user's usage and traffic analysis, Quill's text generation offering that takes structured data and turns it into narrative stories, chatbots that provide real-time, contextually specific answers to questions in a dialog format, and many others. Furthermore, AI may impact a wide variety of industries and sectors. For example, AI solutions may be used in healthcare to take clinical notes, patient files, research data, and other inputs to generate potential treatment options for doctors to explore. Likewise, AI solutions may be used by retailers to personalize consumer recommendations based on a person's digital footprint of behaviors, profile data, or other data. Training deep neural networks, however, requires both high quality input data and large amounts of computation. GPUs are massively parallel processors capable of operating on large amounts of data simultaneously. When GPUs are combined into a multi-GPU cluster, a high throughput pipeline may be required to feed input data from storage to the compute engines. Deep learning is more than just constructing and training models. There also exists an entire data pipeline that must be designed for the scale, iteration, and experimentation necessary for a data science team to succeed. Data is the heart of modern AI and deep learning algorithms. Before training can begin, one problem that must be addressed revolves around collecting the labeled data that is crucial for training an accurate AI model. A full scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high quality data points directly translates to more accurate models and better insights.
Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data into a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating, including using a holdout portion of the data that was not used in training in order to evaluate model accuracy on the holdout data. This lifecycle may apply for any type of parallelized machine learning, not just neural networks or deep learning. For example, standard machine learning frameworks may rely on CPUs instead of GPUs but the data ingest and training workflows may be the same. Readers will appreciate that a single shared storage data hub creates a coordination point throughout the lifecycle without the need for extra data copies among the ingest, preprocessing, and training stages. Rarely is the ingested data used for only one purpose, and shared storage gives the flexibility to train multiple different models or apply traditional analytics to the data. Readers will appreciate that each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems). Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns—from small, metadata-heavy to large files, from random to sequential access patterns, and from low to high concurrency. The storage systems described above may serve as an ideal AI data hub as the systems may service unstructured workloads. In the first stage, data is ideally ingested and stored onto the same data hub that following stages will use, in order to avoid excess data copying. The next two steps can be done on a standard compute server that optionally includes a GPU, and then in the fourth and last stage, full training production jobs are run on powerful GPU-accelerated servers. Often, there is a production pipeline alongside an experimental pipeline operating on the same dataset. Further, the GPU-accelerated servers can be used independently for different models or joined together to train on one larger model, even spanning multiple systems for distributed training. If the shared storage tier is slow, then data must be copied to local storage for each phase, resulting in wasted time staging data onto different servers. The ideal data hub for the AI training pipeline delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently. A data scientist works to improve the usefulness of the trained model through a wide variety of approaches: more data, better data, smarter training, and deeper models. In many cases, there will be teams of data scientists sharing the same datasets and working in parallel to produce new and improved training models. Often, such a team works within these phases concurrently on the same shared datasets.
Multiple, concurrent workloads of data processing, experimentation, and full-scale training layer the demands of multiple access patterns on the storage tier. In other words, storage cannot just satisfy large file reads, but must contend with a mix of large and small file reads and writes. Finally, with multiple data scientists exploring datasets and models, it may be critical to store data in its native format to provide flexibility for each user to transform, clean, and use the data in a unique way. The storage systems described above may provide a natural shared storage home for the dataset, with data protection redundancy (e.g., by using RAID6) and the performance necessary to be a common access point for multiple developers and multiple experiments. Using the storage systems described above may avoid the need to carefully copy subsets of the data for local work, saving both engineering time and GPU-accelerated server use time. These copies become a constant and growing tax as the raw data set and desired transformations constantly update and change. Readers will appreciate that a fundamental reason why deep learning has seen a surge in success is the continued improvement of models with larger data set sizes. In contrast, classical machine learning algorithms, like logistic regression, stop improving in accuracy at smaller data set sizes. As such, the separation of compute resources and storage resources may also allow independent scaling of each tier, avoiding many of the complexities inherent in managing both together. As the data set size grows or new data sets are considered, a scale out storage system must be able to expand easily. Similarly, if more concurrent training is required, additional GPUs or other compute resources can be added without concern for their internal storage. Furthermore, the storage systems described above may make building, operating, and growing an AI system easier due to the random read bandwidth provided by the storage systems, the ability of the storage systems to randomly read small files (50 KB) at high rates (meaning that no extra effort is required to aggregate individual data points to make larger, storage-friendly files), the ability of the storage systems to scale capacity and performance as either the dataset grows or the throughput requirements grow, the ability of the storage systems to support files or objects, the ability of the storage systems to tune performance for large or small files (i.e., no need for the user to provision filesystems), the ability of the storage systems to support non-disruptive upgrades of hardware and software even during production model training, and for many other reasons. Small file performance of the storage tier may be critical as many types of inputs, including text, audio, or images, will be natively stored as small files. If the storage tier does not handle small files well, an extra step will be required to pre-process and group samples into larger files. Storage that is built on top of spinning disks and that relies on SSD as a caching tier may fall short of the performance needed. Because training with random input batches results in more accurate models, the entire data set must be accessible with full performance. SSD caches only provide high performance for a small subset of the data and will be ineffective at hiding the latency of spinning drives.
Although the preceding paragraphs discuss deep learning applications, readers will appreciate that the storage systems described herein may also be part of a distributed deep learning (‘DDL’) platform to support the execution of DDL algorithms. Distributed deep learning may be used to significantly accelerate deep learning with distributed computing on GPUs (or another form of accelerator or computer program instruction executor), such that parallelism can be achieved. In addition, the output of training machine learning and deep learning models, such as a fully trained machine learning model, may be used for a variety of purposes and in conjunction with other tools. For example, trained machine learning models may be used in conjunction with tools like Core ML to integrate a broad variety of machine learning model types into an application. In fact, trained models may be run through Core ML converter tools and inserted into a custom application that can be deployed on compatible devices. The storage systems described above may also be paired with other technologies such as TensorFlow, an open-source software library for dataflow programming across a range of tasks that may be used for machine learning applications such as neural networks, to facilitate the development of such machine learning models, applications, and so on. Readers will further appreciate that the systems described above may be deployed in a variety of ways to support the democratization of AI, as AI becomes more available for mass consumption. The democratization of AI may include, for example, the ability to offer AI as a Platform-as-a-Service, the growth of artificial general intelligence offerings, the proliferation of Autonomous level 4 and Autonomous level 5 vehicles, the availability of autonomous mobile robots, the development of conversational AI platforms, and many others. For example, the systems described above may be deployed in cloud environments, edge environments, or other environments that are useful in supporting the democratization of AI. As part of the democratization of AI, a movement may occur from narrow AI that consists of highly scoped machine learning solutions that target a particular task to artificial general intelligence where the use of machine learning is expanded to handle a broad range of use cases that could essentially perform any intelligent task that a human could perform and could learn dynamically, much like a human. The storage systems described above may also be used in a neuromorphic computing environment. Neuromorphic computing is a form of computing that mimics brain cells. To support neuromorphic computing, an architecture of interconnected “neurons” replaces traditional computing models with low-powered signals that go directly between neurons for more efficient computation. Neuromorphic computing may make use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system, as well as analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems for perception, motor control, or multisensory integration. Readers will appreciate that the storage systems described above may be configured to support the storage or use of (among other types of data) blockchains. Such blockchains may be embodied as a continuously growing list of records, called blocks, which are linked and secured using cryptography.
Each block in a blockchain may contain a hash pointer as a link to a previous block, a timestamp, transaction data, and so on. Blockchains may be designed to be resistant to modification of the data and can serve as an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. This makes blockchains potentially suitable for the recording of events, medical records, and other records management activities, such as identity management, transaction processing, and others. In addition to supporting the storage and use of blockchain technologies, the storage systems described above may also support the storage and use of derivative items such as, for example, open source blockchains and related tools that are part of the IBM™ Hyperledger project, permissioned blockchains in which a certain number of trusted parties are allowed to access the blockchain, blockchain products that enable developers to build their own distributed ledger projects, and others. Readers will appreciate that blockchain technologies may impact a wide variety of industries and sectors. For example, blockchain technologies may be used in real estate transactions as blockchain-based contracts whose use can eliminate the need for third parties and enable self-executing actions when conditions are met. Likewise, universal health records can be created by aggregating and placing a person's health history onto a blockchain ledger for any healthcare provider, or permissioned health care providers, to access and update. Readers will appreciate that the usage of blockchains is not limited to financial transactions, contracts, and the like. In fact, blockchains may be leveraged to enable the decentralized aggregation, ordering, timestamping and archiving of any type of information, including structured data, correspondence, documentation, or other data. Through the usage of blockchains, participants can provably and permanently agree on exactly what data was entered, when and by whom, without relying on a trusted intermediary. For example, SAP's recently launched blockchain platform, which supports MultiChain and Hyperledger Fabric, targets a broad range of supply chain and other non-financial applications. One way to use a blockchain for recording data is to embed each piece of data directly inside a transaction. Every blockchain transaction may be digitally signed by one or more parties, replicated to a plurality of nodes, ordered and timestamped by the chain's consensus algorithm, and stored permanently in a tamper-proof way. Any data within the transaction will therefore be stored identically but independently by every node, along with a proof of who wrote it and when. The chain's users are able to retrieve this information at any future time. This type of storage may be referred to as on-chain storage. On-chain storage may not be particularly practical, however, when attempting to store a very large dataset. As such, in accordance with embodiments of the present disclosure, blockchains and the storage systems described herein may be leveraged to support on-chain storage of data as well as off-chain storage of data. Off-chain storage of data can be implemented in a variety of ways and can occur when the data itself is not stored within the blockchain. For example, in one embodiment, a hash function may be utilized and the data itself may be fed into the hash function to generate a hash value.
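A minimal sketch of that hash-commitment approach follows; the storage and chain interfaces are hypothetical, and SHA-256 is merely one plausible choice of hash function.

```python
import hashlib

def store_off_chain(storage, chain, key: str, data: bytes) -> str:
    # The hash is a commitment to the data; only it goes on-chain.
    digest = hashlib.sha256(data).hexdigest()
    storage.put(key, data)  # the data itself stays off-chain
    chain.submit_transaction({"key": key, "sha256": digest})
    return digest

def verify_off_chain(storage, key: str, committed_digest: str) -> bool:
    # Anyone holding the data can confirm it matches the on-chain commitment.
    return hashlib.sha256(storage.get(key)).hexdigest() == committed_digest
```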
In this off-chain approach, the hashes of large pieces of data may be embedded within transactions, instead of the data itself. Each hash may serve as a commitment to its input data, with the data itself being stored outside of the blockchain. Readers will appreciate that any blockchain participant that needs an off-chain piece of data cannot reproduce the data from its hash, but if the data can be retrieved in some other way, then the on-chain hash serves to confirm who created it and when. Just like regular on-chain data, the hash may be embedded inside a digitally signed transaction, which was included in the chain by consensus. Readers will appreciate that, in other embodiments, alternatives to blockchains may be used to facilitate the decentralized storage of information. For example, one alternative to a blockchain that may be used is a blockweave. While conventional blockchains store every transaction to achieve validation, a blockweave permits secure decentralization without the usage of the entire chain, thereby enabling low cost on-chain storage of data. Such blockweaves may utilize a consensus mechanism that is based on proof of access (PoA) and proof of work (PoW). While typical PoW systems only depend on the previous block in order to generate each successive block, the PoA algorithm may incorporate data from a randomly chosen previous block. Combined with the blockweave data structure, miners do not need to store all blocks (forming a blockchain), but rather can store any previous blocks forming a weave of blocks (a blockweave). This enables increased scalability and speed and reduces the cost of data storage, in part because miners need not store all blocks. It also results in a substantial reduction in the amount of electricity that is consumed during the mining process because, as the network expands, a blockweave demands less and less hashing power for consensus as data is added to the system. Furthermore, blockweaves may be deployed on a decentralized storage network in which incentives are created to encourage rapid data sharing. Such decentralized storage networks may also make use of blockshadowing techniques, where nodes only send a minimal block “shadow” to other nodes that allows peers to reconstruct a full block, instead of transmitting the full block itself. The storage systems described above may, either alone or in combination with other computing devices, be used to support in-memory computing applications. In-memory computing involves the storage of information in RAM that is distributed across a cluster of computers. In-memory computing helps business customers, including retailers, banks and utilities, to quickly detect patterns, analyze massive data volumes on the fly, and perform their operations quickly. Readers will appreciate that the storage systems described above, especially those that are configurable with customizable amounts of processing resources, storage resources, and memory resources (e.g., those systems in which blades contain configurable amounts of each type of resource), may be configured in a way so as to provide an infrastructure that can support in-memory computing.
Likewise, the storage systems described above may include component parts (e.g., NVDIMMs, 3D crosspoint storage that provides fast random access memory that is persistent) that can actually provide for an improved in-memory computing environment as compared to in-memory computing environments that rely on RAM distributed across dedicated servers. In some embodiments, the storage systems described above may be configured to operate as a hybrid in-memory computing environment that includes a universal interface to all storage media (e.g., RAM, flash storage, 3D crosspoint storage). In such embodiments, users may have no knowledge regarding the details of where their data is stored but they can still use the same full, unified API to address data. In such embodiments, the storage system may (in the background) move data to the fastest layer available—including intelligently placing the data in dependence upon various characteristics of the data or in dependence upon some other heuristic. In such an example, the storage systems may even make use of existing products such as Apache Ignite and GridGain to move data between the various storage layers, or the storage systems may make use of custom software to move data between the various storage layers. The storage systems described herein may implement various optimizations to improve the performance of in-memory computing such as, for example, having computations occur as close to the data as possible. Readers will further appreciate that in some embodiments, the storage systems described above may be paired with other resources to support the applications described above. For example, one infrastructure could include primary compute in the form of servers and workstations which specialize in using General-purpose computing on graphics processing units (‘GPGPU’) to accelerate deep learning applications that are interconnected into a computation engine to train parameters for deep neural networks. Each system may have Ethernet external connectivity, InfiniBand external connectivity, some other form of external connectivity, or some combination thereof. In such an example, the GPUs can be grouped for a single large training or used independently to train multiple models. The infrastructure could also include a storage system such as those described above to provide, for example, a scale-out all-flash file or object store through which data can be accessed via high-performance protocols such as NFS, S3, and so on. The infrastructure can also include, for example, redundant top-of-rack Ethernet switches connected to storage and compute via ports in MLAG port channels for redundancy. The infrastructure could also include additional compute in the form of whitebox servers, optionally with GPUs, for data ingestion, pre-processing, and model debugging. Readers will appreciate that additional infrastructures are also possible. Readers will appreciate that the systems described above may be better suited for the applications described above relative to other systems that may include, for example, a distributed direct-attached storage (DDAS) solution deployed in server nodes. Such DDAS solutions may be built for handling large, more sequential accesses but may be less able to handle small, random accesses.
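Returning to the hybrid in-memory computing environment described above, the following sketch gives a rough picture of a unified put/get interface that places data on the fastest tier whose access-frequency threshold is met. The tier objects and thresholds are assumptions, and a real system would also migrate data between layers in the background.

```python
class TieredStore:
    """Illustrative unified put/get over RAM, persistent memory, and flash."""

    def __init__(self, ram, pmem, flash, hot: int = 100, warm: int = 10):
        # Fastest tier first, each guarded by an access-count threshold.
        self.tiers = [(hot, ram), (warm, pmem), (0, flash)]
        self.access_counts = {}

    def put(self, key, value) -> None:
        count = self.access_counts.get(key, 0)
        for threshold, tier in self.tiers:
            if count >= threshold:
                tier.put(key, value)  # fastest layer whose threshold is met
                return

    def get(self, key):
        self.access_counts[key] = self.access_counts.get(key, 0) + 1
        for _, tier in self.tiers:
            value = tier.get(key)
            if value is not None:
                return value
        return None
```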
Readers will further appreciate that the storage systems described above may be utilized to provide a platform for the applications described above that is preferable to the utilization of cloud-based resources as the storage systems may be included in an on-site or in-house infrastructure that is more secure, more locally and internally managed, more robust in feature sets and performance, or otherwise preferable to the utilization of cloud-based resources as part of a platform to support the applications described above. For example, services built on platforms such as IBM's Watson may require a business enterprise to distribute individual user information, such as financial transaction information or identifiable patient records, to other institutions. As such, cloud-based offerings of AI as a service may be less desirable than internally managed and offered AI as a service that is supported by storage systems such as the storage systems described above, for a wide array of technical reasons as well as for various business reasons. Readers will appreciate that the storage systems described above, either alone or in coordination with other computing machinery may be configured to support other AI related tools. For example, the storage systems may make use of tools like ONNX or other open neural network exchange formats that make it easier to transfer models written in different AI frameworks. Likewise, the storage systems may be configured to support tools like Amazon's Gluon that allow developers to prototype, build, and train deep learning models. In fact, the storage systems described above may be part of a larger platform, such as IBM™ Cloud Private for Data, that includes integrated data science, data engineering and application building services. Such platforms may seamlessly collect, organize, secure, and analyze data across an enterprise, as well as simplify hybrid data management, unified data governance and integration, data science and business analytics with a single solution. Readers will further appreciate that the storage systems described above may also be deployed as an edge solution. Such an edge solution may be in place to optimize cloud computing systems by performing data processing at the edge of the network, near the source of the data. Edge computing can push applications, data and computing power (i.e., services) away from centralized points to the logical extremes of a network. Through the use of edge solutions such as the storage systems described above, computational tasks may be performed using the compute resources provided by such storage systems, data may be stored using the storage resources of the storage system, and cloud-based services may be accessed through the use of various resources of the storage system (including networking resources). By performing computational tasks on the edge solution, storing data on the edge solution, and generally making use of the edge solution, the consumption of expensive cloud-based resources may be avoided and, in fact, performance improvements may be experienced relative to a heavier reliance on cloud-based resources. While many tasks may benefit from the utilization of an edge solution, some particular uses may be especially suited for deployment in such an environment. For example, devices like drones, autonomous cars, robots, and others may require extremely rapid processing—so fast, in fact, that sending data up to a cloud environment and back to receive data processing support may simply be too slow.
Likewise, machines like locomotives and gas turbines that generate large amounts of information through the use of a wide array of data-generating sensors may benefit from the rapid data processing capabilities of an edge solution. As an additional example, some IoT devices such as connected video cameras may not be well-suited for the utilization of cloud-based resources as it may be impractical (whether from a privacy perspective, a security perspective, or a financial perspective) to send the data to the cloud simply because of the pure volume of data that is involved. As such, many tasks that rely on data processing, storage, or communications may be better suited by platforms that include edge solutions such as the storage systems described above. Consider a specific example of inventory management in a warehouse, distribution center, or similar location. A large inventory, warehousing, shipping, order-fulfillment, manufacturing or other operation has a large amount of inventory on inventory shelves, and high resolution digital cameras that produce a firehose of large data. All of this data may be taken into an image processing system, which may reduce the amount of data to a firehose of small data. All of the small data may be stored on-premises in storage. The on-premises storage, at the edge of the facility, may be coupled to the cloud, for external reports, real-time control and cloud storage. Inventory management may be performed with the results of the image processing, so that inventory can be tracked on the shelves and restocked, moved, shipped, modified with new products, or discontinued/obsolescent products deleted, etc. The above scenario is a prime candidate for an embodiment of the configurable processing and storage systems described above. A combination of compute-only blades and offload blades suited for the image processing, perhaps with deep learning on offload-FPGA or offload-custom blade(s), could take in the firehose of large data from all of the digital cameras, and produce the firehose of small data. All of the small data could then be stored by storage nodes, operating with storage units in whichever combination of types of storage blades best handles the data flow. This is an example of storage and function acceleration and integration. Depending on external communication needs with the cloud, and external processing in the cloud, and depending on reliability of network connections and cloud resources, the system could be sized for storage and compute management with bursty workloads and variable connectivity reliability. Also, depending on other inventory management aspects, the system could be configured for scheduling and resource management in a hybrid edge/cloud environment. The storage systems described above may alone, or in combination with other computing resources, serve as a network edge platform that combines compute resources, storage resources, networking resources, cloud technologies and network virtualization technologies, and so on. As part of the network, the edge may take on characteristics similar to other network facilities, from the customer premises and backhaul aggregation facilities to Points of Presence (PoPs) and regional data centers. Readers will appreciate that network workloads, such as Virtual Network Functions (VNFs) and others, will reside on the network edge platform.
Enabled by a combination of containers and virtual machines, the network edge platform may rely on controllers and schedulers that are no longer geographically co-located with the data processing resources. The functions, as microservices, may split into control planes, user and data planes, or even state machines, allowing for independent optimization and scaling techniques to be applied. Such user and data planes may be enabled through increased accelerators, both those residing in server platforms, such as FPGAs and Smart NICs, and through SDN-enabled merchant silicon and programmable ASICs. The storage systems described above may also be optimized for use in big data analytics. Big data analytics may be generally described as the process of examining large and varied data sets to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful information that can help organizations make more-informed business decisions. Big data analytics applications enable data scientists, predictive modelers, statisticians and other analytics professionals to analyze growing volumes of structured transaction data, plus other forms of data that are often left untapped by conventional business intelligence (BI) and analytics programs. As part of that process, semi-structured and unstructured data such as, for example, internet clickstream data, web server logs, social media content, text from customer emails and survey responses, mobile-phone call-detail records, IoT sensor data, and other data may be converted to a structured form. Big data analytics is a form of advanced analytics, which involves complex applications with elements such as predictive models, statistical algorithms and what-if analyses powered by high-performance analytics systems. The storage systems described above may also support (including implementing as a system interface) applications that perform tasks in response to human speech. For example, the storage systems may support the execution of intelligent personal assistant applications such as, for example, Amazon's Alexa, Apple Siri, Google Voice, Samsung Bixby, Microsoft Cortana, and others. While the examples described in the previous sentence make use of voice as input, the storage systems described above may also support chatbots, talkbots, chatterbots, or artificial conversational entities or other applications that are configured to conduct a conversation via auditory or textual methods. Likewise, the storage system may actually execute such an application to enable a user such as a system administrator to interact with the storage system via speech. Such applications are generally capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real-time information, such as news, although in embodiments in accordance with the present disclosure, such applications may be utilized as interfaces to various system management operations. The storage systems described above may also implement AI platforms for delivering on the vision of self-driving storage. Such AI platforms may be configured to deliver global predictive intelligence by collecting and analyzing large amounts of storage system telemetry data points to enable effortless management, analytics and support.
In fact, such storage systems may be capable of predicting both capacity and performance, as well as generating intelligent advice on workload deployment, interaction and optimization. Such AI platforms may be configured to scan all incoming storage system telemetry data against a library of issue fingerprints to predict and resolve incidents in real-time, before they impact customer environments, and capture hundreds of variables related to performance that are used to forecast performance load. The storage systems described above may support the serialized or simultaneous execution of artificial intelligence applications, machine learning applications, data analytics applications, data transformations, and other tasks that collectively may form an AI ladder. Such an AI ladder may effectively be formed by combining such elements to form a complete data science pipeline, where dependencies exist between elements of the AI ladder. For example, AI may require that some form of machine learning has taken place, machine learning may require that some form of analytics has taken place, analytics may require that some form of data and information architecting has taken place, and so on. As such, each element may be viewed as a rung in an AI ladder that collectively can form a complete and sophisticated AI solution. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver an AI everywhere experience where AI permeates wide and expansive aspects of business and life. For example, AI may play an important role in the delivery of deep learning solutions, deep reinforcement learning solutions, artificial general intelligence solutions, autonomous vehicles, cognitive computing solutions, commercial UAVs or drones, conversational user interfaces, enterprise taxonomies, ontology management solutions, machine learning solutions, smart dust, smart robots, smart workplaces, and many others. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver a wide range of transparently immersive experiences where technology can introduce transparency between people, businesses, and things. Such transparently immersive experiences may be delivered as augmented reality technologies, connected homes, virtual reality technologies, brain-computer interfaces, human augmentation technologies, nanotube electronics, volumetric displays, 4D printing technologies, or others. The storage systems described above may also, either alone or in combination with other computing environments, be used to support a wide variety of digital platforms. Such digital platforms can include, for example, 5G wireless systems and platforms, digital twin platforms, edge computing platforms, IoT platforms, quantum computing platforms, serverless PaaS, software-defined security, neuromorphic computing platforms, and so on. Readers will appreciate that some transparently immersive experiences may involve the use of digital twins of various “things” such as people, places, processes, systems, and so on. Such digital twins and other immersive technologies can alter the way that humans interact with technology, as conversational platforms, augmented reality, virtual reality and mixed reality provide a more natural and immersive interaction with the digital world.
In fact, digital twins may be linked with the real world, perhaps even in real-time, to understand the state of a thing or system, respond to changes, and so on. Because digital twins consolidate massive amounts of information on individual assets and groups of assets (even possibly providing control of those assets), digital twins may communicate with each other to form digital factory models of multiple linked digital twins. The storage systems described above may also be part of a multi-cloud environment in which multiple cloud computing and storage services are deployed in a single heterogeneous architecture. In order to facilitate the operation of such a multi-cloud environment, DevOps tools may be deployed to enable orchestration across clouds. Likewise, continuous development and continuous integration tools may be deployed to standardize processes around continuous integration and delivery, new feature rollout and provisioning cloud workloads. By standardizing these processes, a multi-cloud strategy may be implemented that enables the utilization of the best provider for each workload. Furthermore, application monitoring and visibility tools may be deployed to move application workloads around different clouds, identify performance issues, and perform other tasks. In addition, security and compliance tools may be deployed to ensure compliance with security requirements, government regulations, and so on. Such a multi-cloud environment may also include tools for application delivery and smart workload management to ensure efficient application delivery and help direct workloads across the distributed and heterogeneous infrastructure, as well as tools that ease the deployment and maintenance of packaged and custom applications in the cloud and enable portability amongst clouds. The multi-cloud environment may similarly include tools for data portability. The storage systems described above may be used as a part of a platform to enable the use of crypto-anchors that may be used to authenticate a product's origins and contents to ensure that it matches a blockchain record associated with the product. Such crypto-anchors may take many forms including, for example, as edible ink, as a mobile sensor, as a microchip, and others. Similarly, as part of a suite of tools to secure data stored on the storage system, the storage systems described above may implement various encryption technologies and schemes, including lattice cryptography. Lattice cryptography can involve constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Unlike public-key schemes such as the RSA, Diffie-Hellman or Elliptic-Curve cryptosystems, which are easily attacked by a quantum computer, some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. A quantum computer is a device that performs quantum computing. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement. Quantum computers differ from traditional computers that are based on transistors, as such traditional computers require that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). In contrast to traditional computers, quantum computers use quantum bits, which can be in superpositions of states.
A quantum computer maintains a sequence of qubits, where a single qubit can represent a one, a zero, or any quantum superposition of those two qubit states. A pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. A quantum computer with n qubits can generally be in an arbitrary superposition of up to 2^n different states simultaneously, whereas a traditional computer can only be in one of these states at any one time. A quantum Turing machine is a theoretical model of such a computer. The storage systems described above may also be paired with FPGA-accelerated servers as part of a larger AI or ML infrastructure. Such FPGA-accelerated servers may reside near the storage systems described above (e.g., in the same data center) or even be incorporated into an appliance that includes one or more storage systems, one or more FPGA-accelerated servers, networking infrastructure that supports communications between the one or more storage systems and the one or more FPGA-accelerated servers, as well as other hardware and software components. Alternatively, FPGA-accelerated servers may reside within a cloud computing environment that may be used to perform compute-related tasks for AI and ML jobs. Any of the embodiments described above may be used to collectively serve as an FPGA-based AI or ML platform. Readers will appreciate that, in some embodiments of the FPGA-based AI or ML platform, the FPGAs that are contained within the FPGA-accelerated servers may be reconfigured for different types of ML models (e.g., LSTMs, CNNs, GRUs). The ability to reconfigure the FPGAs that are contained within the FPGA-accelerated servers may enable the acceleration of a ML or AI application based on the most optimal numerical precision and memory model being used. Readers will appreciate that by treating the collection of FPGA-accelerated servers as a pool of FPGAs, any CPU in the data center may utilize the pool of FPGAs as a shared hardware microservice, rather than limiting a server to dedicated accelerators plugged into it. The FPGA-accelerated servers and the GPU-accelerated servers described above may implement a model of computing where, rather than keeping a small amount of data in a CPU and running a long stream of instructions over it as occurred in more traditional computing models, the machine learning model and parameters are pinned into the high-bandwidth on-chip memory with lots of data streaming through the high-bandwidth on-chip memory. FPGAs may even be more efficient than GPUs for this computing model, as the FPGAs can be programmed with only the instructions needed to run this kind of computing model. The storage systems described above may be configured to provide parallel storage, for example, through the use of a parallel file system such as BeeGFS. Such parallel file systems may include a distributed metadata architecture. For example, the parallel file system may include a plurality of metadata servers across which metadata is distributed, as well as components that include services for clients and storage servers. Through the use of a parallel file system, file contents may be distributed over a plurality of storage servers using striping and metadata may be distributed over a plurality of metadata servers on a directory level, with each server storing a part of the complete file system tree.
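For purposes of illustration only, the striping arrangement just described might be sketched as follows. The sketch is expressed in Python and is not part of any parallel file system's actual API; the stripe size, server names, and function name are assumptions chosen for illustration.

    STRIPE_SIZE = 512 * 1024          # assumed stripe size: 512 KiB
    STORAGE_SERVERS = ["storage0", "storage1", "storage2", "storage3"]  # illustrative names

    def stripe_location(offset: int) -> tuple[str, int]:
        """Map a byte offset within a file to (storage server, stripe index)."""
        stripe_index = offset // STRIPE_SIZE
        server = STORAGE_SERVERS[stripe_index % len(STORAGE_SERVERS)]
        return server, stripe_index

    # A 2 MiB file spans four stripes, placed round-robin across the servers.
    for offset in range(0, 2 * 1024 * 1024, STRIPE_SIZE):
        print(offset, stripe_location(offset))

Under this assumed round-robin placement, consecutive stripes of a file land on different storage servers, which is what allows reads and writes of a single large file to proceed in parallel across servers.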
Readers will appreciate that in some embodiments, the storage servers and metadata servers may run in userspace on top of an existing local file system. Furthermore, dedicated hardware is not required for the client services, the metadata servers, or the storage servers, as metadata servers, storage servers, and even the client services may be run on the same machines. Readers will appreciate that, in part due to the emergence of many of the technologies discussed above including mobile devices, cloud services, social networks, big data analytics, and so on, an information technology platform may be needed to integrate all of these technologies and drive new business opportunities by quickly delivering revenue-generating products, services, and experiences—rather than merely providing the technology to automate internal business processes. Information technology organizations may need to balance resources and investments needed to keep core legacy systems up and running while also integrating technologies to build an information technology platform that can provide the speed and flexibility in areas such as, for example, exploiting big data, managing unstructured data, and working with cloud applications and services. One possible embodiment of such an information technology platform is a composable infrastructure that includes fluid resource pools, such as many of the systems described above, that can meet the changing needs of applications by allowing for the composition and recomposition of blocks of disaggregated compute, storage, and fabric infrastructure. Such a composable infrastructure can also include a single management interface to eliminate complexity and a unified API to discover, search, inventory, configure, provision, update, and diagnose the composable infrastructure. The systems described above can support the execution of a wide array of software applications. Such software applications can be deployed in a variety of ways, including container-based deployment models. Containerized applications may be managed using a variety of tools. For example, containerized applications may be managed using Docker Swarm, a clustering and scheduling tool for Docker containers that enables IT administrators and developers to establish and manage a cluster of Docker nodes as a single virtual system. Likewise, containerized applications may be managed through the use of Kubernetes, a container-orchestration system for automating deployment, scaling and management of containerized applications. Kubernetes may execute on top of operating systems such as, for example, Red Hat Enterprise Linux, Ubuntu Server, SUSE Linux Enterprise Servers, and others. In such examples, a master node may assign tasks to worker/minion nodes. Kubernetes can include a set of components (e.g., kubelet, kube-proxy, cAdvisor) that manage individual nodes as well as a set of components (e.g., etcd, API server, Scheduler, Control Manager) that form a control plane. Various controllers (e.g., Replication Controller, DaemonSet Controller) can drive the state of a Kubernetes cluster by managing a set of pods, each of which includes one or more containers that are deployed on a single node. Containerized applications may be used to facilitate a serverless, cloud native computing deployment and management model for software applications.
In support of a serverless, cloud native computing deployment and management model for software applications, containers may be used as part of an event handling mechanism (e.g., AWS Lambda) such that various events cause a containerized application to be spun up to operate as an event handler. The systems described above may be deployed in a variety of ways, including being deployed in ways that support fifth generation (‘5G’) networks. 5G networks may support substantially faster data communications than previous generations of mobile communications networks and, as a consequence, may lead to the disaggregation of data and computing resources as modern massive data centers may become less prominent and may be replaced, for example, by more-local, micro data centers that are close to the mobile-network towers. The systems described above may be included in such local, micro data centers and may be part of or paired to multi-access edge computing (‘MEC’) systems. Such MEC systems may enable cloud computing capabilities and an IT service environment at the edge of the cellular network. By running applications and performing related processing tasks closer to the cellular customer, network congestion may be reduced and applications may perform better. MEC technology is designed to be implemented at the cellular base stations or other edge nodes, and enables flexible and rapid deployment of new applications and services for customers. MEC may also allow cellular operators to open their radio access network (‘RAN’) to authorized third-parties, such as application developers and content providers. Furthermore, edge computing and micro data centers may substantially reduce the cost of smartphones that work with the 5G network because customers may not need devices with such intensive processing power and the expensive requisite components. Readers will appreciate that 5G networks may generate more data than previous network generations, especially in view of the fact that the high network bandwidth offered by 5G networks may cause the 5G networks to handle amounts and types of data (e.g., sensor data from self-driving cars, data generated by AR/VR technologies) that weren't as feasible for previous generation networks. In such examples, the scalability offered by the systems described above may be very valuable as the amount of data increases, adoption of emerging technologies increases, and so on. For further explanation,FIG.3Dillustrates an exemplary computing device350that may be specifically configured to perform one or more of the processes described herein. As shown inFIG.3D, computing device350may include a communication interface352, a processor354, a storage device356, and an input/output (“I/O”) module358communicatively connected one to another via a communication infrastructure360. While an exemplary computing device350is shown inFIG.3D, the components illustrated inFIG.3Dare not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device350shown inFIG.3Dwill now be described in additional detail. Communication interface352may be configured to communicate with one or more computing devices. Examples of communication interface352include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface.
Processor354generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor354may perform operations by executing computer-executable instructions362(e.g., an application, software, code, and/or other executable data instance) stored in storage device356. Storage device356may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device. For example, storage device356may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein. Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device356. For example, data representative of computer-executable instructions362configured to direct processor354to perform any of the operations described herein may be stored within storage device356. In some examples, data may be arranged in one or more databases residing within storage device356. I/O module358may include one or more I/O modules configured to receive user input and provide user output. I/O module358may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module358may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons. I/O module358may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O module358is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. In some examples, any of the systems, computing devices, and/or other components described herein may be implemented by computing device350. For further explanationFIG.4shows a flowchart of an example method for coalescing write operations in a cloud-based storage system that includes receiving (402) (e.g., by a cloud computing instance (340a) of a cloud-based storage system (318)), from a storage controller application (324), a first plurality of write operations (403a), wherein each of the first plurality of write operations comprises a respective write to a storage volume (e.g., block storage (342)) of the cloud computing instance (340a). Each of the first plurality of write operations (403a) may be received from a storage controller application (324) executed in a cloud computing instance (320) of the cloud-based storage system (318) over a Transmission Control Protocol/Internet Protocol (TCP/IP) connection, or over another network connection. Each of the first plurality of write operations (403a) may comprise a write of a block of data for writing to a block storage volume (e.g., block storage (342)). Accordingly, each of the first plurality of write operations (403a) may be embodied as a data block to be written.
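As a minimal sketch only, a received write operation carrying a single data block might be modeled as follows; the field names, block size, and helper function are assumptions for illustration, not the literal format used by the cloud-based storage system, and the division of a file into block-sized operations mirrors the example given below.

    from dataclasses import dataclass

    BLOCK_SIZE = 4 * 1024            # assumed 4 KiB block size

    @dataclass
    class WriteOperation:
        volume_offset: int           # target address on the block storage volume
        data: bytes                  # one data block to be written

    def split_into_write_ops(payload: bytes, base_offset: int) -> list[WriteOperation]:
        """Divide a payload into block-sized write operations.

        A 1024 KiB payload yields 256 operations of 4 KiB each.
        """
        return [
            WriteOperation(base_offset + i, payload[i:i + BLOCK_SIZE])
            for i in range(0, len(payload), BLOCK_SIZE)
        ]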
Each of the first plurality of write operations (403a) may further be embodied as a command or message to facilitate writing of the data block to the storage volume. The command or message may comprise, for example, an NVRAM message to write data to block storage (342). Each of the first plurality of write operations (403a) may correspond to one or more files to be written. For example, assuming a block size of 4 kilobytes, a 1024 kilobyte file may be written as 256 blocks divided amongst 256 write operations (403a). Receiving (402) the first plurality of write operations (403a) may comprise storing the first plurality of write operations (403a) into a data structure. The data structure may comprise an ordered data structure or an unordered data structure. For example, the data structure may comprise a buffer, a queue, a list, or other data structure. The method ofFIG.4further includes coalescing (404) (e.g., by a storage controller application) the first plurality of write operations into a plurality of coalesced write operations (405), wherein each of the coalesced write operations are configured to effect two or more of the first plurality of write operations (403a). Each of the coalesced write operations (405) may comprise data blocks for storage from two or more of the first plurality of write operations (403a). Each of the coalesced write operations (405) may comprise commands, messages, or other data from two or more of the first plurality of write operations (403a) to effect writing of the corresponding data blocks to the storage volume. Each of the coalesced write operations (405) may comprise a single command or message based on commands or messages from two or more of the first plurality of write operations (403a) to effect writing of the corresponding data blocks to the storage volume. Thus, a coalesced write operation (405) can be performed (e.g., sent to the storage volume) as a single command (e.g., a single input/output control (IOCTL) command) that causes two or more data blocks to be written. Coalescing (404) the first plurality of write operations may comprise coalescing the first plurality of write operations into adjacent coalesced write operations (405). For example, the received first plurality of write operations may be directed to any address, which may be non-adjacent. The first plurality of write operations may be transformed into coalesced write operations (405) directed to adjacent addresses. For example, the first plurality of write operations may be coalesced into adjacent NVRAM write operations that may be written to block storage. Coalescing (404) the first plurality of write operations may be based on an input/output operations per second (IOPS) threshold. The IOPS threshold may define a number of input or output operations of a particular size that may be performed on a storage volume (e.g., block storage (342)) in a second. For example, assume a cloud computing environment (316) assigns an IOPS threshold to the cloud-based storage system (318) based on a subscribed service tier. The IOPS threshold may define a maximum number of input/output operations that may be performed per second assuming a size of data to be written or read falls below a size limit. In other words, an IOPS threshold may comprise an IOPS limit component determined as a function of a size limit component. As an example, the IOPS threshold may set a 10,000 IOPS limit for data blocks of 16 kilobytes or less. The IOPS limit may be modified (e.g., reduced) for data blocks of greater size.
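The size-dependent threshold just described might be modeled as in the following sketch; the numbers mirror the examples used in this description and are not prescribed values, and the proportional-reduction policy for larger blocks is an assumption.

    SIZE_LIMIT = 16 * 1024           # size limit component: 16 KiB
    BASE_IOPS_LIMIT = 10_000         # IOPS limit component at or below the size limit

    def iops_limit(block_size: int) -> int:
        """Return the operations-per-second limit for a given block size."""
        if block_size <= SIZE_LIMIT:
            return BASE_IOPS_LIMIT                         # e.g., 10,000 IOPS at or below 16 KiB
        # Assumed policy: reduce the limit in proportion to the excess size.
        return BASE_IOPS_LIMIT * SIZE_LIMIT // block_size  # e.g., 5,000 IOPS at 32 KiB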
For example, the IOPS limit may be adjusted to a 5,000 IOPS limit for 32 kilobyte data blocks. Assume that the IOPS threshold may not be adjusted for data blocks below the size limit. For example, given an IOPS threshold of 10,000 IOPS for data blocks of 16 kilobytes or less, the cloud-based storage system (318) would still be limited to 10,000 IOPS when writing data blocks of 8 kilobytes, 4 kilobytes, etc. Accordingly, it would be beneficial to coalesce write operations (403a) smaller than the size limit in order to reduce the number of operations required and not exceed the IOPS threshold. Readers will appreciate that other scenarios would benefit from coalescing write operations (403a). For example, the IOPS threshold may not scale linearly with the operation size. Accordingly, coalescing (404) the first plurality of write operations (403a) into the plurality of coalesced write operations (405) based on the IOPS threshold may comprise coalescing (404) the first plurality of write operations (403a) into the plurality of coalesced write operations (405) such that the size of the plurality of coalesced write operations (405) is less than a size limit of the IOPS threshold. For example, assuming that the first plurality of write operations (403a) are 4 kilobytes each and an IOPS threshold of 10,000 IOPS for data blocks less than 16 kilobytes (e.g., the size limit), groups of four of the first plurality of write operations (403a) may be coalesced into a single coalesced write operation (405). As another example, assume that the first plurality of write operations (403a) are 12 kilobytes each. Here, coalescing the first plurality of write operations (403a) would result in coalesced write operations (405) exceeding the size limit at 24 kilobytes each. Accordingly, it may be determined to not coalesce the first plurality of write operations (403a) in this case. Coalescing (404) the first plurality of write operations may be based on a rate at which the first plurality of write operations (403a) are received. Coalescing (404) the first plurality of write operations may be performed in response to receiving the first plurality of write operations (403a) at a rate exceeding an IOPS limit component of the IOPS threshold. For example, assume that the first plurality of write operations (403a) are received at a rate of 12,000 operations per second and are 4 kilobytes each. Further assume an IOPS threshold of 10,000 IOPS for data blocks less than 16 kilobytes. Here, performing the first plurality of write operations (403a) uncoalesced would result in 12,000 IOPS, exceeding the IOPS threshold and potentially incurring penalties or costs from the provider of the cloud computing environment (316). Accordingly, the first plurality of write operations (403a) can be coalesced such that the coalesced write operations (405) can be performed at a rate less than the IOPS threshold. Here, the first plurality of write operations (403a) may be coalesced such that four write operations (403a) are coalesced into each coalesced write operation (405). Thus, the first plurality of write operations (403a) received at 12,000 per second may be performed as coalesced write operations (405) at 3,000 IOPS. Similarly, assume that the first plurality of write operations (403a) are received at a rate of 9,000 operations per second and are 4 kilobytes each. Further assume an IOPS threshold of 10,000 IOPS for data blocks less than 16 kilobytes.
Here, performing the first plurality of write operations (403a) uncoalesced would result in 9,000 IOPS and not exceed the IOPS threshold. Accordingly, it may be determined to not coalesce the first plurality of write operations (403a) in this case. The method ofFIG.4also includes performing (406) (e.g., by the cloud computing instance (340a)) the plurality of coalesced write operations (405) on the storage volume (e.g., block storage (342)). Performing (406) the plurality of coalesced write operations (405) on the storage volume may comprise sending, to the storage volume, a single input/output control command (ioctl) for each coalesced write operation (405) of the plurality of coalesced write operations (405). For example, assume 12,000 write operations (403a) have been coalesced into 3,000 coalesced write operations (405). Performing (406) the plurality of coalesced write operations (405) on the storage volume would comprise sending 3,000 ioctls to the storage volume (e.g., block storage (342)). Accordingly, performing (406) the plurality of coalesced write operations effects the writing of data from the first plurality of write operations (403a) to the storage volume using fewer IOPS than performing the first plurality of write operations (403a) uncoalesced. Although coalescing write operations in a cloud-based storage system is discussed with respect to a cloud computing instance (340a), a storage controller application (324), and block storage (342), it is understood that these components are recited for exemplary purposes. Other components of the cloud-based storage system (318) may be used interchangeably with these components. For example, write operations (403a) may be received from storage controller applications (324,326) from cloud computing instances (320,322) by cloud computing instances (340a,340b,340n). The write operations (403a) may be coalesced by cloud computing instances (340a,340b,340n) for performing on block storage (342,344,346). For further explanationFIG.5shows a flowchart of an example method for coalescing write operations in a cloud-based storage system that includes receiving (402) (e.g., by a cloud computing instance (340a) of a cloud-based storage system (318)), from a storage controller application (324), a first plurality of write operations (403a), wherein each of the first plurality of write operations comprises a respective write to a storage volume (e.g., block storage (342)) of the cloud computing instance (340a); coalescing (404) (e.g., by the cloud computing instance (340a)) the first plurality of write operations (403a) into a plurality of coalesced write operations (405), wherein each of the coalesced write operations (405) are configured to effect two or more of the first plurality of write operations (403a); and performing (406) the plurality of coalesced write operations (405) on the storage volume (e.g., block storage (342)). FIG.5differs fromFIG.4in that receiving (402) (e.g., by a cloud computing instance (340a) of a cloud-based storage system (318)), from a storage controller application (324), a first plurality of write operations (403a), wherein each of the first plurality of write operations comprises a respective write to a storage volume (e.g., block storage (342)) of the cloud computing instance (340a) comprises adding (502) the first plurality of write operations (403a) to a queue. The queue may comprise a queue of a fixed or maximum size, or of a dynamic size.
The queue may comprise, for example, a First-In-First-Out (FIFO) data structure. Accordingly, coalescing (404) two or more of the first write operations (403a) may comprise removing the two or more first write operations (403a) from the queue to generate the corresponding coalesced write operations (405). The generated coalesced write operation (405) may then be stored in the queue. The generated coalesced write operation (405) may also be stored in another queue, buffer, or other data structure for storing coalesced write operations (405). For further explanationFIG.6shows a flowchart of an example method for coalescing write operations in a cloud-based storage system that includes receiving (402) (e.g., by a cloud computing instance (340a) of a cloud-based storage system (318)), from a storage controller application (324), a first plurality of write operations (403a), wherein each of the first plurality of write operations comprises a respective write to a storage volume (e.g., block storage (342)) of the cloud computing instance (340a), wherein receiving (402) the first plurality of write operations (403a) comprises adding (502) the first plurality of write operations (403a) to a queue; coalescing (404) (e.g., by the cloud computing instance (340a)) the first plurality of write operations (403a) into a plurality of coalesced write operations (405), wherein each of the coalesced write operations (405) are configured to effect two or more of the first plurality of write operations (403a); and performing (406) the plurality of coalesced write operations (405) on the storage volume (e.g., block storage (342)). FIG.6differs fromFIG.5in that the method ofFIG.6also includes determining (602) to coalesce the first plurality of write operations (403a) based on a state of the queue. The state of the queue may comprise a usage rate of the queue. The usage rate may comprise a rate at which write operations (403a) are added to and/or removed from the queue. For example, if the rate at which write operations (403a) are removed from the queue falls below a threshold, it may be determined to coalesce the first plurality of write operations (403a) to ensure that a rate at which data is written to the storage volume meets a quality of service standard. As another example, if the rate at which write operations (403a) are added to the queue exceeds a threshold, it may be determined to coalesce the first plurality of write operations (403a) to ensure that data can be written within an IOPS restriction enforced by the cloud computing environment (316). For example, assume an IOPS threshold of 10,000 IOPS for data blocks of 16 kilobytes or less. Further assume that 4 kilobyte write operations (403a) are being added to the queue at a rate of 20,000 write operations (403a) per second. If the write operations (403a) were performed uncoalesced at the rate at which they are added to the queue, the IOPS threshold would be exceeded and additional costs may accrue. Accordingly, it may be determined to coalesce the write operations (403a) such that four write operations (403a) are coalesced per coalesced write operation (405). Thus, the coalesced write operations (405) can be performed at a rate of 5,000 IOPS for 16 kilobytes of data per operation, falling below the IOPS threshold. As a further example, it may be determined to coalesce the first plurality of write operations (403a) in response to write operations (403a) being added to the queue at a rate faster than they are being removed from the queue.
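The queue-state determination described above may be sketched as follows; this is illustrative only, the rates are assumed to be measured over a recent window, and the names are not part of the disclosed embodiments.

    def should_coalesce_from_queue_state(add_rate: float,
                                         remove_rate: float,
                                         iops_limit: int) -> bool:
        """Coalesce when arrivals outpace removals or would exceed the IOPS limit."""
        return add_rate > remove_rate or add_rate > iops_limit

    # 4 KiB operations arriving at 20,000/s against a 10,000 IOPS limit:
    # coalescing four operations into each 16 KiB coalesced operation
    # drains the queue at 5,000 IOPS, below the limit.
    assert should_coalesce_from_queue_state(20_000, 10_000, 10_000)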
For example, a limit on IOPS may be placed on the storage volume that cannot be exceeded, and the write operations (403a) may be added to the queue at a rate faster than this limit. Accordingly, the write operations (403a) may be coalesced to overcome this limit, and eliminate or reduce any divergence between the rate at which write operations (403a) are added and the rate at which write operations (403a) are removed from the queue. For further explanationFIG.7shows a flowchart of an example method for coalescing write operations in a cloud-based storage system that includes receiving (402) (e.g., by a cloud computing instance (340a) of a cloud-based storage system (318)), from a storage controller application (324), a first plurality of write operations (403a), wherein each of the first plurality of write operations comprises a respective write to a storage volume (e.g., block storage (342)) of the cloud computing instance (340a); coalescing (404) (e.g., by the cloud computing instance (340a)) the first plurality of write operations (403a) into a plurality of coalesced write operations (405), wherein each of the coalesced write operations (405) are configured to effect two or more of the first plurality of write operations (403a); and performing (406) the plurality of coalesced write operations (405) on the storage volume (e.g., block storage (342)). FIG.7differs fromFIG.4in that the method ofFIG.7includes receiving (702) (e.g., from the storage controller application (324) by the cloud computing instance (340a)) a second plurality of write operations (403b). The method ofFIG.7further includes determining (704) not to coalesce the second plurality of write operations (403b). Determining (704) not to coalesce the second plurality of write operations (403b) may be based on an IOPS threshold. The IOPS threshold may define a maximum number of input/output operations that may be performed per second assuming a size of data to be written or read falls below a size limit. In other words, an IOPS threshold may comprise an IOPS limit component determined as a function of a size limit component. As an example, the IOPS threshold may set a 10,000 IOPS limit for data blocks of 16 kilobytes or less. Determining (704) not to coalesce the second plurality of write operations (403b) may be based on a size of write operations (403b) compared to the size limit component of the IOPS threshold. For example, where a size of write operations (403b) is greater than half of the size limit component, it may be determined not to coalesce the second plurality of write operations (403b) as a size of the coalesced operations (two or more times the size of the write operations (403b)) would exceed the size limit component of the IOPS threshold. As another example, it may be determined (704) to not coalesce the write operations (403b) in response to a rate at which the write operations (403b) are received falling below the IOPS limit component of the IOPS threshold. For example, assume an IOPS threshold of 10,000 IOPS for data blocks of 16 kilobytes or less, and assume that the write operations (403b) are received at a rate of 9,000 operations per second. Here, performing the second plurality of write operations (403b) uncoalesced would not exceed the IOPS threshold. Accordingly, it may be determined to not coalesce the second plurality of write operations (403b) to save the computational resources that would be spent in coalescing.
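Combining the size check and the rate check described above, the determination of whether to coalesce might look like the following sketch; the default values mirror the examples in this description and the function name is an assumption.

    def should_coalesce(op_size: int, arrival_rate: float,
                        size_limit: int = 16 * 1024,
                        iops_limit: int = 10_000) -> bool:
        """Decide whether coalescing a plurality of write operations is worthwhile."""
        if op_size > size_limit // 2:
            return False   # pairing even two operations would exceed the size limit
        if arrival_rate <= iops_limit:
            return False   # the uncoalesced rate already fits under the IOPS limit
        return True

    assert not should_coalesce(12 * 1024, 12_000)  # 2 x 12 KiB exceeds the 16 KiB limit
    assert not should_coalesce(4 * 1024, 9_000)    # 9,000 IOPS is already under the limit
    assert should_coalesce(4 * 1024, 12_000)       # coalesce 4 KiB operations at 12,000/s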
The method ofFIG.7further includes performing (706) (e.g., by the cloud computing instance (340a)) the second plurality of write operations (403b) on the storage volume (e.g., block storage (342)). In other words, the second plurality of write operations (403b) are performed on the storage volume uncoalesced. For further explanationFIG.8shows a flowchart of an example method for coalescing write operations in a cloud-based storage system that includes receiving (402) (e.g., by a cloud computing instance (340a) of a cloud-based storage system (318)), from a storage controller application (324), a first plurality of write operations (403a), wherein each of the first plurality of write operations comprises a respective write to a storage volume (e.g., block storage (342)) of the cloud computing instance (340a); coalescing (404) (e.g., by the cloud computing instance (340a)) the first plurality of write operations (403a) into a plurality of coalesced write operations (405), wherein each of the coalesced write operations (405) are configured to effect two or more of the first plurality of write operations (403a); and performing (406) the plurality of coalesced write operations (405) on the storage volume (e.g., block storage (342)). FIG.8differs fromFIG.4in that coalescing (404) (e.g., by the cloud computing instance (340a)) the first plurality of write operations (403a) into a plurality of coalesced write operations (405) comprises coalescing (802), into each of the plurality of coalesced write operations (405), two or more write messages. For example, assume that each of the write operations (403a) comprises an NVRAM write message to write a data block to block storage (342). Accordingly, the coalesced write operations (405) would be generated to comprise two or more of the NVRAM write messages such that when the coalesced write operation (405) is performed (406) (e.g., sent to block storage (342)), the block storage (342) executes the two or more NVRAM write messages in each coalesced write operation (405). Thus, block storage (342) only receives a single operation (e.g., a single ioctl), only counting as a single operation for purposes of IOPS thresholds, that effects the writing of two or more blocks to block storage (342). Example embodiments are described largely in the context of a fully functional computer system. Readers of skill in the art will recognize, however, that the present disclosure also may be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the example embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present disclosure. Embodiments can be a system, a method, and/or a computer program product.
The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. 
In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to some embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Advantages and features of the present disclosure can be further described by the following statements:
1. A method of coalescing write operations in a cloud-based storage system, the method comprising: receiving, from a storage controller application of the cloud-based storage system, a first plurality of write operations, wherein each of the first plurality of write operations comprises a respective write to a storage volume; coalescing the first plurality of write operations into a plurality of coalesced write operations, wherein each of the coalesced write operations are configured to effect two or more of the first plurality of write operations; and performing the plurality of coalesced write operations on the storage volume.
2. The method of statement 1, wherein performing the plurality of coalesced write operations comprises sending, to the storage volume, a single input/output control command for each coalesced write operation of the coalesced write operations.
3. The method of statement 2 or statement 1, wherein receiving the first plurality of write operations comprises adding the first plurality of write operations to a queue.
4. The method of any of statements 1-3, further comprising determining, based on a state of the queue, to coalesce the first plurality of write operations.
5. The method of any of statements 1-4, wherein determining to coalesce the first plurality of write operations is further based on an Input/Output Operations per Second (IOPS) threshold.
6. The method of any of statements 1-5, further comprising: receiving a second plurality of write operations; determining not to coalesce the second plurality of write operations; and performing the second plurality of write operations on the storage volume.
7. The method of any of statements 1-6, wherein each of the first plurality of write operations comprises a write message, and wherein coalescing the first plurality of write operations comprises coalescing, into each of the plurality of coalesced write operations, two or more write messages.
| 232,554 |
11861236 | DETAILED DESCRIPTION Aspects of the present disclosure are directed to asymmetric plane driver circuits in a multi-plane memory device in a memory sub-system. A memory sub-system can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction withFIG.1. In general, a host system can utilize a memory sub-system that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. A memory sub-system can include high density non-volatile memory devices where retention of data is desired when no power is supplied to the memory device. One example of non-volatile memory devices is a negative-and (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction withFIG.1. A non-volatile memory device is a package of one or more dies. Each die can consist of one or more planes. For some types of non-volatile memory devices (e.g., NAND devices), each plane consists of a set of physical blocks. Each block consists of a set of pages. Each page consists of a set of memory cells (“cells”). A cell is an electronic circuit that stores information. Depending on the cell type, a cell can store one or more bits of binary information, and has various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as “0” and “1”, or combinations of such values. A memory device can be made up of bits arranged in a two-dimensional grid or three-dimensional grid. Memory cells are etched onto a silicon wafer in an array of columns (also hereinafter referred to as bitlines) and rows (also hereinafter referred to as wordlines). A wordline can refer to one or more rows of memory cells of a memory device that are used with one or more bitlines to generate the address of each of the memory cells. The intersection of a bitline and wordline constitutes the address of the memory cell. A block hereinafter refers to a unit of the memory device used to store data and can include a group of memory cells, a wordline group, a wordline, or individual memory cells. One or more blocks can be grouped together to form a plane of the memory device in order to allow concurrent operations to take place on each plane. The memory device can include circuitry that performs concurrent memory page accesses of two or more memory planes. For example, the memory device can include multiple access line driver circuits and power circuits that can be shared by the planes of the memory device to facilitate concurrent access of pages of two or more memory planes, including different page types. For ease of description, these circuits can be generally referred to as independent plane driver circuits. In each generation of memory devices, the architecture of the memory array trends toward a smaller physical footprint while maintaining or even increasing the memory capacity. The independent plane driver circuits, and other circuitry associated with the memory array, are located in a logic layer disposed beneath the memory array. Accordingly, as the footprint of the memory array decreases, it can be desirable to similarly decrease the footprint of this logic layer to avoid peripheral extension of the logic layer beyond the footprint of the associated memory array. 
Since the independent plane driver circuits support improved random read performance on high density memory arrays, it is desirable to maintain the multi-plane parallel access functionality provided by the independent plane driver circuits despite the size reduction of the logic layer where they reside. This objective is at odds, however, with the number of independent plane driver circuits used in a multi-plane memory device, as well as any additional circuitry needed to support additional vertical layers (i.e., tiers) added to the three-dimensional memory array. Some memory devices attempt to facilitate the reduction of the footprint of the logic layer by maintaining a certain number of duplicate plane driver circuits with certain concessions. For example, some memory devices make reductions to a number of inhibit schemes that can be supported in the memory device, a number of bias sources available to the plane driver circuits, a number of high voltage switches present in the memory device architecture, and/or a length of the high voltage switches used in the plane driver circuits. These concessions can lead to a reduced area for each of the duplicate plane driver circuits, but can result in an increased program disturbance effect, decreased read window budget, increased time to market, increased switching constraints, decreased reliability, and/or other negative performance or operational impacts. Aspects of the present disclosure address the above and other deficiencies by implementing asymmetric plane driver circuits in a multi-plane memory device. In one embodiment, the asymmetric plane driver circuits include one or more primary plane driver circuits and one or more secondary plane driver circuits having different functionality than the primary plane driver circuits. A primary plane driver circuit can include components to allow the primary plane driver circuit to perform read, program, and erase operations on the planes of the multi-plane memory device. For example, the primary plane driver circuit can include high voltage sources used to apply relatively high program and/or erase voltages during a program and/or erase operation respectively, a number of high voltage switches and associated control logic used to perform program and erase operations, test mode circuitry, and other components utilized to perform read, program, and erase operations. The multi-plane memory device can further include signal routing lines and switches to connect the primary plane driver circuit to each plane, or to a subset of the planes, in the multi-plane memory device to permit the primary plane driver circuit to perform program or erase operations, for example, on two or more planes in parallel. A secondary plane driver circuit can include fewer components than a primary plane driver circuit, such as components to allow the secondary plane driver circuit to perform read operations on an associated plane of the multi-plane memory device. For example, the primary plane driver circuit can include drivers to support program/erase/read and test mode operations, as well as high voltage switches with larger channel lengths to address reliability concerns since the primary plane driver circuit supports program and erase operations. By comparison, the secondary plane driver circuits can include only drivers to support read operations, and these drivers can include high voltage switches with smaller channel lengths since the read voltages are not as high as the program and erase voltages.
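A minimal sketch of this asymmetry, modeling each driver circuit by its supported operations; the class, attribute names, and channel-length figures are invented for illustration and are not taken from the disclosure.

```python
# Sketch of the asymmetry described above (invented names and values):
# a primary driver supports read/program/erase and carries high-voltage
# switches sized for program/erase stress; a secondary driver supports
# only reads, so its switches can use smaller channel lengths.

class PlaneDriver:
    def __init__(self, name, operations, hv_channel_len_um):
        self.name = name
        self.operations = operations                # supported operations
        self.hv_channel_len_um = hv_channel_len_um  # assumed figure

    def supports(self, op):
        return op in self.operations

primary = PlaneDriver("primary", {"read", "program", "erase"},
                      hv_channel_len_um=1.0)   # longer channel for HV stress
secondary = PlaneDriver("secondary", {"read"},
                        hv_channel_len_um=0.4)  # read voltages are lower

assert primary.supports("erase") and not secondary.supports("erase")
```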
In one embodiment, each of the one or more secondary plane driver circuits is associated with only one plane of the multi-plane memory device and each of the one or more secondary plane driver circuits can perform a separate read operation on its associated plane concurrently. Since the multi-plane memory device architecture can support concurrent read operations on separate planes, the secondary plane driver circuits can provide this functionality while the primary plane driver circuit can be used for program or erase operations performed on individual planes or separate planes in parallel. Advantages of this approach include, but are not limited to, a reduction in size of the logic layer in a multi-plane memory device. Since the secondary plane driver circuits include fewer components, they can be manufactured to occupy less area than a primary plane driver circuit in the logic layer of the memory device, thereby permitting the logic layer to have a smaller footprint than if a primary plane driver circuit were included for each plane of the multi-plane memory device. In addition, additional memory planes or vertical memory tiers can be added to the memory array of the memory device without incurring proportional increases to the footprint of the logic layer. Furthermore, compromises to the functionality and performance of the plane driver circuits do not occur in order to realize the decrease in size, as none of the reductions to the inhibit schemes used in the memory device, the number of bias sources available to the plane driver circuits, and/or the channel length of devices used in high voltage switches are necessary. Accordingly, the asymmetric plane driver circuit architecture described herein provides the same functionality as a symmetric architecture, but with significant savings in the size of the circuitry in the logic layer of the memory device. FIG.1illustrates an example computing system100that includes a memory sub-system110in accordance with some embodiments of the present disclosure. The memory sub-system110can include media, such as one or more volatile memory devices (e.g., memory device140), one or more non-volatile memory devices (e.g., memory device130), or a combination of such. A memory sub-system110can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory module (NVDIMM). The computing system100can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or any such computing device that includes memory and a processing device. The computing system100can include a host system120that is coupled to one or more memory sub-systems110. In some embodiments, the host system120is coupled to different types of memory sub-system110.FIG.1illustrates one example of a host system120coupled to one memory sub-system110.
As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. The host system120can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system120uses the memory sub-system110, for example, to write data to the memory sub-system110and read data from the memory sub-system110. The host system120can be coupled to the memory sub-system110via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system120and the memory sub-system110. The host system120can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices130) when the memory sub-system110is coupled with the host system120by the physical host interface (e.g., PCIe bus). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system110and the host system120.FIG.1illustrates a memory sub-system110as an example. In general, the host system120can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. The memory devices130,140can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device130) include negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Each of the memory devices130can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC) can store one bit per cell. 
Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs), can store multiple bits per cell. In some embodiments, each of the memory devices130can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such. In some embodiments, a particular memory device can include an SLC portion and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices130can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks. Although non-volatile memory components such as 3D cross-point array of non-volatile memory cells and NAND type flash memory (e.g., 2D NAND, 3D NAND) are described, the memory device130can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM). A memory sub-system controller115(or controller115for simplicity) can communicate with the memory devices130to perform operations such as reading data, writing data, or erasing data at the memory devices130and other such operations. The memory sub-system controller115can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller115can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The memory sub-system controller115can be a processing device, which includes one or more processors (e.g., processor117), configured to execute instructions stored in a local memory119. In the illustrated example, the local memory119of the memory sub-system controller115includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system110, including handling communications between the memory sub-system110and the host system120. In some embodiments, the local memory119can include memory registers storing memory pointers, fetched data, etc. The local memory119can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system110inFIG.1has been illustrated as including the memory sub-system controller115, in another embodiment of the present disclosure, a memory sub-system110does not include a memory sub-system controller115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system).
In general, the memory sub-system controller115can receive commands or operations from the host system120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices130. The memory sub-system controller115can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., logical block address (LBA), namespace) and a physical address (e.g., physical block address) that are associated with the memory devices130. The memory sub-system controller115can further include host interface circuitry to communicate with the host system120via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices130as well as convert responses associated with the memory devices130into information for the host system120. The memory sub-system110can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system110can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller115and decode the address to access the memory devices130. In some embodiments, the memory devices130include local media controllers135that operate in conjunction with memory sub-system controller115to execute operations on one or more memory cells of the memory devices130. An external controller (e.g., memory sub-system controller115) can externally manage the memory device130(e.g., perform media management operations on the memory device130). In some embodiments, memory sub-system110is a managed memory device, which includes a raw memory device130having control logic (e.g., local media controller135) on the die and a controller (e.g., memory sub-system controller115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. In one embodiment, the memory device130includes asymmetric plane driver circuits150used to perform memory access operations on the multiple memory planes of memory device130. In one embodiment, the asymmetric plane driver circuits150include one or more primary plane driver circuits and one or more secondary plane driver circuits having different functionality than the primary plane driver circuits. For example, a primary plane driver circuit can include components to support read operations, program operations, and erase operations on any of the planes of memory device130. For example, the primary plane driver circuit can include high voltage sources used to apply relatively high program/erase voltages during a program/erase operation respectively, a number of high voltage switches and associated control logic used to perform program and erase operations, test mode circuitry, and other components utilized to perform read, program, and erase operations. The one or more secondary plane driver circuits can each include fewer components than a primary plane driver circuit, such as components to allow the secondary plane driver circuit to perform read operations on an associated one of the planes of memory device130. 
Since a secondary plane driver circuit includes fewer components than a primary plane driver circuit, the secondary plane driver circuit can occupy a smaller footprint area than the primary plane driver circuit. In one embodiment, the footprint area represents the two-dimensional amount of space that a given component occupies on a substrate (e.g., silicon wafer) that forms the logic layer. In one embodiment, the footprint area of a secondary plane driver circuit can be approximately one third of the footprint area of a primary plane driver circuit. In one embodiment, memory device130further includes signal routing lines and switches to connect the primary plane driver circuit to each plane, or to a subset of the planes, to permit the primary plane driver circuit to perform program or erase operations, for example, on two or more planes in parallel. In one embodiment, each secondary plane driver circuit is associated with only one plane of memory device130. The signal routing lines and switches further connect each secondary plane driver circuit directly to the associated plane, so that each secondary plane driver circuit can perform a separate read operation on its associated plane concurrently. In one embodiment, local media controller135of memory device130receives memory access commands, such as read commands directed to different planes of memory device130, and configures the routing circuitry (i.e., the signal routing lines and switches) to couple the primary plane driver circuit and secondary plane driver circuits to the appropriate planes to concurrently perform memory operations, such as read operations, corresponding to the received memory access commands. Further details with regards to the operations of local media controller135and asymmetric plane driver circuits150are described below. In some embodiments, memory device130includes local media controller135and at least a portion of asymmetric plane driver circuits150and is configured to perform the functionality described herein. In such an embodiment, asymmetric plane driver circuits150can be implemented using hardware or as firmware, stored on memory device130, executed by the control logic (e.g., local media controller135) to perform the operations related to concurrent memory plane access described herein. FIG.2is a block diagram illustrating a multi-plane memory device with asymmetric plane driver circuits in accordance with some embodiments of the present disclosure. The memory device130includes a memory array270divided into memory planes272(0)-272(3) that each includes a respective number of memory cells. The multi-plane memory device130can further include a logic layer280disposed under memory array270. Among other components, the logic layer280can include local media controller135, including a power control circuit and access control circuit for concurrently performing memory access operations for different memory planes272(0)-272(3). The memory cells can be non-volatile memory cells, such as NAND flash cells, or can generally be any type of memory cells. The memory planes272(0)-272(3) can each be divided into blocks of data, with a different relative block of data from each of the memory planes272(0)-272(3) concurrently accessible during memory access operations. For example, during memory access operations, data block282of the memory plane272(0), data block283of the memory plane272(1), data block284of the memory plane272(2), and data block285of the memory plane272(3) can each be accessed concurrently.
Each of the memory planes272(0)-272(3) can be coupled to a respective page buffer (PB)276(0)-276(3). Each page buffer276(0)-276(3) can be configured to provide data to or receive data from the respective memory plane272(0)-272(3). The page buffers276(0)-276(3) can be controlled by local media controller135. Data received from the respective memory plane272(0)-272(3) can be latched at the page buffers276(0)-276(3), respectively, and retrieved by local media controller135, and provided to the memory sub-system controller115via the NVMe interface. Each of the memory planes272(0)-272(3) can be further coupled to a respective plane driver circuit, such as an access line driver circuit. In one embodiment, the logic layer280includes asymmetric plane driver circuits150, such as primary plane driver circuit274(0) and secondary plane driver circuits278(0)-278(2). The plane driver circuits274(0) and278(0)-278(2) can be configured to condition a page of a respective block of an associated memory plane272(0)-272(3) for a memory access operation. In one embodiment, primary plane driver circuit274(0) is configured to perform multiple types of memory access operations, such as programming data (i.e., writing data), reading data, or erasing data, while each secondary plane driver circuit278(0)-278(2) is configured to perform only one type of memory access operation, such as reading data. Each of the plane driver circuits274(0) and278(0)-278(2) can be coupled to respective global access lines associated with a respective memory plane272(0)-272(3). Each of the global access lines can be selectively coupled to respective local access lines within a block of a plane during a memory access operation associated with a page within the block. The plane driver circuits274(0) and278(0)-278(2) can be controlled based on signals from local media controller135. Each of the plane driver circuits274(0) and278(0)-278(2) can include or be coupled to a respective power circuit, and can provide voltages to respective access lines based on voltages provided by the respective power circuit. The voltages provided by the power circuits can be based on signals received from local media controller135. The local media controller135can control the plane driver circuits274(0) and278(0)-278(2) and page buffers276(0)-276(3) to concurrently perform memory access operations associated with each of a group of memory command and address pairs (e.g., received from memory sub-system controller115). For example, local media controller135can control the plane driver circuits274(0) and278(0)-278(2) and page buffers276(0)-276(3) to perform the concurrent memory access operations. Local media controller135can include a power control circuit that serially configures two or more of the plane driver circuits274(0) and278(0)-278(2) for the concurrent memory access operations, and an access control circuit configured to control two or more of the page buffers276(0)-276(3) to sense and latch data from the respective memory planes272(0)-272(3), or program data to the respective memory planes272(0)-272(3) to perform the concurrent memory access operations. In operation, local media controller135can receive a group of memory command and address pairs via the NVMe bus, with each pair arriving in parallel or serially. In some examples, the group of memory command and address pairs can each be associated with different respective memory planes272(0)-272(3) of the memory array270.
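The serial-configure / concurrent-access split just described can be sketched behaviorally. The driver and page-buffer objects and their method names below are invented; only the ordering (condition the involved drivers one at a time, then operate the page buffers concurrently) is taken from the text.

```python
import threading

# Sketch (invented object shapes): the power control logic conditions each
# involved plane driver circuit serially, then the access control logic
# triggers the page buffers for all involved planes concurrently.

def concurrent_access(drivers, page_buffers, pairs):
    # pairs maps plane index -> (command, address), one pair per plane.
    for plane, (cmd, addr) in pairs.items():
        drivers[plane].configure(cmd, addr)   # serial configuration

    threads = [threading.Thread(target=page_buffers[plane].sense_and_latch)
               for plane in pairs]            # concurrent page-buffer access
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```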
The local media controller135can be configured to perform concurrent memory access operations (e.g., read operations or program operations) for the different memory planes272(0)-272(3) of the memory array270responsive to the group of memory command and address pairs. For example, the power control circuit of local media controller135can serially configure, for the concurrent memory access operations based on respective page type (e.g., LP, UP, XP, TP, SLC/MLC/TLC/QLC page), the plane driver circuits274(0) and278(0)-278(2) for two or more memory planes272(0)-272(3) associated with the group of memory command and address pairs. In one embodiment, the page types can include lower pages (LPs), upper pages (UPs), extra pages (XPs), and top pages (TPs). Each bit of the memory cell is stored at a different page portion of the memory cell. Various read level thresholds can be used for the various page types: SLC logical page types are lower logical pages (LPs), MLC logical page types are LPs and upper logical pages (UPs), TLC logical page types are LPs, UPs, and extra logical pages (XPs), and QLC logical page types are LPs, UPs, XPs and top logical pages (TPs). For example, a memory cell of the QLC memory can have a total of four logical pages, including a lower logical page (LP), an upper logical page (UP), an extra logical page (XP) and a top logical page (TP), where each logical page stores a bit of data. For example, a bit can be represented by each of the four logical pages of the memory cell. After the plane driver circuits274(0) and278(0)-278(2) have been configured, the access control circuit of local media controller135can concurrently control the page buffers276(0)-276(3) to access the respective pages of each of the two or more memory planes272(0)-272(3) associated with the group of memory command and address pairs, such as retrieving data or writing data, during the concurrent memory access operations. For example, the access control circuit can concurrently (e.g., in parallel and/or contemporaneously) control the page buffers276(0)-276(3) to charge/discharge bitlines, sense data from the two or more memory planes272(0)-272(3), and/or latch the data. Based on the signals received from local media controller135, the plane driver circuits274(0) and278(0)-278(2) that are coupled to the memory planes272(0)-272(3) associated with the group of memory command and address pairs can select blocks of memory or memory cells from the associated memory plane272(0)-272(3), for memory operations, such as read, program, and/or erase operations. The plane driver circuits274(0) and278(0)-278(2) can drive different respective global access lines associated with a respective memory plane272(0)-272(3). As an example, the primary plane driver circuit274(0) can drive a first voltage on a first global access line associated with the memory plane272(0), the secondary driver circuit278(0) can drive a second voltage on a third global access line associated with the memory plane272(1), the secondary driver circuit278(1) can drive a third voltage on a seventh global access line associated with the memory plane272(2), etc., and other voltages can be driven on each of the remaining global access lines. In some examples, pass voltages can be provided on all access lines except an access line associated with a page of a memory plane272(0)-272(3) to be accessed.
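Returning to the page types enumerated above, the cell-type-to-logical-page mapping is small enough to restate directly; an illustrative sketch, not code from the disclosure.

```python
# Logical page types per cell type, as described above: each extra bit per
# cell adds one logical page (LP, then UP, XP, TP).

LOGICAL_PAGES = {
    "SLC": ["LP"],                    # 1 bit/cell
    "MLC": ["LP", "UP"],              # 2 bits/cell
    "TLC": ["LP", "UP", "XP"],        # 3 bits/cell
    "QLC": ["LP", "UP", "XP", "TP"],  # 4 bits/cell
}

# The number of logical pages equals the bits stored per cell.
for cell_type, pages in LOGICAL_PAGES.items():
    print(cell_type, len(pages), "bit(s)/cell ->", "/".join(pages))
```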
The local media controller135and the plane driver circuits274(0) and278(0)-278(2) can allow different respective pages, and the page buffers276(0)-276(3) within different respective blocks of memory cells, to be accessed concurrently. For example, a first page of a first block of a first memory plane can be accessed concurrently with a second page of a second block of a second memory plane, regardless of page type. The page buffers276(0)-276(3) can provide data to or receive data from the local media controller135during the memory access operations responsive to signals from the local media controller135and the respective memory planes272(0)-272(3). The local media controller135can provide the received data to memory sub-system controller115. It will be appreciated that the memory device130can include more or less than four memory planes, driver circuits, and page buffers. It will also be appreciated that the respective global access lines can include8,16,32,64,128, etc., global access lines. The local media controller135and the plane driver circuits274(0) and278(0)-278(2) can concurrently access different respective pages within different respective blocks of different memory planes when the different respective pages are of a different page type. In one embodiment, local media controller135can configure routing circuitry (e.g., signal routing lines and switches (not shown)) in the memory device to couple the plane driver circuits274(0) and278(0)-278(2) to corresponding memory planes272(0)-272(3) to perform memory access operations. For example, local media controller135can configure the routing circuitry to operatively couple primary plane driver circuit274(0) to memory plane272(0) and to operatively couple secondary plane driver circuit278(0) to memory plane272(1) to concurrently perform read operations corresponding to received memory access commands. In another embodiment, local media controller135can configure the routing circuitry to operatively couple primary plane driver circuit274(0) and one or more different secondary plane driver circuits278(0)-278(2) to corresponding memory planes. In another embodiment, local media controller135can configure the routing circuitry to operatively couple two or more of secondary plane driver circuits278(0)-278(2) to corresponding memory planes. In another embodiment, local media controller135can configure the routing circuitry to operatively couple primary plane driver circuit274(0) to two or more of memory planes272(0)-272(3) to perform a program operation or erase operation on the two or more planes in parallel. The reduced footprint area of secondary plane driver circuits278(0)-278(2) relative to that of primary plane driver circuit274(0) allows logic layer280to have a smaller footprint than if a primary plane driver circuit were included for each plane of memory device130. In addition, additional memory planes or vertical memory tiers can be added to the memory array of the memory device130without incurring proportional increases to the footprint of the logic layer280. FIG.3is a block diagram illustrating routing circuitry300having one switching configuration for asymmetric plane driver circuits in a multi-plane memory device in accordance with some embodiments of the present disclosure. As illustrated inFIG.3, primary plane driver circuit274(0) and secondary plane driver circuits278(0)-278(2) are each coupled to a corresponding memory plane of memory device130by associated signal routing lines.
In one embodiment, the number of secondary plane driver circuits278(0)-278(2) is one less than the number of planes. In one embodiment, there are one or more primary plane driver circuits274(0), wherein the sum of the number of primary plane driver circuits274(0) and the number of secondary plane driver circuits278(0)-278(2) equals the number of planes. Routing circuitry300further includes a first set of switches310and a second set of switches320. Each of the first set of switches310and the second set of switches320can be implemented by a metal-oxide-semiconductor field-effect transistor (MOSFET) device or other type of switching device. In one embodiment, the first set of switches310includes one switching device associated with each of secondary plane driver circuits278(0)-278(2) and positioned along the signal routing line between the secondary plane driver circuit and the corresponding memory plane. Each switching device in the first set of switches310is separately controllable by a control signal (e.g., received from local media controller135or some other control logic) to couple the associated one of secondary plane driver circuits278(0)-278(2) with the corresponding memory plane. For example, each switching device in the first set of switches310can be activated (i.e., closed) when the associated one of secondary plane driver circuits278(0)-278(2) is performing a memory access operation (e.g., a read operation) on the corresponding memory plane. The design of routing circuitry300allows one or more of the switching devices in the first set of switches310to be activated concurrently so that one or more of secondary plane driver circuits278(0)-278(2) can concurrently perform memory access operations. In one embodiment, primary plane driver circuit274(0) is directly connected to a corresponding memory plane (e.g., plane0) such that any signal output by primary plane driver circuit274(0) is applied to that plane. In another embodiment, a switching device is present in the signal routing line to selectively couple the primary plane driver circuit274(0) with the corresponding plane. In one embodiment, the second set of switches320includes one switching device associated with each remaining plane (e.g., plane1, plane2, plane3) in memory device130and positioned along the signal routing line between the primary plane driver circuit274(0) and the corresponding memory plane. Each switching device in the second set of switches320is separately controllable by a control signal (e.g., received from local media controller135or some other control logic) to couple the primary plane driver circuit274(0) with the corresponding memory plane. For example, each switching device in the second set of switches320can be activated (i.e., closed) when the primary plane driver circuit274(0) is performing a memory access operation (e.g., a program or erase operation) on the corresponding memory plane. The design of routing circuitry300allows one or more of the switching devices in the second set of switches320to be activated concurrently so that primary plane driver circuit274(0) can perform a memory access operation on multiple planes in parallel. During such an operation, each switching device in the first set of switches310can be deactivated (i.e., opened) to decouple the secondary plane driver circuits278(0)-278(2) from the corresponding memory planes.
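The switching scheme of routing circuitry300can be modeled behaviorally. The boolean lists and function names below are invented; they only mirror the rule that the first set of switches310couples secondaries for concurrent reads while the second set320couples the primary for parallel program/erase, with set310opened during such an operation.

```python
# Behavioral model of routing circuitry 300 (names invented). Plane 0 is
# hard-wired to the primary driver; switches in set 310 couple each secondary
# driver to its own plane, and switches in set 320 couple the primary driver
# to the remaining planes for parallel program/erase.

NUM_PLANES = 4
set_310 = [False] * (NUM_PLANES - 1)  # secondary drivers <-> planes 1..3
set_320 = [False] * (NUM_PLANES - 1)  # primary driver    <-> planes 1..3

def configure_concurrent_reads(planes):
    """Close a set-310 switch for each of planes 1..3 being read."""
    for p in planes:
        set_310[p - 1] = True

def configure_parallel_program_or_erase(planes):
    """Close set-320 switches for the target planes; open all of set 310
    so the secondary drivers are decoupled during program/erase."""
    for i in range(NUM_PLANES - 1):
        set_310[i] = False
        set_320[i] = (i + 1) in planes

configure_concurrent_reads([1, 3])            # e.g. read planes 1 and 3
configure_parallel_program_or_erase({1, 2})   # then program planes 1 and 2
```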
FIG.4is a block diagram illustrating routing circuitry400having one switching configuration for asymmetric plane driver circuits in a multi-plane memory device in accordance with some embodiments of the present disclosure. In one embodiment, the memory device can include multiple primary plane driver circuits, such as primary plane driver circuit274(0) and primary plane driver circuit474(1). As illustrated inFIG.4, primary plane driver circuits274(0) and474(1) and secondary plane driver circuits278(0)-278(1) are each coupled to a corresponding memory plane of memory device130by associated signal routing lines. Routing circuitry400further includes a first set of switches410and a second set of switches420. Each of the first set of switches410and the second set of switches420can be implemented by a MOSFET device or other type of switching device. In one embodiment, the first set of switches410includes one switching device associated with each of secondary plane driver circuits278(0)-278(1) and positioned along the signal routing line between the secondary plane driver circuit and the corresponding memory plane. Each switching device in the first set of switches410is separately controllable by a control signal (e.g., received from local media controller135or some other control logic) to couple the associated one of secondary plane driver circuits278(0)-278(1) with the corresponding memory plane. For example, each switching device in the first set of switches410can be activated (i.e., closed) when the associated one of secondary plane driver circuits278(0)-278(1) is performing a memory access operation (e.g., a read operation) on the corresponding memory plane. The design of routing circuitry400allows one or more of the switching devices in the first set of switches410to be activated concurrently so that one or more of secondary plane driver circuits278(0)-278(1) can concurrently perform memory access operations. In one embodiment, primary plane driver circuit274(0) is directly connected to a corresponding memory plane (e.g., plane0) such that any signal output by primary plane driver circuit274(0) is applied to that plane. Similarly, primary plane driver circuit474(1) is directly connected to a corresponding memory plane (e.g., plane2) such that any signal output by primary plane driver circuit474(1) is applied to that plane. In another embodiment, switching devices are present in the signal routing lines to selectively couple primary plane driver circuits274(0) and474(1) with the corresponding planes. In one embodiment, the second set of switches420includes one switching device associated with each remaining plane (e.g., plane1, plane3) in memory device130and positioned along the signal routing line between one of primary plane driver circuits274(0) and474(1) and the corresponding memory plane. Each switching device in the second set of switches420is separately controllable by a control signal (e.g., received from local media controller135or some other control logic) to couple the primary plane driver circuit274(0) or474(1) with the corresponding memory plane. For example, each switching device in the second set of switches420can be activated (i.e., closed) when the primary plane driver circuit274(0) or474(1) is performing a memory access operation (e.g., a program or erase operation) on the corresponding memory plane.
The design of routing circuitry400allows one or more of the switching devices in the second set of switches420to be activated concurrently so that primary plane driver circuit274(0) or474(1) can perform a memory access operation on multiple planes in parallel. During such an operation, each switching device in the first set of switches410can be deactivated (i.e., opened) to decouple the secondary plane driver circuits278(0)-278(1) from the corresponding memory planes. FIG.5is a block diagram illustrating routing circuitry500having one switching configuration for asymmetric plane driver circuits in a multi-plane memory device in accordance with some embodiments of the present disclosure. As illustrated inFIG.5, secondary plane driver circuits278(0)-278(2) and578(3) are each coupled to a corresponding memory plane of memory device130by associated signal routing lines. In this embodiment, primary plane driver circuit274(0) does not have any one corresponding memory plane. Routing circuitry500further includes a first set of switches510and a second set of switches520. Each of the first set of switches510and the second set of switches520can be implemented by a MOSFET device or other type of switching device. In one embodiment, the first set of switches510includes one switching device associated with each of secondary plane driver circuits278(0)-278(2) and578(3) and positioned along the signal routing line between the secondary plane driver circuit and the corresponding memory plane. Each switching device in the first set of switches510is separately controllable by a control signal (e.g., received from local media controller135or some other control logic) to couple the associated one of secondary plane driver circuits278(0)-278(2) and578(3) with the corresponding memory plane. For example, each switching device in the first set of switches510can be activated (i.e., closed) when the associated one of secondary plane driver circuits278(0)-278(2) and578(3) is performing a memory access operation (e.g., a read operation) on the corresponding memory plane. The design of routing circuitry500allows one or more of the switching devices in the first set of switches510to be activated concurrently so that one or more of secondary plane driver circuits278(0)-278(2) and578(3) can concurrently perform memory access operations. In one embodiment, primary plane driver circuit274(0) is not directly connected to a corresponding memory plane. In one embodiment, the second set of switches520includes one switching device associated with each plane (e.g., plane0, plane1, plane2, plane3) in memory device130and positioned along the signal routing line between the primary plane driver circuit274(0) and the corresponding memory plane. Each switching device in the second set of switches520is separately controllable by a control signal (e.g., received from local media controller135or some other control logic) to couple the primary plane driver circuit274(0) with the corresponding memory plane. For example, each switching device in the second set of switches520can be activated (i.e., closed) when the primary plane driver circuit274(0) is performing a memory access operation (e.g., a program or erase operation) on the corresponding memory plane. The design of routing circuitry500allows one or more of the switching devices in the second set of switches520to be activated concurrently so that primary plane driver circuit274(0) can perform a memory access operation on multiple planes in parallel.
During such an operation, each switching device in the first set of switches510can be deactivated (i.e., opened) to decouple the secondary plane driver circuits278(0)-278(2) and578(3) from the corresponding memory planes. FIG.6is a block diagram illustrating routing circuitry600having one switching configuration for asymmetric plane driver circuits and global wordline driver circuits in a multi-plane memory device in accordance with some embodiments of the present disclosure. As illustrated inFIG.6, primary plane driver circuit274(0) and secondary plane driver circuits278(0)-278(2) are each coupled to a corresponding one of global wordline (WL) driver circuits680(0)-680(3) by associated signal routing lines. In turn, each one of global wordline driver circuits680(0)-680(3) is coupled to a corresponding memory plane of memory device130. Routing circuitry600further includes a first set of switches610and a second set of switches620. Each of the first set of switches610and the second set of switches620can be implemented by a MOSFET device or other type of switching device. In one embodiment, the first set of switches610includes one switching device associated with each of secondary plane driver circuits278(0)-278(2) and positioned along the signal routing line between the secondary plane driver circuit and the corresponding one of global wordline driver circuits680(0)-680(3). Each switching device in the first set of switches610is separately controllable by a control signal (e.g., received from local media controller135or some other control logic) to couple the associated one of secondary plane driver circuits278(0)-278(2) with the corresponding global wordline driver circuit and memory plane. For example, each switching device in the first set of switches610can be activated (i.e., closed) when the associated one of secondary plane driver circuits278(0)-278(2) is performing a memory access operation (e.g., a read operation) on the corresponding memory plane. The design of routing circuitry600allows one or more of the switching devices in the first set of switches610to be activated concurrently so that one or more of secondary plane driver circuits278(0)-278(2) can concurrently perform memory access operations. InFIG.6, primary plane driver circuit274(0) and global wordline driver680(0) together perform the same functionality as the primary plane driver circuit274(0) inFIG.3, but one advantage is a reduced number of high voltage switches in routing circuitry600, as compared to routing circuitry300inFIG.3. This results in a significant area savings for routing circuitry600. InFIG.6, the number of signals between primary plane driver circuit274(0) and global wordline driver680(0) is lower than the number of signals output from the primary plane driver circuit274(0) inFIG.3, which can be the same as the number of wordlines per block. This translates to a lower number of switches in routing circuitry600when compared to the number of switches in routing circuitry300(not all switches are illustrated inFIG.3), resulting in more area savings. In one embodiment, primary plane driver circuit274(0) is directly connected to a corresponding global wordline driver circuit680(0) such that any signal output by primary plane driver circuit274(0) is applied to that global wordline driver circuit680(0). In another embodiment, a switching device is present in the signal routing line to selectively couple the primary plane driver circuit274(0) with the corresponding global wordline driver circuit680(0).
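Before turning to the second set of switches, the switch-count savings just described can be made concrete with assumed numbers; the wordline and control-signal counts below are invented for illustration and are not figures from the disclosure.

```python
# Back-of-the-envelope comparison of switch counts (assumed figures only).
# In FIG.3 the routed bus is one line per wordline in a block; in FIG.6
# only the narrower bus between the plane driver and the global wordline
# driver is routed and switched.

WORDLINES_PER_BLOCK = 128   # assumed; FIG.3 routes one switch per wordline
DRIVER_TO_GWL_SIGNALS = 16  # assumed narrower interface in FIG.6
SWITCHED_PLANES = 3         # planes 1..3 need switches; plane 0 is direct

fig3_switches = WORDLINES_PER_BLOCK * SWITCHED_PLANES    # 384
fig6_switches = DRIVER_TO_GWL_SIGNALS * SWITCHED_PLANES  # 48

print(f"FIG.3-style: {fig3_switches} HV switches, "
      f"FIG.6-style: {fig6_switches} ({fig3_switches // fig6_switches}x fewer)")
```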
In one embodiment, the second set of switches620includes one switching device associated with each remaining global wordline driver circuit680(1)-680(3) in memory device130and positioned along the signal routing line between the primary plane driver circuit274(0) and the corresponding global wordline driver circuit. Each switching device in the second set of switches620is separately controllable by a control signal (e.g., received from local media controller135or some other control logic) to couple the primary plane driver circuit274(0) with the corresponding global wordline driver circuit. For example, each switching device in the second set of switches620can be activated (i.e., closed) when the primary plane driver circuit274(0) is performing a memory access operation (e.g., a program or erase operation) on the corresponding memory plane. The design of routing circuitry600allows one or more of the switching devices in the second set of switches620to be activated concurrently so that primary plane driver circuit274(0) can perform a memory access operation on multiple planes in parallel. During such an operation, each switching device in the first set of switches610can be deactivated (i.e., opened) to decouple the secondary plane driver circuits278(0)-278(2) from the corresponding global wordline driver circuits. FIG.7is a flow diagram of an example method of operation of asymmetric plane driver circuits in a multi-plane memory device in a memory sub-system in accordance with some embodiments of the present disclosure. The method700can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method700is performed by local media controller135and asymmetric plane driver circuits150ofFIG.1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At operation705, a memory access command is received. For example, processing logic (e.g., local media controller135) can receive a first memory access command directed to a memory device, such as memory device130. In one embodiment, the first memory access command is a program command, which can be received from a controller, such as memory sub-system controller115, some other component of memory sub-system110, or from an external component, such as host system120. In one embodiment, the program command specifies at least one of a logical or physical address associated with data to be programmed to memory device130. The logical or physical address can correspond to one or more blocks of data to be stored on one or more planes, such as memory planes272(0)-272(3), of a memory array270of the memory device130. In one embodiment, local media controller135can maintain a mapping of memory addresses to each of the memory planes272(0)-272(3). At operation710, routing circuitry is configured. 
For example, the processing logic can configure switching devices within the routing circuitry to couple a primary plane driver circuit, such as primary plane driver circuit274(0), to one or more memory planes, such as planes272(0)-272(3) to perform a program operation corresponding to the program command. In one embodiment, local media controller135, or other control logic, causes a control signal to be applied to one or more switching devices of a second set of switches320in routing circuitry300to activate those switching devices to couple primary plane driver circuit274(0) to one or more of memory planes272(0)-272(3) on which data can be programmed in parallel. Depending on the embodiment, the memory device can include routing circuitry300,400,500,600, or some other routing circuitry, any of which can be configured as described above. At operation715, a program operation is performed. For example, the processing logic can cause primary plane driver circuit274(0) to perform the program operation on one or more of memory planes272(0)-272(3) of memory device130. During the program operation, a program voltage is applied to selected wordlines of the planes of memory device130, in order to program a certain level of charge to the selected memory cells on the wordlines representative of a desired value, which can be specified in the memory access command received at operation705. At operations720and725, memory access commands are received. For example, processing logic (e.g., local media controller135) can receive a second memory access command and a third memory access command directed to the memory device, such as memory device130. In one embodiment, the second and third memory access commands are read commands, which can be received from the controller, such as memory sub-system controller115, some other component of memory sub-system110, or from an external component, such as host system120. In one embodiment, the read commands each specify at least one of a logical or physical address associated with data to be read from memory device130. Each logical or physical address can correspond to one or more blocks of data stored on one or more planes, such as memory planes272(0)-272(3), of a memory array270of the memory device130. In one embodiment, the first read command is associated with (i.e., directed to) a first plane (e.g., plane0272(0)) and the second read command is associated with (i.e., directed to) a second plane (e.g., plane1272(1)) of memory device130. At operation730, routing circuitry is configured. For example, the processing logic can configure switching devices within the routing circuitry to couple one or more secondary plane driver circuits, such as secondary plane driver circuits278(0)-278(2), to corresponding memory planes, such as planes272(1)-272(3) to perform read operations corresponding to the read commands. In one embodiment, local media controller135, or other control logic, causes a control signal to be applied to one or more switching devices of a first set of switches310in routing circuitry300to activate those switching devices to couple secondary plane driver circuit278(0) to memory plane272(1). Primary plane driver circuit274(0) remains directly connected to memory plane272(0). Depending on the embodiment, the memory device can include routing circuitry300,400,500,600, or some other routing circuitry, any of which can be configured as described above. At operation735, read operations are performed.
For example, the processing logic can cause primary plane driver circuit274(0) to perform a first read operation on memory plane272(0) and secondary plane driver circuit278(0) to perform a second read operation on memory plane272(1) concurrently (i.e., at least partially overlapping in time). During the read operations, a read voltage is applied to selected wordlines of the planes of memory device130, in order to determine the level of charge stored at the selected memory cells on the wordlines, where the level of charge is representative of a stored value. In other embodiments, depending on the planes to which the memory access commands received at operations720and725are directed, a different combination of primary plane driver circuit274(0) and/or secondary plane driver circuits278(0)-278(2) can be used. For example, primary plane driver circuit274(0) and any of secondary plane driver circuits278(0)-278(2) can perform concurrent read operations on the memory planes of memory device130, or two or more of secondary plane driver circuits278(0)-278(2) can perform concurrent read operations without involving primary plane driver circuit274(0). At operation740, a memory access command is received. For example, processing logic (e.g., local media controller135) can receive a fourth memory access command directed to a memory device, such as memory device130. In one embodiment, the fourth memory access command is an erase command, which can be received from the controller, such as memory sub-system controller115, some other component of memory sub-system110, or from an external component, such as host system120. In one embodiment, the erase command specifies at least one of a logical or physical address associated with data to be erased from memory device130. The logical or physical address can correspond to one or more blocks of data stored on one or more planes, such as memory planes272(0)-272(3), of a memory array270of the memory device130. At operation745, routing circuitry is configured. For example, the processing logic can configure switching devices within the routing circuitry to couple the primary plane driver circuit, such as primary plane driver circuit274(0), to one or more memory planes, such as planes272(0)-272(3) to perform an erase operation corresponding to the erase command. In one embodiment, local media controller135, or other control logic, causes a control signal to be applied to one or more switching devices of a second set of switches320in routing circuitry300to activate those switching devices to couple primary plane driver circuit274(0) to one or more of memory planes272(0)-272(3) from which data can be erased in parallel. Depending on the embodiment, the memory device can include routing circuitry300,400,500,600, or some other routing circuitry, any of which can be configured as described above. At operation750, an erase operation is performed. For example, the processing logic can cause primary plane driver circuit274(0) to perform the erase operation on one or more of memory planes272(0)-272(3) of memory device130. During the erase operation, erase voltages are applied to memory device130in order to erase the programmed value(s).
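The dispatch logic of method700can be summarized in a sketch. The cmd, routing, primary, and secondaries objects and their methods are invented stand-ins for the control signals and driver circuits described above, not an API from the disclosure.

```python
# Sketch of method 700's dispatch (invented names). Program and erase
# commands go through the primary driver, possibly on several planes in
# parallel; read commands use the per-plane secondary drivers so reads on
# different planes can overlap in time.

def handle_command(cmd, routing, primary, secondaries):
    if cmd.kind in ("program", "erase"):
        # Operations 705-715 and 740-750: open the secondary-driver switches,
        # couple the primary driver to every target plane, then execute.
        routing.open_all_secondary_switches()
        routing.couple_primary(cmd.planes)
        primary.execute(cmd.kind, cmd.planes, cmd.address)
    elif cmd.kind == "read":
        # Operations 720-735: plane 0 reads through the directly connected
        # primary driver; other planes read through their secondary drivers.
        for plane in cmd.planes:
            if plane == 0:
                primary.execute("read", [0], cmd.address)
            else:
                routing.couple_secondary(plane)
                secondaries[plane - 1].execute("read", [plane], cmd.address)
```

FIG.8illustrates an example machine of a computer system800within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed.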
In some embodiments, the computer system800can correspond to a host system (e.g., the host system120ofFIG.1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system110ofFIG.1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to local media controller135ofFIG.1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system800includes a processing device802, a main memory804(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory806(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system818, which communicate with each other via a bus830. Processing device802represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device802can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device802is configured to execute instructions826for performing the operations and steps discussed herein. The computer system800can further include a network interface device808to communicate over the network820. The data storage system818can include a machine-readable storage medium824(also known as a computer-readable medium, such as a non-transitory computer-readable medium) on which is stored one or more sets of instructions826or software embodying any one or more of the methodologies or functions described herein. The instructions826can also reside, completely or at least partially, within the main memory804and/or within the processing device802during execution thereof by the computer system800, the main memory804and the processing device802also constituting machine-readable storage media. The machine-readable storage medium824, data storage system818, and/or main memory804can correspond to the memory sub-system110ofFIG.1. 
In one embodiment, the instructions826include instructions to implement functionality corresponding to local media controller135ofFIG.1. While the machine-readable storage medium824is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below.
In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 67,249 |
11861237 | DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings. FIG.1is a block diagram of a storage device20according to an exemplary embodiment of the present disclosure. Referring toFIG.1, the storage device20according to the embodiment of the present disclosure includes a storage controller100(e.g., a memory controller) and a nonvolatile memory device200. According to some exemplary embodiments, a host10(e.g., a host device) connected to the storage device20may include portable electronic devices such as personal/portable computers, personal digital assistants (PDAs), portable multimedia players (PMPs) and smartphones, high definition televisions (HDTVs), and the like. According to some exemplary embodiments, the storage device20may be implemented as an internal memory embedded in an electronic device and may be, for example, a universal flash storage (UFS) memory device, an embedded multi-media card (eMMC), or a solid state drive (SSD). In some embodiments, the storage device20may be implemented as an external memory that can be inserted into or removed from an electronic device and may be, for example, a UFS memory card, compact flash (CF), secure digital (SD), micro-SD, mini-SD, extreme digital (xD), or a memory stick. The nonvolatile memory device200may be a NAND flash memory, a NOR flash memory, a resistive random access memory (RRAM), a phase-change memory (PRAM), a magnetoresistive random access memory (MRAM), a ferroelectric random access memory (FRAM), or the like. The storage controller100is connected to the host10and the nonvolatile memory device200. The storage controller100is configured to provide an interface between the nonvolatile memory device200and the host10. For example, the storage controller100provides a control signal CMD and an address ADD to the nonvolatile memory device200. For example, the control signal CMD could be based on a request received from the host10, such as a read or write request. In addition, the storage controller100exchanges data with the nonvolatile memory device200. For example, the storage controller100may receive data and a write request from the host10and write the data to the nonvolatile memory device200. For example, the storage controller100may apply a control signal CMD to the nonvolatile memory device200in response to receiving a read request from the host10. The nonvolatile memory device200may then read data in response to the applied control signal CMD and output the read data to the storage controller100. In response to a request from the host10, the storage controller100accesses the nonvolatile memory device200. The storage controller100may control read, write (or program), erase, and background operations of the nonvolatile memory device200. For example, to control the read operation, the storage controller100may transmit a read control signal CMDreadand the address ADD to the nonvolatile memory device200. For example, to control the write operation, the storage controller100may transmit a write control signal CMDwriteand data to be written. For example, to control the erase operation, the storage controller100may transmit an erase control signal CMDeraseand the address ADD. In addition, the storage controller100may perform background operations such as wear leveling, garbage collection and bad block managing on the nonvolatile memory device200.
For example, the wear leveling may include ensuring that no memory block is written more than a certain number of times. For example, the garbage collection may include copying valid pages of several memory blocks to a single memory block and then erasing the several blocks to free up space (see the sketch following this passage). For example, the bad block managing may include keeping track of memory blocks storing codewords that could not be corrected and avoiding use of these memory blocks for future write operations. In some embodiments, the storage controller100may control the nonvolatile memory device200to read data by applying the same read voltage to a selected word line. The nonvolatile memory device200may read stored data using the read voltage having a predetermined threshold voltage and transfer the read data to the storage controller100whenever the data is read. The read data may be transferred to the storage controller100on a page-by-page basis. For example, the nonvolatile memory device200could include a page buffer, overwrite the page buffer with a next page of the read data, output the contents of the page buffer to the storage controller100, and repeat this until all of the read data has been transferred to the storage controller100. In an exemplary embodiment, the storage controller100may access a first state of data stored in a memory cell, perform an operation on a value mapped to the first state, and in-place update the result of the operation to a second state. In the present specification, an in-place update may refer to a case where a memory cell in which data before an operation is stored is the same as a memory cell in which the data after the operation is written (programmed or updated). In an exemplary embodiment, an in-place update overwrites a memory cell that is already storing data without first erasing the memory cell. FIG.2is a block diagram of the storage controller100illustrated inFIG.1. Referring toFIG.2, the storage controller100according to an exemplary embodiment includes a host interface110, a processing unit120, a memory130, a register140, a programmable logic150, and a nonvolatile memory interface160. The elements of the storage controller100are connected to each other through a data bus101. The data bus101may include a plurality of channels. In an exemplary embodiment, the channels may indicate communication paths driven independently of each other and respectively communicate with connected devices based on the same communication method. The host interface110may be connected to the host10. According to an exemplary embodiment, the host interface110may be based on at least one of various interfaces such as double data rate (DDR), low-power DDR (LPDDR), universal serial bus (USB), multimedia card (MMC), peripheral component interconnection (PCI), PCI-express (PCI-E), advanced technology attachment (ATA), serial-ATA (SATA), parallel-ATA (PATA), small computer system interface (SCSI), enhanced small disk interface (ESDI), integrated drive electronics (IDE), mobile industry processor interface (MIPI), nonvolatile memory-express (NVM-e), and universal flash storage (UFS). The processor120may control the operation of each element of the storage controller100and perform operations in response to a write command, a read command, an erase command or other commands related to the operation of the storage device20received from the host10.
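The garbage-collection operation described above can be illustrated with a short sketch. The Python fragment below is an assumed, simplified model (the block and page structures are invented for illustration): valid pages from several source blocks are copied into one free block, and the source blocks are then erased.

```python
# Simplified, assumed garbage-collection sketch: the valid pages of several
# source blocks are copied into a single free block, after which the source
# blocks are erased. The block/page structures are invented for illustration.

def garbage_collect(blocks, src_ids, free_id):
    """blocks: dict of block_id -> list of (data, is_valid) pages."""
    target = blocks[free_id]
    for bid in src_ids:
        # Copy only the still-valid pages into the free block.
        target.extend(page for page in blocks[bid] if page[1])
        blocks[bid] = []    # erase the source block to free up space

blocks = {
    0: [("a", True), ("b", False)],   # "b" is stale
    1: [("c", False), ("d", True)],   # "c" is stale
    2: [],                            # free block
}
garbage_collect(blocks, src_ids=[0, 1], free_id=2)
print(blocks)   # {0: [], 1: [], 2: [('a', True), ('d', True)]}
```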
According to an exemplary embodiment of the inventive concept, the processor120performs a multiplier-accumulator (MAC) operation for a convolution operation necessary for a neural network. In an exemplary embodiment, a MAC operation includes a multiply operation and an addition operation. For example, a first weight of a node of an artificial neural network associated with a first input edge can be multiplied by first data input received through the first edge to generate a first result using a multiplier of the MAC operation, a second weight of the node associated with a second input edge can be multiplied by second data input received through the second edge to generate a second result using the multiplier, and the first and second results can be added together to generate an output of the node using an accumulator of the MAC operation (a short numeric sketch follows this passage). The memory130stores nonvolatile data necessary for the operation of the storage controller100. According to some embodiments, the memory130may include a cache, a read only memory (ROM), a programmable read only memory (PROM), an erasable PROM (EPROM), an electrically erasable programmable read only memory (EEPROM), a phase-change RAM (PRAM), a flash memory, a static RAM (SRAM), or a dynamic RAM (DRAM). The register140may be a working memory that temporarily stores write data received from the host10or read data received from the nonvolatile memory device200and operation results generated during a control operation of the processing unit120. The register140may also be referred to as a buffer memory. The programmable logic150may perform some of the operations performed by the processing unit120. For example, some operations of the processing unit120may be offloaded to the programmable logic150. According to an exemplary embodiment of the inventive concept, the programmable logic150is a programmable logic device (PLD) including a plurality of gate arrays. The PLD may be used to design a digital circuit that performs a specific operation. The nonvolatile memory interface160may also be referred to as a nonvolatile memory controller and may access the nonvolatile memory device200to control the operation of each of a plurality of nonvolatile memories. In an exemplary embodiment, the nonvolatile memory interface160may be connected to the nonvolatile memory device200through at least one channel to write, read or erase data. The nonvolatile memory device200is provided as a storage medium of the storage device20. For example, the nonvolatile memory device200may be configured as a NAND-type flash memory having a large storage capacity. FIG.3is a block diagram illustrating the nonvolatile memory device200ofFIG.1in more detail. Referring toFIG.3, the nonvolatile memory device200according to an exemplary embodiment of the present disclosure includes a memory cell array260, a control logic210, a row decoder250, a page buffer220, and an input/output buffer230. The memory cell array260includes a plurality of memory blocks BLK0 through BLKn−1. Each of the memory blocks BLK0 through BLKn−1 includes a plurality of pages. Each of the pages includes a plurality of memory cells. Each of the memory cells is disposed at an intersection of a word line WL and a bit line BL. The memory cell array260may include a memory cell region corresponding to a main memory and a flag cell region. In the main memory that stores data, each memory cell may be a multi-level cell that stores two or more bits of data. Each memory cell may store a plurality of bits of data.
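A short numeric example may make the MAC operation described above concrete; the weights and inputs below are made-up values, not from the disclosure.

```python
# Multiplier-accumulator (MAC) sketch for a single node; the weights and
# inputs are made-up example values.

def mac(weights, inputs):
    acc = 0.0
    for w, x in zip(weights, inputs):
        acc += w * x        # multiply step feeding the accumulator
    return acc

# Node with two input edges: (first weight * first input) is generated by the
# multiplier, then added to (second weight * second input) by the accumulator.
print(mac([0.5, -0.25], [4.0, 8.0]))   # 0.5*4.0 + (-0.25)*8.0 = 0.0
```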
Each memory cell may include one or more data sets. Each data set may include two or more states to be mapped. According to an exemplary embodiment of the inventive concept, when receiving an operation command from the host10, the storage controller100performs an operation on a value of one of the memory cells in a first state to generate a result and sets the one memory cell to a second state corresponding to the result of the operation to perform an in-place update. Here, the second state may belong to the same memory cell as the first state but may also belong to the same data set as a data set to which the first state belongs or may belong to a data set adjacent to the data set to which the first state belongs. This will be described in detail later with reference toFIGS.7A and7B. According to an exemplary embodiment of the inventive concept, the memory cell array260includes flag cells that store various information about the memory cells. According to some embodiments, a flag cell may include one or more states corresponding to the number of data sets included in a memory cell. For example, if there are three data sets, the flag cell may include four states including an erase state and may be 2 bit-flag data corresponding to the four states. According to an exemplary embodiment of the inventive concept, a flag cell includes degradation information of a plurality of states included in a memory cell. The degradation information flag cell may be flag data of one or more bits. The control logic210controls the overall operation of the nonvolatile memory device200. The control logic210may be configured to control a high voltage generator240. That is, the control logic210may control the high voltage generator240to generate high voltages necessary for write, read and erase operations in response to the control signal CMD from the storage controller100(seeFIG.1). For example, during a read operation, the control logic210applies a read voltage Vrd and a read pass voltage Vread to the memory cell array260through the row decoder250. In addition, the control logic210transfers the address ADD received from the storage controller100(seeFIG.1) to each of the row decoder250and the page buffer220. The page buffer220operates as a write driver or a sense amplifier depending on operation mode. For example, the page buffer220operates as a sense amplifier during a read operation. During a read operation, the page buffer220receives one page of data from the memory cell array260. Specifically, the page buffer220receives one page of least significant bit (LSB) data or most significant bit (MSB) data corresponding to a page address from the memory cell array260. The input/output buffer230is configured to exchange data with an external device. Data received from the external device is transferred to the page buffer220through data lines DL. Data received from the page buffer220is output to the external device (e.g., the storage controller100). For example, the input/output buffer230may transfer read data to the storage controller100. For example, the input/output buffer230may include a well-known element such as a data buffer. A first memory block BLK0 in the memory cell array260includes a plurality of memory cell groups (not illustrated). Memory cells disposed in one row may form one memory cell group (e.g., a page). In addition, the memory cell group may be connected to one of the word lines WL. For example, the first memory block BLK0 may include first through mthpages Page 1 through Page M. 
Each of the pages Page 1 through Page M may include first through kthsectors sector 1 through sector k. Each of the sectors sector 1 through sector k includes a plurality of memory cells sharing one word line (not illustrated). Each of the memory cells may be a multi-level cell that stores a plurality of bits. Although only the first memory block BLK0 is illustrated inFIG.3, all of the first through nthmemory blocks BLK0 through BLKn−1 may be configured identically. In the case of a NAND flash memory, read and write operations are performed on a cell-by-cell basis in an in-place update method. FIG.4illustrates various methods of mapping dispersions of memory cells.FIGS.5and6illustrate linear mapping values of a memory cell according to an exemplary embodiment of the inventive concept. In the illustrated embodiments, it is assumed that one memory cell is a quadruple-level cell (QLC) capable of storing 4 bits. However, this is merely an exemplary embodiment, and the present disclosure is not limited thereto. Embodiments of the present disclosure are applicable to any multi-level cell that stores two or more bits of data. Referring toFIG.4, when a memory cell is a 4-bit multi-level cell, the memory cell may be in one of an erase state E and first through fifteenth states P1 through P15 and may be programmed (written) to any one of the states. The horizontal axis ofFIG.4represents threshold voltages of memory cells, and the states may be divided by threshold voltages RP1 through RP15, respectively. That is, results of programming in the erase state E and the first through fifteenth states P1 through P15 during a program operation may be divided by sequentially applying the threshold voltages RP1 through RP15 to a selected word line. When a first read voltage RP1 is applied to a control gate of a memory cell, the memory cell is turned on if it is in the erase state E, but is turned off if it is in the first state P1. When the memory cell is turned on, a current flows through the memory cell. When the memory cell is turned off, no current flows through the memory cell. Therefore, data stored in a memory cell may be distinguished according to whether the memory cell is turned on. Logic level allocation of data may vary according to embodiments. According to some embodiments, when a memory cell is turned on by applying the first read voltage RP1, data ‘1’ may be stored, and when the memory cell is turned off, ‘0’ may be stored. Alternatively, according to an embodiment, when a memory cell is turned on, data ‘0’ may be stored, and when the memory cell is turned off, ‘1’ may be stored. In the case of a QLC, data including 4 bits may be allocated to each state as illustrated in the drawing. In an embodiment, data ‘1111’ may be allocated to the erase state E, and data ‘1110’ may be allocated to the first state P1. However, data allocated to each state is not limited to the illustrated example and can be changed and then allocated accordingly. A plurality of bit pages may be included for data access to a multi-level cell. A QLC may be divided into four bit pages1P through4P to output data. The bit pages1P through4P may output data through page buffers U, M, L and F, respectively. States of one memory cell may be expressed using various linear mapping methods. A value mapped to each state is a linear value. That is, the erase state E and the first through nthstates P1 through Pn may have values that sequentially and constantly increase. 
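The turn-on test described above amounts to a threshold search. The sketch below is an assumed software model, not the device's actual sensing circuitry: it counts how many of the read voltages RP1 through RP15 fail to turn the cell on, which identifies the programmed state. The voltage values are invented, evenly spaced placeholders.

```python
# Illustrative state discrimination for a 4-bit cell: the read voltages
# RP1..RP15 are applied in sequence, and the number of voltages at which the
# cell stays off identifies its state. Voltages are invented placeholders.

READ_VOLTAGES = [i * 0.4 for i in range(1, 16)]   # stand-ins for RP1..RP15

def read_state(cell_vth):
    """Return 0 for the erase state E, or n for state Pn."""
    # The cell stays off (no current flows) whenever its threshold voltage
    # is at or above the applied read voltage.
    return sum(cell_vth >= rp for rp in READ_VOLTAGES)

print(read_state(0.1))   # 0 -> erase state E (turns on even at RP1)
print(read_state(2.1))   # 5 -> state P5
```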
Alternatively, the memory cell may be linearly mapped in a way that includes one or more data sets. Each of the data sets may include two or more states. The storage controller100performs a write operation in a direction from a low threshold voltage to a high threshold voltage. Therefore, the storage controller100may store the result of the operation only in a right direction. That is, assuming that a value before the operation is stored in a first state and a value after the operation is stored in a second state, the second state may have a greater threshold voltage than the first state. In other words, the second state may be disposed at a position shifted to the right of the first state. The second state is not shifted to the left due to the nature of the write (program) operation of a nonvolatile memory. Each of a plurality of states of a memory cell may be mapped to a decimal number, a hexadecimal number, or a value of some other number system according to some embodiments. The states of a memory cell illustrated inFIG.4may be respectively linearly mapped to values of 0 to 15 as decimal numbers (DEC) according to some embodiments or may be respectively linearly mapped to values of 0 to 9 and A to F as hexadecimal numbers (HEX) according to some embodiments. In this case, since the values (0 to 15) mapped to the decimal numbers or the values mapped to the hexadecimal numbers do not overlap each other, they may refer to one data set. Alternatively, referring to an embodiment illustrated inFIG.5, a memory cell may be linearly mapped to eight data sets, each including two states mapped to 0 and 1, respectively. The values of 0 and 1 in a data set are linear values, and set numbers 0 through 7 of data sets 0 through 7 may also increase linearly with each other. Alternatively, referring to an embodiment illustrated inFIG.6, the fifteen states of a memory cell excluding the erase state E may be mapped to three data sets, each including five linear values of −2 through 2. In this case, since the erase state E not included in the data sets is a state before the memory cell is programmed, it may be mapped to Invalid. Each data set ofFIG.6includes five values. Values respectively mapped to states included in one data set may be variously set. For example, while the values are set to −2, −1, 0, 1 and 2 inFIG.6, a data set could instead be set to 0, 1, 2, 3 and 4. However, the present disclosure is not limited thereto, and the values may also be mapped to other linear values required according to system design. A storage device according to an exemplary embodiment of the present disclosure can perform a write operation on a nonvolatile memory on a cell-by-cell basis by mapping a plurality of data sets having a plurality of values to a plurality of states of a memory cell and performing an operation based on the mapped values. FIGS.7A and7Bare diagrams for explaining a method of operating a storage device according to an exemplary embodiment of the inventive concept. Referring toFIG.7A, in the illustrated example, values V in one data set may be set to −2, −1, 0, 1 and 2 and may be respectively linearly mapped to states P except for an erase state. For example, data set 0, data set 1 and data set 2 may be sequentially mapped to states P1 through P15 in this direction. Specifically, data set 0 is mapped to states P1 through P5, data set 1 is mapped to states P6 through P10, and data set 2 is mapped to states P11 through P15.
In addition, the values V of −2, −1, 0, 1 and 2 are respectively mapped to states P1 through P5, states P6 through P10, or states P11 through P15 in each data set. The storage device first starts an operation in state P1. For example, a memory cell is in state P1 prior to starting the operation and state P1 is associated with data set 0. A current value mapped to state P1 is −2. For example, the memory cell could be interpreted as storing a value of −2 prior to the operation. Adding (+) 4 in state P1 results in a value of 2, causing a shift to state P5 mapped to the value of 2. For example, if the operation includes adding 4 to the value stored in the memory cell, a first voltage could be applied to the memory cell through a bit line to set the memory cell to state P5, which is associated with data set 0. Subtracting (−) 3 from the value of 2 in a next operation results in a value of −1. For example, if the operation further includes subtracting 3 from the value of 2 stored in the memory cell, a second voltage higher than the first voltage could be applied to the memory cell through a bit line to set the memory cell to state P7, which is associated with data set 1. Due to the programming nature of a nonvolatile memory, the storage device shifts to state P7 of a next adjacent data set 1 instead of shifting to state P2 mapped to the value of −1. If the value of −1 is multiplied by 2 in a next operation, the current value becomes −2. The storage device shifts to state P11 of a next adjacent data set 2 mapped to the current value of −2. For example, if the operation further includes multiplying 2 by the value of −1 stored in the memory cell, a third voltage higher than the second voltage could be applied to the memory cell through a bit line to set the memory cell to state P11, which is associated with data set 2. If the value of −2 is divided by 2 in a next operation, the current value becomes −1. Since values in the same data set are mapped in an increasing direction, the storage device shifts to state P12. For example, if the operation further includes dividing the value of −2 stored in the memory cell by 2, a fourth voltage higher than the third voltage could be applied to the memory cell through a bit line to set the memory cell to state P12, which is associated with data set 2. Referring toFIG.7B, in the illustrated example, values V in one data set may be set to 0, 1, 2, 3 and 4 and may be respectively linearly mapped to states P except for an erase state. For example, data set 0, data set 1 and data set 2 may be sequentially mapped to states P1 through P15 in this direction. Specifically, data set 0 may be mapped to states P1 through P5, data set 1 may be mapped to states P6 through P10, and data set 2 may be mapped to states P11 through P15. In addition, the values V of 0, 1, 2, 3 and 4 may be respectively mapped to states P1 through P5, states P6 through P10, or states P11 through P15 in each data set. The storage device first starts an operation in state P1. A current value mapped to state P1 is 0. Adding (+) 4 in state P1 results in a value of 4, causing a shift to state P5 mapped to the value of 4. Subtracting (−) 3 from the value of 4 in a next operation results in a value of 1. Due to the programming nature of a nonvolatile memory, the storage device shifts to state P7 of a next adjacent data set 1 instead of shifting to state P2 mapped to the value of 1. If the value of 1 is multiplied by 2 in a next operation, the current value becomes 2.
The storage device shifts to state P8 mapped to the current value of 2. If the value of 2 is divided by 2 in a next operation, the current value becomes 1. Since the result value of 1 cannot be reached in the increasing direction within the same data set, the storage device shifts to state P12 of the next adjacent data set 2. That is, the storage device may perform an operation on a value mapped to each state, check position information of a second state to which a result value of the operation is mapped, and update the position information of the second state. Here, it is checked whether the second state to which the result value is mapped lies in the increasing direction from the first state. Then, the result value is written (updated or overwritten) to the second state. In an exemplary embodiment, when the control logic210is instructed to update a memory cell from a first value to a second value, the control logic210determines a current data set associated with the memory cell, determines a current state of the memory cell within the current data set (e.g., determines the current position of the current state within the current data set), determines whether the memory cell can be shifted right from the current state (e.g., current position) to a new first state (e.g., a new first position) within the current data set to set the memory cell to the second value, and applies an appropriate voltage to the memory cell to set the memory cell to the new first state upon determining that the memory cell can be shifted right to the new first state. For example, if the memory cell is currently in state P1 and is to be updated to −1, since the current data set is 0 and a shift right from state P1 within data set 0 is capable of reaching state P2 having a value of −1, the control logic210can apply an appropriate voltage to the memory cell to set the memory cell to state P2. If the control logic210determines that the memory cell cannot be shifted right to the new first state within the current data set to set the memory cell to the second value, the control logic210determines a new second state (e.g., a new second position) within the next data set to set the memory cell to the second value and applies an appropriate voltage to the memory cell to set the memory cell to the new second state. For example, if the memory cell is in state P2 and is to be updated to −2, since the current data set is 0 and a shift right within data set 0 is not capable of reaching a value of −2, the control logic210determines that state P6 within the next data set 1 allows the memory cell to be set to −2, and thus the control logic210can apply an appropriate voltage to the memory cell to set the memory cell to state P6.
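The decision procedure just described, shift right within the current data set when possible and otherwise move to the next data set, can be captured in a few lines. The sketch below is one interpretation in Python under the FIG.7A mapping (three data sets of the values −2 through 2 over states P1 through P15); the helper names are assumptions, not taken from the disclosure.

```python
# In-place update sketch under the FIG. 7A mapping: states P1..P15 hold three
# data sets (0, 1, 2), each mapping the values -2..2 in increasing order.
# A cell can only be programmed "to the right" (toward higher states), so a
# new value is written either within the current data set (if its state lies
# to the right) or in the next data set. No handling is included for running
# past data set 2; a real device would need an erase at that point.

VALUES = [-2, -1, 0, 1, 2]           # linear values within one data set

def state_for(data_set, value):
    """State index P1..P15 for a (data set, value) pair."""
    return data_set * 5 + VALUES.index(value) + 1

def in_place_update(state, new_value):
    """Return the state to program so the cell now stores new_value."""
    data_set = (state - 1) // 5
    candidate = state_for(data_set, new_value)
    if candidate > state:            # reachable by shifting right in-set
        return candidate
    return state_for(data_set + 1, new_value)   # spill into the next set

# Walkthrough from FIG. 7A: start at P1 (value -2), then apply +4, -3, *2, /2.
state = 1
for value in (2, -1, -2, -1):        # successive operation results
    state = in_place_update(state, value)
    print("P%d" % state)             # prints P5, P7, P11, P12
```

FIG.8illustrates a memory cell and a flag cell of a storage device according to an exemplary embodiment of the inventive concept.FIG.9is a diagram for explaining a method of operating the storage device illustrated inFIG.8according to an exemplary embodiment of the inventive concept. Referring toFIG.8, a nonvolatile memory device may include a memory cell region and a flag cell region. The flag cell region may store state information corresponding to each memory cell. A flag cell may be a multi-level cell according to some embodiments. According to some embodiments, the state information may include at least one of operation information and degradation information of each memory cell. The operation information is information indicating whether a state storing a current value is included in a data set belonging to a memory cell.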
More specifically, the operation information is information indicating whether the current value has passed through the data set. The operation information flag cell may include a number of bits corresponding to the number of data sets included in a memory cell. In the case of a QLC illustrated in the drawing, since there are three data sets, the operation information flag cell may be 2 bits. Specifically, data set 0 may correspond to flag cell state P1, data set 1 may correspond to flag cell state P2, and data set 2 may correspond to flag cell state P3. The degradation information is information indicating the degree to which each memory cell has been degraded according to its operation. That is, the degradation information is log information about the progress of degradation and may include, for example, a program/erase (PE) cycle, a read count, a retention time, a program time and a program/read temperature, but the degradation information according to the current embodiment is not limited thereto. In an embodiment, a flag cell for the PE cycle may be one or more bits. In a memory cell, a data set to which a state belongs in which the number of PE cycles exceeds a predetermined value (>A) may be determined to be invalid, and the other data sets may be determined to be valid. In this state, the memory cell may be driven. Flag cell states when operations are performed inFIG.8will now be described in more detail. First, for the current value of −2 (P1), only flag cell state P1 corresponding to data set 0 to which state P1 of a memory cell (QLC) belongs is updated to 1, and flag cell states P2 and P3 are updated to 0. In a first operation, if 4 is added to the current value of −2 (P1), the current value becomes 2 (P5). In consideration of the aging of the memory cell, flag cell states P1 and P2 corresponding to data set 0 and data set 1 before and after memory cell state P5 are updated to 1, and flag cell state P3 is updated to 0. In a second operation, if 3 is subtracted from the current value of 2 (P5), the current value becomes −1 (P7), flag cell states P1 and P2 corresponding to data set 1 to which memory cell state P7 belongs are updated to 1, and flag cell state P3 is updated to 0. In a third operation, if the current value of −1 (P7) is multiplied by 2, the current value becomes −2 (P11), and flag cell states P1, P2 and P3 corresponding to data set 2 to which memory cell state P11 belongs are updated to 1. In a fourth operation, if the current value of −2 (P11) is divided by 2, the current value becomes −1 (P12), and flag cell states P1, P2 and P3 corresponding to data set 2 to which memory cell state P12 belongs are updated to 1. Although the state information of the flag cell illustrated inFIG.8is 1 when a data set is activated and 0 when the data set is not activated, it may also be mapped differently according to other embodiments. That is, the flag cell may determine the range of read voltages in a read operation on a nonvolatile memory. When the storage device accesses the current value to perform an operation, it only has to check the flag cell and then apply a threshold voltage corresponding to the data set indicated by the activated flag cell states (for example, data set 1 when flag cell state P2 is the highest activated state). Therefore, the efficiency of the read operation can be improved.
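One reading of the flag behavior in FIG.8 is that a flag bit is activated for every data set the cell's state has passed through, plus the next data set when the state sits on a set boundary (the aging consideration noted for state P5). The sketch below encodes that interpretation; it is an assumption reconstructed from the worked example, not a definitive rule.

```python
# Assumed flag-update rule reconstructed from the FIG. 8 walkthrough:
# activate a flag bit for every data set at or below the cell's current data
# set, and also for the next data set when the state is the last state of
# its set (accounting for cell aging at the set boundary).

def flag_bits(state, set_size=5, num_sets=3):
    data_set = (state - 1) // set_size
    flags = [1 if ds <= data_set else 0 for ds in range(num_sets)]
    last_of_set = state % set_size == 0          # states P5, P10, P15
    if last_of_set and data_set + 1 < num_sets:
        flags[data_set + 1] = 1                  # pre-activate the next set
    return flags

for st in (1, 5, 7, 11, 12):
    print("P%-2d -> flags %s" % (st, flag_bits(st)))
# P1 -> [1,0,0]; P5 -> [1,1,0]; P7 -> [1,1,0]; P11 -> [1,1,1]; P12 -> [1,1,1]
```

FIGS.10through12are flowcharts illustrating a method of operating a storage device according to an exemplary embodiment of the inventive concept.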
The storage device may perform an operation based on a value mapped to each state of a memory cell and write (or overwrite) the result of the operation to the memory cell. That is, the storage device may write data to a nonvolatile memory device on a cell-by-cell basis, in other words, may in-place update data to a corresponding memory cell. Referring toFIG.10, in the case of a write operation, when the storage device performs an operation on a current value of a memory cell in a first state to generate a result and receives a write (program) command for writing the result of the operation (operation S10), the storage device performs a write operation on the memory cell to set the memory cell to a second state corresponding to the result of the operation (operation S11). The storage device updates a state of a flag cell corresponding to a data set to which the second state belongs (operation S12). Referring toFIG.11, in the case of a read operation, when the storage device intends to read a current value of a memory cell in a first state, the storage device first reads a flag cell corresponding to a memory cell indicated by an address received together with a read command (operation S20). After checking data set activation information in the flag cell, the storage device accesses the memory cell by applying a read voltage within a threshold voltage range of an activated data set (operations S21through S21n). For example, if the threshold voltage range includes several read voltages, the storage device may apply one or more of these voltages to the memory cell. Referring toFIG.12, in the case of an erase operation, the storage device sets a memory cell to an erase state E. Therefore, the storage device erases a memory cell (operation S31) while resetting a flag cell corresponding to the memory cell to the erase state E. That is, according to the embodiment ofFIGS.10through12, when the storage device performs a read, write, or erase operation on a memory cell, it may update information about the operation to a corresponding flag cell.
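Under the same assumptions as the previous sketches, the read flow of FIG.11 can be illustrated as follows: the highest activated flag identifies the data set holding the current value, so only that set's read voltages need to be applied. The voltage values remain invented placeholders.

```python
# Flag-guided read sketch: instead of sweeping all fifteen read voltages,
# consult the flag bits first and sweep only the range of the highest
# activated data set. Voltage values are invented placeholders.

READ_VOLTAGES = [i * 0.4 for i in range(1, 16)]   # stand-ins for RP1..RP15

def flag_guided_read(cell_vth, flags, set_size=5):
    active = max(ds for ds, bit in enumerate(flags) if bit)
    lo, hi = active * set_size, (active + 1) * set_size
    # Count how many read voltages within the active set leave the cell off;
    # at most five voltages are applied instead of fifteen.
    offs = sum(cell_vth >= rp for rp in READ_VOLTAGES[lo:hi])
    return lo + offs                  # state index P(lo + offs)

# Cell in state P12 (threshold between RP12 and RP13), flags mark data set 2.
print("P%d" % flag_guided_read(4.9, [1, 1, 1]))   # P12
```

FIG.13is a diagram for explaining a method of operating a storage device when a nonvolatile memory device is degraded according to an exemplary embodiment of the inventive concept. Referring toFIG.13, degradation information may be stored in a flag cell region. According to some embodiments, the degradation information may be stored in a single-level cell or a multi-level cell. As for the PE cycle as an embodiment of the degradation information, a flag cell corresponding to the PE cycle may have a corresponding bit for each data set. The degradation information flag cell may be stored as 0 when the number of PE cycles for a memory cell is less than a predetermined number A and may be written as 1 when the number of PE cycles exceeds the predetermined number A. In the illustrated embodiment, when the number of PE cycles for data set 2 exceeds the predetermined number A, the storage device writes 1 to the flag cell as the degradation information for data set 2. The storage device reads the degradation information flag cell first and then accesses a corresponding memory cell to perform a read, write or erase operation. For example, in a read operation, the storage device may disable all states P11 through P15 belonging to data set 2 based on the degradation information flag cell and re-map threshold voltages and data sets to the other states E and P1 through P10.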
Since a gap between the threshold voltages is very small in the case of a multi-level cell, if the threshold voltages are mapped to the remaining states except for the disabled states, the gap between the threshold voltages may be increased, thereby improving read reliability (W1<W2). FIG.14illustrates an example in which a storage device according to an exemplary embodiment of the inventive concept is applied to a neural network device1000. Referring toFIG.14, the neural network device1000may be implemented as various types of devices such as a personal computer, a server device, a mobile device, and an embedded device. Specifically, the neural network device1000may be, but is not limited to, a smartphone, a tablet device, an augmented reality (AR) device, an Internet of things (IoT) device, an autonomous vehicle, robotics, or a medical device that performs voice recognition, image recognition, image classification, etc. using a neural network. Further, the neural network device1000may be, but is not limited to, a dedicated hardware accelerator mounted on the above devices or a hardware accelerator such as a neural processing unit (NPU), a tensor processing unit (TPU) or a neural engine that is a dedicated module for driving the neural network. The neural network device1000includes a processor1120and a memory1110. In the neural network device1000, only the elements related to the current embodiments are illustrated. Therefore, one of ordinary skill in the art would understand that other general-purpose elements can be included in addition to the elements illustrated inFIG.14. The processor1120controls overall functions for executing the neural network device1000. For example, the processor1120controls the overall operation of the neural network device1000by executing programs stored in the memory1110of the neural network device1000. The processor1120may be implemented as a central processing unit (CPU), a graphics processing unit (GPU), or an application processor (AP) included in the neural network device1000, but the present disclosure is not limited thereto. The memory1110is hardware that stores various data processed in the neural network device1000. For example, the memory1110may store data processed by the neural network device1000and data to be processed. In addition, the memory1110may store applications, drivers, etc. to be driven by the neural network device1000. According to some embodiments, the memory1110may be the nonvolatile memory device200illustrated inFIG.1. According to an embodiment, the memory1110as a nonvolatile memory may include a random access memory (RAM) such as a DRAM or an SRAM, a ROM, an EEPROM, a CD-ROM, a Blu-ray or other optical disk storage, a hard disk drive (HDD), an SSD, or a flash memory. The processor1120may read/write neural network data such as image data, feature map data or kernel data from/to the memory1110and execute the neural network (e.g., an artificial neural network) using the read/written data. When the neural network is executed, the processor1120may repeatedly perform a convolution operation between an input feature map and a kernel in order to generate data about an output feature map. The processor1120may operate similarly to the storage controller100illustrated inFIG.1. 
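Because the convolution between an input feature map and a kernel is itself a nest of MAC operations, a compact example may be useful here. The sketch below performs a valid 2-D convolution (implemented as correlation, as is common in CNN frameworks) over a small feature map; the sizes and values are assumptions for illustration.

```python
# 2-D convolution of an input feature map with a kernel, expressed as nested
# MAC operations (example sizes and values are assumptions).

def conv2d(fmap, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(fmap) - kh + 1, len(fmap[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0
            for u in range(kh):           # multiply-accumulate over the
                for v in range(kw):       # kernel window
                    acc += fmap[i + u][j + v] * kernel[u][v]
            out[i][j] = acc
    return out

fmap = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
kernel = [[1, 0],
          [0, -1]]
print(conv2d(fmap, kernel))   # [[-4, -4], [-4, -4]]
```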
The processor1120may perform a very large number (operation count) of convolution operations ranging from hundreds of millions to tens of billions, and the number of times that the processor1120accesses the memory1110to perform a convolution operation may increase dramatically. The neural network device1000according to an embodiment may include neuromorphic hardware. The neuromorphic hardware may be disposed separately from the memory1110or may be part of the memory1110according to some embodiments. The neuromorphic hardware may perform convolutional neural network (CNN) mapping. The neuromorphic hardware may perform an operation using only an on-chip memory without using an external memory. For example, the neuromorphic hardware may perform CNN mapping using only an on-chip memory without using an external memory (e.g., an off-chip memory). Therefore, it may perform an operation without a memory update during image processing. FIG.15is a cross-section diagram of a non-volatile memory device according to an exemplary embodiment of the inventive concept. Referring toFIG.15, a memory device400may have a chip-to-chip (C2C) structure. The memory device400is one exemplary embodiment of the memory device described with reference toFIG.1andFIG.3. The C2C structure may refer to a structure formed by manufacturing an upper chip including a cell region CELL on a first wafer, manufacturing a lower chip including a peripheral circuit region PERI on a second wafer, different from the first wafer, and then connecting the upper chip and the lower chip in a bonding manner. For example, the bonding manner may include a method of electrically connecting a bonding metal formed on an uppermost metal layer of the upper chip and a bonding metal formed on an uppermost metal layer of the lower chip. For example, when the bonding metals are formed of copper (Cu), the bonding manner may be a Cu—Cu bonding, and the bonding metals may also be formed of aluminum or tungsten. Each of the peripheral circuit region PERI and the cell region CELL of the memory device400may include an external pad bonding area PA, a word line bonding area WLBA, and a bit line bonding area BLBA. The peripheral circuit region PERI may include a first substrate210, an interlayer insulating layer215, a plurality of circuit elements220a,220b, and220cformed on the first substrate210, first metal layers230a,230b, and230crespectively connected to the plurality of circuit elements220a,220b, and220c, and second metal layers240a,240b, and240cformed on the first metal layers230a,230b, and230c. In an example embodiment, the first metal layers230a,230b, and230cmay be formed of tungsten having relatively high resistance, and the second metal layers240a,240b, and240cmay be formed of copper having relatively low resistance. In an example embodiment illustrated inFIG.15, although the first metal layers230a,230b, and230cand the second metal layers240a,240b, and240care shown and described, they are not limited thereto, and one or more metal layers may be further formed on the second metal layers240a,240b, and240c. At least a portion of the one or more metal layers formed on the second metal layers240a,240b, and240cmay be formed of aluminum or the like having a lower resistance than those of copper forming the second metal layers240a,240b, and240c. The interlayer insulating layer215may be disposed on the first substrate210and cover the plurality of circuit elements220a,220b, and220c, the first metal layers230a,230b, and230c, and the second metal layers240a,240b, and240c.
The interlayer insulating layer215may include an insulating material such as silicon oxide, silicon nitride, or the like. Lower bonding metals271band272bmay be formed on the second metal layer240bin the word line bonding area WLBA. In the word line bonding area WLBA, the lower bonding metals271band272bin the peripheral circuit region PERI may be electrically connected to upper bonding metals371band372bin the cell region CELL in a bonding manner, and the lower bonding metals271band272band the upper bonding metals371band372bmay be formed of aluminum, copper, tungsten, or the like. The cell region CELL may include at least one memory block. The cell region CELL may include a second substrate310and a common source line320. On the second substrate310, a plurality of word lines331to338(i.e.,330) may be stacked in a direction (a Z-axis direction), perpendicular to an upper surface of the second substrate310. At least one string select line and at least one ground select line may be arranged on and below the plurality of word lines330, respectively, and the plurality of word lines330may be disposed between the at least one string select line and the at least one ground select line. In the bit line bonding area BLBA, a channel structure CH may extend in a direction, perpendicular to the upper surface of the second substrate310, and pass through the plurality of word lines330, the at least one string select line, and the at least one ground select line. The channel structure CH may include a data storage layer, a channel layer, a buried insulating layer, and the like, and the channel layer may be electrically connected to a first metal layer350cand a second metal layer360c. For example, the first metal layer350cmay be a bit line contact, and the second metal layer360cmay be a bit line. In an example embodiment, the bit line360cmay extend in a first direction (a Y-axis direction), parallel to the upper surface of the second substrate310. In an example embodiment illustrated inFIG.15, an area in which the channel structure CH, the bit line360c, and the like are disposed may be defined as the bit line bonding area BLBA. In the bit line bonding area BLBA, the bit line360cmay be electrically connected to the circuit elements220cproviding a page buffer393in the peripheral circuit region PERI. For example, the bit line360cmay be connected to upper bonding metals371cand372cin the cell region CELL, and the upper bonding metals371cand372cmay be connected to lower bonding metals271cand272cconnected to the circuit elements220cof the page buffer393. In the word line bonding area WLBA, the plurality of word lines330may extend in a second direction (an X-axis direction), parallel to the upper surface of the second substrate310, and may be connected to a plurality of cell contact plugs341to347(i.e.,340). The plurality of word lines330and the plurality of cell contact plugs340may be connected to each other in pads provided by at least a portion of the plurality of word lines330extending in different lengths in the second direction. A first metal layer350band a second metal layer360bmay be connected to an upper portion of the plurality of cell contact plugs340connected to the plurality of word lines330, sequentially. The plurality of cell contact plugs340may be connected to the circuit region PERI by the upper bonding metals371band372bof the cell region CELL and the lower bonding metals271band272bof the peripheral circuit region PERI in the word line bonding area WLBA. 
The plurality of cell contact plugs340may be electrically connected to the circuit elements220bproviding a row decoder394in the peripheral circuit region PERI. In an example embodiment, operating voltages of the circuit elements220bproviding the row decoder394may be different than operating voltages of the circuit elements220cproviding the page buffer393. For example, operating voltages of the circuit elements220cproviding the page buffer393may be greater than operating voltages of the circuit elements220bproviding the row decoder394. A common source line contact plug380may be disposed in the external pad bonding area PA. The common source line contact plug380may be formed of a conductive material such as a metal, a metal compound, polysilicon, or the like, and may be electrically connected to the common source line320. A first metal layer350aand a second metal layer360amay be stacked on an upper portion of the common source line contact plug380, sequentially. For example, an area in which the common source line contact plug380, the first metal layer350a, and the second metal layer360aare disposed may be defined as the external pad bonding area PA. Input-output pads205and305may be disposed in the external pad bonding area PA. Referring toFIG.15, a lower insulating film201covering a lower surface of the first substrate210may be formed below the first substrate210, and a first input-output pad205may be formed on the lower insulating film201. The first input-output pad205may be connected to at least one of the plurality of circuit elements220a,220b, and220cdisposed in the peripheral circuit region PERI through a first input-output contact plug203, and may be separated from the first substrate210by the lower insulating film201. In addition, a side insulating film may be disposed between the first input-output contact plug203and the first substrate210to electrically separate the first input-output contact plug203and the first substrate210. Referring toFIG.15, an upper insulating film301covering the upper surface of the second substrate310may be formed on the second substrate310, and a second input-output pad305may be disposed on the upper insulating layer301. The second input-output pad305may be connected to at least one of the plurality of circuit elements220a,220b, and220cdisposed in the peripheral circuit region PERI through a second input-output contact plug303. According to embodiments, the second substrate310and the common source line320may not be disposed in an area in which the second input-output contact plug303is disposed. Also, the second input-output pad305may not overlap the word lines330in the third direction (the Z-axis direction). Referring toFIG.15, the second input-output contact plug303may be separated from the second substrate310in a direction, parallel to the upper surface of the second substrate310, and may pass through the interlayer insulating layer315of the cell region CELL to be connected to the second input-output pad305. According to embodiments, the first input-output pad205and the second input-output pad305may be selectively formed. For example, the memory device400may include only the first input-output pad205disposed on the first substrate210or the second input-output pad305disposed on the second substrate310. Alternatively, the memory device400may include both the first input-output pad205and the second input-output pad305. 
A metal pattern in an uppermost metal layer may be provided as a dummy pattern or the uppermost metal layer may be absent, in each of the external pad bonding area PA and the bit line bonding area BLBA, respectively included in the cell region CELL and the peripheral circuit region PERI. In the external pad bonding area PA, the memory device400may include a lower metal pattern273a, corresponding to an upper metal pattern372aformed in an uppermost metal layer of the cell region CELL, and having the same shape as the upper metal pattern372aof the cell region CELL, in an uppermost metal layer of the peripheral circuit region PERI. In the peripheral circuit region PERI, the lower metal pattern273aformed in the uppermost metal layer of the peripheral circuit region PERI may not be connected to a contact. Similarly, in the external pad bonding area PA, an upper metal pattern, corresponding to the lower metal pattern formed in an uppermost metal layer of the peripheral circuit region PERI, and having the same shape as a lower metal pattern of the peripheral circuit region PERI, may be formed in an uppermost metal layer of the cell region CELL. The lower bonding metals271band272bmay be formed on the second metal layer240bin the word line bonding area WLBA. In the word line bonding area WLBA, the lower bonding metals271band272bof the peripheral circuit region PERI may be electrically connected to the upper bonding metals371band372bof the cell region CELL by a Cu—Cu bonding. Further, in the bit line bonding area BLBA, an upper metal pattern392, corresponding to a lower metal pattern252formed in the uppermost metal layer of the peripheral circuit region PERI, and having the same shape as the lower metal pattern252of the peripheral circuit region PERI, may be formed in an uppermost metal layer of the cell region CELL. A contact may not be formed on the upper metal pattern392formed in the uppermost metal layer of the cell region CELL. In an example embodiment, corresponding to a metal pattern formed in an uppermost metal layer in one of the cell region CELL and the peripheral circuit region PERI, a reinforcement metal pattern having the same shape as the metal pattern may be formed in an uppermost metal layer in another one of the cell region CELL and the peripheral circuit region PERI, and a contact may not be formed on the reinforcement metal pattern. In concluding the detailed description, those skilled in the art will appreciate that many variations and modifications may be made to these exemplary embodiments without substantially departing from the principles of the present inventive concept.
11861238 | DETAILED DESCRIPTION Below, embodiments of the inventive concept are described in detail and clearly to such an extent that one of ordinary skill in the art can implement the inventive concept. FIG.1is a block diagram illustrating a storage system according to an embodiment of the inventive concept. Referring toFIG.1, a storage system10may include a plurality of hosts11to1nand a storage device100. In an embodiment, the storage system10may include at least one of various information processing devices such as a personal computer, a laptop computer, a server, a workstation, a smartphone, and a tablet PC. Each of the plurality of hosts11to1nmay be configured to access the storage device100. In an exemplary embodiment, the plurality of hosts11to1nmay be different computing nodes configured to operate independently of each other. In an exemplary embodiment, each of the plurality of hosts11to1nmay be a single processor or a multi-core processor included in the corresponding computing node (or computing system). Alternatively, at least some of the plurality of hosts11to1nmay be different processors included in the same computing node (or computing system). Alternatively, the plurality of hosts11to1nmay be processes configured to process different applications. Alternatively, the plurality of hosts11to1nmay be virtual machines running on a computing node. The storage device100may operate under control of each of the plurality of hosts11to1n. For example, the storage device100may include a storage controller110and a nonvolatile memory device120. Under control of each of the plurality of hosts11to1n, the storage controller110may store data in the nonvolatile memory device120or may provide data stored in the nonvolatile memory device120to each of the plurality of hosts11to1n. In an exemplary embodiment, the plurality of hosts11to1nand the storage device100may communicate with each other based on a PCI-express (Peripheral Component Interconnect express) interface or a PCI-express based NVMe (Nonvolatile Memory Express) interface. An interface between the plurality of hosts11to1nand the storage device100and a structural characteristic of the interface will be described with reference to drawings below. In an exemplary embodiment, the storage device100may be a single storage device configured to support a multi-host or multi-tenant environment. Each of the plurality of hosts11to1nconfigured to access the storage device100independently of each other may require specific performance, depending on its type or manner of operation, for accessing the storage device100. However, due to a limitation on a physical resource of the storage device100, the storage device100may fail to support the specific performance of each of the plurality of hosts11to1n, under a specific condition (e.g., in the case where a specific host occupies all or most of the physical resource of the storage device100). The storage device100according to an embodiment of the inventive concept may provide minimum performance to each of the plurality of hosts11to1n. For example, the storage controller110of the storage device100may include a performance manager111. The performance manager111may set a weight (i.e., a weight value) of a physical function or a submission queue corresponding to each of the plurality of hosts11to1nand may manage an aggregated value AV based on the set weight or processed I/O information.
The performance manager111may schedule a command from each of the plurality of hosts11to1nbased on the aggregated value AV. A configuration and an operation of the performance manager111will be more fully described with reference to drawings below. FIG.2is a block diagram illustrating a storage controller ofFIG.1. Referring toFIGS.1and2, the storage controller110may include the performance manager111, a processor112, an SRAM113, a host interface circuit114, and a nonvolatile memory interface circuit115. The performance manager111may be configured to manage the performance of each of the plurality of hosts11to1nby scheduling commands of the plurality of hosts11to1n. For example, the performance manager111may include a command scheduler111a, an aggregated value table manager111b, and a weight manager111c. The command scheduler111amay be configured to schedule commands from the plurality of hosts11to1n. For example, the command scheduler111amay select a submission queue corresponding to each of the plurality of hosts11to1nbased on an aggregated value table AVT managed by the aggregated value table manager111band may process a command from the selected submission queue. In an exemplary embodiment, the command scheduler111amay be configured to select a submission queue or a physical function corresponding to a relatively small aggregated value or the lowest aggregated value. For example, a command stored in the submission queue associated with the physical function corresponding to a relatively small aggregated value or the lowest aggregated value may be prioritized. The aggregated value table manager111bmay be configured to manage the aggregated value table AVT. The aggregated value table AVT may include an aggregated value of a physical function PF or submission queues SQ corresponding to each of the plurality of hosts11to1n. An aggregated value may indicate a value that is obtained by aggregating processed I/O information based on a weight managed by the weight manager111c. That is, assuming that weights of first and second submission queues are identical, in the case where the number of commands processed with regard to the first submission queue is more than the number of commands processed with regard to the second submission queue, an aggregated value corresponding to the first submission queue may be greater than an aggregated value corresponding to the second submission queue. Alternatively, even though the number of commands processed with regard to the first submission queue is more than the number of commands processed with regard to the second submission queue, a relationship between aggregated values of the first and second submission queues may be interchanged depending on the weights of the first and second submission queues. As described above, as the command scheduler111aselects a submission queue or a physical function corresponding to a relatively small aggregated value or the lowest aggregated value, commands of a host that is serviced at a relatively small frequency may be processed. In this case, the overall performance of the plurality of hosts11to1nmay become uniform, or minimum performance of each of the plurality of hosts11to1nmay be secured. The weight manager111cmay be configured to manage a weight necessary to generate the aggregated value AV managed by the aggregated value table manager111b. For example, the storage device100may receive performance information (e.g., bandwidth) from each of the plurality of hosts11to1nin an initialization operation.
The performance information may include information about the minimum performance of each of the plurality of hosts11to1n. The weight manager111cmay set a weight of the physical function PF or the submission queue SQ corresponding to each of the plurality of hosts11to1n, based on information about the minimum performance of each of the plurality of hosts11to1n. In an embodiment, a weight associated with one physical function PF or one submission queue SQ may include a read weight associated with a read operation, a write weight associated with a write operation, or an additional weight associated with an additional operation (e.g., a garbage collection operation). The above operation of the performance manager111will be more fully described with reference to drawings below. In an exemplary embodiment, the above physical function PF may be a hardware or software component configured to provide a function defined by the NVMe interface standard. Alternatively, the physical function PF may be an NVMe controller configured to support a single PCI-express function. Alternatively, the physical function PF may be a PCI-express function supporting a single root I/O virtualization (SR-IOV) function configured to allow the physical function PF to support one or more dependent virtual functions. Below, it is assumed that the physical function PF is an NVMe controller corresponding to at least one of the plurality of hosts11to1n. A configuration of the physical function PF will be more fully described with reference toFIGS.3A to3C. The processor112may control overall operations of the storage controller110. For example, the processor112may be configured to execute various applications (e.g., a flash translation layer (FTL)) on the storage controller110. The SRAM113may be used as a buffer memory, a working memory, or a cache memory of the storage controller110. In an exemplary embodiment, the performance manager111may be implemented in the form of software, hardware, or a combination thereof. In the case where the performance manager111is implemented in the form of software, information associated with the performance manager111may be stored in the SRAM113, and the performance manager111stored in the SRAM113may be executed by the processor112. The host interface circuit114may communicate with the plurality of hosts11to1nin compliance with a given interface protocol. In an exemplary embodiment, the given interface protocol may include at least one of various host interfaces such as a PCI-express (Peripheral Component Interconnect express) interface, an NVMe (nonvolatile memory express) interface, a SATA (Serial ATA) interface, a SAS (Serial Attached SCSI) interface, and a UFS (Universal Flash Storage) interface, but the inventive concept is not limited thereto. To describe the technical idea of the inventive concept, below, it is assumed that the host interface circuit114is implemented on an NVMe interface basis. That is, the host interface circuit114may communicate with each of the plurality of hosts11to1nthrough a PCI-express interface based physical layer and may process information received from the plurality of hosts11to1nthrough the NVMe interface based NVMe controller. In an exemplary embodiment, the NVMe controller may be included in the host interface circuit114, and the NVMe controller may correspond to the plurality of hosts11to1n.
In an exemplary embodiment, the NVMe controller may be a physical NVMe controller, and a plurality of virtual NVMe controllers may be created and may be running on the physical NVMe controller. In this case, each of the plurality of virtual NVMe controllers may be associated with a corresponding host of the plurality of hosts11to1n. In an exemplary embodiment, in the case where the performance manager111is implemented in the form of hardware, the performance manager111may be included in the host interface circuit114or may be included in an NVMe controller of the host interface circuit114. The storage controller110may communicate with the nonvolatile memory device120through the nonvolatile memory interface circuit115. In an exemplary embodiment, the nonvolatile memory interface circuit115may be a NAND interface, and the NAND interface may support a multi-way/multi-channel of a plurality of nonvolatile memories included in the nonvolatile memory device120. FIGS.3A to3Care diagrams for describing a physical function of a storage device ofFIG.1. For brevity of illustration and convenience of description, below, components that are unnecessary to describe a configuration, a structure, and a function of the physical function PF will be omitted. Below, the term “physical function PF” is used to describe the technical idea of the inventive concept. The physical function PF may refer to an NVMe controller corresponding to each of the plurality of hosts11to1n. When the NVMe controller is a physical NVMe controller, a plurality of virtual NVMe controllers may be running on the physical NVMe controller, and the physical function PF may refer to a virtual NVMe controller running on the physical NVMe controller. Also, for convenience of description, the term “physical function PF” may be used, but the physical function PF may be interchangeable with a configuration or a term of the NVMe controller (or the virtual NVMe controller). The NVMe controller may be implemented in the form of software, hardware, or a combination thereof. Alternatively, the physical function PF may indicate a PCI-express function configured to support the SR-IOV function. The SR-IOV may indicate a function that allows one physical function to support one or more dependent virtualization functions. That is, below, the physical function PF may correspond to at least one of the plurality of hosts11to1nand may be understood as being configured to process a command of the corresponding host of the plurality of hosts11to1nor a command of a submission queue managed by the corresponding host. For convenience of description, below, it is assumed that the storage device100communicates with three hosts11,12, and13, but the inventive concept is not limited thereto. Referring toFIGS.1to3C, the first to third hosts11to13may issue commands CMD1to CMD3for processing corresponding operations, respectively. For example, the first host11may issue the first command CMD1, and the first command CMD1thus issued may be queued in a first submission queue SQ1. The second host12may issue the second command CMD2, and the second command CMD2thus issued may be queued in a second submission queue SQ2. The third host13may issue the third command CMD3, and the third command CMD3thus issued may be queued in a third submission queue SQ3. An exemplary embodiment is illustrated inFIG.3Ain which each of the first to third hosts11to13manages one submission queue, but the inventive concept is not limited thereto.
For example, each of the first to third hosts11to13may manage a plurality of submission queues. Alternatively, each of the first to third hosts11to13may further manage a completion queue configured to receive completions associated with a plurality of submission queues. Alternatively, each of the first to third hosts11to13may issue an administrative command and may further manage an administration queue and an administration completion queue configured to receive a completion associated with an administrative command. In an exemplary embodiment, the submission queue, the completion queue, the administration queue, the administration completion queue, etc. may be included in a controller memory buffer (CMB) (e.g., the SRAM113or a separate memory buffer (not illustrated)) of the storage device100. Alternatively, the submission queue, the completion queue, the administration queue, the administration completion queue, etc. may be included in a host memory buffer (HMB) of a corresponding host. The storage device100may communicate with the first to third hosts11to13. In an exemplary embodiment, the storage device100may communicate with the first to third hosts11to13through an interface (e.g., NVMe over PCI-express) based on a physical layer of the PCI-express interface. Alternatively, the storage device100may communicate with the first to third hosts11to13through a network based interface (e.g., NVMe-oF: NVMe over Fabrics) such as a fibre channel or remote direct memory access (RDMA). Below, to describe an embodiment of the inventive concept clearly, it is assumed that the storage controller110communicates with the first to third hosts11to13through the NVMe over PCI-express interface. The storage device100may communicate with the first to third hosts11to13through various types of physical layers. First to third physical functions PF1to PF3may respectively correspond to the first to third hosts11to13. For example, the first physical function PF1may indicate a first NVMe controller configured to communicate with the first host11and to process the first command CMD1from the first host11. The second physical function PF2may indicate a second NVMe controller configured to communicate with the second host12and to process the second command CMD2from the second host12. The third physical function PF3may indicate a third NVMe controller configured to communicate with the third host13and to process the third command CMD3from the third host13. Each of the first to third physical functions PF1to PF3may perform an operation of the nonvolatile memory device120based on a command from the corresponding host. In an exemplary embodiment, the nonvolatile memory device120may be managed by using a logically divided namespace NS or a physically or logically divided nonvolatile memory (NVM) set. Each of the first to third physical functions PF1to PF3may perform an operation corresponding to a command with respect to a corresponding namespace or a corresponding NVM set. The performance manager111may control operations of the first to third physical functions PF1to PF3based on the aggregated value table. For example, the performance manager111may schedule physical functions, which will process a command, from among the first to third physical functions PF1to PF3based on the aggregated value table. In an exemplary embodiment, as illustrated inFIG.3A, the first to third physical functions PF1to PF3may communicate with the first to third hosts11to13through one physical port PT.
A storage controller110amay include one physical port PT, the first to third physical functions PF1to PF3, and the performance manager111. The first to third physical functions PF1to PF3may communicate with the first to third hosts11to13through one physical port PT. The physical port PT may be a physical layer configured to support the PCI-express interface. In an exemplary embodiment, each of the first to third physical functions PF1to PF3may support dependent virtual functions. In an exemplary embodiment, at least one of the first to third physical functions PF1to PF3may be a dependent virtual function. Alternatively, as illustrated inFIG.3B, the first to third physical functions PF1to PF3may communicate with the first to third hosts11to13through a plurality of physical ports PT1to PT3. For example, as illustrated inFIG.3B, a storage controller110bmay include the first to third physical ports PT1to PT3, the first to third physical functions PF1to PF3, and the performance manager111. Each of the first to third physical ports PT1to PT3may be an independent physical layer configured to support the PCI-express interface. The first physical function PF1may communicate with the first host11through the first physical port PT1, the second physical function PF2may communicate with the second host12through the second physical port PT2, and the third physical function PF3may communicate with the third host13through the third physical port PT3. In an exemplary embodiment, as illustrated inFIG.3C, at least one host (e.g., the first host11) of the first to third hosts11to13may communicate with at least two physical functions. For example, as illustrated inFIG.3C, a storage controller110cmay include 0-th to third physical ports PT0to PT3, 0-th to third physical functions PF0to PF3, and the performance manager111. The first host11may communicate with the 0-th physical function PF0through the 0-th physical port PT0and may communicate with the first physical function PF1through the first physical port PT1. That is, one host may communicate with at least two physical functions. In an exemplary embodiment, first to third NVMe controllers corresponding to the first to third physical functions PF1to PF3, and the physical ports PT1, PT2, and PT3may be included in the host interface circuit114or may be implemented on the host interface circuit114. The communication between the storage device100and the hosts11to13and the configuration of the physical function PF, which are described with reference toFIGS.3A to3C, are exemplary, and the inventive concept is not limited thereto. As described above, a plurality of physical functions PF may indicate NVMe controllers respectively corresponding to a plurality of hosts, and the plurality of physical functions PF may be configured to communicate with the corresponding hosts through one physical port or through individual physical ports. That is, below, it may be understood that one physical function PF corresponds to one host. FIG.4is a flowchart illustrating an operation of a storage device ofFIG.1. Below, for convenience of description, a description will be given under the assumption that one host manages two submission queues. For example, it is assumed that the first physical function PF1corresponds to the first host11and the first host11manages two submission queues SQ11and SQ12. In this case, the first physical function PF1may be configured to process the commands CMD from the two submission queues SQ11and SQ12. However, the inventive concept is not limited thereto.
For example, one host may further manage a plurality of submission queues or any other command queues, and one physical function PF may be configured to process commands associated with the plurality of submission queues. Below, for convenience of description, the expression “the submission queue of the physical function” is used. The submission queue of the physical function may mean a submission queue that is managed by a host corresponding to the physical function. Below, for convenience of description, it is assumed that the storage device100communicates with the first to third hosts11to13and the first to third hosts11to13correspond to the first to third physical functions PF1to PF3, respectively. The first to third physical functions PF1to PF3are described with reference toFIGS.3A to3C, and thus, a detailed description thereof will not be repeated here. The above description is given only to describe an embodiment of the inventive concept, and the inventive concept is not limited thereto. Referring toFIGS.1,2,3A, and4, in operation S10, the storage device100may receive performance information from the first to third hosts11to13, respectively. For example, each of the first to third hosts11to13may be a computing node, a processor, or an application (or a process), which is driven independently. In this case, each of the first to third hosts11to13may require performance of a given level depending on the scheme by which it is driven or the application to be driven. The storage device100may receive the performance information from the first to third hosts11to13, respectively. In an exemplary embodiment, the performance information may include information about minimum performance and maximum performance of each of the first to third hosts11to13. The minimum performance may indicate information about minimum performance that each of the first to third hosts11to13requires, and the maximum performance may indicate maximum performance or restrictive maximum performance that each of the first to third hosts11to13requires. The storage device100according to an embodiment of the inventive concept may provide each of the first to third hosts11to13with performance that is the minimum performance or higher and is the maximum performance or lower. In an exemplary embodiment, the performance information of each of the first to third hosts11to13may be received upon initializing the storage device100or communicating with the storage device100for the first time. In operation S20, the storage device100may set a weight value WV based on the performance information. For example, the performance manager111(in particular, the weight manager111c) of the storage controller110may set the weight value WV of each of the first to third physical functions PF1to PF3based on the performance information of each of the first to third hosts11to13or information about the minimum performance included in the performance information. For example, when the minimum performance levels of the first to third hosts11to13are 1 GB/s, 2 GB/s, and 3 GB/s, respectively, a ratio of the weight values WV of the first to third physical functions PF1to PF3may be 3:2:1 or 6:3:2. In an exemplary embodiment, the weight value WV set in operation S20may be a weight that is used to aggregate information of inputs/outputs performed by commands from a host, and weight values associated with a maintenance operation that is performed in the storage device100may be different. In operation S30, the storage device100may process a command based on the aggregated value table AVT.
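Before operation S30is detailed, the weight setting of operation S20can be illustrated concretely. The following is a minimal sketch only: the description fixes the resulting ratios (e.g., 6:3:2 for minimum performance of 1, 2, and 3 GB/s), while the helper name and the normalization to smallest integers below are assumptions for illustration.

from fractions import Fraction
from math import lcm

def weights_from_min_performance(min_perf):
    # Hypothetical helper: integer weights inversely proportional to each
    # host's minimum performance, so that a host with a smaller guarantee
    # accumulates aggregated value faster per unit of serviced I/O.
    inverses = [Fraction(1, p) for p in min_perf]
    scale = lcm(*(f.denominator for f in inverses))
    return [int(f * scale) for f in inverses]

print(weights_from_min_performance([1, 2, 3]))  # [6, 3, 2]
print(weights_from_min_performance([2, 3, 6]))  # [3, 2, 1], the ratio assumed for FIGS. 7A to 7C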
For example, the performance manager111(in particular, the command scheduler111a) of the storage controller110may select a physical function or a submission queue having the lowest aggregated value of the aggregated value table AVT. The performance manager111may fetch a command from a submission queue corresponding to the selected physical function or from the selected submission queue. In an exemplary embodiment, the fetched command may be processed by any other component (e.g., the processor112or the nonvolatile memory interface circuit115) of the storage controller110. In operation S40, the storage device100may update the aggregated value table AVT based on the set weight value and I/O information about the processed command. For example, the performance manager111(in particular, the aggregated value table manager111b) may update a corresponding entry of the aggregated value table AVT, based on a product of the I/O information about the processed command and the corresponding weight value. In an exemplary embodiment, the I/O information may indicate an I/O size (i.e., a size of input/output data) corresponding to the processed command. A method of selecting the physical function PF or the submission queue SQ based on the aggregated value table AVT and a method of updating the aggregated value table AVT will be more fully described with reference to drawings below. As described above, the storage device100according to an embodiment of the inventive concept may manage aggregated values of a plurality of physical functions PF or a plurality of submission queues SQ by using the aggregated value table AVT. The storage device100may select the physical function PF or the submission queue SQ corresponding to the lowest aggregated value based on the aggregated value table AVT and may process or schedule a command based on the selected physical function or submission queue. Accordingly, because a command of a host serviced at a relatively small ratio (in this case, a ratio to which information about minimum performance of each host is applied) is preferentially processed, uniform performance may be provided to each host, or the minimum performance of each host may be secured. Also, a specific situation in which a specific host occupies all or most of the physical resource of the storage device100may be prevented. FIG.5is a diagram illustrating an aggregated value table ofFIG.2. For brevity of illustration and convenience of description, a data structure of the aggregated value table AVT will be briefly described, but the inventive concept is not limited thereto. Referring toFIGS.1,2,3A, and5, an aggregated value table AVTa may include identifier information about the physical function PF, identifier information about the submission queue SQ, aggregated value information about the submission queue SQ, and a plurality of weight information (e.g., a write weight, a read weight, and any other weight). 
That is, in the case where the storage device100communicates with the first to third hosts11to13and the first to third hosts11to13manage pairs of submission queues SQ11and SQ12, SQ21and SQ22, and SQ31and SQ32, respectively, the aggregated value table AVTa may include information about the first physical function PF1corresponding to the first host11, information about the two submission queues SQ11and SQ12corresponding to the first physical function PF1, aggregated value information AV_SQ11and AV_SQ12respectively corresponding to the two submission queues SQ11and SQ12, and weight information WW1, RW1, and OW1corresponding to the first physical function PF1. The aggregated value table AVTa may further include information on the second physical function PF2and the third physical function PF3, and information SQ21and SQ22, AV_SQ21and AV_SQ22, and WW2, RW2, and OW2corresponding to the second physical function PF2and information SQ31and SQ32, AV_SQ31and AV_SQ32, and WW3, RW3, and OW3corresponding to the third physical function PF3. The information about the second and third physical functions PF2and PF3is described above, and thus, additional description will be omitted to avoid redundancy. FIG.6is a flowchart illustrating operation S30and operation S40ofFIG.4in detail. An operation method of the storage device100according to the flowchart ofFIG.6may be performed by using the aggregated value table AVTa described with reference toFIG.5. Referring toFIGS.1,2,3A, and5, operation S30ofFIG.4may include operation S131to operation S133ofFIG.6. In operation S131, the storage device100may select a submission queue corresponding to the lowest aggregated value from among a plurality of submission queues SQ, based on the aggregated value table AVTa. For example, as described with reference toFIG.5, the aggregated value table AVTa may include the submission queue aggregated values AV_SQ11to AV_SQ32associated with the plurality of submission queues SQ11to SQ32. The performance manager111may search for the lowest submission queue aggregated value of the plurality of submission queue aggregated values AV_SQ11to AV_SQ32and may select a submission queue corresponding to the lowest value. In an exemplary embodiment, that a submission queue aggregated value is the lowest means that a ratio (in this case, a ratio to which information about minimum performance of a corresponding host is applied) by which a corresponding submission queue is serviced is the lowest. That is, a command of a submission queue serviced at a relatively small ratio may be preferentially processed by selecting a submission queue corresponding to the lowest submission queue aggregated value. In operation S132, the storage device100may fetch a command from the selected submission queue. For example, the performance manager111(in particular, the command scheduler111a) may perform a scheduling operation such that a command included in the submission queue SQ selected in operation S131is fetched. In operation S133, the storage device100may process the fetched command. For example, the storage controller110may allow the nonvolatile memory device120to perform an operation corresponding to the fetched command. 
In an exemplary embodiment, the storage controller110may perform the corresponding operation on a namespace or a nonvolatile memory (NVM) set of the nonvolatile memory device120corresponding to the submission queue of the fetched command or on a namespace or a nonvolatile memory (NVM) set managed by a physical function associated with the fetched command. In an exemplary embodiment, operation S133may be performed by the processor112or the nonvolatile memory interface circuit115of the storage controller110. Operation S40ofFIG.4may include operation S141ofFIG.6. In operation S141, the storage device100may update the submission queue aggregated value AV_SQ of the aggregated value table AVT, based on a weight corresponding to the fetched command and I/O information corresponding to the fetched command. For example, the performance manager111(in particular, the aggregated value table manager111b) may update the submission queue aggregated value AV_SQ of the aggregated value table AVT, based on a product of the weight corresponding to the fetched command and the I/O information corresponding to the fetched command. FIGS.7A to7Care diagrams for describing an operation of a storage device according to the table ofFIG.5and the flowchart ofFIG.6. For convenience of description, components that are unnecessary to describe an operation (in particular, command scheduling and aggregated value managing operations) of the storage device100according to the inventive concept are omitted. Also, for convenience of description, it is assumed that the storage device100communicates with three hosts, each of the three hosts manages two submission queues, and the three hosts correspond to the first to third physical functions PF1to PF3, respectively. That is, the first physical function PF1may correspond to the submission queues SQ11and SQ12, the second physical function PF2may correspond to the submission queues SQ21and SQ22, and the third physical function PF3may correspond to the submission queues SQ31and SQ32. Also, for convenience of description, it is assumed that all commands queued in each submission queue are write commands and the storage device100performs write operations in response to the write commands. Also, it is assumed that a ratio of minimum performance of the three hosts is 2:3:6. In this case, a write weight corresponding to the first physical function PF1may be 3, a write weight corresponding to the second physical function PF2may be 2, and a write weight corresponding to the third physical function PF3may be 1. The conditions and numerical values described above are provided merely to describe the technical idea of the inventive concept, and the inventive concept is not limited thereto. Referring toFIGS.1,2, and7A to7C, as illustrated inFIG.7A, in operation {circle around (1)}, the command scheduler111aof the performance manager111may select one of the plurality of submission queues SQ11to SQ32based on the aggregated value table AVTa. For example, the command scheduler111amay search for the lowest submission queue aggregated value of the submission queue aggregated values AV_SQ11to AV_SQ32of the aggregated value table AVTa and may select a submission queue having the lowest submission queue aggregated value. In the embodiment ofFIG.7A, the submission queue SQ22corresponding to the second physical function PF2may be selected. Afterwards, in operation {circle around (2)}, the command scheduler111amay fetch a command CMD22from the selected submission queue SQ22.
In an exemplary embodiment, it is assumed that I/O information (e.g., a size of write data) of the fetched command CMD22is “3”. Afterwards, in operation {circle around (3)}, the command scheduler111amay process the fetched command CMD22. In an exemplary embodiment, the fetched command CMD22may be processed by any other component (e.g., an FTL driven by the processor112, the nonvolatile memory interface circuit115, or any other components) of the storage controller110. In an exemplary embodiment, an operation corresponding to the fetched command CMD22may be processed at an area, which is associated with the fetched command CMD22, the selected submission queue SQ22, or a physical function corresponding to the selected submission queue SQ22, from among areas (e.g., a namespace or an NVM set) of the nonvolatile memory device120. Afterwards, as illustrated inFIG.7B, in operation {circle around (4)}, the aggregated value table manager111bmay update the aggregated value table AVTa based on I/O information and a weight. For example, as described above, the I/O information of the fetched command CMD22may be “3”, and the write weight WW2of the second physical function PF2associated with the selected submission queue SQ22may be “2”. In this case, the aggregated value table manager111bmay update the aggregated value table AVT by aggregating a product of the processed I/O information (i.e., “3”) and the write weight WW2(i.e., “2”) at the submission queue aggregated value AV_SQ22(i.e., “1”). That is, AV_SQ22may be updated to “1+(3*2)=7”. That is, the aggregated value table AVTa may be updated by aggregating a product of processed I/O information (or an I/O size) and a corresponding weight at a current submission queue aggregated value. According to the update operation described above, a magnitude of a submission queue aggregated value may increase whenever a command of a submission queue is processed or a service associated with a submission queue is provided. Afterwards, in operation {circle around (5)}, the command scheduler111amay select a submission queue having the lowest submission queue aggregated value from the updated aggregated value table AVTa. In the embodiment ofFIG.7B, the submission queue SQ12corresponding to the first physical function PF1may be selected. Afterwards, in operation {circle around (6)}, the command scheduler111amay fetch a command CMD12from the selected submission queue SQ12. In an exemplary embodiment, it is assumed that I/O information of the fetched command CMD12is “2”. Afterwards, in operation {circle around (7)}, the command scheduler111amay process the fetched command CMD12. Operation {circle around (5)}, operation {circle around (6)}, and operation {circle around (7)} ofFIG.7Bare similar to operation {circle around (1)}, operation {circle around (2)}, and operation {circle around (3)} described with reference toFIG.7Aexcept that selected submission queues are different from each other and fetched commands are different from each other, and thus, additional description will be omitted to avoid redundancy. Afterwards, as illustrated inFIG.7C, in operation {circle around (8)}, the aggregated value table manager111bmay update the aggregated value table AVTa based on processed I/O information and a weight. 
For example, as in operation {circle around (4)} described with reference toFIG.7B, the aggregated value table manager111bmay update the aggregated value table AVT by aggregating a product of processed I/O information (i.e., “2”) and the write weight WW1(i.e., “3”) at a current submission queue aggregated value AV_SQ12(i.e., “2”). That is, AV_SQ12may be updated to “2+(2*3)=8”. In an exemplary embodiment, as described above, the storage device100may provide a service associated with each of the plurality of hosts or process a command associated with each of the plurality of hosts, by repeatedly performing the following operations: selecting a submission queue based on the aggregated value table AVT, fetching and processing a command from the selected submission queue, and updating the aggregated value table AVTa based on I/O information of the processed command and a corresponding weight. In an exemplary embodiment, as described above, because the storage device100performs command scheduling based on an aggregated value table, a submission queue serviced at a relatively small ratio may be preferentially serviced prior to any other submission queues. Accordingly, the storage device100may provide uniform performance to each of a plurality of hosts or may secure minimum performance of each of the plurality of hosts. FIG.8is a diagram illustrating an aggregated value table ofFIG.2. For brevity of illustration and convenience of description, a data structure of the aggregated value table AVT will be briefly described, but the inventive concept is not limited thereto. Also, for convenience of description, a difference between an aggregated value table AVTb ofFIG.8and the aggregated value table AVTa described with reference toFIG.5will be mainly described. Referring toFIGS.3A and8, the aggregated value table AVTb may include identifier information about the physical function PF, aggregated value information about the physical function PF, identifier information about the submission queue SQ, aggregated value information about the submission queue SQ, and a plurality of weight information (e.g., a write weight, a read weight, and any other weight). That is, compared to the aggregated value table AVTa ofFIG.5, the aggregated value table AVTb may further include physical function aggregated values AV_PF1, AV_PF2, and AV_PF3respectively associated with the first to third physical functions PF1to PF3. The physical function aggregated values AV_PF1, AV_PF2, and AV_PF3may be managed or updated in a way that is similar to the way to manage or update the submission queue aggregated values AV_SQ11to AV_SQ32. For example, the physical function aggregated value AV_PF1of the first physical function PF1may be managed and updated by aggregating a product of I/O information of a processed command and a corresponding weight when a command from the submission queue SQ11or SQ12corresponding to the first physical function PF1is processed. This is similar to the submission queue aggregated value updating method described above, and thus, additional description will be omitted to avoid redundancy. In an exemplary embodiment, one physical function aggregated value (e.g., AV_PF1) may be equal to a sum of corresponding submission queue aggregated values (e.g., AV_SQ11and AV_SQ12). Alternatively, one physical function aggregated value (e.g., AV_PF1) may be different from a sum of corresponding submission queue aggregated values (e.g., AV_SQ11and AV_SQ12).
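As an illustration of this table structure and of the update rule by which a product of processed I/O information and a corresponding weight is aggregated at the current aggregated values, the following sketch models one physical-function entry of the aggregated value table AVTb. The class and field names are assumptions for illustration; only the update arithmetic and the worked numbers ofFIG.7Bcome from the description, and the value for SQ21is filled in arbitrarily.

from dataclasses import dataclass, field

@dataclass
class PfEntry:
    # One physical-function entry of the aggregated value table AVTb:
    # a per-PF aggregated value, per-submission-queue aggregated values,
    # and a write weight (read and other weights omitted for brevity).
    av_pf: int = 0
    av_sq: dict = field(default_factory=dict)  # sq_id -> aggregated value
    write_weight: int = 1

def update_on_write(entry, sq_id, io_size):
    # Aggregate the product of the processed I/O size and the write weight
    # at the submission-queue aggregated value and, for AVTb, at the
    # physical-function aggregated value as well.
    delta = io_size * entry.write_weight
    entry.av_sq[sq_id] += delta
    entry.av_pf += delta

# Worked numbers of FIG. 7B: AV_SQ22 = 1, WW2 = 2, I/O size of CMD22 = 3
pf2 = PfEntry(av_sq={"SQ21": 5, "SQ22": 1}, write_weight=2)
update_on_write(pf2, "SQ22", io_size=3)
print(pf2.av_sq["SQ22"])  # 1 + (3 * 2) = 7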
FIG.9is a flowchart illustrating operation S30and operation S40of the flowchart ofFIG.4in detail. For convenience of description, below, it is assumed that the storage device100communicates with the first to third hosts11to13, the first to third hosts11to13correspond to the first to third physical functions PF1to PF3, respectively, and one host manages two submission queues. A description associated with the assumption is given with reference toFIG.4, and thus, additional description will be omitted to avoid redundancy. The above description is given as examples for describing the technical idea of the inventive concept, and the inventive concept is not limited thereto. Referring toFIGS.1,2,3A,8, and9, operation S30ofFIG.4may include operation S231to operation S234ofFIG.9. In operation S231, the storage device100may select a physical function having the lowest physical function aggregated value AV_PF from among the first to third physical functions PF1to PF3, based on the aggregated value table AVTb. For example, the performance manager111(in particular, the command scheduler111a) may select a physical function having the lowest physical function aggregated value of physical function aggregated values AV_PF1to AV_PF3of the aggregated value table AVTb illustrated inFIG.8. In operation S232, the storage device100may select a submission queue having the lowest submission queue aggregated value from among submission queues of the selected physical function, based on the aggregated value table AVTb. For example, the performance manager111(in particular, the command scheduler111a) may select a submission queue having the lowest submission queue aggregated value from among submission queues of the selected physical function, based on the aggregated value table AVTb illustrated inFIG.8. In detail, it is assumed that the physical function selected in operation S231is the second physical function PF2. In this case, in operation S232, the performance manager111may select a submission queue having the lowest value of the submission queue aggregated values AV_SQ21and AV_SQ22of the submission queues SQ21and SQ22corresponding to the second physical function PF2. Afterwards, the storage device100may perform operation S233and operation S234, and operation S233and operation S234are similar to operation S132and operation S133ofFIG.6, and thus, additional description will be omitted to avoid redundancy. Operation S40ofFIG.4may include operation S241ofFIG.9. In operation S241, the storage device100may update the submission queue aggregated value AV_SQ and the physical function aggregated value AV_PF of the aggregated value table AVTb, based on I/O information corresponding to a fetched command (or a processed command) and a corresponding weight. For example, the performance manager111(in particular, the aggregated value table manager111b) may update the aggregated value table AVTb by aggregating a product of the I/O information corresponding to the processed command and the corresponding weight at each of the submission queue aggregated value AV_SQ and the physical function aggregated value AV_PF. FIGS.10A to10Care diagrams for describing an operation of a storage device according to the flowchart ofFIG.9. For convenience of description, components that are unnecessary to describe an operation (in particular, command scheduling and aggregated value managing operations) of the storage device100according to the inventive concept are omitted.
The physical functions PF1to PF3, the submission queues SQ11and SQ12, SQ21and SQ22, and SQ31and SQ32, and the weights WW1to WW3ofFIGS.10A to10Care described with reference toFIGS.7A to7C, and thus, additional description will be omitted to avoid redundancy. Referring toFIGS.1,2, and10A to10C, in operation {circle around (1)}, the command scheduler111amay select a physical function having the lowest value of the physical function aggregated values AV_PF1to AV_PF3based on the aggregated value table AVTb. For example, in the embodiment illustrated inFIG.10A, because the first physical function aggregated value AV_PF1of “8” is the lowest, the first physical function PF1may be selected. In operation {circle around (2)}, the command scheduler111amay select a submission queue having the lowest submission queue aggregated value from among submission queues (e.g., SQ11and SQ12) of the selected physical function (e.g., PF1). In the embodiment ofFIG.10A, because the submission queue aggregated value AV_SQ12(=“2”) of the submission queue SQ12of the submission queues SQ11and SQ12is the lowest, the command scheduler111amay select the submission queue SQ12. That is, through operation {circle around (1)} and operation {circle around (2)}, the command scheduler111amay select the submission queue SQ12of the first physical function PF1. This may mean that commands of the first host11corresponding to the first physical function PF1are processed at a relatively small frequency and commands of the submission queue SQ12of the submission queues SQ11and SQ12of the first host11are processed at a relatively small frequency. Afterwards, the command scheduler111aperforms operation {circle around (3)} and operation {circle around (4)}. Operation {circle around (3)} and operation {circle around (4)} ofFIG.10Aare similar to operation {circle around (2)} (i.e., a command fetch operation) and operation {circle around (3)} (i.e., a command processing operation) ofFIG.7A, and thus, additional description will be omitted to avoid redundancy. Next, as illustrated inFIG.10B, in operation {circle around (5)}, the aggregated value table manager111bmay update the aggregated value table AVTb based on I/O information (e.g., a size of write data) (e.g., “2”) of the processed command CMD12and the corresponding weight WW1(=“3”). For example, the aggregated value table manager111bmay aggregate a product (i.e., “2*3”) of the I/O information (e.g., a size of write data) (e.g., “2”) of the processed command CMD12and the corresponding weight WW1(=“3”) at each of the submission queue aggregated value AV_SQ12of the selected submission queue SQ12and the physical function aggregated value AV_PF1of the selected physical function PF1. In an exemplary embodiment, the aggregated value table manager111bmay add the product (i.e., “2*3”) of the I/O information (e.g., a size of write data) (e.g., “2”) of the processed command CMD12and the corresponding weight WW1(=“3”) to each of the submission queue aggregated value AV_SQ12of the selected submission queue SQ12and the physical function aggregated value AV_PF1of the selected physical function PF1. In this case, AV_PF1may be updated to “8+(2*3)=14”, and AV_SQ12may be updated to “2+(2*3)=8”. Afterwards, in operation {circle around (6)}, the command scheduler111amay select a physical function having the lowest physical function aggregated value AV_PF from the updated aggregated value table AVTb. 
In the embodiment ofFIG.10B, referring to the aggregated value table AVTb updated in operation {circle around (5)}, because the third physical function aggregated value AV_PF3of “11” is the lowest, the command scheduler111amay select the third physical function PF3corresponding to the third physical function aggregated value AV_PF3. Afterwards, in operation {circle around (7)}, the command scheduler111amay select a submission queue having the lowest value from among the submission queues SQ31and SQ32of the selected physical function PF3. In the embodiment ofFIG.10B, because the submission queue aggregated value AV_SQ32(=“4”) of the submission queue SQ32of the submission queues SQ31and SQ32of the third physical function PF3is the lowest, the submission queue SQ32may be selected. Afterwards, the command scheduler111aperforms operation {circle around (8)} and operation {circle around (9)}. Operation {circle around (8)} and operation {circle around (9)} ofFIG.10Bare similar to operation {circle around (2)} (i.e., a command fetch operation) and operation {circle around (3)} (i.e., a command processing operation) ofFIG.7A, and thus, additional description will be omitted to avoid redundancy. Afterwards, as illustrated inFIG.10C, in operation {circle around (10)}, the aggregated value table manager111bmay update the aggregated value table AVTb based on I/O information (e.g., a size of write data) (e.g., “5”) of the processed command CMD32and the corresponding weight WW3(=“1”). For example, as in operation {circle around (5)} ofFIG.10B, the aggregated value table manager111bmay aggregate a product of the I/O information (e.g., a size of write data) (e.g., “5”) of the processed command CMD32and the corresponding weight WW3(=“1”) at each of the physical function aggregated value AV_PF3of the selected physical function PF3and the submission queue aggregated value AV_SQ32of the selected submission queue SQ32. In an exemplary embodiment, the aggregated value table manager111bmay add the product of the I/O information (e.g., a size of write data) (e.g., “5”) of the processed command CMD32and the corresponding weight WW3(=“1”) to each of the physical function aggregated value AV_PF3of the selected physical function PF3and the submission queue aggregated value AV_SQ32of the selected submission queue SQ32. In this case, AV_PF3may be updated to “11+(5*1)=16”, and AV_SQ32may be updated to “4+(5*1)=9”. Although not illustrated in the drawing, as in the above description, the command scheduler111aand the aggregated value table manager111bmay perform command scheduling and aggregated value table updating. That is, in the following operation, the second physical function PF2having the physical function aggregated value AV_PF of “12” may be selected, and the submission queue SQ22having the submission queue aggregated value AV_SQ of “1” from among the submission queues SQ21and SQ22of the second physical function PF2may be selected. As described above, the storage device100according to an embodiment of the inventive concept may select or schedule a command to be processed, based on an aggregated value (e.g., I/O information to which a weight is applied) corresponding to I/O information of a processed command, in units of a physical function and in units of a submission queue.
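The two-level selection and update walked through inFIGS.10A to10Cmight be condensed as in the following sketch; the function and variable names are assumptions, and submission queue aggregated values not stated in the text are filled in arbitrarily but consistently with the selections described above.

def select_next(av_pf, av_sq):
    # Operation S231: pick the physical function with the lowest AV_PF;
    # operation S232: pick its submission queue with the lowest AV_SQ.
    pf = min(av_pf, key=av_pf.get)
    sq = min(av_sq[pf], key=av_sq[pf].get)
    return pf, sq

av_pf = {"PF1": 8, "PF2": 12, "PF3": 11}
av_sq = {"PF1": {"SQ11": 6, "SQ12": 2},
         "PF2": {"SQ21": 5, "SQ22": 1},
         "PF3": {"SQ31": 7, "SQ32": 4}}
write_weight = {"PF1": 3, "PF2": 2, "PF3": 1}  # WW1 to WW3

pf, sq = select_next(av_pf, av_sq)  # ('PF1', 'SQ12'), as in FIG. 10A
delta = 2 * write_weight[pf]        # I/O size 2 of CMD12 times WW1 = 3
av_sq[pf][sq] += delta              # AV_SQ12: 2 + (2*3) = 8
av_pf[pf] += delta                  # AV_PF1: 8 + (2*3) = 14
print(select_next(av_pf, av_sq))    # ('PF3', 'SQ32'), as in FIG. 10B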
Accordingly, because a command associated with a physical function (or a host) and a submission queue serviced at a relatively small frequency is preferentially processed, performance of a plurality of hosts may be uniform, or minimum performance of each of the plurality of hosts may be secured. FIG.11is a flowchart illustrating operation S20, operation S30, and operation S40of the flowchart ofFIG.4in detail. For convenience of description, below, it is assumed that the storage device100communicates with the first to third hosts11to13, the first to third hosts11to13correspond to the first to third physical functions PF1to PF3, respectively, and one host manages two submission queues. A description associated with the assumption is given with reference toFIG.4, and thus, additional description will be omitted to avoid redundancy. The above description is given as examples for describing the technical idea of the inventive concept, and the inventive concept is not limited thereto. Referring toFIGS.1,2,3A, and11, operation S20ofFIG.4may include operation S321to operation S322ofFIG.11. In operation S321, the storage device100may determine whether garbage collection is performed or required with regard to a specific physical function. For example, the storage device100may perform a separate maintenance operation (e.g., a garbage collection operation or a read reclaim operation) due to a physical limitation of the nonvolatile memory device120. The above maintenance operation may be an operation that is processed within the storage device100. With regard to the same command, an internal workload may vary depending on whether the above maintenance operation is performed. For example, it is assumed that a workload associated with a first write command is “1” in the case where garbage collection corresponding to the first physical function PF1is not performed. In this case, it is assumed that a workload associated with the first write command increases to “4” in the case where garbage collection corresponding to the first physical function PF1is performed. In an exemplary embodiment, a magnitude of a workload increased by the execution of the garbage collection may be determined by a write amplification factor WAF of the storage device100. That is, the execution of the maintenance operation of the storage device100may have an influence on performance to be serviced to a plurality of hosts. When it is determined that the maintenance operation such as garbage collection GC is performed with regard to the specific physical function PF, in operation S322, the storage device100may update weight values corresponding to the specific physical function PF. For example, the performance manager111(in particular, the weight manager111c) may update a weight value corresponding to a physical function experiencing the maintenance operation or requiring the maintenance operation. In an exemplary embodiment, a write weight value corresponding to a physical function experiencing the maintenance operation or requiring the maintenance operation may be increased. Afterwards, the storage device100may perform operation S331to operation S334and operation S341, and operation S331to operation S334and operation S341are similar to operation S231to operation S234and operation S241ofFIG.9, and thus, additional description will be omitted to avoid redundancy. FIGS.12A to12Care diagrams for describing an operation of a storage device according to the flowchart ofFIG.11.
For convenience of description, components that are unnecessary to describe an operation (in particular, command scheduling and aggregated value managing operations) of the storage device100according to the inventive concept are omitted. The physical functions PF1to PF3, the submission queues SQ11and SQ12, SQ21and SQ22, and SQ31and SQ32, and the weights WW1to WW3ofFIGS.12A to12Care described with reference toFIGS.7A to7C, and thus, additional description will be omitted to avoid redundancy. However, for convenience of description, it is assumed that all the first to third write weight values WW1to WW3respectively corresponding to the first to third physical functions PF1to PF3ofFIGS.12A to12Care “1”. Referring toFIGS.1,2, and12A to12C, as illustrated inFIG.12A, in operation {circle around (1)}, the weight manager111cmay check that the first physical function PF1experiences or requires the garbage collection GC. For example, depending on an internal policy of the storage device100, the storage device100may perform or require the garbage collection GC with regard to an area corresponding to the first physical function PF1from among areas (e.g., a namespace, an NVM set, or physical memory blocks) of the nonvolatile memory device120. In this case, the weight manager111cmay recognize that the garbage collection GC is performed or required with regard to the first physical function PF1. In operation {circle around (2)}, the weight manager111cmay update the first write weight WW1of the first physical function PF1experiencing or requiring the garbage collection GC. For example, the weight manager111cmay update the first write weight WW1corresponding to the first physical function PF1from “1” to “4”. In an exemplary embodiment, an update rate of the first write weight WW1may be determined based on the write amplification factor WAF that is required to perform the maintenance operation. Afterwards, the command scheduler111aperforms operation {circle around (3)}, operation {circle around (4)}, operation {circle around (5)}, and operation {circle around (6)}. Operation {circle around (3)} to operation {circle around (6)} ofFIG.12Aare similar to operation {circle around (1)} to operation {circle around (4)} ofFIG.10A, and thus, additional description will be omitted to avoid redundancy. Next, as illustrated inFIG.12B, in operation {circle around (7)}, the aggregated value table manager111bmay update the aggregated value table AVTb based on I/O information (e.g., a size of write data) (e.g., “2”) of the processed command CMD12and the updated weight WW1(=“4”). For example, the aggregated value table manager111bmay aggregate a product of the I/O information (e.g., a size of write data) (i.e., “2”) of the processed command CMD12and the updated weight WW1(i.e., “4”) at each of the physical function aggregated value AV_PF1and the submission queue aggregated value AV_SQ12. In an exemplary embodiment, the aggregated value table manager111bmay add the product of the I/O information (e.g., a size of write data) (i.e., “2”) of the processed command CMD12and the updated weight WW1(i.e., “4”) to each of the physical function aggregated value AV_PF1and the submission queue aggregated value AV_SQ12. In this case, AV_PF1may be updated to “8+(2*4)=16”, and AV_SQ12may be updated to “2+(2*4)=10”.
In an exemplary embodiment, in the case where garbage collection is not considered and the weight WW1associated with the first physical function PF1is not updated, the command scheduler111amay again select the submission queue SQ12of the first physical function PF1in the case of processing a next command. In this case, because the first physical function PF1is in a state where an internal workload increases due to garbage collection, the processing of the next command may be relatively delayed. In contrast, as described above, in the case where the weight WW1associated with the first physical function PF1is updated in consideration of garbage collection, the command scheduler111amay select the submission queue SQ32of the third physical function PF3in the case of processing a next command. In this case, even though the workload of the first physical function PF1is increased, because a command associated with the third physical function PF3is processed, a performance difference between a plurality of hosts may be decreased, or minimum performance of each of the plurality of hosts may be secured. In an exemplary embodiment, as illustrated inFIG.12C, in operation {circle around (8)}, the weight manager111cmay check that the garbage collection GC associated with the first physical function PF1is completed. In this case, in operation {circle around (9)}, the weight manager111cmay update or recover the first write weight WW1corresponding to the first physical function PF1from “4” to “1”. That is, in the following operation of the storage device100, the above operations may be performed based on the first write weight WW1again updated. As described above, the storage device100according to an embodiment of the inventive concept may provide uniform performance to each of a plurality of hosts or may secure minimum performance of each of the plurality of hosts, by considering an internal workload occurring due to a maintenance operation, as well as an external workload serviced with regard to each of the plurality of hosts. The above command scheduling or arbitration operation of the storage device100may be called “weighted fair queuing (WFQ) arbitration”. As described above, according to an embodiment of the inventive concept, the storage device100may set weights to a plurality of hosts, a plurality of physical functions, or a plurality of submission queues based on performance information (e.g., information about minimum performance) about a plurality of hosts and may manage aggregated values respectively associated with the plurality of hosts, the plurality of physical functions, or the plurality of submission queues based on information about the set weight and the processed I/O. The storage device100may perform command scheduling on commands from the plurality of hosts based on the aggregated values thus managed. Accordingly, a command from a host serviced at a relatively small frequency may be first processed. In this case, the performance of each of the plurality of hosts may become uniform, or minimum performance of each of the plurality of hosts may be secured. In an exemplary embodiment, the storage device100according to an embodiment of the inventive concept may perform the command scheduling described with reference toFIGS.1to12Cin response to a command or a control from at least one of the plurality of hosts or from any other external computing node. That is, the storage device100may change a command scheduling under control of the plurality of hosts. 
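A minimal sketch of the garbage-collection-aware weight update ofFIGS.12A to12Cfollows; treating the write amplification factor as a direct multiplier on the base write weight is an assumption, since the description states only that the update rate may be determined based on the WAF.

def update_write_weight(weights, base_weights, pf_id, gc_active, waf=4):
    # Scale the write weight of a physical function while garbage collection
    # is performed or required for it, so that its larger internal workload
    # is charged against its aggregated values; restore the base weight once
    # garbage collection completes.
    weights[pf_id] = base_weights[pf_id] * waf if gc_active else base_weights[pf_id]

base = {"PF1": 1, "PF2": 1, "PF3": 1}
ww = dict(base)
update_write_weight(ww, base, "PF1", gc_active=True)   # WW1: 1 -> 4, as in FIG. 12A
# While GC is active, a 2-unit write on PF1 adds 2*4 = 8 to AV_PF1 and
# AV_SQ12, deferring PF1 in the following selections.
update_write_weight(ww, base, "PF1", gc_active=False)  # WW1: 4 -> 1, as in FIG. 12C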
FIG.13is a block diagram illustrating a storage controller according to an embodiment of the inventive concept.FIG.14is a diagram illustrating a token management table ofFIG.13. For convenience of description, additional description associated with the components described above will be omitted to avoid redundancy. In an exemplary embodiment, a storage controller210ofFIG.13may be configured to communicate with the hosts11to1nand the nonvolatile memory device120ofFIG.1. Referring toFIGS.13and14, the storage controller210may include a performance manager211, a processor212, an SRAM213, a host interface circuit214, and a nonvolatile memory interface circuit215. The processor212, the SRAM213, the host interface circuit214, and the nonvolatile memory interface circuit215are described above, and thus, additional description will be omitted to avoid redundancy. In the embodiment described with reference toFIGS.1to12C, the performance manager111may operate to secure minimum performance of each of the plurality of hosts11to1n. In contrast, the performance manager211ofFIG.13may be configured to limit maximum performance of each of the plurality of hosts11to1n. In other words, the performance manager211ofFIG.13may process a command such that performance of each of the plurality of hosts11to1ndoes not exceed the given maximum performance. For example, the performance manager211may include a command scheduler211a, a completion manager211d, and a timer211e. The command scheduler211amay be configured to schedule commands from the plurality of hosts11to1n. In an exemplary embodiment, the command scheduler211amay be configured to schedule commands based on the scheduling method (i.e., a scheduling method based on an aggregated value of the physical function PF or the submission queue SQ or a WFQ scheduling method) described with reference toFIGS.1to12C. A configuration and an operation of the command scheduler211aare described above, and thus, additional description will be omitted to avoid redundancy. The completion manager211dmay be configured to manage an I/O completion indicating a completion of an operation corresponding to a fetched command. For example, after completing an operation corresponding to a specific command, the storage controller210may transmit an I/O completion corresponding to the specific command to a corresponding host. In this case, a timing to transmit the I/O completion to the corresponding host may be managed by the completion manager211d. When the operation corresponding to the specific command is completed, the completion manager211dmay selectively transmit the I/O completion to the corresponding host, based on a token management table TMT. In detail, as illustrated inFIG.14, the token management table TMT may include information TKU1, TKU2, and TKU3about token units of the plurality of physical functions PF1, PF2, and PF3, reference time information T1, T2, and T3, and information “a”, “b”, and “c” about the number of tokens. In this case, it is assumed that the maximum performance of the first physical function PF1is 1000 MB/s, a first token unit TKU1corresponding to the first physical function PF1is 4 KB, and a reference time T1is 1 ms. In this case, the first number “a” of tokens corresponding to the first physical function PF1may be set to “250”. In an exemplary embodiment, the token unit may indicate a minimum input/output unit supported by a corresponding physical function or a multiple of the minimum input/output unit. 
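The relation among maximum performance, token unit, and reference time in this example can be written out explicitly. The helper below is a sketch under the stated assumptions (1 MB counted as 1000 KB), not an interface of the storage controller210:

    # Initial token count = data deliverable in one reference-time window,
    # divided by the token unit.
    def initial_tokens(max_perf_mb_per_s, token_unit_kb, ref_time_ms):
        kb_per_window = max_perf_mb_per_s * 1000 * ref_time_ms / 1000
        return int(kb_per_window / token_unit_kb)

    print(initial_tokens(1000, 4, 1))   # 250, the number "a" of the PF1 example
    print(initial_tokens(4000, 4, 1))   # 1000, reused in the FIG. 18 example later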
In the case where a command associated with the first physical function PF1is completed, the completion manager211dmay determine whether a token of the first physical function PF1corresponding to the completed command remains, based on the token management table TMT. In the case where a remaining token exists, the completion manager211dmay transmit an input/output completion to a host (e.g., 11) corresponding to the first physical function PF1and may subtract the number of tokens corresponding to a size (e.g., an input/output size) of the transmitted input/output completion from the number of tokens of the token management table TMT. In other words, in the case where an input/output size of the completed command associated with the first physical function PF1is 4 KB, the completion manager211dmay decrease the number of tokens of the token management table TMT associated with the first physical function PF1by “1”. In the case where a remaining token does not exist or in the case where the number of remaining tokens is less than the number of tokens corresponding to the size of the input/output completion, the completion manager211dmay not transmit the input/output completion to the corresponding host. For example, that a token associated with the first physical function PF1is absent from the token management table TMT may mean that performance of the first physical function PF1reaches or comes close to the maximum performance. That is, in the case where a remaining token does not exist or in the case where the number of remaining tokens is less than the number of tokens corresponding to the size of the input/output completion, the completion manager211dmay not transmit the input/output completion to the corresponding host, and thus, performance of the first physical function PF1may be prevented from exceeding the maximum performance. In other words, as the completion manager211dcontrols a timing to transmit the input/output completion based on the token management table TMT, performance of each of a plurality of hosts may be limited to the given maximum performance. Like the above description, the number “b” of tokens associated with the physical function PF2may be determined based on the maximum performance of the physical function PF2, a token unit TKU2, and a reference time T2, the number “c” of tokens associated with the physical function PF3may be determined based on the maximum performance of the physical function PF3, a token unit TKU3, and a reference time T3, and the completion manager211dmay limit the maximum performance of each of the physical functions PF2and PF3based on the token management table TMT.
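A minimal sketch of this completion gating, under the assumptions of the PF1 example above (250 tokens, 4 KB token unit), might look as follows; the class and method names are hypothetical, and the actual controller logic is hardware, not Python.

    # Token-gated completion: an I/O completion is posted only while enough
    # tokens remain; otherwise it is withheld until the timer refills them.
    class CompletionGate:
        def __init__(self, tokens, token_unit_kb):
            self.tokens = tokens                  # e.g., 250 for PF1
            self.token_unit_kb = token_unit_kb    # e.g., 4 KB

        def try_complete(self, io_size_kb):
            needed = io_size_kb // self.token_unit_kb   # 4 KB I/O -> 1 token
            if self.tokens < needed:
                return False        # withhold the completion (and interrupt)
            self.tokens -= needed   # charge tokens for the posted completion
            return True             # write completion-queue entry + interrupt

        def refill(self, tokens):
            self.tokens = tokens    # invoked when the reference time expires

    gate = CompletionGate(tokens=250, token_unit_kb=4)
    assert gate.try_complete(4) and gate.tokens == 249

In an exemplary embodiment, that the input/output completion is not transmitted may mean that an interrupt is not transmitted to the corresponding host. For example, the storage controller210may write the input/output completion in a corresponding completion queue and may transmit the interrupt to the corresponding host. In this case, the interrupt may be a signal indicating that the input/output completion is written in the corresponding completion queue. In response to the interrupt, the corresponding host may recognize that the input/output completion is written in the completion queue and may perform a corresponding operation. That is, in the case where the storage controller210does not transmit the interrupt to the corresponding host, the corresponding host may fail to process an operation associated with the input/output completion.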
Accordingly, as the storage controller210does not transmit the interrupt to the corresponding host, performance of the corresponding host may be limited. In other words, the storage controller210may control a timing to transmit the input/output completion by adjusting a timing to transmit the interrupt to the corresponding host. In an exemplary embodiment, that the input/output completion is not transmitted may mean that the input/output completion is not written in the corresponding completion queue. That is, the storage controller210may control a timing to transmit the input/output completion by controlling a timing to write the input/output completion in the corresponding completion queue. The timer211emay be configured to count a given time. For example, the timer211emay be configured to count a system clock or an operating clock to count a time elapsed from a specific time point or a given time interval. In an exemplary embodiment, the timer211emay be configured to count reference times (e.g., the reference time information T1, T2, and T3included in the token management table TMT) of a plurality of physical functions. In an exemplary embodiment, in the case where the timer211eexpires with regard to a specific physical function (e.g., PF1) (in other words, in the case where the first reference time T1of the first physical function PF1elapses from the specific time point), the completion manager211dmay refill the number “a” of tokens of the first physical function PF1included in the token management table TMT. As described above, the performance manager211may selectively transmit the input/output completion associated with the specific command to the corresponding host based on the token management table TMT. Performance of the corresponding host may be limited (e.g., to the maximum performance or lower) by adjusting a timing to transmit the input/output completion. FIG.15is a flowchart illustrating an operating method of a storage controller ofFIG.13. A maximum performance limiting operation of the storage controller210will be described with reference toFIG.15. However, the inventive concept is not limited thereto. For example, as described with reference toFIGS.1to12C, the storage controller210may perform command scheduling (i.e., an operation of securing the minimum performance of each of a plurality of hosts or a WFQ scheduling method), based on aggregated values respectively associated with the plurality of physical functions PF or the plurality of submission queues SQ. Referring toFIGS.13to15, in operation S410, the storage controller210may complete the command CMD. For example, based on the method described with reference toFIGS.1to12C, the storage controller210may fetch a command from a specific submission queue and may process the fetched command. In operation S420, the storage controller210may determine whether the number of remaining tokens of the physical function PF corresponding to the processed command is more than “0”, based on the token management table TMT. In an exemplary embodiment, a reference value of “0” may be replaced with the number corresponding to an input/output size of the processed command. When the number of remaining tokens is more than “0” (or is more than the number corresponding to the input/output size of the processed command), the storage controller210may perform operation S450.
When the number of remaining tokens is “0” (or the number corresponding to the input/output size of the processed command), in operation S430, the storage controller210may determine whether a timer expires with regard to a physical function corresponding to the processed command. For example, the storage controller210may determine whether a reference time elapses from a specific time point, with regard to the physical function corresponding to the processed command. When it is determined that the timer does not expire, the storage controller210may continue operation S430. When it is determined that the timer expires, in operation S440, the storage controller210may refill the number of tokens of the physical function corresponding to the processed command. In operation S450, the storage controller210may transmit an input/output completion corresponding to the completed command to a corresponding host. For example, the storage controller210may write the input/output completion in a corresponding completion queue and may transmit an interrupt to the corresponding host. In operation S460, the storage controller210may decrease the number of tokens of the token management table TMT by an input/output size of the input/output completion. For example, it is assumed that the input/output completion corresponds to a command of the first physical function PF1and has an input/output size of 16 KB. Under the assumption, in the case where a token unit of the first physical function PF1is 4 KB, the storage controller210may decrease “4” tokens (i.e., 16 KB/4 KB=4) at the token management table TMT. As described above, the storage controller210according to an embodiment of the inventive concept may manage a token of each of the plurality of physical functions PF through the token management table TMT. The storage controller210may control a timing to transmit an input/output completion, based on the number of tokens remaining in the token management table TMT. That is, in the case where a token of a specific physical function is absent from the token management table TMT or in the case where a token is insufficient, the storage controller210may not transmit an input/output completion associated with the specific physical function to the corresponding host. After a timer expires with regard to the specific physical function, the storage controller210may refill a token of the specific physical function at the token management table TMT and may then resume the transmission of the input/output completion. Accordingly, performance of a specific physical function or a specific host may be prevented from exceeding given performance (i.e., the maximum performance thereof may be limited). FIGS.16A and16Bare diagrams for describing an operation of a storage controller ofFIG.13. For convenience of description, an operation of limiting performance of a specific host (e.g., a host corresponding to the first physical function PF1) will be described. However, the inventive concept is not limited thereto. For example, the storage controller210may be configured to limit performance of each of any other hosts, based on a method to be described with reference toFIGS.16A and16B. Referring toFIGS.13and16A, in operation {circle around (1)}, the command scheduler211amay fetch a command from the submission queue SQ11corresponding to the first physical function PF1.
In an exemplary embodiment, the command scheduler211amay select the submission queue SQ11based on the method (e.g., a method of securing minimum performance by using an aggregated value or a WFQ scheduling method) described with reference toFIGS.1to12Cand may fetch the command CMD from the selected submission queue SQ11. Although not illustrated in the drawing, as described with reference toFIGS.1to12C, as the command CMD is fetched from the submission queue SQ11, aggregated values of the submission queue SQ11and the first physical function PF1may be updated. In operation {circle around (2)}, the fetched command may be processed. In operation {circle around (3)}, an operation corresponding to the fetched command may be completed. In operation {circle around (4)}, the completion manager211dmay check the number of remaining tokens of the first physical function PF1corresponding to the fetched command CMD. In the embodiment ofFIG.16A, the number of remaining tokens of the first physical function PF1may be “10”. In this case, because a remaining token exists, the completion manager211dmay decrease the number of remaining tokens of the first physical function PF1by a size of an input/output completion (i.e., an input/output size of the processed command). For example, in the case where a token unit of the first physical function PF1is 4 KB and a size of the input/output completion is 16 KB, the number of tokens of the first physical function PF1in the token management table TMT may decrease from “10” to “6”. In operation {circle around (5)}, the completion manager211dmay transmit the input/output completion to a corresponding host. For brevity of illustration, an example is illustrated inFIG.16Ain which the input/output completion is written in a completion queue CQ11, but the inventive concept is not limited thereto. For example, as described above, the completion manager211dmay write the input/output completion in the completion queue CQ11and may transmit an interrupt to the corresponding host. Next, as illustrated inFIG.16B, in operation {circle around (6)}, the timer211emay expire. For example, the timer211emay be configured to count a reference time (e.g., 1 ms) of the first physical function PF1. That the timer211eexpires may mean that the reference time (e.g., 1 ms) elapses from a specific time point (e.g., a time point at which the timer211einitiates counting). In the case where the timer211eof the first physical function PF1expires, in operation {circle around (7)}, the completion manager211dmay refill the number of tokens of the token management table TMT associated with the first physical function PF1. For example, the number of tokens of the first physical function PF1is in a state of decreasing from “10” to “6” through the operation ofFIG.16A. The completion manager211dmay refill the number of tokens of the first physical function PF1in the token management table TMT from “6” to “10”. In an exemplary embodiment, the number of tokens to be refilled may be determined based on maximum performance of a corresponding physical function, a token unit, and a reference time. In an exemplary embodiment, after a token(s) of the corresponding physical function is refilled, the timer211emay resume a counting operation. FIG.17is a timing diagram for describing an operation of a storage controller ofFIG.13.
For convenience of description, it is assumed that the storage controller210continues to process commands associated with the first physical function PF1, the number of tokens of the first physical function PF1is “5”, and one command has an input/output size corresponding to one token unit. That is, one input/output completion corresponding to processing of one command may correspond to one token. Referring toFIGS.13and17, at a 0-th time t0, the timer211emay initiate counting. At the 0-th time t0, the number of tokens of the first physical function PF1may be “5”. At a first time t1, one command associated with the first physical function PF1may be completely processed. In response to that the command is completely processed, the storage controller210may transmit one input/output completion to a corresponding host and may decrease the number of tokens of the first physical function PF1by “1” (i.e., 5→4). Afterwards, at each of second to fifth times t2, t3, t4, and t5, one command may be completely processed; whenever the command is completely processed, the storage controller210may transmit one input/output completion to the corresponding host and may decrease the number of tokens of the first physical function PF1by “1” (i.e., 4→3→2→1→0). At the fifth time t5, the number of remaining tokens of the first physical function PF1may be “0”. That is, that all tokens of the first physical function PF1are used during a time interval from the first time t1to the fifth time t5may mean that performance of the first physical function PF1reaches the maximum performance. At a sixth time t6, one command associated with the first physical function PF1may be completely processed. In this case, at the sixth time t6, the number of remaining tokens of the first physical function PF1may be “0”. That is, because all tokens of the first physical function PF1are used during a time interval from the first time t1to the fifth time t5, the storage controller210may not transmit an input/output completion associated with the command completely processed at the sixth time t6to the host or may not transmit an interrupt associated with the input/output completion to the host. Afterwards, when the first reference time T1elapses from the 0-th time t0(i.e., a timer expires with regard to the first physical function PF1), the storage controller210may refill a token(s) for the first physical function PF1(i.e., 0→5). After token refilling is performed on the first physical function PF1, the storage controller210may transmit the input/output completion associated with the command completely processed at the sixth time t6to the host. As such, the number of tokens of the first physical function PF1may decrease by “1” (i.e., 5→4). As described above, the storage controller210according to an embodiment of the inventive concept may limit performance of each of a plurality of hosts respectively corresponding to a plurality of physical functions to the maximum performance or lower, by managing tokens of the plurality of physical functions. In this case, a resource of a storage device may be prevented from being occupied by a specific host, and thus, performance of the plurality of hosts may be distributed uniformly or evenly.
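The timeline above can be replayed with a toy model. This is only a sketch of the described behavior (5 tokens, one token per command, refill at the reference time), not controller code:

    # Replay of the FIG. 17 walk-through: six completions against a bucket of
    # five tokens, with the sixth completion deferred until the refill.
    tokens, capacity, deferred = 5, 5, []

    for t in range(1, 7):            # t1 .. t6: one command completes each time
        if tokens > 0:
            tokens -= 1              # completion sent: 5 -> 4 -> 3 -> 2 -> 1 -> 0
        else:
            deferred.append(t)       # t6: no token left, completion withheld

    tokens = capacity                # reference time T1 elapses: refill 0 -> 5
    for t in deferred:
        tokens -= 1                  # deferred completion for t6 is sent now

    print(tokens)                    # 4, matching the final 5 -> 4 step above

FIG.18is a diagram for describing a configuration to receive performance information ofFIG.4.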
For brevity of illustration, indexes of minimum performance and maximum performance of each of the first to third physical functions PF1, PF2, and PF3are illustrated inFIG.18, but the inventive concept is not limited thereto. For example, performance indexes of any other hosts or any other physical functions may be used. Referring toFIGS.1and18, the plurality of hosts11to1nmay provide the storage device100with the performance information or the performance indexes illustrated inFIG.18. For example, the first host11may correspond to the first physical function PF1, and the first host11may provide the storage device100with information about the minimum performance and the maximum performance of the first physical function PF1. Likewise, the second and third hosts12and13may respectively correspond to the second and third physical functions PF2and PF3, and the second and third hosts12and13may provide the storage device100with information about the minimum performance and the maximum performance of the second and third physical functions PF2and PF3, respectively. In an exemplary embodiment, the minimum performance information may include information about sequential write minimum performance SW_min1to SW_min3, sequential read minimum performance SR_min1to SR_min3, random write minimum performance RW_min1to RW_min3, and random read minimum performance RR_min1to RR_min3of the first to third physical functions PF1to PF3. The maximum performance information may include information about sequential write maximum performance SW_max1to SW_max3, sequential read maximum performance SR_max1to SR_max3, random write maximum performance RW_max1to RW_max3, and random read maximum performance RR_max1to RR_max3of the first to third physical functions PF1to PF3. In an exemplary embodiment, the minimum performance information or the maximum performance information may be provided from each of the plurality of hosts11to1nto the storage device100through a “set features” command or vendor commands. In an exemplary embodiment, the storage controller110or210according to an embodiment of the inventive concept may perform the method (i.e., command scheduling or WFQ scheduling using an aggregated value) described with reference toFIGS.1to12Cbased on the information about the minimum performance illustrated inFIG.18. For example, it is assumed that the sequential write minimum performance SW_min1corresponding to the first physical function PF1is 3000 MB/s, the sequential write minimum performance SW_min2corresponding to the second physical function PF2is 1500 MB/s, and maximum performance that the storage device100provides is 6 GB/s. In this case, weights (e.g., write weights) of the first and second physical functions PF1and PF2may be respectively set to “2” and “4” (i.e., a ratio of the weights of the first and second physical functions PF1and PF2may be set to 1:2, the inverse of the 2:1 ratio of their minimum performance). In this case, a frequency at which a command is fetched by the first physical function PF1having a relatively small weight may be relatively high. Accordingly, the minimum performance of each of the first and second physical functions PF1and PF2may be secured.
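One way to arrive at the weights of this example, offered only as a hypothetical sketch, is to make each weight inversely proportional to the requested minimum performance, with the device's stated total (6 GB/s here) as the numerator:

    # Hypothetical weight assignment: a physical function with a higher
    # minimum-performance requirement gets a smaller weight, so its
    # aggregated value grows more slowly and it is fetched from more often.
    def weights_from_min_perf(min_perf_mb_s, device_total_mb_s=6000):
        return {pf: device_total_mb_s // perf for pf, perf in min_perf_mb_s.items()}

    print(weights_from_min_perf({"PF1": 3000, "PF2": 1500}))
    # {'PF1': 2, 'PF2': 4} -> weight ratio 1:2 for a 2:1 performance ratio

Alternatively, the storage controller110or210according to an embodiment of the inventive concept may perform the method (i.e., the method of limiting maximum performance through token management) described with reference toFIGS.13to17based on the information about the maximum performance illustrated inFIG.18.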
For example, it is assumed that a reference time of the first and second physical functions PF1and PF2is 1 ms, a token unit is 4 KB, the sequential write maximum performance SW_max1of the first physical function PF1is 4000 MB/s, and the sequential write maximum performance SW_max2of the second physical function PF2is 2000 MB/s. In this case, the initial number of tokens of the first physical function PF1may be set to “1000”, and the initial number of tokens of the second physical function PF2may be set to “500”. That is, the maximum performance corresponding to each of the first and second physical functions PF1and PF2may be limited by differently setting the initial numbers of tokens of the first and second physical functions PF1and PF2. FIG.19is a block diagram illustrating an SSD system to which a storage system according to the inventive concept is applied. Referring toFIG.19, an SSD system1000may include a host1100and a storage device1200. The storage device1200may exchange signals SIG with the host1100through a signal connector1201and may be supplied with a power PWR through a power connector1202. The storage device1200includes a solid state drive (SSD) controller1210, a plurality of nonvolatile memories1221to122n, an auxiliary power supply1230, and a buffer memory1240. The SSD controller1210may control the plurality of nonvolatile memories1221to122nin response to the signals SIG received from the host1100. The plurality of nonvolatile memories1221to122nmay operate under control of the SSD controller1210. The auxiliary power supply1230is connected with the host1100through the power connector1202. The auxiliary power supply1230may be charged by the power PWR supplied from the host1100. When the power PWR is not smoothly supplied from the host1100, the auxiliary power supply1230may power the storage device1200. The buffer memory1240may be used as a buffer memory of the storage device1200. In an exemplary embodiment, the buffer memory1240may be used as the controller memory buffer CMB of the storage device100described with reference toFIGS.1to19. In an exemplary embodiment, the host1100may be a multi-host or multi-tenant described with reference toFIGS.1to12C, and the storage device1200may be the storage device100described with reference toFIGS.1to12Cor may schedule commands of the host1100(or a multi-host) based on the operation methods described with reference toFIGS.1to12C. FIG.20is a block diagram illustrating an electronic device to which a storage system according to the inventive concept is applied. Referring toFIG.20, an electronic device2000may include a main processor2100, a touch panel2200, a touch driver integrated circuit2202, a display panel2300, a display driver integrated circuit2302, a system memory2400, a storage device2500, an audio processor2600, a communication block2700, and an image processor2800. In an exemplary embodiment, the electronic device2000may be one of various electronic devices such as a personal computer, a laptop computer, a server, a workstation, a portable communication terminal, a personal digital assistant (PDA), a portable media player (PMP), a digital camera, a smartphone, a tablet computer, and a wearable device. The main processor2100may control overall operations of the electronic device2000. The main processor2100may control/manage operations of the components of the electronic device2000. The main processor2100may process various operations for the purpose of operating the electronic device2000.
The touch panel2200may be configured to sense a touch input from a user under control of the touch driver integrated circuit2202. The display panel2300may be configured to display image information under control of the display driver integrated circuit2302. The system memory2400may store data that are used for an operation of the electronic device2000. For example, the system memory2400may include a volatile memory such as a static random access memory (SRAM), a dynamic RAM (DRAM), or a synchronous DRAM (SDRAM), and/or a nonvolatile memory such as a phase-change RAM (PRAM), a magneto-resistive RAM (MRAM), a resistive RAM (ReRAM), or a ferroelectric RAM (FRAM). The storage device2500may store data regardless of whether power is supplied. For example, the storage device2500may include at least one of various nonvolatile memories such as a flash memory, a PRAM, an MRAM, a ReRAM, and a FRAM. For example, the storage device2500may include an embedded memory and/or a removable memory of the electronic device2000. The audio processor2600may process an audio signal by using an audio signal processor2610. The audio processor2600may receive an audio input through a microphone2620or may provide an audio output through a speaker2630. The communication block2700may exchange signals with an external device/system through an antenna2710. A transceiver2720and a modulator/demodulator (MODEM)2730of the communication block2700may process signals exchanged with the external device/system, based on at least one of various wireless communication protocols: long term evolution (LTE), worldwide interoperability for microwave access (WiMax), global system for mobile communication (GSM), code division multiple access (CDMA), Bluetooth, near field communication (NFC), wireless fidelity (Wi-Fi), and radio frequency identification (RFID). The image processor2800may receive light through a lens2810. An image device2820and an image signal processor2830included in the image processor2800may generate image information about an external object, based on the received light. In an exemplary embodiment, the storage device2500may be the storage device100described with reference toFIGS.1to12Cand may support a multi-host or multi-tenant. For example, based on the operation method described with reference toFIGS.1to12C, the storage device2500may satisfy performance requirements on a plurality of cores included in the main processor2100or may satisfy performance requirements on a plurality of processes executable by the main processor2100. Alternatively, the storage device2500may communicate with an external device or an external host through the communication block2700and may satisfy performance requirements on the main processor2100, the external device, or the external host based on the method described with reference toFIGS.1to12C. FIG.21is a block diagram illustrating a data center to which a storage system according to an embodiment of the inventive concept is applied. Referring toFIG.21, a data center3000may include a plurality of computing nodes (or servers)3100to3400. The plurality of computing nodes3100to3400may communicate with each other over a network NT. In an exemplary embodiment, the network NT may be a storage dedicated network such as a storage area network (SAN) or may be an Internet network such as TCP/IP. In an exemplary embodiment, the network NT may include at least one of various communication protocols such as Fibre channel, iSCSI protocol, FCoE, NAS, and NVMe-oF.
The plurality of computing nodes3100to3400may include processors3110,3210,3310, and3410, memories3120,3220,3320, and3420, storage devices3130,3230,3330, and3430, and interface circuits3140,3240,3340, and3440. For example, the first computing node3100may include the first processor3110, the first memory3120, the first storage device3130, and the first interface circuit3140. In an exemplary embodiment, the first processor3110may be implemented with a single core or a multi-core. The first memory3120may include a memory such as a DRAM, an SDRAM, an SRAM, a 3D XPoint memory, an MRAM, a PRAM, a FeRAM, or a ReRAM. The first memory3120may be used as a system memory, a working memory, or a buffer memory of the first computing node3100. The first storage device3130may be a high-capacity storage medium such as a hard disk drive (HDD) or a solid state drive (SSD). The first interface circuit3140may be a network interface controller (NIC) configured to support communication over the network NT. In an exemplary embodiment, the first processor3110of the first computing node3100may be configured to access the first memory3120based on a given memory interface. Alternatively, in an embodiment of a shared memory architecture, the first processor3110of the first computing node3100may be configured to access the memories3220,3320, and3420of the remaining computing nodes3200,3300, and3400over the network NT. The first interface circuit3140may include a network switch (not illustrated) configured to control or support an access to a shared memory (i.e., memories of any other computing nodes). In an exemplary embodiment, the first processor3110of the first computing node3100may be configured to access the first storage device3130based on a given storage interface. Alternatively, the first processor3110of the first computing node3100may be configured to access the storage devices3230,3330, and3430of the remaining computing nodes3200,3300, and3400over the network NT. The first interface circuit3140may include a network switch (not illustrated) configured to control or support an access to storage devices of any other computing nodes. In an exemplary embodiment, the storage devices3130to3430respectively included in the plurality of computing nodes3100to3400may constitute one RAID volume. Operations of the second to fourth computing nodes3200to3400may be similar to the operation of the first computing node3100described above, and thus, additional description will be omitted to avoid redundancy. In an exemplary embodiment, various applications may be executed at the data center3000. The applications may be configured to execute an instruction for data movement or copy between the computing nodes3100to3400or may be configured to execute instructions for combining, processing, or reproducing a variety of information present on the computing nodes3100to3400. In an exemplary embodiment, the applications may be executed by one of the plurality of computing nodes3100to3400included in the data center3000, or the applications may be distributed and executed between the plurality of computing nodes3100to3400. In an exemplary embodiment, the data center3000may be used for high-performance computing (HPC) (e.g., finance, petroleum, materials science, meteorological prediction), an enterprise application (e.g., scale out database), a big data application (e.g., NoSQL database or in-memory replication). In an exemplary embodiment, at least one of the plurality of computing nodes3100to3400may be an application server.
The application server may be configured to execute an application configured to perform various operations at the data center3000. At least one of the plurality of computing nodes3100to3400may be a storage server. The storage server may be configured to store data that are generated or managed at the data center3000. In an exemplary embodiment, the plurality of computing nodes3100to3400included in the data center3000or portions thereof may be present at the same site or at sites physically separated from each other and may communicate with each other over the network NT based on wireless communication or wired communication. In an exemplary embodiment, the plurality of computing nodes3100to3400included in the data center3000may be implemented by the same memory technology or may be implemented by different memory technologies. Although not illustrated in the drawing, at least a part of the plurality of computing nodes3100to3400of the data center3000may communicate with an external client node (not illustrated) over the network NT or over any other communication interface (not illustrated). At least a part of the plurality of computing nodes3100to3400may automatically process a request (e.g., data store or data transfer) in response to a request of the external client node or may process the request at any other computing node. In an exemplary embodiment, the number of computing nodes3100to3400included in the data center3000is exemplary, and the inventive concept is not limited thereto. Also, in each computing node, the number of processors, the number of memories, and the number of storage devices are exemplary, and the inventive concept is not limited thereto. In an exemplary embodiment, the plurality of computing nodes3100to3400may be the plurality of hosts described with reference toFIGS.1to12C, and each of the storage devices3130to3430respectively included in the plurality of computing nodes3100to3400may be a storage device configured to support a multi-host or multi-tenant described with reference toFIGS.1to12C. Based on the operation method described with reference toFIGS.1to19, each of the storage devices3130to3430respectively included in the plurality of computing nodes3100to3400may be configured to secure the minimum performance of each of the plurality of computing nodes3100to3400and to limit the maximum performance. According to the inventive concept, a storage device may secure minimum performance of each of a plurality of hosts. Alternatively, the storage device may prevent a specific host from occupying a physical resource of the storage device by limiting maximum performance corresponding to performance of each of the plurality of hosts. Accordingly, a storage device having improved performance and an operation method of the storage device are provided. While the inventive concept has been described with reference to exemplary embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the inventive concept as set forth in the following claims. | 99,169 |
11861239 | DETAILED DESCRIPTION Embodiments of the present disclosure are directed to encoding and decoding data bits stored in multiple memory cells of a memory array. In certain memory systems, memory cells of a memory array each store an encoding of three logical bits, e.g., are configured as triple-level cell (TLC) memory cells. In order to further increase the number of logical bits encoded per cell, four-bits-per-cell technology is a straightforward way to do so, as no additional encoding or decoding is necessary. However, read window bandwidth (RWB) becomes significantly tighter when adding eight (“8”) additional threshold voltage (Vt) levels, e.g., going from encoding 8 logical states to encoding 16 logical states in each memory cell. This RWB refers to the amount of voltage that separates two neighboring voltage distributions from each other. The narrower the RWB, the harder to resolve the Vt level of transition between two logical states (e.g., data bits) of the memory cell. Overly narrow RWB can thus result in higher bit error rates when reading data out of each memory cell that has been so converted. Similarly, in other memory systems, the memory cells of a memory array each store encodings for four logical bits, e.g., are configured as quad-level cell (QLC) memory cells. In order to further increase the number of logical bits stored per cell, five-bits-per-cell technology is a straightforward way to do so, as no additional encoding or decoding is necessary. However, as in the increased bits in the TLC memory cells discussed above, the RWB becomes significantly tighter when adding 16 additional Vt levels, e.g., going from 16 logical states to 32 logical states encoded in each memory cell. Overly narrow RWB can thus result in higher bit error rates when reading data out of each memory cell that has been so converted. Aspects of the present disclosure address the above and other deficiencies through storing an encoding for an intermediate number of logical bits, such as three and a half (“3.5”) logical bits per cell, and thus 7 logical bits per pair of memory cells, in the memory systems configured with TLC memory cells and four and a half (“4.5”) logical bits per cell, and thus 9 logical bits per pair of memory cells, in the memory systems configured with QLC memory cells. Because both of two memory cells are programmed with an encoding corresponding to a set of three logical bits, e.g., one and a half (“1.5”) bits per cell, to make these strategies possible, additional encoding and decoding can be employed in order to program logical bits to and read the programmed logical bits from a pair of memory cells. In one embodiment, the two memory cells are a pair of adjacent memory cells. For ease of explanation, the pair of memory cells is referred to as a first memory cell and a second memory cell of a memory array. In various embodiments, to avoid the need for 4 bits of control data (to encode/decode 7 bits for each memory cell of the TLC embodiment) or 5 bits of control data (to encode/decode the 9 bits for each memory cell of the QLC embodiment), control logic can encode the above referenced set of three logical bits (e.g., that are base two values) within a combination of the pair of memory cells, e.g., as a first threshold voltage state (or level) stored in the first memory cell and a second threshold voltage state (or level) stored in the second memory cell. 
Because each of these states can represent one of three different integer values (e.g., 0, 1, or 2, or others), the combined two-state value for the combination of the two memory cells can be translated into the three logical bits, e.g., as the three least significant bits of the logical bits being programmed. This translation can be performed using an integer-to-logical value decoding table, as will be discussed. Because each integer value corresponds to a subset of a series of threshold voltage levels, a low-resolution sense operation can be performed initially to determine, for example, whether the threshold voltage (Vt) level is within one of a set of lower Vt states (corresponding to a zero value), a set of middle Vt states (corresponding to a 1 value), or a set of upper Vt states (corresponding to a 2 value), as will be discussed in more detail. Such a low-resolution read operation can be performed at lower resolution than a standard read operation in order to identify a coarse grouping (lower, middle, or upper) of possible Vt states in which the Vt state of the memory cell resides. As an extension, the first threshold voltage state can also be separately encoded as a second set of logical bits and the second threshold voltage state can also be separately encoded as a third set of logical bits, which when combined with the initial set of logical bits, can represent the programmed logical bits within the combination of the first memory cell and the second cell. Thus, in these embodiments, when the logical bits are being decoded, the control logic causes a first threshold voltage state read out of the first memory cell to be converted to a first integer value and a second threshold voltage state read out of the second memory cell to be converted to a second integer value. The control logic can further translate a combination of the first integer value and the second integer value to the set of three logical bits corresponding to a combination of the first and second threshold voltage states. The control logic can further output, as a group of logical bits to be returned in response to a read request, the set of three logical bits with a second set of logical bits corresponding to the first threshold voltage state and a third set of logical bits corresponding to the second threshold voltage state. In one embodiment, the control logic interprets the first set of three logical bits as the least significant logical bits of a group of logical bits, the third set of logical bits as the most significant logical bits, and the second set of logical bits as the middle logical bits of the group of logical bits, although the ordering of the sets of logical bits can change. In one embodiment, the first memory cell and the second memory cell are each a TLC and the group of logical bits include seven logical bits. In another embodiment, the first memory cell and the second memory cell are each a QLC and the group of logical bits include nine logical bits. In related embodiments, individual logical bits of the second and third sets of logical bits can each be encoded in a series of threshold voltage levels, where each series of threshold voltage levels corresponds to 24 total logical states for the 4.5-bits-per-cell embodiment and to 12 total logical states for the 3.5-bit-per-cell embodiment. 
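To make the arithmetic concrete, the following Python sketch pairs two TLC cells (12 threshold voltage levels each, organized as 3 coarse bands of 4 levels) to hold a 7-bit group. It is only an illustration of the scheme described above; the actual integer-to-logical decoding table and level assignments are device-specific, so the mappings below are assumptions.

    # Hypothetical 3.5-bit-per-cell pairing for TLC. The three LSBs of a 7-bit
    # group are coded jointly as a pair of base-3 band indexes (8 of the 9
    # possible band combinations are used); the remaining 2+2 bits select the
    # level offset inside each cell's band.
    def encode_pair(group7):                  # group7: integer in 0..127
        joint = group7 & 0b111                # first set: 3 joint LSBs
        mid = (group7 >> 3) & 0b11            # second set: cell-1 offset bits
        top = (group7 >> 5) & 0b11            # third set: cell-2 offset bits
        band1, band2 = joint // 3, joint % 3  # two integer states in {0, 1, 2}
        return band1 * 4 + mid, band2 * 4 + top   # Vt level 0..11 per cell

    def decode_pair(level1, level2):
        band1, mid = divmod(level1, 4)        # coarse (low-resolution) read
        band2, top = divmod(level2, 4)        #   plus in-band offset
        joint = 3 * band1 + band2             # integer pair -> 3 joint LSBs
        return (top << 5) | (mid << 3) | joint

    assert all(decode_pair(*encode_pair(g)) == g for g in range(128))

The same construction extends to the QLC case (24 levels = 3 bands x 8 levels, giving 3 + 3 + 3 = 9 bits per pair).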
A coding table (or other coding data structure) can be stored in the memory device, which can be accessed by the control logic to determine which of three subsets of the series of threshold voltage levels are to be sensed in order to determine the second and third sets of logical bits. Each subset of the three subsets can correspond (e.g., be indexed) to one of the three possible integer states of the first and second threshold voltage states of the first and second memory cells. The coding table can also define valley locations between the series of threshold voltage levels corresponding to bit value boundaries to simplify encoding/decoding the logical bits. A bit value boundary is a valley between threshold voltage levels of the series of threshold voltage levels where a corresponding logical state changes from a low state, e.g., “0” value, to a high state, e.g., “1” value, or vice versa. Only these transitions at bit value boundaries need be sensed, e.g., by a sense amplifier coupled with the control logic, to determine each logical state for the second and third sets of logical bits (should a discrete logical state be requested individually). In this way, by indexing to determine a subset of the series of threshold voltage levels, and then sensing at only the bit value boundaries of a requested logical bit, the memory device need only sense at one or more bit value boundaries within the subset of the series of threshold voltage levels. By way of example, in an indexing embodiment, the control logic determines, using the first integer value to index into the coding table, first valley locations at bit value boundaries of a subset of the series of threshold voltage levels of the first memory cell. This subset can be a first subset, a second subset, or a third subset of the three subsets. The control logic can further cause a first sense amplifier to sense a first threshold voltage level at one of the first valley locations of the first memory cell. The control logic can then determine values of the second set of logical bits corresponding to the first threshold voltage level. Further, in this embodiment, the control logic determines, using the second integer value to index into the coding table, second valley locations at bit value boundaries of a second subset of the series of threshold voltage levels of the second memory cell. In one embodiment, the first and second subsets are the same. In another embodiment, the first and second subsets are different (depending on the integer values). The control logic can further cause a second sense amplifier to sense a second threshold voltage level at one of the second valley locations of the second memory cell and determine values of the third set of logical bits corresponding to the second threshold voltage level. By way of a further example, in a direct-sense embodiment, if the first set of three bits is not needed, the control logic directs a sense amplifier to sense all of the bit value boundaries for each logical bit that is requested. Thus, the control logic can direct one or more sense amplifiers to sense all the bit value boundaries, for each identified logical bit, within the series of threshold voltage levels corresponding to the 12 logical states for the TLC embodiments or to the 24 logical states for the QLC embodiments, for example. 
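Both read paths, the index-based path and the direct-sense path, can be mimicked in software. In the sketch below the per-band offsets are assumed to be Gray-coded and the valleys are numbered 1 through 11 across the 12-level TLC series; these are illustrative assumptions, since the contents of the coding table are left open here.

    # Boundary-only sensing over a 12-level series (3 bands x 4 Gray-coded
    # offsets). A bit's value is the parity of bit value boundaries crossed.
    GRAY2 = [0b00, 0b01, 0b11, 0b10]          # assumed in-band offset coding

    def sense_above(cell_level, valley):
        # Stand-in for one sense-amplifier strobe: is the cell Vt above valley?
        return cell_level >= valley

    def read_offset_bit(cell_level, band, bit):
        # Index-based read: the coarse integer (band) selects which subset of
        # valleys to strobe, so only that band's boundaries are sensed.
        flips = [i for i in range(1, 4) if ((GRAY2[i-1] ^ GRAY2[i]) >> bit) & 1]
        crossings = sum(sense_above(cell_level, band * 4 + v) for v in flips)
        return ((GRAY2[0] >> bit) & 1) ^ (crossings & 1)

    def read_offset_bit_direct(cell_level, bit):
        # Direct-sense read: strobe every valley (1..11) where the requested
        # bit changes value, skipping the coarse read entirely.
        value = lambda lvl: (GRAY2[lvl % 4] >> bit) & 1
        flips = [v for v in range(1, 12) if value(v - 1) != value(v)]
        crossings = sum(sense_above(cell_level, v) for v in flips)
        return value(0) ^ (crossings & 1)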
Thus, for example, the control logic can direct the sense amplifier(s) to sense the bit value boundaries for logical bit three (“3”) and for logical bit six (“6”), if those are the only two logical bits requested. In this way, the additional logic to determine the first set of logical bits and to index within the three sets of threshold voltage levels is avoided. By employing these two approaches, including a combination of the index-based approach and the direct-sense approach, expensive encoding schemes can be avoided while still increasing the bit per cell capacity by 1 bit for every two cells. While the disclosed embodiments use TLC memory cells and QLC memory cells as examples, one of skill in the art would understand how to extend application to any MLC memory, including MLC memory cells or PLC memory cells. Therefore, advantages of the systems and methods implemented in accordance with some embodiments of the present disclosure include, but are not limited to, an efficient and flexible increase in the number of bits per cell storage capacity in a storage device, such as a NAND memory device. For example, storage capacity of different types of MLC memory cells can be increased by a bit for each pair of memory cells. This storage capacity can be increased with a minimum amount of additional hardware (as will be discussed) together with a small amount of additional logic to resolve the first threshold voltage state of the first memory cell and the second threshold voltage state of the second memory cell. The disclosed encoding/decoding, however, avoids expensive encoding that would require large numbers of control bits to carry out. Further, because of the independently read data bits of the pair of memory cells, a threshold voltage level can be sensed for a logical bit of the first memory cell concurrently with sensing a threshold voltage level for a logical bit of the second memory cell, read latency can be further decreased. Other advantages will be apparent to those skilled in the art of encoding and decoding data stored in memory cells within a memory sub-system discussed hereinafter. FIG.1illustrates an example computing system100that includes a memory sub-system110in accordance with some embodiments of the present disclosure. The memory sub-system110can include media, such as one or more volatile memory devices (e.g., memory device140), one or more non-volatile memory devices (e.g., memory device130), or a combination of such media or memory devices. The memory device130can be a non-volatile memory device. One example of non-volatile memory devices is a negative-and (NAND) memory device. A non-volatile memory device is a package of one or more dice. Each die can include one or more planes. Planes can be grouped into logic units (LUN). For some types of non-volatile memory devices (e.g., NAND devices), each plane consists of a set of physical blocks. Each block consists of a set of pages. Each page includes a set of memory cells (“cells”). A cell is an electronic circuit that stores information. Depending on the cell type, a cell can store one or more bits of binary information, and has various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as “0” and “1”, or combinations of such values. The memory device130can be made up of bits arranged in a two-dimensional or three-dimensional grid, also referred to as a memory array.
Memory cells are etched onto a silicon wafer in an array of columns (also hereinafter referred to as bitlines) and rows (also hereinafter referred to as wordlines). A wordline can refer to one or more rows of memory cells of a memory device that are used with one or more bitlines to generate the address of each of the memory cells. The intersection of a bitline and wordline constitutes the address of the memory cell. A memory sub-system110can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory modules (NVDIMMs). The computing system100can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device. The computing system100can include a host system120that is coupled to one or more memory sub-systems110. In some embodiments, the host system120is coupled to multiple memory sub-systems110of different types.FIG.1illustrates one example of a host system120coupled to one memory sub-system110. The host system120can provide data to be stored at the memory sub-system110and can request data to be retrieved from the memory sub-system110. As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. The host system120can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system120uses the memory sub-system110, for example, to write data to the memory sub-system110and read data from the memory sub-system110. The host system120can be coupled to the memory sub-system110via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system120and the memory sub-system110. The host system120can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices130) when the memory sub-system110is coupled with the host system120by the physical host interface (e.g., PCIe bus). 
The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system110and the host system120.FIG.1illustrates a memory sub-system110as an example. In general, the host system120can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. The memory devices130,140can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device130) include a negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory cells can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Each of the memory devices130can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC) can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple-level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs) can store multiple bits per cell. In some embodiments, each of the memory devices130can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs or any combination of such. In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices130can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks. Although non-volatile memory components such as a 3D cross-point array of non-volatile memory cells and NAND type flash memory (e.g., 2D NAND, 3D NAND) are described, the memory device130can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, or electrically erasable programmable read-only memory (EEPROM). A memory sub-system controller115(or controller115for simplicity) can communicate with the memory devices130to perform operations such as reading data, writing data, or erasing data at the memory devices130and other such operations. 
The memory sub-system controller115can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include a digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller115can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The memory sub-system controller115can include a processing device, which includes one or more processors (e.g., processor117), configured to execute instructions stored in a local memory119. In the illustrated example, the local memory119of the memory sub-system controller115includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system110, including handling communications between the memory sub-system110and the host system120. In some embodiments, the local memory119can include memory registers storing memory pointers, fetched data, etc. The local memory119can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system110inFIG.1has been illustrated as including the memory sub-system controller115, in another embodiment of the present disclosure, a memory sub-system110does not include a memory sub-system controller115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the memory sub-system controller115can receive commands or operations from the host system120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices130. The memory sub-system controller115can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., physical block address) that are associated with the memory devices130. The memory sub-system controller115can further include host interface circuitry to communicate with the host system120via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices130as well as convert responses associated with the memory devices130into information for the host system120. The memory sub-system110can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system110can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller115and decode the address to access the memory devices130. In some embodiments, the memory devices130include local media controllers135that operate in conjunction with memory sub-system controller115to execute operations on one or more memory cells of the memory devices130. An external controller (e.g., memory sub-system controller115) can externally manage a memory device130(e.g., perform media management operations on the memory device130). 
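As a rough illustration of the address translation mentioned above, the following Python sketch models a minimal logical-to-physical mapping of the kind a memory sub-system controller can maintain; the class and its out-of-place write policy are hypothetical simplifications, and a real flash translation layer also coordinates wear leveling and garbage collection, which are omitted here.

# Minimal sketch of logical-to-physical address translation (names and
# policy are hypothetical; error handling, wear leveling, and garbage
# collection are omitted).
class AddressTranslator:
    def __init__(self):
        self.l2p = {}        # logical block address -> physical block address
        self.next_free = 0   # naive allocator for fresh physical blocks

    def write(self, lba):
        # Flash is written out-of-place: each write of an LBA is mapped
        # to a new physical location and the table entry is updated.
        pba = self.next_free
        self.next_free += 1
        self.l2p[lba] = pba
        return pba

    def read(self, lba):
        # Look up the current physical location of the logical address.
        return self.l2p[lba]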
In some embodiments, memory sub-system110is a managed memory device, which is a raw memory device130having control logic (e.g., local media controller135) on the die and a controller (e.g., memory sub-system controller115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. In some embodiments, control logic of the local media controller135stores the first Vt state of the first memory cell in a first page buffer of the page buffers138and converts the first Vt state to a first integer value. The control logic can further store the second Vt state of the second memory cell in a second page buffer of the page buffers138and convert the second Vt state to a second integer value, e.g., so that these integer values (e.g., 0, 1, 2 for each memory cell) can be determined concurrently. The conversion of the Vt states to the integer values can occur through a 3-level column latch discussed with reference toFIG.4BandFIGS.5A-5C. In some embodiments, the integer values can be representations of integer values, e.g., a certain voltage level for each respective integer value, in logic and/or buffered within the 3-level column latches. In these embodiments, the control logic (which also can include control logic of the memory sub-system controller115) can further act on the combined set of the first integer value and the second integer value e.g., to translate a combination of the first and second integer values to the first set of logical bits, as will be explained in more detail. The control logic for combining and translating the integer values can include logic circuits in a data output path, e.g., the page buffers138, and/or input/output (I/O) control212(FIG.2). The control logic can then also separately decode each of the first and second Vt states into the second and third sets of logical bits, as will be explained. FIG.2is a simplified block diagram of a first apparatus, in the form of a memory device130, in communication with a second apparatus, in the form of a memory sub-system controller115of a memory sub-system (e.g., memory sub-system110ofFIG.1), according to an embodiment. Some examples of electronic systems include personal computers, personal digital assistants (PDAs), digital cameras, digital media players, digital recorders, games, appliances, vehicles, wireless devices, mobile telephones and the like. The memory sub-system controller115(e.g., a controller external to the memory device130), can be a memory controller or other external host device. Memory device130includes an array of memory cells204logically arranged in rows and columns. Memory cells of a logical row are typically connected to the same access line (e.g., a word line) while memory cells of a logical column are typically selectively connected to the same data line (e.g., a bit line). A single access line can be associated with more than one logical row of memory cells and a single data line can be associated with more than one logical column. Memory cells (not shown inFIG.2) of at least a portion of array of memory cells204are capable of being programmed to one of at least two target data states. 
In some embodiments, the array of memory cells204can also store one or more coding data structures238such as encoding tables and decoding tables in order to translate threshold voltage levels read out of memory cells into a series of logical bits (data states) and/or in order to more efficiently know which valleys of a series of threshold voltage (Vt) levels to sense for particularly-requested logical bits. Row decode circuitry208and column decode circuitry210are provided to decode address signals. Address signals are received and decoded to access the array of memory cells204. Memory device130also includes input/output (I/O) control circuitry212to manage input of commands, addresses and data to the memory device130as well as output of data and status information from the memory device130. An address register214is in communication with I/O control circuitry212and row decode circuitry208and column decode circuitry210to latch the address signals prior to decoding. A command register224is in communication with I/O control circuitry212and control logic of the local media controller135to latch incoming commands. A controller (e.g., the local media controller135internal to the memory device130) controls access to the array of memory cells204in response to the commands and generates status information for the external memory sub-system controller115, i.e., the local media controller135is configured to perform access operations (e.g., read operations, programming operations and/or erase operations) on the array of memory cells204. The local media controller135is in communication with row decode circuitry208and column decode circuitry210to control the row decode circuitry208and column decode circuitry210in response to the addresses. The local media controller135is also in communication with a cache register218. Cache register218latches data, either incoming or outgoing, as directed by the local media controller135to temporarily store data while the array of memory cells204is busy writing or reading, respectively, other data. During a programming operation (e.g., write operation), data can be passed from the cache register218to the data register220for transfer to the array of memory cells204; then new data can be latched in the cache register218from the I/O control circuitry212. During a read operation, data can be passed from the cache register218to the I/O control circuitry212for output to the memory sub-system controller115; then new data can be passed from the data register220to the cache register218. The cache register218and/or the data register220can form (e.g., can form a portion of) a page buffer138of the memory device130, which is illustrated separately for purposes of explanation. The page buffer138can further include sensing devices (not shown inFIG.2) such as one or more sense amplifiers to sense a data state of memory cells of the array of memory cells204, e.g., by sensing a state of a data line connected to each memory cell. A status register222can be in communication with I/O control circuitry212and the local media controller135to latch the status information for output to the memory sub-system controller115. Memory device130receives control signals at the local media controller135from the memory sub-system controller115over a control link232. For example, the control signals can include a chip enable (CE #), a command latch enable (CLE), an address latch enable (ALE), a write enable (WE #), a read enable (RE #), and a write protect (WP #).
Additional or alternative control signals (not shown) can be further received over control link232depending upon the nature of the memory device130. Memory device130receives command signals (which represent commands), address signals (which represent addresses), and data signals (which represent data) from the memory sub-system controller115over a multiplexed input/output (I/O) bus234and outputs data to the memory sub-system controller115over I/O bus234. For example, the commands can be received over input/output (I/O) pins [7:0] of I/O bus234at I/O control circuitry212and can then be written into command register224. The addresses can be received over input/output (I/O) pins [7:0] of I/O bus234at I/O control circuitry212and can then be written into address register214. The data can be received over input/output (I/O) pins [7:0] for an 8-bit device or input/output (I/O) pins [15:0] for a 16-bit device at I/O control circuitry212and then can be written into cache register218. The data can be subsequently written into data register220for programming the array of memory cells204. In an embodiment, cache register218can be omitted, and the data can be written directly into data register220. Data can also be output over input/output (I/O) pins [7:0] for an 8-bit device or input/output (I/O) pins [15:0] for a 16-bit device. Although reference can be made to I/O pins, they can include any conductive node providing for electrical connection to the memory device130by an external device (e.g., the memory sub-system controller115), such as conductive pads or conductive bumps as are commonly used. It will be appreciated by those skilled in the art that additional circuitry and signals can be provided, and that the memory device130ofFIG.2has been simplified. It should be recognized that the functionality of the various block components described with reference toFIG.2may not necessarily be segregated to distinct components or component portions of an integrated circuit device. For example, a single component or component portion of an integrated circuit device could be adapted to perform the functionality of more than one block component ofFIG.2. Alternatively, one or more components or component portions of an integrated circuit device could be combined to perform the functionality of a single block component ofFIG.2. Additionally, while specific I/O pins are described in accordance with popular conventions for receipt and output of the various signals, it is noted that other combinations or numbers of I/O pins (or other I/O node structures) can be used in the various embodiments. FIG.3is a set of graphs illustrating three possible threshold voltage (Vt) states of a first memory cell (e.g., Cell A) and of a second memory cell (e.g., Cell B) according to an embodiment. For example, the Vt state (or level) of each of Cell A and Cell B can be located in a lower part of the cell (0-state), the middle part of the cell (1-state), or the upper part of the cell (2-state). These lower, middle, and upper Vt states can be encoded as discussed in more detail with reference toFIG.6(TLC embodiment) andFIG.8(QLC embodiment). Although these three groups of states in these three Figures are illustrated separated, in other embodiments the three groups of states can be consecutively ordered without gaps therebetween. As discussed previously, these 0-state, 1-state, and 2-state Vt values can be converted to integer values. 
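As a concrete illustration, the following Python sketch bins a sensed threshold voltage into the integer value 0, 1, or 2; the two read-level boundaries are assumed example values consistent with the voltage ranges discussed below, not device specifications, and real read levels are calibrated per device and wear.

# Illustrative coarse conversion of a sensed Vt into an integer value.
# Anything below the first (assumed) boundary is treated as the 0-state,
# between the boundaries as the 1-state, and above as the 2-state.
def vt_to_integer(vt_volts):
    if vt_volts < 0.0:
        return 0   # 0-state (e.g., below -1 V in the example ranges)
    elif vt_volts < 1.6:
        return 1   # 1-state (e.g., roughly 0.3-1.2 V)
    else:
        return 2   # 2-state (e.g., roughly 2.0-2.9 V)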
In various embodiments, the control logic can cause a first Vt state of the first memory cell to be converted to a first integer value. Further, the control logic can cause a second Vt state of the second memory cell to be converted to a second integer value. The control logic can then translate, using a decoding table such as Table 1, the combination of the first integer value and the second integer value to a set of three logical bits corresponding to the combination of the first and second Vt states.

TABLE 1

Dual-Cell Vt State        Logical Bits (Data)
Cell A      Cell B        Bit_0    Bit_1    Bit_2
0           0             0        1        0
0           1             0        1        1
0           2             0        0        1
1           0             1        1        1
1           1             1        1        0
1           2             1        0        0
2           0             1        0        1
2           1             0        0        0
2           2             0        0        0

In some embodiments, the 0-State is less than −1 volt (V), the 1-State is between 0.3-1.2V, and the 2-State is between 2.0 and 2.9V in a three-level memory cell, although other voltage ranges are envisioned that can be stored as three Vt levels, and buffered in the one or more page buffer(s)138while being programmed to or read out of the memory cell. These voltage ranges may especially be shifted and broadened to make room for 12 Vt states (FIG.6) in TLCs or for 24 Vt states (FIG.8) in QLCs. Table 1 illustrates a decoding table according to one of many possible embodiments of decoding, which the control logic can access in order to perform a translation between the combination of the first and second integer values and the three logical bits. In some embodiments, while there are nine possible combinations of the dual-cell Vt states, only eight distinct combinations of the three logical bits may be used for logical data states. Thus, in the embodiment of Table 1, if Cells A and B express an imaginary "2-2" state as illustrated in the last row in Table 1, the 2-2 state is instead identified and auto-translated to a "2-1" state for consistency. Both the 2-1 and 2-2 states result in "0 0 0" logical bit values, so the result is the same. FIG.4Ais a schematic block diagram of the first and second memory cells (e.g., Cell A and Cell B) of a memory array404that are coupled to a single word line (WL) according to an embodiment. The memory array404could be the same as, or a subpart of, the array of memory cells204discussed with reference toFIG.2. In at least some embodiments, the bit line from the first memory cell (Cell A) is coupled with a first page buffer438A and the bit line from the second memory cell (Cell B) is coupled with a second page buffer438B. The first page buffer438A can include a first sense amplifier440A and the second page buffer438B can include a second sense amplifier440B. The control logic can be coupled with the first and second page buffers438A and438B in these embodiments, and can thus direct the first and second sense amplifiers440A and440B to sense various Vt states (e.g., the first Vt state and the second Vt state) from the first and second memory cells, which can be temporarily stored in the first and second page buffers438A and438B, respectively. FIG.4Bis a schematic block diagram of a compact three-level column latch450to enable reading out a three-level state of a memory cell according to an embodiment. The three-level column latch450can be a compact, intelligent three-level column latch that can be operatively coupled with (or integrated within) each of the first sense amplifier440A and the second sense amplifier440B, for example.
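Before continuing with the column latch, the translation of Table 1 can be expressed in code as a simple lookup; the following minimal Python sketch assumes the mapping exactly as tabulated above, including the treatment of the unused 2-2 state.

# Dual-cell decoding per Table 1: a pair of coarse integer values
# (Cell A, Cell B) maps to the three logical bits (Bit_0, Bit_1, Bit_2).
TABLE_1 = {
    (0, 0): (0, 1, 0),
    (0, 1): (0, 1, 1),
    (0, 2): (0, 0, 1),
    (1, 0): (1, 1, 1),
    (1, 1): (1, 1, 0),
    (1, 2): (1, 0, 0),
    (2, 0): (1, 0, 1),
    (2, 1): (0, 0, 0),
    (2, 2): (0, 0, 0),  # imaginary 2-2 state, auto-translated like 2-1
}

def decode_pair(cell_a, cell_b):
    # Returns (Bit_0, Bit_1, Bit_2) for the combined pair of cells.
    return TABLE_1[(cell_a, cell_b)]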
The three-level column latch450, for example, can be triggered to read three-level data out of or program the three-level data to each of the first memory cell (Cell A) and the second memory cell (Cell B), where the three-level data is sensed or programmed in the form of the integer values, so as to be detectable by the control logic for translation to/from logical bits of data. For example, the three-level column latch450can sense and temporarily store the three-level data (e.g., the integer values corresponding to the Vt state of a memory cell) in a pair of flip-flops, namely a first flip-flop (FF1) and a second flip-flop (FF2) illustrated inFIG.4B. The trigger of storing certain integer values in the FF1 and the FF2 can be a way of converting the first and second Vt states of the first and second memory cells into the first and second integer values, although other logic gates are also envisioned for such conversion, and can be performed in parallel with duplicate circuitry.

TABLE 2

                 Intermediate Node
3-level data     FF1      FF2
0                1        1
1                0        1
2                0        0

TABLE 3

                 Intermediate Node
3-level data     FF1      FF2
0                0        0
1                1        0
2                1        1

In at least some embodiments, the first and second integer values can be understood and processed as an intermediate code stored in the pair of flip-flops as illustrated in Table 2 (for read operations) and Table 3 (for program operations). More specifically, the three-level data can be programmed into each memory cell based on values stored in FF1 and FF2 according to Table 3. The three-level data can be read out of each memory cell based on values stored into FF1 and FF2 according to Table 2. When programming these integer values to the first and second memory cells, the bit line voltage levels can be adjusted as perFIG.5Bto ensure programming the memory cells to the correct Vt range to be associated with the integer values stored in the FF1 and FF2 flip-flops. FIG.5Ais a timing chart for a read operation (e.g., read request) in response to bit lines (BLa's) being selected for the memory cells according to an embodiment. In various embodiments, the flip-flops FF1 and FF2 detect whether the memory cell stores integer value "0" and integer value "2," respectively, else the integer value is assumed to be data "1." For a precise sensing of the threshold voltage, the bit lines are pre-charged to 1.3V and a power supply voltage supplied to the flip-flops is made to be clamped at 2V during a sensing period. FIG.5Bis a timing chart of a program operation of a memory cell using three-level encoding in response to the bit lines (BLa's) being selected according to an embodiment. Program pulses (e.g., 16.5V˜19.3V) are applied to the selected control gate (CG). To get the program speed of "1" programming to be close to that of "2" programming, the bit line voltage of "1" programming is raised to 1.6V. After each program operation, a program verify operation is carried out.FIG.5Cis a timing chart of a program verify operation for the program operation ofFIG.5Baccording to an embodiment. The intermediate codes stored in the column latches are modified such that the "1" or "2" programming are respectively executed on only memory cells in which data "1" or "2" has not been successfully programmed. FIG.6is a graph of four threshold voltage levels capable of being programmed to lower, middle, and upper portions of a first memory cell and a second memory cell according to the 3.5-bits-per cell embodiment.
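Likewise, the intermediate codes of Tables 2 and 3 can be sketched in Python before turning to the multi-bit encodings ofFIG.6; the dictionaries below reproduce the tabulated flip-flop states, while the helper functions are hypothetical stand-ins for the latch circuitry itself.

# Intermediate codes held by the pair of flip-flops (FF1, FF2) of the
# three-level column latch, per Table 2 (read) and Table 3 (program).
READ_CODE = {0: (1, 1), 1: (0, 1), 2: (0, 0)}      # Table 2
PROGRAM_CODE = {0: (0, 0), 1: (1, 0), 2: (1, 1)}   # Table 3

def latch_for_program(three_level_data):
    # (FF1, FF2) values loaded before programming a cell to this level.
    return PROGRAM_CODE[three_level_data]

def data_from_read_latch(ff1, ff2):
    # Recover the three-level data from the latch state after a read.
    for data, code in READ_CODE.items():
        if code == (ff1, ff2):
            return data
    raise ValueError("invalid latch state")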
Three and a half ("3.5") logical bits can be encoded per cell by encoding 7 bits in a pair of TLC memory cells, each having 12 Vt states. A total of 144 (e.g., 12×12) discrete Vt states are possible in two different cells, which can be a pair of adjacent memory cells. Of the 144 Vt states, 128 combined Vt states can be used to express 7 logical bits. Unlike encoding 3 logical bits or 4 logical bits per cell, an encoding/decoding scheme is employed to control read and program operations with 12 Vt states encoded in each of the pair of memory cells. From 7 logical bits of user data, two 4-bit sets of control data would need to be employed, one for the first memory cell and another for the second memory cell, which imparts a heavy cost for encoding and decoding. Instead of employing 128 Vt states to encode/decode all 7 logical bits, the first memory cell can encode two ("2") logical bits that do not need to be combined with data encoded in another cell and 1.5 logical bits that are to be combined with logical bits encoded in the second memory cell. Similarly, the second memory cell can store two ("2") logical bits that do not need to be combined with logical bits encoded in another cell and 1.5 logical bits that are to be combined with the 1.5 logical bits encoded in the first memory cell. The encoding and decoding of 3 logical bits across the two memory cells was discussed with reference toFIG.4BandFIGS.5A-5C. FIG.7is a graph illustrating a coding data structure700for translating threshold voltage levels across the 12 levels (FIG.6) of each memory cell into seven ("7") logical values of data for a combination of the first and second memory cells according to the TLC embodiment. The coding data structure700can be stored on the memory device130(e.g., the coding data structure238stored in the memory array204) or in the local memory119of the controller115, for example. The coding data structure700, which can be a table in one embodiment, includes a series of threshold voltage (Vt) levels in the left-most column (e.g., Vt states 0 through 11) associated with both the first memory cell (Cell A) and the second memory cell (Cell B). The coding data structure700then includes the encoding of logical bits in the subsequent columns for each of the first memory cell (Cell A) and the second memory cell (Cell B). In at least some embodiments, the first Vt state of the first memory cell is converted to an integer value of 0, 1, or 2, illustrated as the first column of the encoding columns for the first memory cell (Cell A). As discussed, the control logic can convert each Vt state using a low-resolution sense operation sufficient to determine in which coarse grouping of Vt states each Vt state resides, and thus an integer value of 0, 1, or 2 or the like. More specifically, there is a set of predefined coarse Vt ranges (e.g., lower, middle, upper), each one corresponding to a coarse integer value. During a low-resolution read, the control logic determines in which of the set of predefined coarse Vt ranges (e.g., lower, middle, upper) the Vt of each memory cell is located, and assigns the memory cell a corresponding integer value, e.g., a 0 value for lower, a 1 value for middle, or a 2 value for upper, although different integer values are possible as well. Subsequent columns for the first memory cell are logical bit encodings for logical bits3and4.
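The state-counting argument behind the 3.5-bits-per-cell figure, and the analogous 4.5-bits-per-cell QLC scheme described below, can be checked with a few lines of Python:

# Arithmetic check of the pairing schemes: a pair of 12-state cells
# yields 144 combined states, of which 128 = 2**7 encode 7 logical bits
# (3.5 bits per cell); a pair of 24-state cells yields 576 states, of
# which 512 = 2**9 encode 9 logical bits (4.5 bits per cell).
for states_per_cell, logical_bits in ((12, 7), (24, 9)):
    combined = states_per_cell ** 2
    used = 2 ** logical_bits
    assert used <= combined
    print(states_per_cell, "states/cell ->", combined, "combined,",
          used, "used,", logical_bits / 2, "bits per cell")

With that accounting in mind, the description returns to the index-based use of the coding data structure ofFIG.7.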
Thus, the integer value converted for the first memory cell can be used as an index to determine first valley locations at bit value boundaries of a subset of the series of threshold voltage levels of the first memory cell. As a reminder, a bit value boundary is a valley between threshold voltage levels of the series of threshold voltage levels where a corresponding logical state changes from a low state, e.g., "0" value, to a high state, e.g., "1" value, or vice versa, illustrated with the short horizontal lines inFIG.7. Further, the "subset" can be understood to be a subset of the Vt states illustrated as rows across the coding data structure700. Control logic can then cause sensing at these first valley locations to determine logical bits3and4. So, for example, if the integer value is zero ("0"), the valley sense locations are illustrated as the horizontal lines in the top left quadrant of the coding data structure700. This narrows down sensing to valleys located between the bit value boundaries, thus a single valley location for logical bit3and two valley locations for logical bit4. In these embodiments, the second Vt state of the second memory cell is converted to an integer value of 0, 1, or 2, illustrated as the first column of the encoding columns for the second memory cell (Cell B). As discussed, the second Vt state can be determined using a low-resolution sense operation sufficient to determine in which grouping of Vt states the second Vt state resides, and thus an integer value of 0, 1, or 2. Subsequent columns for the second memory cell are logical bit encodings for logical bits5and6. Thus, the integer value converted for the second memory cell can be used as an index to determine second valley locations at bit value boundaries of a second subset of the series of threshold voltage levels of the second memory cell. Control logic can then cause sensing at these second valley locations to determine logical bits5and6. So, for example, if the integer value is two ("2"), the valley sense locations are illustrated as the horizontal lines in the bottom right quadrant of the coding data structure700. This narrows down sensing to valleys located between the bit value boundaries, thus two valley locations for logical bit5and a single valley location for logical bit6. In at least some embodiments, as discussed previously, the integer values (0, 1, or 2) converted for each of the first memory cell and the second memory cell can be combined and translated to the set of three logical bits using a decoding table such as Table 1. The first Vt state of the first memory cell can then be further translated, using the coding data structure700, to logical bits3and4. The second Vt state of the second memory cell can then be further translated, using the coding data structure700, to logical bits5and6. For example, to determine a particular (or "fine") Vt state, the control logic can identify a subset of predefined fine Vt ranges that corresponds with the previously identified coarse Vt range, e.g., the bottom four Vt ranges in the lower of Cell A, the middle four Vt ranges in the middle of Cell A, or the highest four Vt ranges in the upper of Cell A, as illustrated inFIG.6. Each fine Vt range corresponds to a particular Vt state, so the control logic can perform sense operations in the valleys between each fine Vt range to determine the particular Vt state of any particular memory cell. Thus, each of the logical bits3,4,5, and6can be related to such a particular Vt state.
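The index-based narrowing described above can be sketched generically in Python; the example encoding column is deliberately hypothetical, since the actual encodings are those of the coding data structure700depicted inFIG.7and are not reproduced here.

# Given one bit's encoding column (a 0/1 value for each of the 12 Vt
# states) and the coarse integer value of the cell, return the valley
# locations to sense: only valleys inside the indexed group of four Vt
# states where the encoded bit value flips.
def valleys_for_indexed_read(column, coarse_integer, states_per_group=4):
    lo = coarse_integer * states_per_group
    hi = lo + states_per_group
    return [v for v in range(lo, hi - 1) if column[v] != column[v + 1]]

# Hypothetical encoding column for one logical bit (not the FIG. 7 data):
example_column = [0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0]
print(valleys_for_indexed_read(example_column, 0))
# -> [0, 2]: sense only between states 0-1 and between states 2-3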
In some embodiments, if the first set of three logical bits (e.g., converted from integer values 0, 1, 2) is not needed or not addressed, e.g., because a read request requests logical bits other than the least-significant logical bits, the control logic can direct a sense amplifier to sense at all of the bit value boundaries for each logical bit that is requested. Thus, the control logic can direct one or more sense amplifiers to sense the valley locations at all the bit value boundaries within the series of threshold voltage levels across the 12 logical states illustrated in the coding data structure700for each requested logical bit. For example, if logical bits3and6are requested, the control logic can direct sensing at the four valley locations (horizontal lines) associated with bit value boundaries for logical bit3and sensing at the five valley locations (horizontal lines) associated with bit value boundaries for logical bit6. These sensing operations should result in determination of the logical bits3and6(as either a "1" or "0" for each logical bit), which can be returned to the host system120in response to the read request without concern about the three least-significant logical bits. Further in reference to both the indexing embodiments and the direct read embodiments associated withFIGS.6-7, the values of logical bit3and logical bit5can be determined concurrently because these logical bits are encoded in the first and second memory cells, respectively. Further, values of logical bit4and logical bit6can be concurrently determined for the same reason. In this way, the control logic can be adapted to determine the bit values of a combined pair of memory cells with higher throughput and lower latency. FIG.8is a graph of eight threshold voltage levels capable of being programmed to lower, middle, and upper portions of a first memory cell and a second memory cell according to the 4.5-bits-per cell embodiment. Four and a half ("4.5") logical bits can be encoded per cell by encoding 9 bits in a pair of QLC memory cells, each having 24 Vt states. A total of 576 (e.g., 24×24) discrete Vt states are possible in two different cells, which can be a pair of adjacent memory cells. Of the 576 Vt states, 512 combined Vt states can be used to express 9 logical bits. Unlike encoding 4 logical bits or 5 logical bits per cell, an encoding/decoding scheme is employed to control read and program operations with 24 Vt states stored to each of the pair of memory cells. From 9 logical bits of user data, two 5-bit sets of control data would need to be employed, one for the first memory cell and another for the second memory cell, which imparts a heavy cost for encoding and decoding. Instead of employing 512 Vt states to encode/decode all 9 logical bits, the first memory cell can store three ("3") logical bits that do not need to be combined with logical bits of another cell and 1.5 logical bits that are to be combined with logical bits of the second memory cell. Similarly, the second memory cell can store three ("3") logical bits that do not need to be combined with logical bits of another cell and 1.5 logical bits that are to be combined with the 1.5 logical bits of the first memory cell. The encoding and decoding of 3 logical bits across the two memory cells was discussed with reference toFIG.4BandFIGS.5A-5C.
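The direct-read mode described above for the TLC case, sensing at every bit value boundary of a requested logical bit without first converting the coarse integer values, applies to the QLC case in the same way; a minimal Python sketch with a hypothetical encoding column follows.

# All bit value boundaries of one encoding column: valleys between
# consecutive Vt states where the encoded bit value changes.
def all_bit_value_boundaries(column):
    return [v for v in range(len(column) - 1) if column[v] != column[v + 1]]

# Hypothetical 12-state column (illustrative only, not the FIG. 7 data):
bit3_column = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1]
print(all_bit_value_boundaries(bit3_column))   # -> [2, 7]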
FIG.9is a graph illustrating a coding data structure900for translating threshold voltage levels across the 24 levels (FIG.8) of each memory cell into nine logical values of data for a combination of the first and second memory cells according to the QLC embodiment. The coding data structure900can be stored on the memory device130(e.g., the coding data structure238stored in the memory array204) or in the local memory119of the controller115, for example. The coding data structure900, which can be a table in one embodiment, includes a series of threshold voltage (Vt) levels in the left-most column (e.g., Vt states 0 through 23) associated with both the first memory cell (Cell A) and the second memory cell (Cell B). The coding data structure900then includes the encoding of logical bits in the subsequent columns for each of the first memory cell (Cell A) and the second memory cell (Cell B). In at least some embodiments, the first Vt state of the first memory cell is converted to an integer value of 0, 1, or 2, illustrated as the first column of the encoding columns for the first memory cell (Cell A). As discussed, the first Vt state can be determined using a low-resolution sense operation sufficient to determine in which grouping of Vt states the first Vt state resides, and thus an integer value of 0, 1, or 2. Subsequent columns for the first memory cell are logical bit encodings for logical bits3,4, and5. Thus, the integer value converted for the first memory cell can be used as an index to determine first valley locations at bit value boundaries of a subset of the series of threshold voltage levels of the first memory cell. As a reminder, a bit value boundary is a valley between threshold voltage levels of the series of threshold voltage levels where a corresponding logical state changes from a low state, e.g., "0" value, to a high state, e.g., "1" value, or vice versa, illustrated with the short horizontal lines inFIG.9. Further, the "subset" can be understood to be a subset of the Vt states illustrated as rows across the coding data structure900. Control logic can then cause sensing at these first valley locations to determine logical bits3,4, and5. So, for example, if the integer value is zero ("0"), the valley sense locations are illustrated as the horizontal lines in the top left quadrant of the coding data structure900. This narrows down sensing to valleys located between the bit value boundaries, thus two valley locations for each of logical bit3and logical bit4and three valley locations for logical bit5. In these embodiments, the second Vt state of the second memory cell is converted to an integer value of 0, 1, or 2, illustrated as the first column of the encoding columns for the second memory cell (Cell B). As discussed, the second Vt state can be determined using a low-resolution sense operation sufficient to determine in which grouping of Vt states the second Vt state resides, and thus an integer value of 0, 1, or 2. Subsequent columns for the second memory cell are logical bit encodings for logical bits6,7, and8. Thus, the integer value converted for the second memory cell can be used as an index to determine second valley locations at bit value boundaries of a second subset of the series of threshold voltage levels of the second memory cell. Control logic can then cause sensing at these second valley locations to determine logical bits6,7, and8.
So, for example, if the integer value is two ("2"), the valley sense locations are illustrated as the horizontal lines in the bottom right quadrant of the coding data structure900. This narrows down sensing to valleys located between the bit value boundaries, thus two valley locations for each of logical bit6and logical bit7and three valley locations for logical bit8. In at least some embodiments, as discussed previously, the integer values (0, 1, or 2) converted for each of the first memory cell and the second memory cell can be combined and translated to the set of three logical bits using a decoding table such as Table 1. The first Vt state of the first memory cell can then be further translated, using the coding data structure900, to logical bits3,4, and5. The second Vt state of the second memory cell can then be further translated, using the coding data structure900, to logical bits6,7, and8. In some embodiments, if the first set of three logical bits (e.g., converted from integer values 0, 1, 2) is not needed or not addressed, e.g., because a read request requests logical bits other than the least-significant logical bits, the control logic can direct a sense amplifier to sense at all of the bit value boundaries for each logical bit that is requested. Thus, the control logic can direct one or more sense amplifiers to sense the valley locations at all the bit value boundaries within the series of threshold voltage levels across the 24 logical states illustrated in the coding data structure900for each requested logical bit. For example, if logical bits3and6are requested, the control logic can direct sensing at the six valley locations (horizontal lines) associated with bit value boundaries for logical bit3and sensing at the six valley locations (horizontal lines) associated with bit value boundaries for logical bit6. These sensing operations should result in determination of the logical bits3and6(as either a "1" or "0" for each logical bit), which can be returned to the host system120in response to the read request without concern about the three least-significant logical bits. Further in reference to both the indexing embodiments and the direct read embodiments associated withFIGS.8-9, the values of logical bit3and logical bit6can be determined concurrently because these logical bits are encoded in the first and second memory cells, respectively. Further, values of logical bit4and logical bit7can be concurrently determined for the same reason. Finally, values of logical bit5and logical bit8can also be determined concurrently for the same reason. In this way, the control logic can be adapted to determine the bit values of a combined pair of memory cells with higher throughput and lower latency. With additional reference toFIGS.1-2andFIGS.6-9, in some embodiments, a memory device includes a memory array having at least a first memory cell and a second memory cell and storing a coding data structure. The coding data structure can include multiple valley locations at bit value boundaries of a series of threshold voltage levels within the first memory cell and the second memory cell for each of multiple logical bits. The device further includes at least one sense amplifier coupled with the memory array and control logic, which is operatively coupled with the memory array and the sense amplifier. The control logic can reside in at least one or both of the local media controller135and the memory sub-system controller115.
In one embodiment, the coding data structure further includes a series of index values related to subsets of the multiple valley locations, each index value corresponding to a threshold voltage level of one of the first memory cell or the second memory cell. In these embodiments, the control logic can receive a read request to determine one or more logical bits, of the multiple logical bits, from threshold voltage levels stored in a combination of the first memory cell and the second memory cell. The control logic can further identify, using the coding data structure, the multiple valley locations along the series of threshold voltage levels for each of the one or more logical bits. The control logic can further cause the at least one sense amplifier to sense the threshold voltage levels at the multiple valley locations within at least one of the first memory cell or the second memory cell associated with each of the one or more logical bits. The control logic can further return, in response to the read request, values of the one or more logical bits based on sensing the threshold voltage levels at the multiple valley locations for each of the one or more logical bits. In one embodiment, the first memory cell and the second memory cell are each a triple-level cell (TLC) and the one or more logical bits include one or more of four most-significant logical bit values of seven logical bits encoded within the first and second memory cells. In another embodiment, the first memory cell and the second memory cell are each a quad-level cell (QLC) and the one or more logical bits include one or more of six most-significant logical bit values of nine logical bits encoded within the first and second memory cells. Further, in response to the read operation including a request for the three least-significant bits of data stored in the combination of the first memory cell and the second memory cell, the control logic can further cause a first threshold voltage state read out of the first memory cell to be converted to a first integer value. The control logic can further cause a second threshold voltage state read out of the second memory cell to be converted to a second integer value. The control logic can further translate a combination of the first integer value and the second integer value to the three least-significant logical bits. In some embodiments, the above-mentioned translating can include: accessing a decoding table that relates different combinations of the first and second integer values to different combinations of the three least-significant logical bits, and translating, using the decoding table, the combination of the first and second integer values into the three least-significant bits of data. FIG.10is a flow diagram of an example method of decoding data stored in a combination of a first memory cell and a second memory cell according to some embodiments. The method500can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments of the method500, the processing logic is control logic located within the local media controller135and/or within the memory sub-system controller115. At operation1010, a first Vt state is converted.
For example, the processing logic causes a first threshold voltage state read out of a first memory cell of a memory array to be converted to a first integer value, e.g., one of 0, 1, or 2. At operation1020, a second Vt state is converted. For example, the processing logic causes a second threshold voltage state read out of a second memory cell of the memory array to be converted to a second integer value, e.g., one of 0, 1, or 2 or the like. The conversions in operations1010and1020can be understood in more detail with reference toFIGS.4A-4BandFIGS.5A-5C. At operation1030, the integer values are translated. For example, the processing logic translates a combination of the first integer value and the second integer value to a set of three logical bits, e.g., via use of a decoding table such as Table 1. In one embodiment, this set of three logical bits is the three least-significant bits of the below-referenced group of bits. At operation1040, a group of bits is output. For example, the processing logic outputs, as a group of bits to be returned in response to a read request, the set of three bits with a second set of logical bits corresponding to the first threshold voltage state and a third set of logical bits corresponding to the second threshold voltage state. For example, the second set of logical bits can be directly decoded from the first threshold voltage state and the third set of logical bits can be directly decoded from the second threshold voltage state, e.g., using the coding data structure700ofFIG.7or the coding data structure900ofFIG.9. In one embodiment, the set of three bits is combined with 2 logical bits independently decoded from the first memory cell and 2 bits independently decoded from the second memory cell for a total of 7 logical bits from the pair of memory cells (e.g., the 3.5 bits-per-cell embodiment). In another embodiment, the set of three bits is combined with 3 bits independently decoded from the first memory cell and 3 bits independently decoded from the second memory cell for a total of 9 logical bits from the pair of memory cells (e.g., the 4.5 bits-per cell embodiment). Additional, related embodiments are envisioned using other MLC memory, such as PLC memory cells, so long as multiple memory cells are combined to encode and decode an additional bit between the multiple memory cells. FIG.11illustrates an example machine of a computer system1100within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system1100can correspond to a host system (e.g., the host system120ofFIG.1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system110ofFIG.1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the memory sub-system controller115ofFIG.1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.
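Before turning to the details of the example machine ofFIG.11, the decode flow ofFIG.10(operations1010through1040) can be summarized for the 3.5-bits-per-cell case in a short Python sketch; table_1 is the dual-cell mapping sketched earlier, per_cell_bits is a hypothetical stand-in for the per-cell lookups into the coding data structure ofFIG.7, and the grouping of the 12 fine Vt states into three coarse groups of four is assumed fromFIG.6.

# End-to-end decode of 7 logical bits from a pair of TLC-style cells.
def decode_seven_bits(state_a, state_b, table_1, per_cell_bits):
    # Operations 1010/1020: coarse integer value of each cell's Vt state
    # (12 fine states, assumed 4 per coarse group).
    int_a, int_b = state_a // 4, state_b // 4
    # Operation 1030: translate the pair of integer values to the three
    # least-significant bits via the Table 1 mapping.
    lsbs = table_1[(int_a, int_b)]
    # Operation 1040: output the group of bits, appending the two bits
    # decoded independently from each cell's fine Vt state.
    return lsbs + per_cell_bits("A", state_a) + per_cell_bits("B", state_b)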
The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system1100includes a processing device1102, a main memory1104(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory1110(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system1118, which communicate with each other via a bus1130. Processing device1102represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device1102can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device1102is configured to execute instructions1126for performing the operations and steps discussed herein. The instructions1126can further include control logic1127, such as the control logic referenced above as being located within one or both of the local media controller135and the memory sub-system controller115. The computer system1100can further include a network interface device1112to communicate over the network1120. The data storage system1118can include a machine-readable storage medium1124(also known as a computer-readable medium) on which is stored one or more sets of instructions1126or software embodying any one or more of the methodologies or functions described herein. The data storage system1118can further include the local media controller135and the page buffer138that were previously discussed. The instructions1126can also reside, completely or at least partially, within the main memory1104and/or within the processing device1102during execution thereof by the computer system1100, the main memory1104and the processing device1102also constituting machine-readable storage media. The machine-readable storage medium1124, data storage system1118, and/or main memory1104can correspond to the memory sub-system110ofFIG.1. In one embodiment, the instructions1126include instructions to implement functionality corresponding to a controller (e.g., the memory sub-system controller115and/or the local media controller135ofFIG.1). While the machine-readable storage medium1124is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions.
The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. 
The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 72,862 |
11861241 | DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS FIG.1is a general view of a system for producing mail batches according to the invention. Data pertaining to a mail batch are supplied to an output management software10from an enterprise application software12such as an enterprise resource planning (ERP) or a customer relationship management (CRM) software. These data may include a list of customers with their delivery preferences (printed or electronic), postal or email addresses, messages to be printed on documents as well as indications of additional inserts to be included in the mailpieces. For convenience, only the physical delivery channel is represented here. The output management software10prepares a print job corresponding to the mail batch and stores it in a job database14for further processing with a production software16. The print job is typically a large PDF file which may contain tens of thousands of individual documents. In most cases, the documents are already ordered by destination addresses to benefit from postal pre-sort rates, and bear unique mailpiece identifiers, preferably including a job identifier, and/or control codes to be read and interpreted by the inserters. The output management software10prepares a job file for the batch of mailpieces and stores it in the job database14. The job file contains the list of unique mailpiece identifiers to be checked by processing equipment for integrity purposes. In a specific embodiment, the job file also contains the finishing instructions about inserts to be added to the mailpieces and/or addresses to be printed on the envelopes. Ultimately, once the production is started, the job file contains the status of mailpieces (e.g. to be processed, pending, complete, hand-mailed, diverted, cancelled or failed/sent for reprint). Additional instructions for the job can be stored in the job database14. These additional instructions may include specific deadlines for sending the mailpieces, paper and print quality requirements, a variety of inserts to be used and preferred postal services (e.g. First Class, registered or priority mail). The job database may also contain the job parameters (e.g. fold type, insert types, envelope size . . . ) to be used for the batch. The print job together with the job file and the additional instructions constitute a mail job. In a preferred embodiment, the output management software10and the production software16are available as Cloud applications running on physical servers remote from the production site(s). The job database14is also hosted in the same Cloud environment on the same or a different physical server. A supervisor connects to the production software16through a control station18and analyses the mail job to be produced. At this stage, the supervisor may decide to produce the corresponding mail batch on one or several production sites20,30. Each site may include one or more printers22,32and one or more inserters24,26;34,36equipped with scanners24A,26A;34A,36A. For convenience, only one printer per site has been represented. The decision to split the batch may be dictated by various considerations, such as a desire to produce the mailpieces near the destination addresses, the number of pages of each document, the availability of specific inserts at a given site, the capabilities of local equipment (e.g. printers and inserters), or simply load balancing between the production sites.
This allocation to one or several production sites may also be automated according to predefined business rules. However, the supervisor may always change this allocation if an unexpected event, such as a major malfunction or a shortage of supplies at one of the sites, occurs. Once the mail batch has been allocated, the job file is modified to flag the mailpieces for the production sites they are intended to. In exceptional circumstances, the print job may need some modification to change the mailpiece identifiers and/or the control codes on the documents. This can be the case if a given equipment requires specific barcodes. This is preferably avoided by using standardized barcodes that can be read and interpreted by all inserters used for producing the mail batch. In the system of the invention, each production equipment22,24,26,32,34,36is in bidirectional communication with the production server, using secure protocols such as HTTPS. Once the mail batch has been allocated, the print job is sent to the printer(s) with a page selection corresponding to the mailpieces flagged for the production sites they are intended to. The corresponding documents are printed. Each printer22,32preferably reports the ones successfully printed to the production software16. The documents are then delivered to the folder/inserter(s). An operator may type the job ID or scan a first mailpiece identifier to access the job parameters and prepare the machine for production. For many mailings however, the job parameters are fixed and only the number of pages varies from one mailpiece to another. The operator then places the document in the main feeder and starts the inserter. As the first document enters the inserter, its identifiers are scanned by the associated scanner. The inserter sends a request to the production server including the job and mailpiece identifiers, and also identifying the inserter and the production site. The production software verifies that the job is valid (not cancelled or closed, and allocated to that production site) and that the mailpiece is valid (not cancelled or already processed by another inserter). The production server sends back a confirmation message for the inserter to process the mailpiece, or otherwise an error message for the inserter to divert the document. In a specific embodiment, the confirmation message may include the finishing instructions for that particular mailpiece. The job then continues for the following documents as will be explained hereafter. FIG.2is a flowchart of a method for producing mail batches according to the invention. In this example, there are no finishing instructions in the job file, but merely the list of unique mailpiece identifiers to be checked by the processing equipment for integrity purposes. Finishing instructions are either the same for all mailpieces, or printed as control codes on the document pages. Control codes are generally printed in the form of separate barcodes or OMR marks to be read and interpreted directly by the inserter. They can also be appended to the mailpiece identifiers in a single barcode. The method starts at act200when the mail batch is allocated to one or several production sites by a supervisor. This allocation can also be performed automatically according to predefined business rules. At act205, the job file is prepared by flagging the mailpieces for a particular production site.
This act optionally includes a modification of the print job to change the mailpiece identifiers and/or the control codes on the documents, according to the equipment available at the site. This would only concern mailpieces allocated to that particular production site, and is preferably avoided by using standardized barcodes that can be read and interpreted by all inserters used for producing the mail batch.

At act210, the print job is sent to a printer of the production site with a page selection corresponding to the mailpieces flagged for that particular production site.

At act215, the documents are printed by the printer. This act optionally includes sending a report about the documents successfully printed to the production server when the task is completed or at regular intervals. The printing of a large mail batch can indeed take several hours and be interrupted for various reasons, like the need to reload paper.

At act220, the documents are delivered to one or more inserters of the production site.

At act225, an operator prepares one inserter for producing the batch. This may include typing a job ID or scanning a first mailpiece identifier to access the job parameters. The inserter may connect to the production software, or this information may be communicated by other means. The operator may also select one in a list of preregistered job types.

At act230, the operator then places the documents in the main feeder, and optionally inserts in the insert feeder(s), and starts the inserter.

At act235, the inserter feeds a first document and scans its identifiers.

At act240, the inserter sends an initial request to the production server including the job and mailpiece identifiers, and identifying the inserter and the production site.

At act245, the production software verifies that the job identifier is valid (not cancelled or closed, and allocated to that production site). If the answer is yes, the method continues at act250. Otherwise an error message is returned at act248and the inserter stops.

At act250, the production software verifies that the mailpiece identifier is valid (not cancelled or already processed). If the answer is yes, the method continues at act255. Otherwise, an error message is returned at act252to divert the document.

At act255, the production server sends back a confirmation message for the inserter to process the mailpiece. In a preferred embodiment, the confirmation message includes a list of valid mailpiece identifiers neighbouring the one that has been scanned. The confirmation message may also notify mailpiece identifiers which are no longer valid for the corresponding documents to be diverted by the inserter.

At act260, the inserter processes the mailpiece by feeding and collating all subsequent document pages, additional inserts if required, folding and inserting the set into an envelope.

At act265, the inserter feeds a second document and scans its identifiers. This act may be performed in parallel with, or even before, the completion of act260, according to the capacity of the inserter. Indeed, more than one mailpiece may be in-process at the same time.

At act270, the inserter checks whether the mailpiece identifier corresponding to the second document is included in the list communicated by the production server at act255. If the answer is yes and the mailpiece identifier is valid, the process loops at act260.
Otherwise, the process loops at act240where the inserter sends a second request for valid mailpiece identifiers to the production server, including a second mailpiece identifier. This second request may correspond to the second document fed by the inserter, or to any following document bearing a mailpiece identifier which is not included in the list communicated by the production server at act255. The mailpiece identifier may also be notified as invalid in the confirmation message. In this case, the inserter will divert the document as at act252, without requiring any further verification by the production software.

At act275, the inserter records the status of mailpiece(s) as complete, failed or diverted. Complete mailpieces are the ones that have been properly inserted. Failed mailpieces are the ones which encountered some malfunction and need to be completed by hand or reprinted. Documents corresponding to invalid mailpiece identifiers are diverted and the mailpieces reported as such.

At act280, the inserter sends a status report of the already processed mailpiece(s) to the production server and a new request for more valid mailpiece identifiers at act285. This new request and the status report are preferably included in the same message, to optimize the communication flow. These acts preferably take place under predetermined conditions before the list of valid mailpiece identifiers sent at act255is exhausted, in order to minimize downtimes. For instance, the status is sent when there remain a few dozen unprocessed mailpieces from that list. The inserter keeps running and processing the mailpieces based on the information previously received and recording their status. If the connection with the production server is temporarily lost for any reason, the latest available status report can be sent along with a new request as soon as the connection is recovered.

At act290, the production software determines a new list of valid (and/or invalid) mailpiece identifiers. This list is based on the status report of act280, status reports already received from the same inserter, and status reports from other inserters located at the same or other sites. The production server then sends it back to the inserter, looping at act260. The inserter uses the information received in the new list to process the following mailpieces.

Eventually at act295, the operator closes the job at that particular inserter, after all documents belonging to the batch have been inserted, and mailpieces manually completed during the process have been reported as hand-mailed. This does not prevent other inserters located at the same or other sites from continuing to produce the same batch simultaneously. The production software keeps monitoring the production of the batch until all mailpiece identifiers in the job file have been reported as complete (or flagged as cancelled).

In this manner, the inserter is constantly supplied with new mailpiece identifiers without having to communicate in real-time with the production software for checking each and every mailpiece, and can operate continuously until all documents delivered at act220have been processed, by itself or other inserters located at the same production site. The production software keeps a dynamic record of all complete mailpieces included in the batch. Failed mailpieces can be flagged for reprint and the corresponding documents sent as another print job to the printer at the first site they were allocated to, or at another site.
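The inserter-side behaviour of acts235to290can be condensed into the following hypothetical Python sketch. The request_list callable stands in for the messages exchanged with the production server, and the refill threshold, names and toy server are assumptions, not the patented implementation:

```python
# Illustrative inserter loop: process mailpieces from the last list received,
# report statuses in batches, and request a new list before exhaustion.

def run_inserter(documents, request_list, refill_threshold=2):
    """request_list(mailpiece_id, status_report) -> list of valid identifiers."""
    valid, report = set(), {}
    for mp in documents:
        if mp not in valid:                        # act 270: not in local list
            valid = set(request_list(mp, report))  # acts 240/280/285
            report = {}
        if mp in valid:
            report[mp] = "complete"                # act 260: insert mailpiece
            valid.discard(mp)
        else:
            report[mp] = "diverted"                # invalid: divert document
        if 0 < len(valid) <= refill_threshold:     # prefetch before exhaustion
            valid |= set(request_list(next(iter(valid)), report))
            report = {}
    return report                                  # last unsent status report

# Toy production server: an identifier stays valid until reported complete.
job, done = ["MP1", "MP2", "MP3", "MP4"], set()
def toy_server(mp_id, status_report):
    done.update(k for k, v in status_report.items() if v == "complete")
    return [m for m in job if m not in done]

print(run_inserter(job, toy_server))
```

The point of the design is visible in the sketch: the server is contacted only when the local list runs low or an unknown identifier is scanned, so the inserter keeps running between exchanges.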
Documents missing in the status reports can be identified, and sent for reprint at the same or at a later time.

It shall be noted that the same method may be employed for data-driven insertion. In this case, finishing instructions may be appended to the list of valid mailpiece identifiers at act260. This is particularly appropriate when for instance specific inserts need to be added or recipient addresses need to be printed on the envelopes.

FIG.3is a flowchart of a method for determining valid mailpiece identifiers to be sent to the inserters, as required by the method ofFIG.2at act255and/or290.

At act300, the production server receives a request for valid mailpiece identifiers from the inserter. The request includes a job identifier and a first mailpiece identifier, and identifies the inserter and the production site. This act occurs after a first document has been fed and its identifiers scanned. Alternatively, the request may include a mailpiece identifier scanned on a second document, or a mailpiece identifier previously sent back to the inserter but not yet scanned, as will be explained hereafter.

At act305, the production software verifies that the job identifier is valid (not cancelled or closed, and allocated to that production site). If the answer is yes, the method continues at act310. Otherwise an error message is returned at act308and the inserter stops.

Optionally at act310, the production software verifies that a print job corresponding to the job identifier has been sent to that particular production site. If the answer is yes, the method continues at act315. Otherwise an error message is returned at act312and the inserter stops. Indeed, if such a print job has not been sent yet, the first document may be a sample used for a test run. The operator may then force the inserter to process a mailpiece.

At act315, the production software verifies that the mailpiece identifier is valid (not cancelled or already processed). If the answer is yes, the method continues at act320. Otherwise, an error message is returned at act318for the inserter to divert the document. This act optionally includes verifying that the document corresponding to that mailpiece ID was flagged in the job file for that particular production site, and whether a report about the documents successfully printed has been received. However, such report may not be readily available, or not complete, or the same document may have been printed at several production sites for various reasons.

At act320, the production software determines an appropriate search window for the list of mailpiece identifiers to be sent back to the inserter. The window shall contain the mailpiece identifier included in the request received at act300, and a number of neighbouring mailpiece identifiers in the job file. The size of the window depends on various factors, such as the memory available in the inserter or bandwidth limitations. In a typical embodiment, the full list of mailpiece identifiers included in the print job is sent back. This might be inappropriate for very large batches. In another embodiment, for instance when finishing instructions are appended to the list, only a limited number of mailpiece identifiers may be sent back, to optimize the communication flow. More particularly, the list of valid mailpiece identifiers includes at least one mailpiece identifier following the first mailpiece identifier and/or one mailpiece identifier preceding the first mailpiece identifier in the job file.
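As an illustration of act320, the window can be taken as a slice of the ordered job file around the requested identifier. The asymmetric window sizes below are assumptions, since the patent only requires at least one following and/or preceding identifier:

```python
# Illustrative sketch of act 320: choosing a window of mailpiece identifiers
# around the requested one. Window sizes are assumptions.

def search_window(job_ids, requested_id, after=50, before=5):
    """job_ids is the ordered list of identifiers in the job file, reflecting
    their order in the print job; return a slice around requested_id."""
    i = job_ids.index(requested_id)
    lo = max(0, i - before)                 # a few preceding identifiers,
                                            # for misplaced or inverted sheets
    hi = min(len(job_ids), i + after + 1)   # mostly following identifiers
    return job_ids[lo:hi]

job_ids = [f"MP{n:04d}" for n in range(1, 1001)]
print(search_window(job_ids, "MP0100", after=10, before=2)[:5])
```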
At act325, the production software searches in the job file for mailpiece identifiers neighbouring the mailpiece ID included in the request. The documents are usually printed in sequence with mailpiece identifiers in chronological order. However, many deviations are possible. For instance, the documents may be printed in reverse order, starting from the bottom of the PDF file. The documents may be printed in direct order, but the mailpiece identifiers numbered in reverse order in the job file. The document identifiers may also follow another logic, such as customer identifiers, and some numbers may be absent from the sequence. In any case, the order of the documents in the job file reflects their order in the print job as prepared by the output management software. In a typical case, when the documents are printed in direct order, the list will include a number of following mailpiece identifiers, but may also include a smaller number of preceding mailpiece identifiers, to cope with documents improperly placed in the feeder or inverted.

At act330, the production software verifies that all mailpiece identifiers included in the window are valid, meaning that they have not been reported as already complete by the same or another inserter, completed by hand or cancelled. A mailpiece, or a whole group of mailpieces, may be cancelled if the sender eventually decides not to send them. This may occur after the production of the mail batch has started, in which case the supervisor can flag the mailpieces as cancelled in the job file. Mailpiece identifiers which are no longer valid are notified as such for the corresponding documents to be diverted by the inserter.

In a specific embodiment, the list may be filtered to contain only mailpiece identifiers that have been flagged for a particular production site. These identifiers normally correspond to the documents that have been previously sent in a print job to the same site (and successfully printed). This may be useful to optimize the communication flow, notably when finishing instructions are appended to the list. In an ideal world, there is no overlap between the various sites: documents are printed and inserted where intended. However, disruptions during the production of a large batch may render this approach ineffective.

Optionally at act335, the production software checks whether any pending mailpiece identifier shall be added to the list. Pending mailpiece identifiers are the ones missing in status reports previously sent by the same or other inserters. This may happen because the corresponding documents were not printed successfully, or some malfunction occurred during insertion and the mailpieces were reported as failed. However, this is only appropriate if the corresponding documents have been sent for reprint to that particular production site. This act can be delayed until a significant part of the mail batch has been produced.

At act340, the production server sends back a confirmation message to the inserter, including a list of valid mailpiece identifiers determined during the preceding acts for the inserter to process the mailpieces. The confirmation message also includes mailpiece identifiers notified as invalid for the documents to be diverted by the inserter.

At act345, the inserter processes the mailpieces by feeding and collating all subsequent document pages, additional inserts if required, folding and inserting the sets into envelopes. Documents corresponding to invalid mailpiece identifiers are diverted.
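Acts330to340can be illustrated by the following hypothetical sketch, which splits a window into valid and invalid identifiers and builds the confirmation message; the status names and the optional site filter are assumptions:

```python
# Hypothetical sketch of acts 330-340: filtering the window and building
# the confirmation message. Status strings are illustrative only.

def build_confirmation(window, statuses, site=None, site_flags=None):
    valid, invalid = [], []
    for mp in window:
        st = statuses.get(mp, "unknown")
        if st in ("complete", "hand_mailed", "cancelled"):
            invalid.append(mp)                    # act 330: no longer valid
        elif site is None or site_flags.get(mp) == site:
            valid.append(mp)                      # optionally filtered by site
    return {"valid": valid, "invalid": invalid}   # act 340: confirmation

statuses = {"MP1": "to_be_processed", "MP2": "complete", "MP3": "cancelled"}
print(build_confirmation(["MP1", "MP2", "MP3"], statuses))
```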
At act350, the inserter records the status of mailpiece(s) as complete, failed or diverted.

At act355, the inserter sends a status report of the already processed mailpiece(s) to the production server. If more documents need to be inserted, a new request for more valid mailpiece identifiers may be sent at act360and preferably included in the same message. The new request may include the first mailpiece identifier from the previous list that has not been scanned yet by the inserter. The method then loops at act300. Mailpiece identifiers already included in a previous list and reported as complete may be notified as invalid in the new list. Conversely, mailpiece identifiers included in a previous list may still be valid in the new list. The inserter always takes into account the latest status received from the production server or recorded by itself.

Eventually at act365, the operator closes the job at that particular inserter, after all documents belonging to the batch have been inserted and mailpieces manually completed during the process have been reported as such. This does not prevent other inserters located at the same or other sites from continuing to produce the same batch simultaneously. The production software keeps monitoring the production of the batch until all mailpiece identifiers in the job file have been reported as complete (or flagged as cancelled).

The same method can be used if a mailpiece identifier not included in a list previously sent to the inserter is scanned at act345. This may occur if the batch of mailpieces is split between several inserters at the same production site. The method will restart at act300with a new request received from the inserter and including said mailpiece identifier. In a specific embodiment, the inserter will complete the mailpieces in process, report their status and clear mailpiece identifiers included in a previous list that have not been scanned. Otherwise, the inserter may keep these mailpiece identifiers in memory until the job is closed.

In a specific embodiment, an indication of the memory space available in the inserter is sent along with the initial or new request for valid mailpiece identifiers, and the size of the search window is determined accordingly. The inserter preferably clears its memory of all mailpiece identifiers (and finishing instructions if applicable) reported as complete at acts280or355. However, this may happen only after the inserter receives an acknowledgement from the production server, so that no information is lost due to connection issues.

While in the description the printers are only communicating with the production server, the print job may be sent to the printers by the output management server after the allocation has been done. The print job may also be split in several print jobs intended for the different sites according to the allocation.

The system and method of the invention are particularly useful in disaster recovery situations, when there is a major problem at one production site and the job(s) must be redirected to other ones. It ensures that a particular mailpiece will be inserted (if not printed) only once, while allowing high production throughput and flexibility in normal circumstances.

European patent application 22 305 086.5 is incorporated herein by reference in its entirety.
11861242 | DESCRIPTION OF THE EMBODIMENTS

Hereinafter, an embodiment of the present disclosure will be described in detail with reference toFIGS.1to7. In the present embodiment, an image forming apparatus101applied to an image forming system1will be described.

Schematic Configuration of Image Forming System

FIG.1is a diagram of an overall hardware configuration of the image forming system1according to the present embodiment. The image forming system1includes an image forming apparatus101and an external controller102. The image forming apparatus101and the external controller102are communicably connected to each other via an internal LAN105and a video cable106. The external controller102is communicably connected to a client PC103via an external LAN104, and a print instruction is transmitted from the client PC103to the external controller102.

A printer driver having a function of converting print data into a page description language processible by the external controller102is installed in the client PC103. A user who performs printing can transmit a print instruction from various kinds of applications via the printer driver. Based on the print instruction from the user, the printer driver installed in the client PC103transmits print data to the external controller102. Upon receipt of the print instruction from the client PC103, the external controller102executes the print instruction by performing data analysis and rasterization processing, and transmitting print data including image information to the image forming apparatus101.

A plurality of apparatuses having different functions are connected to the image forming apparatus101to enable complicated printing processing such as bookbinding. In the present embodiment, the image forming apparatus101includes a printing apparatus107, an inspection apparatus108, and a first stacker111and a second stacker112which are large-capacity stackers. Note that the image forming apparatus101may also be referred to as a multi-function machine, a multi-function peripheral, or an MFP. Further, in the present embodiment, a sheet is a recording material on which a toner image is formed, and specific examples thereof include plain paper, a synthetic resin sheet as a substitute for the plain paper, thick paper, a sheet for an overhead projector, and the like.

The printing apparatus107forms an image using a toner on a sheet conveyed from a feeding cassette301or302disposed in a lower portion of the printing apparatus107. That is, the printing apparatus107is an example of an image forming unit that forms an image on a sheet based on image information. The inspection apparatus108reads the image on the sheet conveyed from the printing apparatus107and compares the image with a correct image registered in advance to determine whether or not the printed image is normal. That is, the inspection apparatus108is an example of a reading unit that reads an image formed on a sheet by the printing apparatus107. The first stacker111and the second stacker112are large-capacity stackers in which a large number of sheets can be stacked.

Note that, although the image forming system1described inFIG.1has a configuration in which the external controller102is connected to the image forming apparatus101, the present disclosure is not limited to the configuration in which the external controller102is connected thereto.
That is, the image forming apparatus101may be connected to the external LAN104so that print data that can be processed by the image forming apparatus101is transmitted directly from the client PC103. In this case, the image forming apparatus101performs data analysis and rasterization processing, and performs printing processing.

Control System of Image Forming System

FIG.2is a block diagram illustrating a configuration of the system including the image forming apparatus101, the external controller102, and the client PC103.

First, a configuration of the printing apparatus107will be described. The printing apparatus107includes a communication I/F217, a LAN I/F218, a video I/F220, an HDD221, a CPU222, a memory223, an operation unit224, a display225, a laser exposure unit227, an imaging unit228, a fixing unit229, and a feeding unit230. The individual components are connected to each other via a system bus231. In the present embodiment, a tandem type full-color printer is described as an example of the printing apparatus107. However, the present disclosure is not limited to the tandem type printing apparatus107, and the printing apparatus107may be another type of image forming apparatus. Also, the present disclosure is not limited to the full-color printing apparatus107, and the printing apparatus107may be a monochrome or mono-color image forming apparatus.

The communication interface (I/F)217is connected to the inspection apparatus108, the first stacker111, and the second stacker112via a communication cable249to perform communication for controlling each of those apparatuses. The LAN I/F218is connected to the external controller102via the internal LAN105to perform communication for print data and the like. The video I/F220is connected to the external controller102via the video cable106to perform communication for image data and the like. The HDD221is a storage device that stores programs and data. Based on the programs and the like stored in the HDD221, the CPU222comprehensively performs image processing control and printing control. The memory223operates as a work area in which programs and image data necessary when the CPU222performs various kinds of processing are stored. The operation unit224receives various setting inputs and operation instructions from the user. The display225displays setting information of the image forming apparatus101, a processing status of a print job, and the like.

The laser exposure unit227is a device that performs primary charging and laser exposure for irradiating a photosensitive drum with laser light in order to transfer a toner image. In the laser exposure unit227, first, the primary charging is performed to charge a surface of the photosensitive drum to have a uniform negative potential. Next, the photosensitive drum is irradiated with laser light by a laser driver while a reflection angle of the laser light is adjusted using a polygon mirror. As a result, a negative charge at an irradiated portion of the photosensitive drum is neutralized, and an electrostatic latent image is formed.

The imaging unit228, which is a device for transferring a toner onto a sheet, includes a developing unit, a transfer unit, a toner supply unit, etc., and transfers the toner on the photosensitive drum onto the sheet. In the developing unit, the toner charged negatively from a developing cylinder is attached to the electrostatic latent image on the surface of the photosensitive drum for visualization.
In the transfer unit, primary transfer is performed by applying a positive potential to a primary transfer roller and transferring the toner on the surface of the photosensitive drum to a transfer belt, and secondary transfer is performed by applying a positive potential to a secondary transfer outer roller and transferring the toner on the transfer belt to a sheet. The fixing unit229, which is a device for melting and bonding the toner on the sheet to the sheet by heat and pressure, includes a heater, a fixing belt, a pressure belt, etc. The feeding unit230is a device for feeding a sheet, and its operation of feeding or conveying the sheet is controlled by a roller and various sensors.

Next, a configuration of the inspection apparatus108will be described. The inspection apparatus108includes a communication I/F232, a CPU233which is an example of a control unit, a memory234, an image reading unit235, a display unit236, an operation unit237, and an image processing unit238, and the individual components are connected to each other via a system bus250. The communication I/F232is connected to the printing apparatus107via the communication cable249to perform communication necessary for control. The memory234is a storage device that stores control programs. The image reading unit235reads an image on a conveyed sheet on the basis of an instruction from the CPU233. The CPU233performs various kinds of control necessary for inspection according to the control programs stored in the memory234. The CPU233compares the image read by the image reading unit235with a correct image stored in the memory234to determine whether or not the printed image is normal. That is, the CPU233is an example of a determination unit configured to execute determination processing to determine whether the image read by the image reading unit235is a normal image or an abnormal image. The display unit236displays an inspection result, a setting screen, etc. The operation unit237is operated by a user to receive an instruction to change the setting of the inspection apparatus108, register a correct image, or the like. The image processing unit238sets a gain adjustment value according to the image reading of the image reading unit235and the like to reflect the gain adjustment value in an image reading result.

Next, a configuration of the first stacker111will be described. The first stacker111includes a communication I/F239, a CPU240, a memory241, and a discharge control unit242, and the individual components are connected to each other via a system bus243. The communication I/F239is connected to the printing apparatus107via the communication cable249to perform communication necessary for control. The CPU240performs various kinds of control necessary for discharging according to control programs stored in the memory241. The memory241is a storage device that stores the control programs. Based on an instruction from the CPU240, the discharge control unit242performs control to convey the conveyed sheet to a stack tray331(seeFIG.3), which is a first stack tray, or an escape tray334(seeFIG.3).

Next, a configuration of the second stacker112will be described. The second stacker112includes a communication I/F244, a CPU245, a memory246, and a discharge control unit247, and the individual components are connected to each other via a system bus248. Since the configuration of each unit is similar to that in the first stacker111, the detailed description thereof will not be repeated.

Next, a configuration of the external controller102will be described.
The external controller102may also be referred to as an image processing controller, a digital front end, a print server, a DFE, or the like. The external controller102includes a CPU208, a memory209, an HDD210, a keyboard211, a display212, a LAN I/F213, a LAN I/F214, and a video I/F215, which are connected to each other via a system bus216. Based on programs and data stored in the HDD210, the CPU208comprehensively executes processing such as reception of print data from the client PC103, RIP processing, and transmission of print data to the image forming apparatus101. The memory209operates as a work area in which programs and data necessary when the CPU208performs various kinds of processing are stored. The HDD210stores programs and data necessary for operations such as printing processing. The keyboard211is a device for inputting an instruction to operate the external controller102. Information on an application executed by the external controller102or the like can be displayed on the display212by an image signal as a still image or a moving image. The LAN I/F213is connected to the client PC103via the external LAN104to perform communication for a print instruction or the like. The LAN I/F214is connected to the image forming apparatus101via the internal LAN105to perform communication for a print instruction or the like. The video I/F215is connected to the image forming apparatus101via the video cable106to perform communication for print data or the like.

Next, a configuration of the client PC103will be described. The client PC103includes a CPU201, a memory202, an HDD203, a keyboard204, a display205, and a LAN I/F206, which are connected to each other via a system bus207. Based on document processing programs and the like stored in the HDD203, the CPU201creates print data and executes a print instruction to comprehensively control each device connected to the system bus. The memory202operates as a work area in which programs and data necessary when the CPU201performs various kinds of processing are stored. The HDD203stores programs and data necessary for operations such as printing processing. The keyboard204is a device for inputting an instruction to operate the client PC103. Information on an application executed by the client PC103or the like is displayed on the display205by an image signal as a still image or a moving image. The LAN I/F206is connected to the external LAN104to perform communication for a print instruction or the like.

Although it has been described above that the external controller102and the image forming apparatus101are connected to each other by the internal LAN105and the video cable106, any configuration may be used as long as data necessary for printing can be transmitted and received therebetween. For example, only the video cable may be used for a connection configuration. In addition, each of the memory202, the memory209, the memory223, the memory234, the memory241, and the memory246may be any storage device as long as data and programs can be stored therein. For example, each of the memories may be substituted with a volatile RAM, a non-volatile ROM, a built-in HDD, an external HDD, a USB memory, or the like.

Image Forming Apparatus

FIG.3is a cross-sectional view of the image forming apparatus101. The configuration and the operation principle of the printing apparatus107are as follows. In the printing apparatus107, various kinds of sheets can be accommodated in the feeding cassettes301and302.
Only the uppermost one of the sheets accommodated in each of the feeding cassettes301and302can be separated and conveyed to a sheet conveyance path303. In development stations304to307, toner images are formed using Y, M, C, and K color toners, respectively, to form a color image. In each of the development stations304to307, the photosensitive drum is irradiated with a light beam such as a laser beam modulated according to image data as scanning light after being reflected by a rotating polygon mirror or the like. An electrostatic latent image formed on the photosensitive drum by the laser light is developed by a toner, and a toner image is primarily transferred to an intermediate transfer belt308. By sequentially executing such a series of image forming processes with respect to yellow (Y), magenta (M), cyan (C), and black (K) toners, a full-color image is formed on the intermediate transfer belt308.

The intermediate transfer belt308rotates clockwise inFIG.3, and the toner image is secondarily transferred to the sheet conveyed from the sheet conveyance path303at a secondary transfer position309and the sheet with the toner image thereon is conveyed to a first fixing unit311. The first fixing unit311includes a pressing roller and a heating roller to fix the toner image onto the sheet by melting and pressure-bonding the toners when the sheet passes between the rollers. The sheet having passed through the first fixing unit311is conveyed to a sheet conveyance path315through a sheet conveyance path312.

A certain type of sheet may require further melting and pressure-bonding to fix the toner image thereto. In this case, the sheet is conveyed to a second fixing unit313by a sheet conveyance path disposed above the first fixing unit311after passing through the first fixing unit311. After additional melting and pressure-bonding are performed in the second fixing unit313, the sheet is conveyed to the sheet conveyance path315through a sheet conveyance path314. In a case where the image forming mode is a double-side mode, the sheet is conveyed to a sheet reversing path316. After being reversed in the sheet reversing path316, the sheet is conveyed to a double-side conveyance path317, and an image is transferred to a back surface of the sheet at the secondary transfer position309. The display225displays a printing status of the image forming apparatus101and information for setting.

The sheet having passed through the printing apparatus107is conveyed to the inspection apparatus108. In the inspection apparatus108, a first CIS unit321and a second CIS unit322are arranged to face each other. The first CIS unit321is a CIS unit for reading an upper side of the sheet, and the second CIS unit322is a CIS unit for reading a lower side of the sheet. The inspection apparatus108reads the images on the sheet using the first CIS unit321and the second CIS unit322at a timing when the sheet conveyed to the sheet conveyance path323reaches a predetermined position, and determines whether or not the images are normal on the basis of a reading result. The display unit236displays a result of the inspection performed by the inspection apparatus108and the like. Note that the first CIS unit321and the second CIS unit322are not limited to CIS units as sensors, and may be constituted by other optical system sensors such as CCD or CMOS sensors. The sheet conveyance path323is an example of a conveyance unit that conveys a sheet whose images have been read by the inspection apparatus108.
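The patent leaves the comparison between the read image and the registered correct image unspecified; purely as an illustrative assumption, a simple per-pixel difference threshold on grayscale data might look as follows:

```python
# Illustrative normal/abnormal decision: mean absolute pixel difference
# against the registered correct image. The threshold is an assumption.

def is_normal(read_image, correct_image, threshold=8.0):
    """Both images as equal-sized 2-D lists of 0-255 gray levels."""
    total = count = 0
    for row_r, row_c in zip(read_image, correct_image):
        for pr, pc in zip(row_r, row_c):
            total += abs(pr - pc)
            count += 1
    return (total / count) <= threshold   # small mean difference -> normal

correct = [[128, 128], [128, 128]]
smudged = [[128, 40], [128, 128]]
print(is_normal(correct, correct), is_normal(smudged, correct))  # True False
```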
First Stacker

The first stacker111includes a stack tray331which is an example of a first discharge unit as a tray on which sheets are stacked, and an escape tray334which is an example of a second discharge unit as a sheet discharge tray. The sheet having passed through the inspection apparatus108is conveyed to the first stacker111. The sheets from a sheet conveyance path332are stacked on the stack tray331via a sheet conveyance path333. The escape tray334is a sheet discharge tray used to discharge a sheet whose image is determined to be abnormal by the inspection apparatus108. In a case where the sheet is output to the escape tray334, the sheet is conveyed from the sheet conveyance path332to the escape tray334via a sheet conveyance path335. In a case where the sheet is conveyed to a post-processing apparatus at a stage following the first stacker111, the sheet is conveyed via a sheet conveyance path336.

A reversing unit337is provided to reverse the sheet, and is used in a case where the sheet is stacked on the stack tray331. That is, in order to stack the sheet on the stack tray331so that a direction of the sheet when input and a direction of the sheet when output are the same, the sheet is reversed by the reversing unit337. In a case where the sheet is conveyed to the escape tray334or the following post-processing apparatus, the sheet is discharged as it is without performing a reversing operation in the reversing unit337.

Second Stacker

The second stacker112includes a second stack tray341as a tray on which sheets are stacked, and a tray344. The sheet having passed through the first stacker111is conveyed to the second stacker112. The sheets from a sheet conveyance path342are stacked on the second stack tray341via a sheet conveyance path343. In a case where the sheet is output to the tray344, the sheet is conveyed from the sheet conveyance path342to the tray344via a sheet conveyance path345. A reversing unit347is provided to reverse the sheet, and is used in a case where the sheet is stacked on the second stack tray341. That is, in order to stack the sheet on the second stack tray341so that a direction of the sheet when input and a direction of the sheet when output are the same, the sheet is reversed by the reversing unit347. In a case where the sheet is conveyed to the tray344, the sheet is discharged as it is without performing a reversing operation in the reversing unit347.

In the present embodiment, the sheet conveyed from the inspection apparatus108to the sheet conveyance path332is selectively discharged to the stack tray331or the escape tray334. The CPU233of the inspection apparatus108executes determination processing for determining whether the image read by the inspection apparatus108is a normal image or an abnormal image by comparing the image with image information. In addition, based on a result of the determination processing, the CPU233selects either the stack tray331or the escape tray334as a discharge place to which the sheet is discharged.

Inspection System

The sheet whose image is determined to be abnormal by the inspection apparatus108is discharged to the escape tray334and needs to be subjected to recovery printing. At the time when it is determined that the image is abnormal, images have already been formed on several sheets (referred to as remaining sheets) by the printing apparatus107upstream of the inspection apparatus108.
That is, the remaining sheets refer to a plurality of sheets on which image formation is started later than the abnormal sheet at the time when the abnormal image is read. In other words, the remaining sheets include sheets fed by the feeding unit230after the abnormal sheet is fed by the feeding unit230until the image on the abnormal sheet is read by the inspection apparatus108.

However, if all the remaining sheets are discharged to the escape tray334and then recovery printing is performed with respect to all the remaining sheets starting from the sheet determined to be abnormal in order to guarantee the order in which the sheets are stacked, productivity is lowered, and all the remaining sheets become waste sheets. Therefore, in the present embodiment, in an image forming job of forming images for a plurality of copies, sheets from a sheet determined to be abnormal to a sheet immediately before the same page number of sheet in a next copy as that of the sheet determined to be abnormal, among sheets on which image formation has already been completed, are discharged to the escape tray334. Then, sheets after and including the same page number of sheet in the next copy as that of the sheet determined to be abnormal are inspected by the inspection apparatus108. If each of the sheets is a normal sheet, the sheet is discharged to the stack tray331. As a result, the number of waste sheets can be reduced without deteriorating productivity.

Hereinafter, a processing procedure related to inspection processing in each unit of the image forming system1will be described in detail with reference to flowcharts ofFIGS.4to6.

Processing of Printing Apparatus

FIG.4is a flowchart showing a flow of processing performed by the printing apparatus107before inspection processing is performed. The processing ofFIG.4is executed by the CPU222of the printing apparatus107. First, the CPU222determines whether or not a print instruction (an image forming job) has been received from the external controller102(step S1). If it is determined that no print instruction has been received from the external controller102(NO in step S1), the CPU222determines again whether or not a print instruction has been received from the external controller102(step S1). If the CPU222determines that a print instruction has been received from the external controller102(YES in step S1), the printing apparatus107starts printing according to the received image forming job (step S2).

Here, the image forming job to be executed will be described, assuming that the number of copies of printing is set to X copies and the number of pages of each copy is set to Y pages. Each of X and Y may be one or more. Note that the image forming job refers to image forming instruction details received from a user, which are a series of operations performed on the basis of a print instruction signal (an image forming instruction signal) as follows. That is, the image forming job refers to a period from pre-rotation (preparation operations before image formation) to post-rotation (operations after image formation) after receiving a print command signal (the input of the image forming job), and includes an image formation period and a time interval between sheets (the time when image formation is not performed).

The CPU222substitutes1into each of a copy counter I and a page counter J as an initial value (step S3), and prints a J-th page of an I-th copy, and conveys the printed sheet to the inspection apparatus108(step S4).
In order to confirm that the printing of all pages in the I-th copy has been completed, the CPU222determines whether or not the page counter J is Y (step S5). If it is determined that the page counter J is not Y (NO in step S5), this indicates that the printing of all pages in the I-th copy has not been completed. In this case, the CPU222increments the page counter J by one (step S6) and prints a next page (step S4). If it is determined that the page counter J is Y (YES in step S5), this indicates that the printing of all pages in the I-th copy has been completed.

Next, the CPU222determines whether or not the copy counter I is X in order to confirm that printing of all copies has been completed (step S7). If it is determined that the copy counter I is not X (NO in step S7), this indicates that the printing of all copies has not been completed. In this case, the CPU222increments the copy counter I by one and substitutes1into the page counter J (step S8), and prints a next page (step S4). If it is determined that the copy counter I is X (YES in step S7), the CPU222determines whether or not there is a sheet discharged to the escape tray334in the image forming job that is being executed (step S9). This is determined by the CPU222, for example, based on whether or not the CPU233of the inspection apparatus108has set a sheet discharge place to the escape tray334.

If the CPU222determines that there is a sheet discharged to the escape tray334in the image forming job that is being executed (YES in step S9), this indicates that, as will be described later, one copy is missing as compared with the X copies set in the image forming job. Therefore, the CPU222additionally prints one copy and conveys the additionally printed copy to the inspection apparatus108(step S30). Then, the printing of all copies has been completed, and thus, the processing of the printing apparatus107is terminated. If the CPU222determines that there is no sheet discharged to the escape tray334in the image forming job that is being executed (NO in step S9), this indicates that the printing of all copies has been completed. Thus, the processing of the printing apparatus107is terminated.

Processing of Inspection Apparatus

FIGS.5and6are flowcharts illustrating a flow of inspection processing performed by the inspection apparatus108. The processing ofFIGS.5and6is executed by the CPU233of the inspection apparatus108. The CPU233determines whether or not an inspection end instruction has been received from the printing apparatus107(step S10). If it is determined that an inspection end instruction has been received from the printing apparatus107(YES in step S10), the CPU233ends the processing of the inspection apparatus108. If it is determined that no inspection end instruction has been received from the printing apparatus107(NO in step S10), the CPU233determines whether or not a sheet has been conveyed to the inspection apparatus108(step S11). If it is determined that no sheet has been conveyed to the inspection apparatus108(NO in step S11), the CPU233determines again whether or not an inspection end instruction has been received (step S10). If it is determined that a sheet has been conveyed to the inspection apparatus108(YES in step S11), the CPU233determines whether or not a flag indicating that inspection is not necessary, which is stored in the memory234, is turned off (step S12).
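Before turning to the inspection details, the copy/page loop ofFIG.4(steps S3to S9and S30) can be recapped by this illustrative sketch, in which print_page and any_sheet_escaped are hypothetical stand-ins for the printing hardware and for the escape-tray check of step S9:

```python
# Minimal sketch of the FIG.4 printing loop; not the patented implementation.

def run_print_job(X, Y, print_page, any_sheet_escaped):
    for i in range(1, X + 1):           # copy counter I (steps S3, S7, S8)
        for j in range(1, Y + 1):       # page counter J (steps S5, S6)
            print_page(i, j)            # step S4: print and convey page J
    if any_sheet_escaped():             # step S9: escape tray used?
        for j in range(1, Y + 1):       # step S30: one additional copy
            print_page(X + 1, j)

pages = []
run_print_job(2, 3, lambda i, j: pages.append((i, j)), lambda: True)
print(pages)   # two copies plus one additional copy of three pages each
```

Returning to the inspection flow ofFIGS.5and6: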
If it is determined that the flag indicating that inspection is not necessary, which is stored in the memory234, is turned off (YES in step S12), the CPU233performs inspection processing (determination processing) (step S13). The CPU233compares an image read from the conveyed sheet with the correct image, and determines whether the read image is a normal image or an abnormal image, that is, whether or not an inspection result is OK (step S14).

If it is determined that the inspection result is OK (YES in step S14), the CPU233sets a place for discharging the sheet on which the normal image is formed to the stack tray331(step S15), and determines again whether or not an inspection end instruction has been received (step S10). Note that the discharge place set here is notified to the first stacker111via the printing apparatus107, and the sheet is discharged to the stack tray331.

If it is determined that the inspection result is not OK (NO in step S14), the CPU233sets a place for discharging the sheet on which the abnormal image is formed to the escape tray334(step S16). The discharge place set here is notified to the first stacker111via the printing apparatus107, and the sheet is discharged to the escape tray334. In order to confirm that the copy including the inspected sheet is a last copy of the image forming job, the CPU233determines whether or not the copy counter I is X (step S17). If the CPU233determines that the copy counter I is not X (NO in step S17), which indicates that the copy including the inspected sheet is not a last copy of the image forming job, the processing proceeds to step S18.

In order to determine whether or not a next sheet includes the same page image as the sheet for which the inspection result is NG (No Good in the specification and Figs.), the CPU233compares a copy counter I and a page counter J of the next sheet with the copy counter I and the page counter J of the sheet for which the inspection result is NG, respectively. Specifically, the CPU233determines whether or not the copy counter I of the next sheet≠the copy counter I of the sheet for which the inspection result is NG and the page counter J of the next sheet=the page counter J of the sheet for which the inspection result is NG (step S18).

Here, in a case where the copy counter I of the next sheet≠the copy counter I of the sheet for which the inspection result is NG and the page counter J of the next sheet=the page counter J of the sheet for which the inspection result is NG, this indicates that an image on the next sheet is the same as the normal image of the sheet for which the inspection result is NG, that is, one copy includes only one sheet. Therefore, if it is determined that the copy counter I of the next sheet≠the copy counter I of the sheet for which the inspection result is NG and the page counter J of the next sheet=the page counter J of the sheet for which the inspection result is NG (YES in step S18), the CPU233determines again whether or not an inspection end instruction has been received (step S10).

On the other hand, in step S18, if the copy counter I of the next sheet=the copy counter I of the sheet for which the inspection result is NG or the page counter J of the next sheet≠the page counter J of the sheet for which the inspection result is NG, an image on the next sheet is different from the normal image of the sheet for which the inspection result is NG.
Therefore, if it is not determined that the copy counter I of the next sheet≠the copy counter I of the sheet for which the inspection result is NG and the page counter J of the next sheet=the page counter J of the sheet for which the inspection result is NG (NO in step S18), the CPU233turns on the flag indicating that inspection is not necessary, which is stored in the memory234(step S19). This results in NO in step S12, and accordingly, no determination processing is performed in step S13. The copy counter I and the page counter J of the sheet for which the inspection result is NG are stored in the memory234(step S20), and the CPU233determines again whether or not an inspection end instruction has been received (step S10).

In addition, if the CPU233determines that the copy counter I is X in step S17(YES in step S17), the copy including the sheet on which the abnormal image is formed is a last copy of the image forming job. Therefore, the CPU233discharges all the remaining sheets to the escape tray334(step S21), and starts reprinting the sheets from the inspection NG sheet on which the abnormal image is formed (step S22). Thereafter, the CPU233determines again whether or not an inspection end instruction has been received (step S10).

In addition, if it is determined in step S12that the flag indicating that inspection is not necessary, which is stored in the memory234, is not turned off (NO in step S12), the CPU233sets a sheet discharge place to the escape tray334(step S23). The discharge place set herein is notified to the first stacker111via the printing apparatus107, and the sheet is discharged to the escape tray334.

In order to determine whether or not a next sheet includes the same page image as the sheet for which the inspection result is NG, the CPU233compares a copy counter I and a page counter J of the next sheet with the copy counter I and the page counter J of the sheet for which the inspection result is NG, respectively. Specifically, the CPU233determines whether or not the copy counter I of the next sheet≠the copy counter I of the sheet for which the inspection result is NG and the page counter J of the next sheet=the page counter J of the sheet for which the inspection result is NG (step S24). Note that the determination in step S24is similar to that in step S18.

Here, in a case where the copy counter I of the next sheet≠the copy counter I of the sheet for which the inspection result is NG and the page counter J of the next sheet=the page counter J of the sheet for which the inspection result is NG, this indicates that an image on the next sheet is the same as the normal image of the sheet for which the inspection result is NG. Therefore, if it is determined that the copy counter I of the next sheet≠the copy counter I of the sheet for which the inspection result is NG and the page counter J of the next sheet=the page counter J of the sheet for which the inspection result is NG (YES in step S24), the CPU233turns off the flag indicating that inspection is not necessary (step S25). As a result, the determination processing is restarted for a sheet after the sheet discharged to the escape tray334in response to the occurrence of the abnormal sheet among the remaining sheets, and a discharge place can be selected based on a result of the determination processing. Thereafter, the CPU233determines again whether or not an inspection end instruction has been received (step S10).
In addition, in step S24, if the copy counter I of the next sheet=the copy counter I of the sheet for which the inspection result is NG or the page counter J of the next sheet≠the page counter J of the sheet for which the inspection result is NG, an image on the next sheet is different from the normal image of the sheet for which the inspection result is NG. Therefore, if it is not determined that the copy counter I of the next sheet≠the copy counter I of the sheet for which the inspection result is NG and the page counter J of the next sheet=the page counter J of the sheet for which the inspection result is NG (NO in step S24), the CPU233determines again whether or not an inspection end instruction has been received (step S10).

First Embodiment

A specific first embodiment using the above-described image forming apparatus101will be described with reference toFIG.7. Here, it is assumed that an image forming job is executed in which images are formed consecutively for a plurality of copies (e.g., three copies) as set, each copy including a plurality of sheets (e.g., three sheets). First, the CPU233compares an image read by the inspection apparatus108with image information to determine whether the image is a normal image or an abnormal image (determination step). If the image read by the inspection apparatus108is a normal image, a sheet1-1on which the normal image is formed is discharged to the stack tray331(first normal discharge step). If the image read by the inspection apparatus108is an abnormal image, an abnormal sheet1-2on which the abnormal image is formed is discharged to the escape tray334(first abnormal discharge step).

A plurality of sheets on which image formation is started later than the abnormal sheet1-2at the time when an abnormal image is read are defined as remaining sheets1-3to3-1. Here, the remaining sheets include a first group and a second group. The first group includes sheets from an (n+1)th sheet to the last sheet of the first copy and sheets from a 1st sheet to an (n−1)th sheet of the second copy. The second group includes sheets from the nth sheet of the second copy to the last sheet of the remaining sheets.

Specifically, among the remaining sheets1-3to3-1, first, a sheet1-3constituting a first copy of sheets1-1to1-3including the abnormal sheet1-2is discharged to the escape tray334. Then, among the remaining sheets, a 1st sheet2-1to a sheet that is one sheet before the same page number of sheet as that of the abnormal sheet1-2of the first copy in a second copy of sheets2-1to2-3subsequent to the first copy, that is, a sheet2-1, is discharged to the escape tray334(second abnormal discharge step). These sheets1-3to2-1are sheets of the first group of the remaining sheets. For each of the sheets2-2to3-3after and including the sheet2-2whose page number in the second copy of sheets2-1to2-3is the same as that of the abnormal sheet1-2in the first copy, an image read by the inspection apparatus108is compared with the image information to determine whether the image is a normal image or an abnormal image. These sheets2-2to3-1are sheets of the second group of the remaining sheets. Then, a discharge place is selected based on a result of the determination processing (first selection step).
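The classification of the remaining sheets just described can be illustrated by the following sketch, where the abnormal sheet is the nth sheet of a copy of Y sheets; the numbering scheme is an assumption matching the example ofFIG.7:

```python
# Illustrative split of the remaining sheets into first and second groups.

def split_remaining(n, Y, num_remaining):
    """Remaining sheets are numbered consecutively after the abnormal sheet:
    pages n+1..Y of the same copy, then pages 1..Y of the next copy, etc."""
    first_group_len = (Y - n) + (n - 1)   # (n+1)th..Yth, then 1st..(n-1)th
    first = list(range(1, first_group_len + 1))
    second = list(range(first_group_len + 1, num_remaining + 1))
    return first, second

# Example of FIG.7: abnormal sheet 1-2 (n=2) in 3-sheet copies, 5 remaining
# sheets (1-3, 2-1, 2-2, 2-3, 3-1): first group = 1-3 and 2-1.
first, second = split_remaining(2, 3, 5)
print(len(first), len(second))   # -> 2 3
```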
In this manner, the CPU233is configured to execute an abnormal discharge mode in which the sheets of the first group of the remaining sheets are discharged to the escape tray334and each of the sheets of the second group of the remaining sheets is discharged selectively to the stack tray331or the escape tray334based on a result of the determination processing. Note that it is illustrated inFIG.7that the sheets2-2to3-3are normal sheets, and the sheets2-2to3-3are discharged to the stack tray331.

Then, image formation is performed by the printing apparatus107until the number of copies discharged to the stack tray331reaches a set number of copies which is the number of copies to be printed in the image forming job (recovery printing, R-1to R-3), and the copies are discharged to the stack tray331(second normal discharge step). That is, the recovery printing is performed with respect to the same number of copies as the number of abnormal sheets generated in the job.

As described above, the method for controlling the image forming apparatus101according to the first embodiment includes the determination step, the first normal discharge step, the first abnormal discharge step, the second abnormal discharge step, the first selection step, and the second normal discharge step. Note that the second normal discharge step may be executed according to a user's instruction after issuing a warning to the user that an abnormal image has occurred, rather than being automatically executed.

As described above, in the image forming apparatus101according to the present embodiment, in an image forming job in which images are formed consecutively for a first copy and a second copy that is a next copy of the first copy, each copy including a plurality of sheets, in a case where an nth sheet in the first copy is an abnormal sheet, sheets after and including an (n+1)th sheet in the first copy and sheets from a 1st sheet to an (n−1)th sheet in the second copy, among a plurality of remaining sheets, are discharged to the escape tray334. Then, for each of the sheets after and including an nth page number of sheet in the second copy among the remaining sheets, a discharge place is determined based on a result of determination processing by the inspection apparatus108.

In the present embodiment, the inspection apparatus108does not perform determination processing on the sheets after and including the (n+1)th sheet in the first copy and sheets from the 1st sheet to the (n−1)th sheet in the second copy among the plurality of remaining sheets. That is, in the abnormal discharge mode, the CPU233is configured not to execute the determination processing on the sheets of the first group of the remaining sheets. However, the CPU233may cause the inspection apparatus108to perform determination processing on these sheets. In such a case, the CPU233can specify whether or not each of the plurality of sheets discharged to the escape tray334is a normal sheet, and notify the user of the same.

Second Embodiment

Next, a specific second embodiment using the above-described image forming apparatus101will be described with reference toFIG.8. Here, it is assumed that a first image forming job is executed in which images are formed consecutively for a plurality of copies (e.g., two copies) as set, each copy including a plurality of sheets (e.g., three sheets).
Also, it is assumed that, after the first image forming job is terminated, a second image forming job is consecutively executed in which images are formed consecutively, for example, for two copies, each copy including, for example, three sheets. The second image forming job differs from the first image forming job in images to be formed. Here, it is assumed that the second image forming job is consecutively executed after the first image forming job is terminated, with remaining sheets including a sheet on which an image is formed in the second image forming job. In addition, it is assumed that an abnormal sheet is not a sheet included in a last copy of the first image forming job. In this case, similarly to the first embodiment, the CPU233executes the determination step, the first normal discharge step, the first abnormal discharge step, the second abnormal discharge step, and the first selection step. In addition, the CPU233discharges, to the stack tray331, the sheets2-2to2-3, that is, the sheets from the sheet2-2in the second copy having the same page number as the abnormal sheet in the first copy to a last sheet2-3of the last copy of the first image forming job. The CPU233discharges a sheet A-1on which an image is formed in the second image forming job among the remaining sheets to the escape tray334(third abnormal discharge step). In the abnormal discharge mode, in a case where the second image forming job in which an image formed by the printing apparatus107is different from the image in the first image forming job is consecutively executed after the first image forming job is terminated, and the second group (2-2to A-1) of the remaining sheets includes a sheet (A-1) on which the image is formed in the second image forming job, the CPU233is configured to discharge the sheet (A-1) on which the image is formed in the second image forming job in the second group of the remaining sheets to the escape tray334. Then, image formation is performed by the printing apparatus107until the number of copies of sheets discharged to the stack tray331reaches a set number of copies (recovery printing, R-1to R-3), and all of the set number of copies of sheets are discharged to the stack tray331(second normal discharge step). As described above, the method for controlling the image forming apparatus101according to the second embodiment includes the determination step, the first normal discharge step, the first abnormal discharge step, the second abnormal discharge step, the first selection step, the third abnormal discharge step, and the second normal discharge step. Note that the second image forming job is executed after the image formation in the second normal discharge step is terminated. Third Embodiment Next, a specific third embodiment using the above-described image forming apparatus101will be described with reference toFIG.9. Here, it is assumed that a first image forming job is executed in which images are formed consecutively for a plurality of copies (e.g., two copies) as set, each copy including a plurality of sheets (e.g., three sheets). Also, it is assumed that, after the first image forming job is terminated, a second image forming job is consecutively executed in which images are formed consecutively, for example, for two copies, each copy including, for example, three sheets. The second image forming job differs from the first image forming job in images to be formed.
Here, it is assumed that the second image forming job is consecutively executed after the first image forming job is terminated, with remaining sheets including a sheet on which an image is formed in the second image forming job. In addition, it is assumed that an abnormal sheet2-2is a sheet included in a last copy of the first image forming job. That is, a sheet A-1on which an image is formed in the second image forming job immediately follows sheets2-1to2-3of the last copy of the first image forming job. In this case, similarly to the first embodiment, the CPU233executes the determination step, the first normal discharge step, and the first abnormal discharge step. The CPU233discharges all the remaining sheets2-3to B-1to the escape tray334(fourth abnormal discharge step). That is, in the abnormal discharge mode, in a case where the second image forming job in which an image formed by the image forming unit is different from the image in the first image forming job is consecutively executed after the first image forming job is terminated, and the first copy is the last copy of the first image forming job, the CPU233is configured to discharge all the remaining sheets to the escape tray334. Then, image formation is performed by the printing apparatus107for the sheets2-2to2-3after and including the same page number of sheet as that of the abnormal sheet (recovery printing, R-2to R-3). At this time, for each of the sheets2-2to2-3after and including the same page number of sheet as that of the abnormal sheet, an image read by the inspection apparatus108is compared with the image information to determine whether the image is a normal image or an abnormal image, and a discharge place is selected based on a result of the determination processing (second selection step). As described above, the method for controlling the image forming apparatus101according to the third embodiment includes the determination step, the first normal discharge step, the first abnormal discharge step, the fourth abnormal discharge step, and the second selection step. Note that the second image forming job is executed after the image formation in the second selection step is terminated. As described above, according to the image forming apparatus101of the present embodiment, in a case where an abnormal sheet is detected by the inspection apparatus108, sheets from the abnormal sheet to a sheet that is one sheet before the same page number of sheet in a next copy as that of the abnormal sheet, among remaining sheets, are discharged to the escape tray334. Therefore, the number of waste sheets can be reduced as compared with that in a case where all the remaining sheets are discharged to the escape tray334. Moreover, since a dedicated conveyance path for temporarily retracting some of the remaining sheets is unnecessary, an increase in the number of components can be avoided. As a result, in the image forming apparatus101having a function of discharging a sheet determined to have an abnormal image by the inspection apparatus108to the escape tray334, it is possible to suppress an increase in size and complexity of the apparatus while reducing the number of waste sheets. Further, according to the image forming apparatus101of the present embodiment, the CPU233executes image formation until the number of copies of sheets discharged to the stack tray331reaches a set number of copies for the image forming job, and all of the set number of copies of sheets are discharged to the stack tray331. 
Therefore, even though sheets from a sheet on which an abnormal image is formed to a sheet that is one sheet before the same page number of sheet in a next copy as that of the abnormal sheet are discharged to the escape tray334, as many sheets as the number of sheets discharged to the escape tray334can be compensated for, thereby improving operability for the user. In addition, according to the image forming apparatus101of the present embodiment, a second image forming job may be consecutively executed after a first image forming job is terminated, with remaining sheets including a sheet on which an image is formed in the second image forming job. That is, if a sheet on which an abnormal image is formed is not a sheet included in a last copy of the first image forming job, sheets from the sheet on which the abnormal image is formed to a sheet that is one sheet before the same page number of sheet in a next copy as that of the abnormal sheet can be discharged to the escape tray334, thereby reducing the number of waste sheets. On the other hand, if a sheet on which an abnormal image is formed is a sheet included in a last copy of the first image forming job, all the remaining sheets can be discharged to the escape tray334. Note that, in the image forming apparatus101of the present embodiment described above, during the processing of the printing apparatus107, it is determined whether or not it is necessary to additionally print one copy by detecting whether or not the sheets have been discharged to the escape tray334after printing of all copies in the image forming job is terminated. However, the present invention is not limited thereto, and for example, during the processing of the inspection apparatus108, processing of increasing the number X of copies to be printed by one may be performed at the time when the sheets are discharged to the escape tray334. In addition, although it has been described that, as discharge places, two trays, i.e., the stack tray331and the escape tray334, are applied to the image forming apparatus101of the present embodiment described above, the present invention is not limited thereto. For example, three or more discharge places including the second stack tray341or the like may be provided. According to the present disclosure, by providing the function of discharging a sheet determined to have an abnormal image formed thereon by the inspection apparatus to the escape tray, it is possible to suppress an increase in size and complexity of the apparatus while reducing the number of waste sheets. OTHER EMBODIMENTS Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)), a flash memory device, a memory card, and the like. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. This application claims the benefit of Japanese Patent Application No. 2021-124888, filed Jul. 30, 2021, which is hereby incorporated by reference herein in its entirety. | 52,923 |
11861243 | DESCRIPTION OF THE INVENTION FIG.1schematically shows the architecture of a generative adversarial network (GAN) (1) to generate images. The GAN (1) comprises two neural networks (2,3), namely a generator (2) and a discriminator (3). The generator (2) is a neural network that is capable of generating (image) data. The discriminator (3) on the other hand is a neural network that is capable of determining whether (image) data appears real or fake. Real (image) data is data that plausibly belongs to original training data. The purpose of the generator (2) is to generate new data that appears real to the discriminator (3). The generator (2) tries to fool the discriminator (3) by generating real looking (image) data while the discriminator tries to distinguish between real and fake (image) data. Therewith, the generator (2) trains on more (image) data to produce plausible results. In this embodiment, the generator (2) tries to create images of cats that look real. The discriminator (3) examines whether the images of cats are real or fake. In this example, the generator (2) generated images which were examined by the discriminator (3) as being real images of a cat after a few attempts. FIG.2schematically shows a framework of using a GAN architecture (1) for generating stylized output images (8). A digital input image (4) and a user selection of at least one style preference (5) of at least one style is fed into the GAN architecture (1). The input image (4) can be selected by a user. Alternatively, the input image (4) is selected from an image database (6). The user can choose one or more image style preferences (5), such as image enhancements (e.g. image shade, image colour temperature, image brightening, etcetera) or image modifications (e.g. applying image filters, adjusting the white balance, etcetera). The input image (4) is fed into an encoder network, not shown in this figure, which interprets the input image (4) and encodes the input image (4) in various features relating to the input image (4) in a feature map. A style preference of a user is fed into the GAN (1) as a style vector representing the user-selected style preference (5). The style vector and the feature map, comprising the interpreted and encoded input image (4), may be concatenated into a concatenated feature map. This concatenated feature map is subsequently fed into the generator (2) of the GAN (1). The generator (2) combines features of the concatenated feature map and generates a unique image (7). The unique image (7) is then injected into the discriminator (3). The discriminator (3) examines whether the unique image (7) is a realistic image or an unrealistic image based on the images from the image database (6). When the discriminator decides that the unique image (7) is an unrealistic image, then the image will be discarded. If the discriminator (3) decides that the unique image (7) is a realistic image, then the image will be stored and used as an output image (8). The resulting output image (8) is a unique image which combines the input image (4) and the user-selected style preference (5). The use of a discriminator (3) has the advantage that artifacts are reduced and the output images match more closely with the selected style preference. FIG.3schematically shows a framework of running the GAN (11) asynchronously to generate unique output images. In this embodiment, an input image (14) and a possible style preference (15) are injected into the GAN architecture (11).
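As an illustration of the stylization flow described forFIG.2, and before continuing with the asynchronous variant ofFIG.3, the following is a minimal PyTorch-style sketch of how a style vector might be concatenated with an encoder feature map before being fed to the generator; the layer shapes, the 8-dimensional style vector, and the stand-in encoder and generator are assumptions for illustration only, not the architecture of the GAN (1).

```python
# A minimal sketch of the concatenation step described for FIG.2: an encoder
# feature map is combined with a style vector before being fed to the
# generator. Layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Conv2d(3, 64, kernel_size=3, padding=1)        # stand-in encoder
generator = nn.Conv2d(64 + 8, 3, kernel_size=3, padding=1)  # stand-in generator

input_image = torch.randn(1, 3, 128, 128)   # digital input image (4)
style_vector = torch.randn(1, 8)            # user-selected style preference (5)

features = encoder(input_image)             # feature map of the input image
# Broadcast the style vector over the spatial dimensions and concatenate it
# with the feature map along the channel axis (the "concatenated feature map").
style_map = style_vector[:, :, None, None].expand(-1, -1, 128, 128)
concatenated = torch.cat([features, style_map], dim=1)

unique_image = generator(concatenated)      # candidate unique image (7)
print(unique_image.shape)                   # torch.Size([1, 3, 128, 128])
```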
From the combination of the input image (14) and the possible style preference (15) the generator (12) generates a unique image (17). This unique image (17) is subsequently fed into the discriminator (13). The discriminator (13), in turn, examines whether the generated unique image (17) is realistic or unrealistic. If the discriminator (13) decides that the unique image (17) is unrealistic, the unique image (17) will be discarded. When the discriminator (13) decides that the unique image (17) is realistic, then the unique image (17) will be accepted. Subsequently, this accepted image is added and stored into an output image database (19). In parallel, a user can select an input image (14′) and a style preference (15′) to generate a unique image. The unique output image resulting from the (user) selected input image (14′) and style preference (15′) is retrieved from the output image database (19) which is provided with unique output images from the in parallel running GAN (11). This asynchronous running of the GAN is useful when multiple users run the GAN at the same time in an online environment. Running the GAN requires considerable processing power, resulting in a slowly running GAN as a consequence of the time the GAN needs to generate a unique image. Additionally, due to the high processing power required, running the GAN is also expensive. Therefore, asynchronously running the GAN is a solution to these two problems, since the GAN generates and stores unique output images in a database with which users can interact to obtain their unique image. Thereby, the process of generating a unique output image for the user occurs fast. FIG.4schematically shows a flow chart of a computer-implemented method for generating a decorative pattern for decorative panels. First, a digital input image for a decorative panel is selected (a). The digital input image can be selected (a) by a user or by an image database, wherein the image database comprises a plurality of scanned wood pattern designs. In addition, the user can also upload a photo to a server. This photo can subsequently be used as a digital input image for a decorative panel. Second, the user selects at least one style preference (b). Thereby, the user can select one or more styles (b) such as image shade, image colour temperature and image filters. Third, the style preference is applied to the chosen digital input image in a GAN (c). Subsequently, the GAN generates a unique output image (q) combining the digital input image and the style preference of the user. This output image, comprising a unique decorative pattern for decorative panels, is the input for a printer. The printer will digitally print the output image onto a base panel (l). The output image can be printed on a film such as a paper film or a plastic film, more specifically a PVC film. Finally, when the output image is digitally printed onto the base panel a protective layer is attached to the panel to cover the panel (m). FIG.5schematically shows an extended flow chart of a computer-implemented method for generating a decorative pattern for decorative panels. First, a digital input image for a decorative panel is selected (a) either by a user or by an image database. Thereafter, the user selects at least one style preference (b). The user can select one or more styles such as an image related style preference and/or an image filter. Image related style preferences are styles comprising the image shade, image colour, image colour temperature, or image brightening.
In addition, the user can choose image filters to modify the image, according to which masking filters can be applied to the image to limit portions of the digital input image that are of interest to be combined with the style preference in the stylization. The user can further choose a relief related style preference, wherein the user can choose a relief structure to be printed on a decorative panel. The chosen style preference(s) are subsequently applied to the chosen digital input image in a GAN (c). The GAN generates an output image (q) comprising an output image for the decorative layer and optionally an output image for the relief structure. The identified size of a panel (e), or the specified size of a panel indicated by the user (f), can be used to fit the output image onto a panel. The identified size of a panel area (e), or the specified panel area indicated by the user (f), can be used to cover an assembly of decorative panels with the generated output image. Based on size information of one or an assembly of panels, the output image of the decor image and relief structure can be sized (e, f) to fit the specified surface area. In addition, the preferred panel installation pattern can be specified by the user. The output image of the decor image and relief structure can also be adjusted to fit the specified panel installation pattern. In order to fit the output image of the decor image and relief structure on the surface area and the panel installation pattern, the output image can be cropped, cut, or multiplied. The output image further incorporates extension zones where a base panel can be segmented (i) after printing. After sizing, the output image of the décor image can be digitally printed onto a base panel (l). The output image can be printed on a film such as a paper film or a plastic film, more specifically a PVC film. Thereafter, the relief structure can be printed onto the décor image (r). Optionally, the panel is then covered with a protective layer (m). Finally, the base panel can be segmented into multiple individual panels (o). Thereafter, the edges of the individual panels can be processed to provide the panels with interlockable edges (n). FIG.6schematically shows a user interface (60) to select a style preference (61,62) for a decorative panel. The user interface (60) comprises options for the user to select style preferences. In this embodiment, the user interface comprises two selectable style preferences, namely image shade (61) and image colour temperature (62). In this example, the user can select a preferred image shade by choosing between a light, medium and a dark image shade. It is imaginable that the user can choose from more than three image shades. The user can further select a preferred colour temperature by choosing between a cool, neutral or a warm colour temperature. It is imaginable that the user can select more than three style preferences in another embodiment, such as a relief structure style preference. After clicking on the generate button (63) the GAN generates, from the selected style preferences combined with an input image, a unique and personalized decoration for a decorative panel. In another embodiment, the user can upload a photo to a server to implement a personalized input image to the decorative panel. FIG.7schematically shows a perspective view of a decorative panel (70) according to the invention. A decorative panel (70) comprises a core (71) comprising an upper layer and a lower layer.
A decorative layer (73) is, either directly or indirectly, affixed to the upper layer of the core (71). The decorative layer (73) of every decorative panel (70) according to the invention is made unique by the GAN generated output image. On top of the decorative layer (73) a protective layer (74) is affixed which covers the decorative layer (73). In addition to the decorative layer (73), the protective layer (74) can comprise a user-selected relief structure which is made unique by the GAN generated output image. Optionally, the decorative panel (70) is provided with a UV-coating (75) that is attached to the decorative panel (70) on top of the protective layer (74). Further, a backing layer (76) is, either directly or indirectly, affixed to the lower layer of the core (71). The core (71) further comprises coupling profiles (72) at the panel edges. The coupling profiles (72) enable the locking of decorative panels (70) for the covering of a floor, a wall, a ceiling or furniture. The above-described inventive concepts are illustrated by several illustrative embodiments. It is conceivable that individual inventive concepts may be applied without, in so doing, also applying other details of the described example. It is not necessary to elaborate on examples of all conceivable combinations of the above-described inventive concepts, as a person skilled in the art will understand numerous inventive concepts can be (re)combined in order to arrive at a specific application. It is explicitly emphasized here that all mathematical combinations are possible among the features mentioned above and referred to in the claims as filed, as far as the respectively obtained combination does not include any contradictory characteristics. In this manner, this application thus also forms a reservoir of possibilities of claimed subject-matter. By “horizontal” is meant a direction which extends parallel to a plane defined by the floor panel, and which may intersect the core. By “vertical” is meant a direction which is perpendicular to said plane defined by the floor panel. The ordinal numbers used in this document, like “first”, “second”, and “third” are used only for identification purposes. Hence, the use of the expressions “third locking element” and “second locking element” does therefore not necessarily require the co-presence of a “first locking element”. By “complementary” coupling profiles is meant that these coupling profiles can cooperate with each other. However, to this end, the complementary coupling profiles do not necessarily have to have complementary forms. The “floor panel” according to the invention may also be applied as a wall covering element, ceiling covering element, or alternative covering element. In case in this document reference is made to a “floor tile” or “floor panel”, these expressions may be replaced by expressions like “tile”, “wall tile”, “ceiling tile”, “covering tile”. It will be apparent that the invention is not limited to the working examples shown and described herein, but that numerous variants are possible within the scope of the attached claims that will be obvious to a person skilled in the art. The verb “comprise” and conjugations thereof used in this patent publication are understood to mean not only “comprise”, but are also understood to mean the phrases “contain”, “substantially consist of”, “formed by” and conjugations thereof. | 13,949 |
11861244 | DESCRIPTION OF EMBODIMENTS An embodiment of a printing system, and the like, disclosed in the present application is described below in detail with reference to the drawings. Further, the disclosed technology is not limited to the present embodiment. Moreover, embodiments illustrated below may be combined as appropriate as long as there is no inconsistency. Embodiment Configuration of Printing System1 FIG.1is an explanatory diagram illustrating an example of the printing system1according to the present embodiment. The printing system1illustrated inFIG.1includes a plurality of information processing apparatuses2, a printing device3, a server4, a LAN (Local Area Network)5, and a USB (Universal Serial Bus)6. The information processing apparatus2is communicatively connected to the server4via, for example, the LAN5. The information processing apparatus2is, for example, a computer such as a personal computer. The printing device3is communicatively connected to the information processing apparatus2via, for example, the USB6and is, for example, a printer device such as an airline printer that prints out print content such as an air boarding pass and a baggage tag. Further, for convenience of explanation, the printing device3is communicatively connected to the information processing apparatus2via the USB6in the case described, but it may be connected in parallel and may be changed as appropriate. Moreover, the printing device3is connected to the information processing apparatus2by wire in the case described, but it may be connected wirelessly and may be changed as appropriate. The server4is, for example, a host computer such as an airport server that manages the overall printing system1. Configuration of Printing Device3 FIG.2is a perspective view illustrating an example of the printing device3. The printing device3illustrated inFIG.2includes a printer unit3A and a control unit3B. A roll paper stand12where roll-shaped roll paper11, which is a print medium (e.g., thermal paper), is placed may be connected and installed on a back surface of the printer unit3A. Further, instead of the roll paper stand12, it is also possible to connect a fanfold paper tray where a print medium such as continuously folded fanfold paper is placed. The control unit3B has a power switch21provided on a front surface and has an operating unit22provided on an upper surface. The power switch21switches on/off the power of the printing device3. The operating unit22includes an operation switch to operate the printing device3and a status display LED that displays the status of the printing device3. For example, the status display LED lights up when an error occurs or lights up when the roll paper11or the fanfold paper runs out. The control unit3B has an ejection roller23provided on the front surface. The ejection roller23ejects the printed roll paper11from the printer unit3A. FIG.3is an explanatory diagram illustrating an example of an internal configuration of the printing device3. The printer unit3A includes the operating unit22, the ejection roller23, a conveyance motor24, a platen roller25, a thermal head26, a cutter drive motor27, a cutter28, and the like. The printer unit3A has an insertion port29A provided on a back side and has an ejection port29B provided on a front side. Furthermore, a conveyance path29C is formed between the insertion port29A and the ejection port29B. 
The insertion port29A has a print medium, such as the roll paper11or fanfold paper, inserted from the roll paper stand12or the fanfold paper tray. The ejection port29B ejects a baggage tag or an air boarding pass as a print medium having print content printed thereon due to the rotational force of the ejection roller23. The conveyance path29C conveys the print medium inserted through the insertion port29A to the ejection port29B. The conveyance motor24rotates the platen roller25. The platen roller25is rotated by the conveyance motor24to convey the print medium inserted through the insertion port29A to the ejection port29B via the conveyance path29C. The thermal head26selectively generates heat from a plurality of heating elements arranged in a line to print the print content on the print medium that reacts with heat. The cutter drive motor27performs operations to move the cutter28up and down. The cutter28is moved up and down by the cutter drive motor27to cut the print medium having the print content printed thereon to a predetermined size so as to form a baggage tag, or the like. The ejection roller23pinches a printed baggage tag, or the like, with an opposing pinch roller and ejects it through the ejection port29B. FIG.4is an explanatory diagram illustrating an example of a hardware configuration of the printing device3. The printer unit3A illustrated inFIG.4includes the conveyance motor24and the thermal head26. The thermal head26includes a thermistor26A that detects the ambient temperature of any heating element in the thermal head26. The thermistor26A outputs an AD conversion value of a detection signal of the ambient temperature of the heating element. The control unit3B includes a connection IF (Interface)31, a ROM (Read Only Memory)32, a RAM (Random Access Memory)33, and a CPU (Central Processing Unit)34. The connection IF31is an interface for communicatively connecting to the information processing apparatus2via the USB6. The ROM32stores table data, and the like, for executing each function of the printing device3in addition to a printing program executed by the printing device3. For example, it stores a control program for printing a desired drawing design on a print medium by the thermal head26driven by a head drive circuit and data such as print data, size information, and print font for characters, symbols, pictograms, and the like, included in the drawing design to be printed. The RAM33serves as an input data memory that stores print information such as information specifying characters, symbols, pictograms included in the desired drawing design to be printed, the sizes thereof, and character intervals, and the size of a printed material to be created. Further, the RAM33functions as a data memory that stores print image data that is generated based on the input print information and that represents the desired drawing design. Further, the print image data is data such as a print image corresponding to the print content. Further, the RAM33includes a register, a counter, and the like, which temporarily stores data needed for printing processing, etc., for example, the position of a specific portion and printing control information on each position. Moreover, the specific portion is, for example, a portion such as a logo or a barcode for which high-quality printing of a print image is needed. The CPU34controls the overall printing device3. 
For example, the CPU34loads the print program stored in the ROM32into the RAM33and executes printing control on the print medium based on the loaded print program. Configuration of Information Processing Apparatus2 FIG.5is an explanatory diagram illustrating an example of a hardware configuration of the information processing apparatus2. The information processing apparatus2illustrated inFIG.5includes a communication IF41, an HDD (Hard Disk Drive)42, a ROM43, a RAM44, and a CPU45. The communication IF41is an interface that communicatively connects to the server4via the LAN5and communicatively connects to the printing device3via the USB6. The HDD42is an area that stores various types of information. The ROM43is an area that stores various programs. The RAM44is an area that stores various types of information. The CPU45controls the overall information processing apparatus2. FIG.6is an explanatory diagram illustrating an example of a software configuration of the printing system1. The printing system1illustrated inFIG.6includes the information processing apparatus2and the printing device3. Software on the CPU45in the information processing apparatus2includes an application (hereinafter simply referred to as app)45A, a handler45B, and a USB driver45C. Further, it is assumed that the ROM43stores the app45A, the handler45B, and the USB driver45C. The app45A is a customer application for transmitting and receiving customer data, or the like, from the server4. The handler45B is a service module that converts a command from the app45A into a command of the printer unit3A in the printing device3. The handler45B detects the coordinate position of a specific portion, such as barcode or logo, from the print image data acquired from the server4and generates printing control information for each coordinate position. The USB driver45C is a driver that drives the printer unit3A in the printing device3. FIG.7is an explanatory diagram illustrating an example of a functional configuration of the information processing apparatus2. The information processing apparatus2illustrated inFIG.7includes a control unit2A and a storage unit2B. The control unit2A loads the printing control program stored in the ROM43into the RAM44and executes the loaded printing control program to execute it as a function of a printing control process. The control unit2A includes, as functional configurations, a reception unit51, a detection unit52, a generation unit53, and a transmission unit54. The storage unit2B includes a first storage unit61, a second storage unit62, and a coordinate table63. The control unit2A controls the overall information processing apparatus2. The storage unit2B stores various types of information. The reception unit51receives the customer data including the print image data to be printed from the server4via the LAN5. The detection unit52detects, from the print image data, the coordinate position of the specific portion such as barcode or logo and detects the coordinate position of a control target portion described below from the coordinate position of the specific portion. The generation unit53generates printing control information, e.g., the print speed and the print density, for each coordinate position of the control target portion. Further, the printing control information is control information used for printing control of the coordinate position of the control target portion. 
The printing control information is, for example, control information such as a print speed lower than the default print speed and a print density higher than the default print density. The detection unit52stores the printing control information for each coordinate position of the control target portion in the coordinate table63. The transmission unit54transmits the coordinate position of the control target portion including barcode, logo, etc., and the printing control information and the print image data for each coordinate position of the control target portion to the printing device3via the USB6. The detection unit52binarizes each pixel of the print image corresponding to the print image data in black and white and executes a morphology conversion process on each pixel after binarization. The detection unit52extracts the contour of the specific portion from the pixels after the morphology conversion process based on a similar condition that is similar to the specific portion. Further, when the specific portion is a barcode, the similar condition is a barcode threshold with which it may be assumed that the horizontal and vertical size of the contour is the horizontal and vertical size of a barcode. Further, when the specific portion is a logo, the similar condition is a logo threshold with which it may be assumed that the horizontal and vertical size of the contour is the horizontal and vertical size of a logo. When an extraction contour corresponding to the similar condition is extracted from the extraction contours after the morphology conversion process, the detection unit52stores the coordinate position of the extraction contour corresponding to the similar condition in the first storage unit61. The detection unit52detects the contour as the coordinate position of the specific portion when the duty ratio between one value (black) and the other value (white) out of two values for each pixel of the extraction contour stored in the first storage unit61is equal to or more than the duty threshold. The detection unit52stores the coordinate position of the extraction contour whose duty ratio is equal to or more than the duty threshold in the second storage unit62. Then, the detection unit52detects the coordinate position of the control target portion from the coordinate position of the extraction contour stored in the second storage unit62based on a single condition described below and stores the coordinate position of the control target portion in the coordinate table63. The first storage unit61stores the coordinate position of the extraction contour that corresponds to the similar condition. The second storage unit62stores the coordinate position of the extraction contour whose duty ratio is equal to or more than the duty threshold. The coordinate table63stores the coordinate position of the control target portion. FIG.8Ais an explanatory diagram illustrating an example of a print image70A of an airline boarding pass. The print image70A corresponding to the print image data on the air boarding pass illustrated inFIG.8Aincludes a logo71A near the lower left end, a barcode72A near the center lower end, and the logo71A near the upper right end. The logo71A near the lower left end is, for example, “ABCD airline”, and the logo71A near the upper right end is, for example, “ABCD”. FIG.8Bis an explanatory diagram illustrating an example of a print image70B of a baggage tag. 
The print image70B corresponding to the print image data on the baggage tag illustrated inFIG.8Bincludes three barcodes72B near the left side, the four barcodes72B near the center, the one barcode72B near the right side, one logo71B near the center, and the one logo71B near the right side. The logos71B near the center and near the right side are, for example, “ABCD”. Description of Morphology Conversion Process FIG.9is an explanatory diagram illustrating an example of the morphology conversion process. The detection unit52executes the morphology conversion process on each pixel of the print image corresponding to the print image data. In the morphology conversion process, each pixel of the print image is binarized in white or black and, as illustrated inFIG.9, it is determined whether there is even one black in eight pixels α1adjacent to a pixel α0that is one dot as a determination target. Then, in the morphology conversion process, when there is even one black in the eight pixels α1adjacent to the pixel α0as a determination target, the pixel α0as a determination target is replaced with black. FIG.10is an explanatory diagram illustrating an example of the morphology conversion process. In accordance with the line width or the ratio of the barcode, for example, the morphology conversion process is performed multiple times to replace approximately two dots with black, that is, to fill them with black, at one time. As a result, the pixels after the morphology conversion process are filled with black as illustrated inFIG.10so that the black area gradually increases. Description of Contour Extraction Process FIG.11is an explanatory diagram illustrating an example of a contour extraction process. In the contour extraction process, only the outermost contour within the area filled with black in the print image after the morphology conversion is extracted as an extraction contour, and the black-filled area inside the extraction contour is ignored. In the example ofFIG.11, among extraction contours R1, R2, and R3, the extraction contour R3is excluded from the extraction contours. The detection unit52stores, in the first storage unit61, the coordinate position of the extraction contour whose horizontal and vertical size satisfies the similar condition (4 mm×26 mm) of barcodes and the similar condition (5 mm×5 mm) of logos. Further, the detection unit52stores, in the second storage unit62, the coordinate position of the extraction contour whose duty is equal to or more than the duty threshold (e.g., the proportion of black is 40%) among the extraction contours stored in the first storage unit61. The detection unit52sorts the coordinate positions of the extraction contours in the second storage unit62in ascending order of a Y coordinate of the print image. FIG.12is an explanatory diagram illustrating an example of the print content when a barcode interval in a Y-coordinate direction is equal to or more than a predetermined interval. As illustrated inFIG.12, the printer unit3A sequentially executes printing in units of rows in a column direction (Y-coordinate direction) from top to bottom. 
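As an illustration of the morphology conversion just described, the following is a minimal NumPy sketch of the eight-neighbor rule and its repeated application; the data layout (black as 1, white as 0) and the function name are assumptions for illustration only, not the actual implementation of the handler45B.

```python
# A minimal NumPy sketch of the morphology conversion described for FIGS. 9
# and 10: a pixel is replaced with black if any of its eight neighbors is
# black. Black is represented as 1 and white as 0.
import numpy as np

def morphology_step(img):
    """One conversion pass: blacken every pixel with a black 8-neighbor."""
    padded = np.pad(img, 1, mode="constant")
    out = img.copy()
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            out |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

bars = np.zeros((5, 9), dtype=np.uint8)
bars[:, 2] = bars[:, 4] = bars[:, 6] = 1   # thin vertical bars, as in a barcode
# Repeating the pass merges the bars into one filled black area, matching the
# gradual growth of the black area shown in FIG. 10.
print(morphology_step(morphology_step(bars)))
```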
When the interval between an extraction contour R4and an extraction contour R5in the Y-coordinate direction is equal to or more than a predetermined interval, the detection unit52determines that the extraction contour R4and the extraction contour R5are not on the identical Y coordinate (identical row) and determines that the extraction contour R4and the extraction contour R5do not satisfy the single condition of the extraction contours. Then, the detection unit52stores, in the coordinate table63, a portion of the extraction contour R4from a start Y coordinate Y1to an end Y coordinate Y2and a portion of the extraction contour R5from a start Y coordinate Y3to an end Y coordinate Y4as the coordinate positions of separate control target portions M. On the other hand, the coordinate positions of portions other than the control target portions M include a portion from a start Y coordinate Y0to the start Y coordinate Y1, a portion from the end Y coordinate Y2to the start Y coordinate Y3, and a portion after the end Y coordinate Y4. The printing control on the coordinate position of the control target portion M is first printing control, and the printing control on the coordinate position of the portion other than the control target portion M is second printing control. The second printing control has the default print speed and print density, whereas the first printing control has a print speed lower than the default print speed and a print density higher than the default print density. The printer unit3A in the printing device3executes the second printing control to print the portion from the start Y coordinate Y0to the start Y coordinate Y1, the portion from the end Y coordinate Y2to the start Y coordinate Y3, and the portion after the end Y coordinate Y4. Further, the printer unit3A executes the first printing control to print the portion from the start Y coordinate Y1to the end Y coordinate Y2and the portion from the start Y coordinate Y3to the end Y coordinate Y4. That is, the printer unit3A prints the control target portions M including the logo, barcode, or the like, of the extraction contour R4and the extraction contour R5with a high quality and prints the portions other than the control target portions M with the normal quality. As a result, even in the case of printing from the print image data including the specific portion, the print quality of the specific portion may be improved. FIG.13is an explanatory diagram illustrating an example of the print content when multiple barcodes are overlapped on the Y coordinate. As illustrated inFIG.13, the detection unit52determines that an extraction contour R6and an extraction contour R7are overlapped on the Y coordinate and are on the identical Y coordinate (identical row) and determines that the extraction contour R6and the extraction contour R7satisfy the single condition of the extraction contours. Then, as the extraction contour R6and the extraction contour R7satisfy the single condition, the detection unit52determines that they are one extraction contour and determines that a portion from a start Y coordinate Y11of the extraction contour R6to an end Y coordinate Y14of the extraction contour R7is the coordinate position of the control target portion M. On the other hand, the coordinate positions of the portions other than the control target portion M include a portion from a start Y coordinate Y10to the start Y coordinate Y11and a portion after the end Y coordinate Y14. 
The printer unit3A in the printing device3executes the second printing control to print the portion from the start Y coordinate Y10to the start Y coordinate Y11and the portion after the end Y coordinate Y14. Further, the printer unit3A executes the first printing control to print the portion from the start Y coordinate Y11to the end Y coordinate Y14. That is, the printer unit3A prints the control target portion M including the logo, barcode, or the like, of the extraction contour R6and the extraction contour R7with a high quality and prints the portion other than the control target portion M with the normal quality. As a result, even in the case of printing from the print image data including the specific portion, the print quality of the specific portion may be improved. FIG.14is an explanatory diagram illustrating an example of the print content when the barcode interval in the Y-coordinate direction is less than the predetermined interval. When the interval between an extraction contour R8and an extraction contour R9in the Y-coordinate direction is less than the predetermined interval, the detection unit52determines that the extraction contour R8and the extraction contour R9are on the identical Y coordinate (identical row) and determines that the extraction contour R8and the extraction contour R9satisfy the single condition of the extraction contours. Then, as the extraction contour R8and the extraction contour R9satisfy the single condition, the detection unit52determines that they are one extraction contour and determines that a portion from a start Y coordinate Y21of the extraction contour R8to an end Y coordinate Y24of the extraction contour R9is the coordinate position of the control target portion M. On the other hand, the coordinate positions of the portions other than the control target portion M include the portion from a start Y coordinate Y20to the start Y coordinate Y21and a portion after the end Y coordinate Y24. The printer unit3A in the printing device3executes the second printing control to print the portion from the start Y coordinate Y20to the start Y coordinate Y21and the portion after the end Y coordinate Y24. Further, the printer unit3A executes the first printing control to print the portion from the start Y coordinate Y21to the end Y coordinate Y24. That is, the printer unit3A prints the control target portion M including the logo, barcode, or the like, of the extraction contour R8and the extraction contour R9with a high quality and prints the portions other than the control target portion M with the normal quality. As a result, even in the case of printing from the print image data including the specific portion, the print quality of the specific portion may be improved. The detection unit52detects the coordinate position of the control target portion M from the print image and stores the coordinate position in the coordinate table63. Further, the generation unit53generates printing control information regarding the first printing control for the coordinate position of the control target portion M. Then, the transmission unit54transmits the print image data, the coordinate position of the control target portion M, and the printing control information corresponding to the coordinate position of the control target portion M to the printing device3. 
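As an illustration of the single condition handled inFIGS.12to14, the following is a minimal Python sketch that merges extraction contours whose Y ranges overlap or lie closer together than the predetermined interval into one control target portion M; the range representation and the interval value are assumptions for illustration only.

```python
# A minimal sketch of the "single condition": extraction contours on the
# identical Y coordinate (overlapping) or separated by less than a
# predetermined interval are treated as one control target portion M.
def merge_into_control_targets(y_ranges, min_interval):
    """y_ranges: list of (start_y, end_y) pairs for extraction contours."""
    merged = []
    for start, end in sorted(y_ranges):
        if merged and start - merged[-1][1] < min_interval:
            # Single condition satisfied -> extend the current portion M.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(m) for m in merged]

# FIG. 12-like case: interval >= min_interval, two separate portions M.
print(merge_into_control_targets([(100, 180), (300, 380)], min_interval=40))
# FIG. 14-like case: interval < min_interval, one merged portion M.
print(merge_into_control_targets([(100, 180), (200, 280)], min_interval=40))
```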
When the printing device3receives the print image data, the coordinate position of the control target portion M, and the printing control information, it specifies the coordinate position of the control target portion M and the coordinate position of the portion other than the control target portion M from the print image data. The printing device3specifies the first printing control based on the printing control information for the coordinate position of the control target portion M and the second printing control as default for the coordinate position of the portion other than the control target portion M. The printing device3executes the printing control on the print medium based on the first printing control and the second printing control for each coordinate position. Description of Operation of Overall Printing System1 Next, an operation of the printing system1according to the present embodiment is described.FIG.15is a flowchart illustrating an example of a processing operation of the printing system1regarding a ticketing process. The app45A in the information processing apparatus2receives the print image data from the server4(Step S11). Furthermore, for convenience of explanation, the app45A directly receives the print image data from the server4in the case described, but the print image data may be generated from the coordinate, text data, etc., in the customer data received from the server4, and changes may be made as appropriate. The app45A requests the handler45B to make a request for printing the print image data (Step S12). When the print request is received from the app45A, the handler45B in the information processing apparatus2determines whether a detection process is enabled (Step S13). Further, the detection process is a process to detect the coordinate position of the control target portion including the specific portion, such as barcode or logo, from the print image data. For example, the detection process is not needed and therefore is disabled when the quality of the print medium is desirable, and the detection process is needed and therefore is enabled when the quality of the print medium is poor. When the detection process is enabled (Step S13: Yes), the handler45B starts the detection process to detect the coordinate position of the control target portion including the specific portion, such as barcode or logo, in the print image data (Step S14). The handler45B determines whether the print image data includes the specific portion such as barcode or logo (Step S15). When the print image data includes the specific portion such as barcode or logo (Step S15: Yes), the handler45B specifies the coordinate position of the specific portion such as barcode or logo in the print image data (Step S16). The handler45B specifies the coordinate position of the control target portion from the coordinate position of the specified specific portion (Step S17). Further, the coordinate position of the control target portion is the coordinate position including the specific portion, such as barcode or logo, in the print image data. The handler45B generates the printing control information corresponding to the coordinate position of the control target portion, for example, the information on the first printing control such as the print density and the print speed (Step S18). Further, the information on the first printing control is information on the printing control that is higher than the default print density and lower than the default print speed. 
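As an illustration only, the printing control information generated in step S18 might be represented as in the following Python sketch; the field names, units, and values are assumptions, since the document states only that the first printing control uses a print density higher, and a print speed lower, than the defaults.

```python
# A hypothetical sketch of per-coordinate printing control information; not
# the actual data format exchanged between the handler45B and printer unit3A.
from dataclasses import dataclass

@dataclass
class PrintingControlInfo:
    start_y: int        # control start position of the control target portion
    end_y: int          # control end position of the control target portion
    speed_ips: float    # print speed (inches per second, assumed unit)
    density_step: int   # print density step (assumed unit)

DEFAULT_SPEED, DEFAULT_DENSITY = 8.0, 0          # second printing control
first_control = [
    # One entry per control target portion (barcode/logo) in the print image.
    PrintingControlInfo(start_y=120, end_y=180, speed_ips=4.0, density_step=2),
    PrintingControlInfo(start_y=300, end_y=380, speed_ips=4.0, density_step=2),
]
for info in first_control:
    # First printing control: slower than default, denser than default.
    assert info.speed_ips < DEFAULT_SPEED and info.density_step > DEFAULT_DENSITY
```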
The handler45B transmits the printing control information for each coordinate position of the control target portion to the printer unit3A in the printing device3via the USB driver45C (Step S19). Although the printer unit3A receives the coordinate position of the control target portion and the printing control information from the handler45B, the coordinate position of the portion other than the control target portion and the default printing control information on the coordinate position, for example, the second printing control information, are previously registered. Further, the handler45B transmits the print image data to the printer unit3A (Step S20). The printer unit3A in the printing device3determines whether the printing control information (information on the first printing control) for each coordinate position of the control target portion has been received (Step S21). When the printing control information (information on the first printing control) for each coordinate position of the control target portion has been received (Step S21: Yes), the printer unit3A stores the printing control information (information on the first printing control) for each coordinate position based on the printing control information (information on the first printing control) (Step S22). Specifically, the printer unit3A stores information on the first printing control such as the print speed and the print density for each coordinate position of the control target portion in the print image data and information on the second printing control for each coordinate position of the portion other than the control target portion in the print image data. The printer unit3A starts a printing operation of the print image data based on the information on the first printing control and the information on the second printing control for each coordinate position (Step S23) and ends the processing operation illustrated inFIG.15. That is, the printer unit3A increases the print density and decreases the print speed for the control target portion, such as barcode or logo, in the print image data so as to improve the print quality of barcode and logo portions. Further, the printer unit3A prints a portion other than the barcode and logo portions in the print image data at the default print speed and print density. Further, when the printing control information for each coordinate position of the control target portion has not been received (Step S21: No), the printer unit3A starts the printing operation of the print image data based on the information on the second printing control such as the print speed and the print density as default settings (Step S24). Then, the printer unit3A ends the processing operation illustrated inFIG.15. When the detection process is not enabled (Step S13: No), the handler45B proceeds to Step S20to transmit the print image data to the printer unit3A. Further, when there is no specific portion, such as barcode or logo, in the print image data (Step S15: No), the handler45B proceeds to Step S20to transmit the print image data to the printer unit3A. Description of Operation of Information Processing Apparatus2 FIG.16is a flowchart illustrating an example of the processing operation of the information processing apparatus2regarding the detection process. The handler45B determines whether there is print image data (Step S31). 
When there is print image data (Step S31: Yes), the handler45B determines whether the detection process is enabled to detect the coordinate position of the specific portion, such as barcode or logo, from the print image data (Step S32). When the detection process is enabled (Step S32: Yes), the handler45B determines whether a specified size of the extraction contour has been set (Step S33). Further, the specified size is a similar condition such as the minimum size of the contour of a barcode or logo in the print image data. For example, the object for the specified size is the width or ratio of a thin line and a thick line in a barcode in the case of a one-dimensional barcode and is the size of one particle in the case of a two-dimensional barcode. When the specified size has been set (Step S33: Yes), the handler45B executes the morphology conversion process illustrated inFIGS.9and10on the print image data (Step S34). After the morphology conversion process is executed on the print image data, the handler45B executes the contour extraction process (seeFIG.11) to extract the extraction contour from the print image data (Step S35). Specifically, after the morphology conversion process and the contour extraction process are executed, the handler45B specifies the coordinate position of the extraction contour of the similar specific portion such as logo or barcode from the print image data and stores the coordinate position of the specified extraction contour in the first storage unit61. After the coordinate position of the extraction contour of the similar specific portion such as logo or barcode is stored in the first storage unit61, the handler45B determines whether the number (extraction contour number) of extraction contours stored in the first storage unit61has reached a predetermined threshold (Step S36). Further, the predetermined threshold is the total number of specific portions, such as logos and barcodes, previously set in the print image data. When the extraction contour number has not reached the predetermined threshold (Step S36: No), the handler45B specifies any extraction contour from a plurality of extraction contours (Step S37). The handler45B determines whether the specified extraction contour is equal to or more than the barcode threshold (Step S38). Further, the barcode threshold is, for example, a similar condition corresponding to the preset minimum size of a barcode. When the specified extraction contour is equal to or more than the barcode threshold (Step S38: Yes), the handler45B determines that the specified extraction contour is similar to a barcode and stores the coordinate position of the extraction contour in the first storage unit61(Step S39). Furthermore, when the extraction contour is not equal to or more than the barcode threshold (Step S38: No), the handler45B determines that the specified extraction contour is not similar to a barcode and determines whether the specified extraction contour is equal to or more than the logo threshold (Step S40). Moreover, the logo threshold is, for example, a similar condition corresponding to the preset minimum size of a logo. When the specified extraction contour is equal to or more than the logo threshold (Step S40: Yes), the handler45B determines that the extraction contour is similar to a logo and stores the coordinate position of the specified extraction contour in the first storage unit61(Step S41).
Then, the handler45B proceeds to Step S36to determine whether the extraction contour number has reached the predetermined threshold. When the extraction contour number has reached the predetermined threshold (Step S36: Yes), the handler45B determines whether there is a duty threshold (Step S42). Further, the duty threshold is, for example, a proportion of black of 40%. When there is a duty threshold (Step S42: Yes), the handler45B determines whether the extraction contour number has reached an effective detection number (Step S43). Moreover, the effective detection number is the total number of specific portions previously set for the print image data. When the extraction contour number has not reached the effective detection number (Step S43: No), the handler45B specifies the extraction contour (Step S44). The handler45B calculates the duty of the specified extraction contour (Step S45). Further, the duty of the extraction contour is the ratio between black and white within the extraction contour at the extracted coordinates, obtained after extracting the coordinates of the extraction contour from the print image data. The handler45B determines whether the duty of the specified extraction contour is equal to or more than the duty threshold (Step S46). When the duty of the specified extraction contour is equal to or more than the duty threshold (Step S46: Yes), the handler45B determines that the specified extraction contour is a specific portion such as barcode or logo and stores the coordinate position of the extraction contour in the second storage unit62(Step S47). Further, after storing the coordinate position of the extraction contour in the second storage unit62, the handler45B proceeds to Step S43to determine whether the extraction contour number of the extraction contours stored in the second storage unit62has reached the effective detection number. When the duty of the extraction contour is not equal to or more than the duty threshold (Step S46: No), the handler45B determines that the specified extraction contour is not a specific portion such as barcode or logo and proceeds to Step S43to determine whether the extraction contour number has reached the effective detection number. When the extraction contour number has reached the effective detection number (Step S43: Yes), the handler45B sorts the coordinate positions of the extraction contours that are stored in the second storage unit62and have a duty equal to or more than the duty threshold (Step S48). The handler45B determines whether there is an extraction contour that satisfies the single condition among the sorted extraction contours (Step S49). When there is such an extraction contour, the handler45B determines that the extraction contours satisfying the single condition constitute one extraction contour (Step S50). The handler45B determines that the coordinate position of the extraction contour that is stored in the second storage unit62and satisfies the single condition or the coordinate position of the extraction contour that does not satisfy the single condition is the coordinate position of the control target portion and stores the coordinate position of the control target portion in the coordinate table63(Step S51). Specifically, the handler45B sorts the coordinate positions of the extraction contours of the sorted control target portions stored in the coordinate table63in ascending order. Further, the handler45B stores the printing control information (information on the first printing control) in the coordinate table63for each coordinate position of the control target portion (Step S52).
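The duty-based filtering of Steps S44 to S47, described above, can be sketched as follows. This is an illustration only: the array convention (1 for black pixels, 0 for white) and the function names are assumptions, while the 40% threshold follows the example given above.

import numpy as np

DUTY_THRESHOLD = 0.40  # proportion of black, per the example above

def contour_duty(binary_image, box):
    # binary_image: 2-D array with 1 for black pixels and 0 for white.
    # box: (x, y, width, height) of an extraction contour.
    x, y, w, h = box
    region = binary_image[y:y + h, x:x + w]
    return float(region.mean())  # fraction of black pixels inside the contour box

def filter_by_duty(binary_image, boxes):
    second_storage = []
    for box in boxes:
        if contour_duty(binary_image, box) >= DUTY_THRESHOLD:  # Step S46: Yes
            second_storage.append(box)                         # Step S47
    return second_storage

img = np.zeros((100, 100), dtype=np.uint8)
img[10:40, 10:70] = 1  # a dense 30 x 60 block standing in for a barcode
print(contour_duty(img, (10, 10, 60, 30)))  # 1.0, well above the threshold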
Further, the handler45B transmits the coordinate position of the control target portion stored in the coordinate table63, the printing control information (information on the first printing control), and the print image data to the printing device3(Step S53) and ends the processing operation illustrated inFIG.16. That is, the printer unit3A in the printing device3starts printing based on the coordinate positions of the extraction contours sorted in ascending order. When there is no print image data (Step S31: No), the handler45B transmits a normal print request to the printing device3(Step S54) and ends the processing operation illustrated inFIG.16. When the detection process is not enabled (Step S32: No), the handler45B transmits the print image data to the printing device3(Step S55) and ends the processing operation illustrated inFIG.16. When there is no duty threshold (Step S42: No), the handler45B sets the duty default value as a duty threshold (Step S56) and proceeds to Step S43to determine whether the extraction contour number has reached the effective detection number. Description of Operation of Printing Device3 FIG.17is a flowchart illustrating an example of the processing operation of the printing device3regarding a printing process. The printer unit3A in the printing device3determines whether the coordinate position of the control target portion, the printing control information, and the print image data have been received from the information processing apparatus2(Step S61). Further, the printing control information is the information on the first printing control for the coordinate position of the control target portion. When the coordinate position of the control target portion, the printing control information, and the print image data have been received (Step S61: Yes), the printer unit3A calculates a control start position and a control end position from the coordinate position of the control target portion (Step S62). Further, for example, when the coordinate position of the control target portion is as inFIG.14, the control start position is the start Y coordinate Y21of the extraction contour R8of the control target portion, and the control end position is the end Y coordinate Y24of the extraction contour R9of the control target portion. After the control start position and the control end position of all the control target portions in the print image are calculated, the printer unit3A stores the information on the first printing control and the information on the second printing control for each control start position and control end position. Further, the printer unit3A sets the print medium at a print start position (Step S63) and starts printing on the print medium at the print start position (Step S64). Moreover, the print start position is, for example, the start position of the print content corresponding to the print image data. After printing on the print medium starts, the printer unit3A executes the printing control corresponding to the default printing control information (Step S65). After executing the printing control corresponding to the default printing control information, the printer unit3A determines whether the print position has reached the print end position (Step S66). Further, the print position is, for example, the print position in the middle of the current printing by the thermal head26. The print end position is, for example, the end position of the print content corresponding to the print image data.
When the print position has reached the print end position (Step S66: Yes), the printer unit3A ends the processing operation illustrated inFIG.17. When the print position has not reached the print end position (Step S66: No), the printer unit3A determines whether the print position has reached the control start position (Step S67). When the print position has reached the control start position (Step S67: Yes), the printer unit3A executes the printing control corresponding to the printing control information (information on the first printing control) (Step S68). Specifically, when the control start position has been reached, the printer unit3A executes the first printing control for the coordinate position of the control target portion, that is, increases the print density and decreases the print speed. Further, the printer unit3A determines whether the print position has reached the control end position during execution of the printing control corresponding to the printing control information at Step S68(Step S69). When the print position has reached the control end position (Step S69: Yes), the printer unit3A proceeds to Step S65to execute the printing control corresponding to the default printing control information. Further, when the print position has not reached the control start position (Step S67: No), the printer unit3A proceeds to Step S65to execute the printing control corresponding to the default printing control information. When the print position has not reached the control end position (Step S69: No), the printer unit3A proceeds to Step S68to execute the printing control corresponding to the printing control information. Description of Effect of Embodiment The information processing apparatus2detects the control target portion including the specific portion, such as barcode or logo, from the print image data and transmits, to the printing device3, the printing control information (information on the first printing control) for increasing the print density and decreasing the print speed for only the control target portion on the print medium. As a result, the printing device3may improve the print quality of the specific portion even when printing is executed from the print image data including the specific portion. Furthermore, as the printing device3decreases the print speed for only the control target portion including the specific portion, it is possible to prevent a significant reduction in the print speed of the entire print medium. Moreover, as the printing device3increases the print density for only the control target portion, the effect on the life of the thermal head26may be reduced. The detection unit52extracts the contour of the specific portion from the print image data based on the similar condition that is similar to the specific portion and detects the position of the specific portion from the extracted contour. As a result, the information processing apparatus2may detect the position of the control target portion including the specific portion from the print image data. The detection unit52binarizes each pixel of the print image, executes the morphology conversion process on each binarized pixel, then extracts the contour from the pixels after the morphology conversion process, and detects the position of the specific portion from the extracted contour. As a result, the information processing apparatus2may accurately detect the position of the control target portion including the specific portion from the print image data. 
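A minimal Python sketch of the pipeline just described (binarization, morphology conversion, contour extraction) is shown below. The embodiment names no library, so the use of OpenCV, the threshold value, the kernel size, and the choice of a closing operation are all assumptions made for this illustration.

import cv2

def detect_candidate_positions(gray_image, kernel_size=(5, 5)):
    # Binarize each pixel; invert so that printed (dark) pixels become the foreground.
    _, binary = cv2.threshold(gray_image, 127, 255, cv2.THRESH_BINARY_INV)
    # Morphology conversion: a closing merges the bars of a barcode or the parts
    # of a logo into one connected region.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, kernel_size)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Contour extraction on the pixels after the morphology conversion.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each bounding box (x, y, width, height) is a candidate coordinate position.
    return [cv2.boundingRect(c) for c in contours]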
The detection unit52detects the contour as the position of the specific portion when the duty ratio between one value and the other value out of two values of each pixel of the extracted contour is equal to or more than a predetermined duty threshold. As a result, the information processing apparatus2may detect the position of the control target portion including the specific portion from the print image data with high accuracy. The generation unit53generates, as the printing control information for each position of the specific portion, the information on the first printing control for performing control to increase the print density so as to be different from the portion other than the specific portion when printing is executed by the thermal head26. As a result, the printing device3increases the print density for the specific portion so as to improve the print quality of the specific portion. The generation unit53generates, as the printing control information for each position of the specific portion, the information on the first printing control for performing control to decrease the print speed so as to be different from the portion other than the specific portion when printing is executed by the thermal head26. As a result, the printing device3decreases the print speed for the specific portion so as to improve the print quality of the specific portion. The information processing apparatus2enables the detection process in the case of a print medium having a poor print quality. As a result, the printing device3may improve the print quality of the specific portion even when printing is executed from the print image data including the specific portion on a print medium having a poor print quality. Further, the information processing apparatus2disables the detection process in the case of a print medium having a desirable print quality. Accordingly, the print speed is the same as that of the conventional model. The information processing apparatus2transmits the coordinate position of the control target portion, the information on the first printing control for each coordinate position of the control target portion, and the print image data to the printing device3. The printing device3controls printing of the control target portion based on the coordinate position of the control target portion acquired from the information processing apparatus2and the information on the first printing control for each coordinate position of the control target portion. As a result, the printing device3may improve the print quality of the specific portion without changing the hardware configuration. Although the printing device3according to the present embodiment is illustrated as an airline printer that issues an airline boarding pass or the like, it may be, for example, a receipt printing device that issues receipts, and may be changed as appropriate. Furthermore, each component of each unit illustrated does not always need to be physically configured as illustrated. Specifically, specific forms of separation/integration of units are not limited to the one illustrated, and all or some of them may be functionally or physically configured to be separated/integrated in any unit in accordance with various loads or usage conditions. Furthermore, all or any part of various processing functions performed by each device may be executed on a CPU (Central Processing Unit) (or microcomputer such as MPU (Micro Processing Unit) or MCU (Micro Controller Unit)).
Further, it is obvious that all or any part of the various processing functions may be achieved by a program analyzed and executed by a CPU (or microcomputer such as MPU or MCU) or by wired logic hardware. According to one aspect, the object is to provide a printing system, and the like, which may improve the print quality of the specific portion even when printing is executed from the print image data including the specific portion. All examples and conditional language recited herein are intended for pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiment of the present invention has been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention. | 49,234 |
11861245 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS An embodiment of the present invention will be described below with reference to the accompanying drawings. In the present specification, a schedule for the printing order of a plurality of print jobs subject to continuous printing is referred to as a “job execution schedule”, a schedule for using roll paper loaded in each slot and a schedule for loading roll paper into each slot are referred to as a “roll paper loading schedule”, and a job execution schedule and a roll paper loading schedule are collectively referred to as a “print schedule”. The type of roll paper may be abbreviated as a “roll type”. 1. Configuration of Printing System FIG.1is a configuration diagram of a printing system according to an embodiment of the present invention. The printing system includes a printer10, an autochanger20including a plurality of slots each for holding roll paper, a paper winder30for winding printing paper PA after printing, and a print controller40for controlling the operations of the printer10, the autochanger20, and the paper winder30. The printer10performs printing by discharging ink onto the printing paper PA without using a printing plate, and the autochanger20has an automatic switching function for roll paper to be used for printing. That is, this printing system achieves an inkjet printing apparatus with an automatic switching function for roll paper. The print controller40is provided with print job data (hereinafter referred to as “job data”) via a communication line such as a local area network (LAN). Note that the autochanger20will be described later in detail. As illustrated inFIG.1, the printer10includes a first driving roller110for conveying the printing paper PA supplied from the autochanger20to the inside, a plurality of support rollers120for conveying the printing paper PA inside the printer10, a printing unit130that performs printing by discharging ink onto the printing paper PA, a drying unit140that dries the printing paper PA after printing, an inspection unit150that inspects a state of printing on the printing paper PA, and a second driving roller160for outputting the printing paper PA from the inside of the printer10. The printing unit130includes a C inkjet head130c, an M inkjet head130m, a Y inkjet head130y, and a K inkjet head130kthat discharge cyan (C), magenta (M), yellow (Y), and black (K) inks, respectively. Typically, each of the inkjet heads130c,130m,130y, and130kincludes a plurality of head modules arranged in a staggered manner. Each head module has many nozzles. In the above configuration, when an instruction to start printing is given to the print controller40, the print controller40controls the operations of the autochanger20, the printer10, and the paper winder30so that the printing paper PA is supplied from the autochanger20to the printer10and the printing paper PA after printing is wound up by the paper winder30. Then, in the process of conveying the printing paper PA, first, printing is performed by discharging the ink from each of the inkjet heads130c,130m,130y, and130kin the printing unit130, next, the printing paper PA is dried by the drying unit140, and finally, the printing state is inspected by the inspection unit150. Although the configuration of the printer10for performing color printing has been exemplified here, the present invention can also be applied to a case where a printer for performing monochrome printing has been employed. 
Further, although the configuration of the printer10using aqueous ink has been exemplified here, the present invention can also be applied to a case where a printer using ultraviolet (UV) ink (ultraviolet curing ink), such as an inkjet printing apparatus for label printing, has been employed. In this case, an ultraviolet irradiation unit that cures UV ink on the printing paper PA by ultraviolet irradiation is provided inside the printer10(cf.FIG.1) instead of the drying unit140. Moreover, the present invention can also be applied to a case where a configuration in which the printer10is directly connected to a post-processing machine has been employed instead of the configuration in which the paper winder30is provided. Furthermore, the present invention can also be applied to a case where a configuration in which duplex printing is enabled by connecting two printers10via a reversing unit has been employed. In addition, the present invention can also be applied to a case where an autochanger similar to those illustrated inFIG.3is connected to the paper outputting end of the printer10. 2. Hardware Configuration of Print Controller FIG.2is a block diagram illustrating a hardware configuration of the print controller40. As illustrated inFIG.2, the print controller40includes a body410, an auxiliary storage device421, an optical disk drive422, a display unit423, a keyboard424, a mouse425, and the like. The body410includes a central processing unit (CPU)411, a memory412, a first disk interface unit413, a second disk interface unit414, a display control unit415, an input interface unit416, an output interface unit417, and a network interface unit418. The CPU411, the memory412, the first disk interface unit413, the second disk interface unit414, the display control unit415, the input interface unit416, the output interface unit417, and the network interface unit418are connected to each other via a system bus. The auxiliary storage device421is connected to the first disk interface unit413. The optical disk drive422is connected to the second disk interface unit414. The display unit (display device)423is connected to the display control unit415. The keyboard424and the mouse425are connected to the input interface unit416. The printer10is connected to the output interface unit417via a communication cable. A communication line50is connected to the network interface unit418. The auxiliary storage device421is a magnetic disk device or the like. An optical disk52as a computer-readable recording medium such as a compact disk read-only memory (CD-ROM) or a digital versatile disk (DVD)-ROM is inserted into the optical disk drive422. The display unit423is a liquid crystal display or the like. The display unit423is used to display information desired by an operator. The keyboard424and the mouse425are used by an operator to input instructions to the print controller40. The auxiliary storage device421stores a print control program P. In the present embodiment, the print control program P includes a subprogram that creates a print schedule so that efficient printing is performed utilizing the function of the autochanger20. The CPU411reads the print control program P stored in the auxiliary storage device421into the memory412and executes the program to achieve various functions of the print controller40. The memory412includes a random-access memory (RAM) and a read-only memory (ROM). The memory412functions as a work area for the CPU411to execute the print control program P stored in the auxiliary storage device421. 
Note that the print control program P is provided by being stored into the computer-readable recording medium (non-transitory recording medium). That is, for example, a user purchases the optical disk52as a recording medium of the print control program P, inserts the optical disk52into the optical disk drive422, reads the print control program P from the optical disk52, and installs the print control program P in the auxiliary storage device421. Alternatively, the print control program P transmitted via the communication line50may be received by the network interface unit418and installed in the auxiliary storage device421. 3. Configuration of Autochanger FIG.3is a diagram illustrating a schematic configuration of the autochanger20. As illustrated inFIG.3, the autochanger20according to the present embodiment includes four slots (first slot211, second slot212, third slot213, and fourth slot214) each for holding the roll paper RL, a switching mechanism220that automatically switches the roll paper RL to be used for printing (i.e., automatically switches the slot from which the roll paper RL to be supplied to the printer10is drawn), and a splicer230that joins the terminal end of the roll paper RL in use and the starting end of the roll paper RL to be used next (i.e., the starting end of the roll paper RL held in the slot selected by the switching mechanism220) using an adhesive tape or the like. As above, the autochanger20supplies the roll paper drawn out from one of the four slots to the printer10as continuous paper. In a case where the roll paper RL to be used is switched when the remaining amount of the roll paper in use is not 0, the roll paper in use is cut, and the terminal end of the roll paper RL in use after cutting and the starting end of the roll paper RL to be used next are joined by the splicer230. The printing system according to the present embodiment is provided with the autochanger20having the configuration as described above, so that the roll paper RL to be used can be switched without stopping the conveyance of the printing paper PA. Note that the physical structure of the autochanger20is not directly related to the present invention, and hence its description and illustration are omitted. 4. Functional Configuration Related to Creation of Print Schedule As described above, the print control program P includes a subprogram for creating a print schedule.FIG.4is a block diagram illustrating a functional configuration achieved by executing the subprogram (i.e., a functional configuration related to the creation of a print schedule). Note that, the components in the autochanger20are achieved by a computer provided in the autochanger20executing a predetermined program. The print controller40includes a setting unit440, a storage unit450, a job processing unit460, and a communication unit470. The setting unit440includes a priority mode setting unit441, a roll replacement timing setting unit442, a preferentially used roll setting unit443, a required roll replacement time registration unit444, and an active slot registration unit445. The storage unit450includes a priority mode storage unit451, a roll replacement timing storage unit452, a preferentially used roll storage unit453, a required roll replacement time storage unit454, an active slot storage unit455, and a loading information storage unit456. 
The job processing unit460includes a job reception unit461, a roll type identification unit462, a job grouping unit463, a job group classification unit464, a job execution scheduling unit465, and a roll paper loading scheduling unit466. The communication unit470includes a loading information transmission unit471. The priority mode setting unit441sets a priority mode at the time of performing scheduling (the creation of a print schedule) on the basis of the operation of the operator. In the present embodiment, a job list registration order mode for executing printing in the order of print jobs registered in a job list (information of a plurality of print jobs subject to continuous printing, which is held in a list format so that the information can be referred to), a production time reduction mode for prioritizing a reduction in time required for executing continuous printing based on the plurality of print jobs registered in the job list, and a roll specification order mode for executing printing in a specified “order of roll types” are prepared, and the priority mode setting unit441sets any one of the three modes as the priority mode. At the time of setting the roll specification order mode to the priority mode, the order of roll types is also specified. The information on the priority mode set by the priority mode setting unit441is stored into the priority mode storage unit451. The roll replacement timing setting unit442sets whether to replace the roll paper at an early timing after the start of the continuous printing based on the plurality of print jobs or to replace the roll paper at a later timing from the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary, on the basis of the operation of the operator. The matter set by the roll replacement timing setting unit442is stored into the roll replacement timing storage unit452. The preferentially used roll setting unit443sets whether to preferentially use roll paper with a large remaining amount or to preferentially use roll paper with a small remaining amount in a case where the same type of roll papers are held in plural slots, on the basis of the operation of the operator. The matter set by the preferentially used roll setting unit443is stored into the preferentially used roll storage unit453. The required roll replacement time registration unit444registers a time per slot required for replacement of the roll paper by the operator (hereinafter referred to as “required roll replacement time”) on the basis of the operation of the operator. The information on the required roll replacement time registered by the required roll replacement time registration unit444is stored into the required roll replacement time storage unit454. The active slot registration unit445registers available slots (active slots) among the four slots (first slot211, second slot212, third slot213, and fourth slot214) provided in the autochanger20on the basis of the operation of the operator. The information on the active slots registered by the active slot registration unit445is stored into the active slot storage unit455. The job reception unit461receives job data provided to the print controller40via the communication line50. The job data includes, in addition to image data to be printed, information on the type of roll paper necessary for executing the corresponding print job, and the like. 
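As an illustration of the scheduling condition information that the setting units described above store, a hypothetical Python model follows. Every field name, type, and default value here is an assumption made for the sketch, not part of the embodiment.

from dataclasses import dataclass, field
from enum import Enum

class PriorityMode(Enum):
    JOB_LIST_REGISTRATION_ORDER = 1
    PRODUCTION_TIME_REDUCTION = 2
    ROLL_SPECIFICATION_ORDER = 3

@dataclass
class SchedulingConditions:
    priority_mode: PriorityMode
    roll_type_order: list = field(default_factory=list)  # used only in the roll specification order mode
    replace_at_early_timing: bool = True                 # roll replacement timing setting
    prefer_small_remaining: bool = True                  # preferentially used roll setting
    required_roll_replacement_minutes: int = 5           # required roll replacement time
    active_slots: list = field(default_factory=lambda: [1, 2, 3, 4])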
The roll type identification unit462identifies a roll type necessary for execution of each print job on the basis of the job data received by the job reception unit461. The job grouping unit463groups a plurality of print jobs registered in a job list. Note that this grouping is performed using the roll type as a key. Hereinafter, a set of plural print jobs associated with the same roll type is referred to as a “job group”. For example, in a case where four types of roll paper are used by executing a plurality of print jobs registered in a job list, grouping into four job groups is performed for the plurality of print jobs. The job group classification unit464classifies job groups obtained by the grouping by the job grouping unit463into an executable job group and an inexecutable job group. The executable job group is a job group including only print jobs enabling the completion of printing using one or more roll papers loaded in the active slots. The inexecutable job group is a job group including print jobs not enabling the completion of printing in a case in which only one or more roll papers loaded in the active slots are used. For example, it is assumed that one job group is made up of two print jobs (job J1 and job J2) associated with a roll type of “type TA”, and the total print distance of the two print jobs is 1,000 m. At this time, when the roll paper of Type TA is loaded in only one slot, and the remaining amount (remaining distance) of the roll paper is 2,000 m, the job group is classified as an executable job group. On the other hand, when the roll paper of Type TA is loaded in only one slot, and the remaining amount of the roll paper is 500 m, the job group is classified as an inexecutable job group. As thus described, the job group is classified by the job group classification unit464in consideration of the remaining amount of roll paper loaded in each active slot and the total print distance for each roll type. The information stored in the priority mode storage unit451, the roll replacement timing storage unit452, the preferentially used roll storage unit453, the required roll replacement time storage unit454, and the active slot storage unit455(hereinafter referred to as “scheduling condition information”) is referred to by the job execution scheduling unit465and the roll paper loading scheduling unit466. The job execution scheduling unit465determines the printing order for the plurality of print jobs registered in the job list (i.e., creates a job execution schedule) on the basis of the scheduling condition information. The roll paper loading scheduling unit466creates a roll paper loading schedule that is a schedule for using a roll paper loaded in each slot and a schedule for loading a roll paper into each slot on the basis of the scheduling condition information. At that time, regarding a case where the loading of the roll paper into the slot is required when the printing is executed in the order based on the job execution schedule created by the job execution scheduling unit465, the roll paper loading scheduling unit466obtains a time at which the loading of the roll paper into a loading target slot (a slot to which the roll paper is loaded) can be started (hereinafter referred to as a “possible loading start time”) and a time indicating when the loading of the roll paper into the loading target slot is to be started at the latest in order not to stop the printing operation (hereinafter referred to as a “loading start deadline time”). 
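The grouping performed by the job grouping unit463and the classification performed by the job group classification unit464can be sketched as follows. The job representation, the function names, and the 600 m / 400 m split of the 1,000 m example above are assumptions made for this illustration.

from collections import defaultdict

def group_jobs(jobs):
    # jobs: list of (name, roll_type, print_distance_m); grouping uses the roll type as a key.
    groups = defaultdict(list)
    for name, roll_type, distance in jobs:
        groups[roll_type].append((name, roll_type, distance))
    return groups

def classify_groups(groups, remaining_by_type):
    # remaining_by_type: total remaining distance (m) of each roll type in the active slots.
    executable, inexecutable = [], []
    for roll_type, members in groups.items():
        total_distance = sum(distance for _, _, distance in members)
        if total_distance <= remaining_by_type.get(roll_type, 0):
            executable.append(roll_type)
        else:
            inexecutable.append(roll_type)
    return executable, inexecutable

groups = group_jobs([("J1", "TA", 600), ("J2", "TA", 400)])  # 1,000 m in total
print(classify_groups(groups, {"TA": 2000}))  # (['TA'], []) - executable
print(classify_groups(groups, {"TA": 500}))   # ([], ['TA']) - inexecutable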
The information on the possible loading start time and the loading start deadline time obtained by the roll paper loading scheduling unit466is stored into the loading information storage unit456. The loading information transmission unit471transmits the information on the possible loading start time and the loading start deadline time (hereinafter referred to as “loading information”) stored in the loading information storage unit456to the autochanger20. The autochanger20includes a communication unit250and an information display unit260. The communication unit250includes a loading information reception unit251. The information display unit260includes a loading information notification unit261. The loading information reception unit251receives the loading information transmitted from the loading information transmission unit471. The loading information is provided to the loading information notification unit261. The loading information notification unit261notifies the outside of the loading information. 5. Creation of Print Schedule Next, how the print schedule (job execution schedule and roll paper loading schedule) is created in the present embodiment will be described. Hereinafter, the roll paper to be loaded into the slot is referred to as “loading target roll paper”. 5.1 Procedure for Scheduling Processing A procedure for scheduling processing (a series of processing related to the creation of a print schedule) will be described with reference to a flowchart illustrated inFIG.5. Note that this scheduling processing is achieved by the subprogram included in the print control program P. That is, this scheduling processing is processing performed by the print controller40. After the scheduling processing is started, initial processing is performed (step S100). The initial processing will be described with reference to the flowchart illustrated inFIG.6. In the initial processing, first, various conditions related to the operation of the autochanger20are set (step S101), specifically as follows. The priority mode setting unit441sets a mode to be employed as the priority mode (any one of the job list registration order mode, the production time reduction mode, and the roll specification order mode is set as the priority mode). The roll replacement timing setting unit442sets whether to replace the roll paper at an early timing after the start of the continuous printing based on the plurality of print jobs or to replace the roll paper at a later timing from the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary. The preferentially used roll setting unit443sets whether to preferentially use roll paper with a large remaining amount or to preferentially use roll paper with a small remaining amount. The required roll replacement time registration unit444registers the required roll replacement time. The active slot registration unit445registers active slots. Next, on the basis of the job data received by the job reception unit461, a print job (to execute the continuous printing from now on) is registered in the job list (step S102). Here, it is assumed that a plurality of print jobs are registered. Note that the roll type identification unit462identifies the roll type associated with each print job. Next, information on the type of roll paper loaded in each active slot of the autochanger20is acquired (step S103). 
When the active slot is not loaded with roll paper, information indicating that the active slot is an empty slot is acquired. Next, the job grouping unit463groups the plurality of print jobs registered in step S102(i.e., the plurality of print jobs registered in the job list) using the roll type as a key (step S104). Next, the job group classification unit464classifies the job groups obtained by the grouping in step S104into the executable job group described above and the inexecutable job group described above (step S105). Thereby, the initial processing ends. After the end of the initial processing, it is determined whether or not a first determination condition that “continuous printing based on a plurality of print jobs registered in a job list can be executed without requiring loading of roll paper into a slot or replacement of roll paper loaded in the slot during execution of printing” is satisfied (step S110). In the determination in step S110, the result of the classification in step S105is referred to. When all the job groups are classified as executable job groups, it is determined that the first determination condition is satisfied. In addition, even when there is a job group classified as an inexecutable job group, it is determined that the first determination condition is satisfied in a case where the continuous printing based on the plurality of print jobs registered in the job list is enabled by loading roll paper into an empty slot before the start of printing or by replacing roll paper in a slot loaded with roll paper that is not scheduled to be used before the start of printing. When it is necessary to load or replace the roll paper during execution of printing, it is determined that the first determination condition is not satisfied. When it is determined in step S110that the first determination condition is satisfied, the processing proceeds to step S112, and when it is determined in step S110that the first determination condition is not satisfied, the processing proceeds to step S120. In step S112, the printing order for the plurality of print jobs registered in the job list is determined according to the priority mode set in step S101. When the priority mode is the job list registration order mode, the printing order is determined according to the order of print jobs registered in the job list. When the priority mode is the production time reduction mode, the printing order is determined so that the time required to execute the continuous printing based on the plurality of print jobs becomes the shortest (typically, the printing order is determined so that the number of times of roll paper splicing is minimized). When the priority mode is the roll specification order mode, the printing order is determined so that the continuous printing is executed in the specified “order of roll types”. After the end of step S112, the processing proceeds to step S190. In step S120, it is determined whether or not a second determination condition that “the priority mode set in step S101is the job list registration order mode” is satisfied. As a result, when the second determination condition is satisfied, the processing proceeds to step S122, and when the second determination condition is not satisfied, the processing proceeds to step S130. In step S122, the printing order is determined according to the order of the print jobs registered in the job list. 
The loading target slot corresponding to each loading target roll paper is identified in consideration of the determined printing order and the setting as to whether to preferentially use roll paper with a large remaining amount or to preferentially use roll paper with a small remaining amount (this setting was made in step S101ofFIG.6). After the end of step S122, the processing proceeds to step S190. In step S130, information of the roll type necessary for executing the print job constituting the job group classified as the inexecutable job group in step S105(i.e., information of the roll type associated with the print job constituting the job group classified as the inexecutable job group) is acquired. In step S140, it is determined whether or not a third determination condition that “the priority mode set in step S101is the production time reduction mode” is satisfied. As a result, when the third determination condition is satisfied, the processing proceeds to step S150, and when the third determination condition is not satisfied, the processing proceeds to step S160. That is, when the priority mode is the production time reduction mode, the processing proceeds to step S150, and when the priority mode is the roll specification order mode, the processing proceeds to step S160. In step S150, it is determined whether or not a fourth determination condition that “a setting of ‘replacing the roll paper at an early timing after the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary’ was made in step S101” is satisfied. As a result, when the fourth determination condition is satisfied, the processing proceeds to step S152, and when the fourth determination condition is not satisfied, the processing proceeds to step S154. In step S152, the printing order of the plurality of print jobs is determined so that the print job constituting the job group classified as the inexecutable job group is later in the order, and the print job associated with the roll type having a shorter total print distance is earlier in the order. Similarly to step S122, the loading target slot corresponding to each loading target roll paper is identified. After the end of step S152, the processing proceeds to step S170. In step S154, the printing order of the plurality of print jobs is determined so that the print job constituting the job group classified as the executable job group is earlier in the order, and the print job associated with the roll type having a longer total print distance is earlier in the order. Similarly to step S122, the loading target slot corresponding to each loading target roll paper is identified. After the end of step S154, the processing proceeds to step S170. In step S160, it is determined whether or not the fourth determination condition described above is satisfied. As a result, when the fourth determination condition is satisfied, the processing proceeds to step S162, and when the fourth determination condition is not satisfied, the processing proceeds to step S164. In step S162, the printing order for the plurality of print jobs is determined so that printing is executed in the “order of roll types” specified in step S101. On the basis of the determined printing order, the loading target slot corresponding to each loading target roll paper is identified on the assumption that the roll paper is loaded into the slot at the timing as early as possible from the printing start time point. 
After the end of step S162, the processing proceeds to step S170. In step S164, the printing order for the plurality of print jobs is determined so that printing is executed in the “order of roll types” specified in step S101. In addition, on the basis of the determined printing order, the loading target slot corresponding to each loading target roll paper is identified on the assumption that the roll paper is loaded into the slot at the timing as late as possible from the printing start time point. After the end of step S164, the processing proceeds to step S170. In step S170, the possible loading start time is obtained for each loading target roll paper on the basis of the printing order and the loading target slot determined in any one of steps S122, S152, S154, S162, and S164. In step S180, the loading start deadline time is obtained for each loading target roll paper on the basis of the printing order referred to in step S170and the required roll replacement time set in step S101. Specifically, a time earlier by the required roll replacement time than the scheduled time at which printing on the roll paper after the replacement is started is obtained as the loading start deadline time. Note that a period from the possible loading start time to the loading start deadline time corresponds to a possible loading time (a time at which roll paper can be loaded into the loading target slot). In step S190, the entire print schedule (the printing order for the plurality of print jobs registered in the job list, the printing start time and the print end time for each print job, the association between the print job and the slot, the possible loading start time and the loading start deadline time for each loading target roll paper, etc.) is fixed. Thereafter, in step S200, the loading information is transmitted from the print controller40to the autochanger20, and the loading information is displayed on the information display unit260of the autochanger20. Thereby, the scheduling processing ends. Note that, in step S200, for example, the loading information may be transmitted to an information transmission destination, such as a mail address specified in advance, by e-mail or the like. Further, when the possible loading start time is reached during the period in which the continuous printing is executed according to the print schedule fixed in step S190, a notification that the roll paper is to be loaded into the slot may be transmitted to an information transmission destination, such as a mail address specified in advance, by e-mail or the like. Meanwhile, in steps S122, S152, S154, S162, and S164, the job execution schedule and the roll paper loading schedule are created on the basis of the type and the remaining amount of the roll paper loaded in each of the plurality of slots constituting the autochanger20, the type of the roll paper required to execute each of the plurality of print jobs subject to continuous printing, and the print distance of each of the plurality of print jobs. 
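Steps S170 and S180 can be illustrated with a short worked sketch, using the figures of the specific example in the next subsection (10:00 printing start, 100 m/min printing speed, 5-minute required roll replacement time). The date and all function names are assumptions made for this illustration.

from datetime import datetime, timedelta

PRINT_SPEED_M_PER_MIN = 100
REQUIRED_ROLL_REPLACEMENT_MIN = 5

def print_minutes(distance_m):
    return distance_m / PRINT_SPEED_M_PER_MIN

def loading_start_deadline(replacement_print_start):
    # Step S180: the deadline is earlier than the scheduled start of printing on
    # the roll paper after the replacement by the required roll replacement time.
    return replacement_print_start - timedelta(minutes=REQUIRED_ROLL_REPLACEMENT_MIN)

start = datetime(2024, 1, 1, 10, 0)  # arbitrary date; 10:00 printing start time
# The roll in the loading target slot runs out after the 9-minute first print
# job group (900 m), the 32-minute second group (3,200 m), and 1,000 m more.
runs_out = start + timedelta(minutes=print_minutes(900)
                             + print_minutes(3200)
                             + print_minutes(1000))
print(runs_out.time())                          # 10:51:00
print(loading_start_deadline(runs_out).time())  # 10:46:00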
In the present embodiment, a priority mode setting step, a required roll replacement time registration step, and a roll replacement timing setting step are achieved by step S101; a print job registration step is achieved by step S102; a job grouping step is achieved by step S104; a job group classification step is achieved by step S105; a scheduling step is achieved by step S112, step S122, step S152, step S154, step S162, and step S164; a possible loading time calculation step is achieved by step S170and step S180; a possible loading start time calculation step is achieved by step S170; a loading start deadline time calculation step is achieved by step S180; and a loading information notification step is achieved by step S200.
5.2 Detailed Specific Example of Scheduling Processing
Next, a detailed specific example of the scheduling processing will be described. In the present specific example, situations described in the following (a) to (i) are assumed.
(a) The printing speed is "100 m/min".
(b) All of the four slots (first slot211, second slot212, third slot213, and fourth slot214) provided in the autochanger20are registered as active slots.
(c) Eight print jobs (jobs J1 to J8) are registered in the job list as illustrated inFIG.7.
(d) In the initial state, as illustrated inFIG.8, the first slot211is loaded with roll paper of Type TA with a remaining amount of 1,000 m, the second slot212is loaded with roll paper of Type TB with a remaining amount of 400 m, the third slot213is loaded with roll paper of Type TB with a remaining amount of 5,000 m, and the fourth slot214is loaded with roll paper of Type TD with a remaining amount of 10,000 m. Note that the serial number of the roll paper loaded in the first slot211is A123, the serial number of the roll paper loaded in the second slot212is B456, the serial number of the roll paper loaded in the third slot213is B987, and the serial number of the roll paper loaded in the fourth slot214is D357.
(e) The priority mode has been set to the roll specification order mode, and the specified "order of roll types" is "Type TB, Type TD, Type TA, and Type TC".
(f) The setting of "preferentially using roll paper with a small remaining amount" has been made.
(g) The setting of "replacing the roll paper at an early timing after the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary" has been made.
(h) The required roll replacement time is 5 minutes.
(i) The printing start time is 10:00.
In step S104ofFIG.6, eight print jobs (jobs J1 to J8) registered in the job list are grouped into four job groups. Here, for convenience, a job group made up of the print jobs (jobs J3, J4, and J8) associated with the roll type "Type TA" is referred to as a "job group JGA", a job group made up of the print jobs (jobs J5 and J7) associated with the roll type "Type TB" is referred to as a "job group JGB", a job group made up of the print job (job J6) associated with the roll type "Type TC" is referred to as a "job group JGC", and a job group made up of the print jobs (jobs J1 and J2) associated with the roll type "Type TD" is referred to as a "job group JGD". The total print distance for the job group JGA is 3,300 m, and the remaining amount of the roll paper of Type TA loaded in the active slot is 1,000 m. It is thus necessary to load the roll paper of Type TA into the active slot.
The total print distance for the job group JGB is 900 m, and the remaining amount of the roll paper of Type TB loaded in the active slot is 5,400 m. Thus, it is not necessary to load the roll paper of Type TB into the active slot. The total print distance for the job group JGC is 1,200 m, and there is no roll paper of Type TC loaded in the active slot. It is thus necessary to load the roll paper of Type TC into the active slot. The total print distance for the job group JGD is 3,200 m, and the remaining amount of the roll paper of Type TD loaded in the active slot is 10,000 m. Thus, it is not necessary to load the roll paper of Type TD into the active slot. From the above, in step S105ofFIG.6, the job group JGA and the job group JGC are classified as the inexecutable job groups, and the job group JGB and the job group JGD are classified as the executable job groups. There is an inexecutable job group, and roll paper needs to be loaded during printing. Thus, in step S110ofFIG.5, it is determined that the first determination condition is not satisfied. Further, since the priority mode is the roll specification order mode, it is determined that the second determination condition is not satisfied in step S120ofFIG.5, and it is determined that the third determination condition is not satisfied in step S140ofFIG.5. The setting of "replacing the roll paper at an early timing after the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary" has been made, so that it is determined in step S160ofFIG.5that the fourth determination condition is satisfied. The specified "order of roll types" is "Type TB, Type TD, Type TA, and Type TC", and hence the printing order is determined in step S162ofFIG.5as illustrated inFIG.9. Therefore, hereinafter, the job group JGB is referred to as a "first print job group", the job group JGD is referred to as a "second print job group", the job group JGA is referred to as a "third print job group", and the job group JGC is referred to as a "fourth print job group". For job J5, the print distance is 400 m, and the print time is 4 minutes. The remaining amount of the roll paper of Type TB loaded in the second slot212is 400 m. The printing start time is 10:00. From the above, in step S170ofFIG.5, the possible loading start time is determined to be 10:04 for the loading of the roll paper of Type TA necessary for executing the third print job group into the loading target slot (second slot212). Note that, when the setting of "replacing the roll paper at a later timing from the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary" has been made, a roll paper loading schedule for loading the roll paper of Type TA into the fourth slot214after the end of printing based on the second print job group is created. At this time, in step S170ofFIG.5, the possible loading start time is determined to be 10:41, which is a time at which printing based on the second print job group ends. The first slot211is loaded with the roll paper of Type TA with a remaining amount of 1,000 m. The print distance of job J4 is 1,600 m. The printing speed is "100 m/min". From the above, the remaining amount of the roll paper loaded in the first slot211becomes 0 ten minutes after the start of the execution of the job J4. That is, at 10:51, the remaining amount of the roll paper loaded in the first slot211becomes 0. The required roll replacement time is five minutes.
Thus, in step S180ofFIG.5, the loading start deadline time is determined to be 10:46 for the loading of the roll paper of Type TA necessary for executing the third print job group into the loading target slot (second slot212). For job J7, the print distance is 500 m, and the print time is 5 minutes. Although the remaining amount of the roll paper of Type TB loaded in the third slot213is 5,000 m, the roll paper of Type TB is not used for printing after the end of the printing based on the job J7. Thus, the third slot213can be used as the loading target slot for the roll paper of Type TC necessary for executing the fourth print job group. From the above, in step S170ofFIG.5, the possible loading start time is determined to be 10:09 for the loading of the roll paper of Type TC necessary for executing the fourth print job group into the loading target slot (the third slot213). Note that, when the setting of “replacing the roll paper at a later timing from the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary” has been made, a roll paper loading schedule for loading the roll paper of Type TC into the first slot211after the remaining amount of the roll paper of Type TA loaded in the first slot211becomes 0 is created. At that time, in step S170ofFIG.5, the possible loading start time is determined to be 10:51, which is a time at which the remaining amount of the roll paper loaded in the first slot211becomes 0. It takes one hour and 14 minutes to execute the first print job group, the second print job group, and the third print job group. The required roll replacement time is five minutes. Thus, in step S180ofFIG.5, the loading start deadline time is determined to be 11:09 for the loading of the roll paper of Type TC necessary for executing the fourth print job group into the loading target slot (the third slot213). As described above, for the loading of the roll paper of Type TA into the second slot212, the possible loading start time is determined to be 10:04 and the loading start deadline time is determined to be 10:46. For the loading of the roll paper of Type TC into the third slot213, the possible loading start time is determined to be 10:09 and the loading start deadline time is determined to be 11:09. On the basis of these, in step S200ofFIG.5, for example, a roll paper loading schedule as illustrated inFIG.10is displayed on the information display unit260of the autochanger20. The “possible loading start time” column shows a time at which the new roll paper can be loaded into each slot. The “possible loading time length” column shows a time length from the possible loading start time to the loading start deadline time. 5.3 Various Specific Examples Hereinafter, various specific examples related to the scheduling processing will be described. Again, it is assumed that all of the four slots (first slot211, second slot212, third slot213, and fourth slot214) provided in the autochanger20are active slots. 5.3.1 First Case A first case is a case where creation of a job execution schedule (hereinafter referred to as “job execution scheduling”) is performed in step S112ofFIG.5. For the first case, the following situation is assumed. As illustrated inFIG.11, four print jobs (jobs J1 to J4) are registered in the job list. The states of the first to fourth slots211to214in the initial state are as illustrated inFIG.12. The priority mode has been set to the job list registration order mode. 
In this case, only a job group including the print jobs (jobs J1 and J4) associated with the roll type “Type TA” is the inexecutable job group. In the initial state, the fourth slot214is an empty slot. When the roll paper of Type TA having a length of 2,500 m or more is loaded into the fourth slot214before the start of printing, it is not necessary to load the roll paper into the slot during execution of printing. Therefore, a roll paper loading schedule for loading the roll paper of Type TA into the fourth slot214, which is an empty slot, before the start of printing is created. According to the roll paper loading schedule, the states of the first to fourth slots211to214at the printing start time point are, for example, the states illustrated inFIG.13. Note that, since the priority mode is the job list registration order mode, the printing order determined by the scheduling processing is as illustrated inFIG.11. 5.3.2 Second Case A second case is also a case where the job execution scheduling is performed in step S112ofFIG.5. For the second case, the following situation is assumed. As illustrated inFIG.11, four print jobs (jobs J1 to J4) are registered in the job list. The states of the first to fourth slots211to214in the initial state are as illustrated inFIG.14. The priority mode has been set to the production time reduction mode. In this case, as in the first case, only a job group including the print jobs (jobs J1 and J4) associated with the roll type “Type TA” is the inexecutable job group. In the initial state, there is no empty slot, but the fourth slot214is loaded with the roll paper of Type TD that is not used for printing. When the roll paper loaded in the fourth slot214is replaced with the roll paper of Type TA having a length of 2,500 m or more before the start of printing, it is not necessary to load the roll paper into the slot during execution of printing. Therefore, a roll paper loading schedule for replacing the roll paper loaded in the fourth slot214with the roll paper of Type TA before the start of printing is created. According to the roll paper loading schedule, the states of the first to fourth slots211to214at the printing start time point are, for example, the states illustrated inFIG.13. In addition, since the priority mode is the production time reduction mode, the printing order is determined so that the job J1 and the job J4 are executed in succession. Therefore, the printing order determined by the scheduling processing is as illustrated inFIG.15. 5.3.3 Third Case A third case is a case where the job execution scheduling is performed in step S122ofFIG.5. For the third case, the following situation is assumed. As illustrated inFIG.16, six print jobs (jobs J1 to J6) are registered in the job list. The states of the first to fourth slots211to214in the initial state are as illustrated inFIG.17. The priority mode has been set to the job list registration order mode. In this case, a job group including the print jobs (Jobs J1 and J5) associated with the roll type “Type TA” and a job group including the print jobs (Jobs J2 and J6) associated with the roll type “Type TB” are the inexecutable job groups. In the initial state, there is neither an empty slot nor a slot loaded with a type of roll paper that is not used for printing. It is thus necessary to load the roll paper of Type TA and the roll paper of Type TB into the slot during execution of printing. 
Since the priority mode is the job list registration order mode, the printing order determined by the scheduling processing is as illustrated inFIG.16. The total print distance of the two print jobs (jobs J1 and J5) associated with the roll type “Type TA” is 2,400 m, whereas the remaining amount of the roll paper of Type TA loaded in the first slot211is 500 m. Here, the roll paper of Type TD is required at the latest timing among the four types of roll paper. Therefore, a roll paper loading schedule for replacing the roll paper loaded in the fourth slot214with the roll paper of Type TA before the start of printing is created. The remaining amount of the roll paper of Type TA loaded in the first slot211becomes 0 during the execution of the job J1, and hence a roll paper loading schedule for loading the roll paper of Type TB into the first slot211is created. Since the roll paper of Type TD has been taken out of the fourth slot214, it is necessary to load the roll paper of Type TD into a slot before the execution of the job J4 is started. In this regard, the remaining amount of the roll paper of Type TB loaded in the second slot212becomes 0 during the execution of the job J2, and hence a roll paper loading schedule for loading the roll paper of Type TD into the second slot212is created.

According to the roll paper loading schedule created as described above, the states of the first to fourth slots211to214at the printing start time point are, for example, the states illustrated inFIG.18. In addition, the states of the first to fourth slots211to214after the remaining amount of the roll paper loaded in the first slot211becomes 0 by the execution of the job J1 are, for example, the states illustrated inFIG.19. Note that, since the printing based on the job J1 is continued even during the work of loading the roll paper into the first slot211, the remaining amount of the roll paper of Type TA loaded in the fourth slot214is actually smaller than 10,000 m at the time when the loading of the roll paper of Type TB into the first slot211is completed. Further, the states of the first to fourth slots211to214after the remaining amount of the roll paper loaded in the second slot212becomes 0 by the execution of the job J2 are the states illustrated inFIG.20. Note that, in practice, at the time when the loading of the roll paper of Type TD into the second slot212is completed, the remaining amount of the roll paper of Type TB loaded in the first slot211is less than 10,000 m.

5.3.4 Fourth Case

A fourth case is a case where the job execution scheduling is performed in step S152ofFIG.5. For the fourth case, the following situation is assumed. As illustrated inFIG.21, six print jobs (jobs J1 to J6) are registered in the job list. The states of the first to fourth slots211to214in the initial state are as illustrated inFIG.14. The priority mode has been set to the production time reduction mode. The setting of “replacing the roll paper at an early timing after the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary” has been made.

In this case, only a job group including the print jobs (jobs J1 and J5) associated with the roll type “Type TA” is the inexecutable job group. In the initial state, there is neither an empty slot nor a slot loaded with a type of roll paper that is not used for printing. It is thus necessary to load the roll paper of Type TA into a slot during execution of printing.
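In the third case above, every slot holds a type that is still needed, so the schedule vacates the slot whose roll type is required at the latest timing in the printing order (Type TD in the fourth slot214). A minimal sketch of that rule, with illustrative data, is shown here; the fourth case just introduced resumes after it.

```python
# Sketch of the third-case rule: when every slot holds a type that is still
# needed, replace the roll whose type is required latest in the printing order.
# Job and slot data are illustrative assumptions.

def slot_needed_latest(slots, printing_order):
    """slots: {slot_id: loaded_type}.
    printing_order: roll types in the order the jobs consume them."""
    def first_use(roll_type):
        return printing_order.index(roll_type)
    # The slot whose type is first used latest can be vacated first.
    return max(slots, key=lambda s: first_use(slots[s]))

slots = {1: "TA", 2: "TB", 3: "TC", 4: "TD"}
order = ["TA", "TB", "TC", "TD", "TA", "TB"]  # jobs J1..J6 by roll type
print(slot_needed_latest(slots, order))       # 4 (Type TD)
```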
Since the priority mode is the production time reduction mode, the printing order is determined so that the job J1 and the job J5 are continuously executed, the job J2 and the job J6 are continuously executed, and the print jobs (jobs J1 and J5) constituting the inexecutable job group are later in the order. Further, the setting of “replacing the roll paper at an early timing after the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary” has been made, and hence the printing order is determined so that the print job associated with the roll type having a shorter total print distance is earlier in the order. From the above, the printing order determined by the scheduling processing is as illustrated inFIG.22.

Concerning the loading of the roll paper of Type TA into a slot, the roll paper of Type TB is not used for printing after the end of the printing based on the job J6. Thus, a roll paper loading schedule for loading the roll paper of Type TA into the second slot212after the end of the printing based on the job J6 is created. According to the roll paper loading schedule, the states of the first to fourth slots211to214after the end of the printing based on the job J6 are, for example, the states illustrated inFIG.23. Note that, in practice, at the time when the loading of the roll paper of Type TA into the second slot212is completed, the remaining amount of the roll paper of Type TC loaded in the third slot213is less than 15,000 m.

5.3.5 Fifth Case

A fifth case is a case where the job execution scheduling is performed in step S154ofFIG.5. For the fifth case, the following situation is assumed. As illustrated inFIG.24, six print jobs (jobs J1 to J6) are registered in the job list. The states of the first to fourth slots211to214in the initial state are as illustrated inFIG.14. The priority mode has been set to the production time reduction mode. The setting of “replacing the roll paper at a later timing from the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary” has been made.

In this case, as in the fourth case, only a job group including the print jobs (jobs J1 and J5) associated with the roll type “Type TA” is the inexecutable job group. In the initial state, there is neither an empty slot nor a slot loaded with a type of roll paper that is not used for printing. It is thus necessary to load the roll paper of Type TA into a slot during execution of printing.

Since the priority mode is the production time reduction mode, the printing order is determined so that the job J1 and the job J5 are continuously executed, the job J2 and the job J6 are continuously executed, and the print jobs (jobs J1 and J5) constituting the inexecutable job group are later in the order. Further, the setting of “replacing the roll paper at a later timing from the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary” has been made, and hence the printing order is determined so that the print job associated with a roll type having a longer total print distance is earlier in the order. From the above, the printing order determined by the scheduling processing is as illustrated inFIG.25. Concerning the loading of the roll paper of Type TA into a slot, the roll paper of Type TD is not used for printing after the end of the printing based on the job J4.
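As an aside, the ordering rule shared by the fourth and fifth cases can be sketched compactly: jobs with the same roll type are grouped to run in succession, groups that require a roll change run last, and the remaining groups are sorted by total print distance, ascending under the “early replacement” setting and descending under the “late replacement” setting. Data shapes and distances below are illustrative assumptions; the fifth-case loading schedule continues right after the sketch.

```python
# Sketch of the production-time-reduction ordering with the early/late
# replacement settings. All job data are illustrative.
from collections import defaultdict

def order_job_groups(jobs, inexecutable_types, early_replacement):
    """jobs: list of (job_id, roll_type, print_distance_m)."""
    groups = defaultdict(list)
    for job in jobs:
        groups[job[1]].append(job)
    def total_distance(roll_type):
        return sum(d for _, _, d in groups[roll_type])
    executable = [t for t in groups if t not in inexecutable_types]
    # Early replacement: shorter total print distance earlier in the order;
    # late replacement: longer total print distance earlier in the order.
    executable.sort(key=total_distance, reverse=not early_replacement)
    ordered = executable + [t for t in inexecutable_types if t in groups]
    return [job for t in ordered for job in groups[t]]

jobs = [("J1", "TA", 1200), ("J2", "TB", 800), ("J3", "TC", 2000),
        ("J4", "TD", 600), ("J5", "TA", 1200), ("J6", "TB", 900)]
print([j[0] for j in order_job_groups(jobs, {"TA"}, early_replacement=True)])
# ['J4', 'J2', 'J6', 'J3', 'J1', 'J5']
```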
Thus, a roll paper loading schedule for loading the roll paper of Type TA into the fourth slot214after the end of the printing based on the job J4 is created. According to the roll paper loading schedule, the states of the first to fourth slots211to214after the end of printing based on the job J4 are, for example, the states illustrated inFIG.13. Note that, in practice, at the time when the loading of the roll paper of Type TA into the fourth slot214is completed, the remaining amount of the roll paper of Type TC loaded in the third slot213is less than 15,000 m. 5.3.6 Sixth Case A sixth case is a case where the job execution scheduling is performed in step S162ofFIG.5. For the sixth case, the following situation is assumed. As illustrated inFIG.26, six print jobs (jobs J1 to J6) are registered in the job list. The states of the first to fourth slots211to214in the initial state are as illustrated inFIG.27. The priority mode has been set to the roll specification order mode. The setting of “replacing the roll paper at an early timing after the start of the continuous printing based on the plurality of print jobs in a case where the roll paper replacement is necessary” has been made. The setting of “preferentially using roll paper with a small remaining amount” has been made. In this case, a job group including the print job (job J3) associated with the roll type “Type TC” and a job group including the print job (job J4) associated with the roll type “Type TD” are the inexecutable job groups. In the initial state, there is neither an empty slot nor a slot loaded with a type of roll paper that is not used for printing. It is thus necessary to load the roll paper of Type TC and the roll paper of Type TD into the slot during execution of printing. Since the priority mode is the roll specification order mode, the printing order determined by the scheduling processing is as illustrated inFIG.28. The setting of “preferentially using roll paper with a small remaining amount” has been made, so that the remaining amount of the roll paper of Type TA loaded in the first slot211becomes 0 during the execution of the job J1, and the remaining amount of the roll paper of Type TB loaded in the second slot212becomes 0 during the execution of the job J2. Thus, a roll paper loading schedule for loading the roll paper of Type TC into the first slot211and the roll paper of Type TD into the second slot212is created. According to the roll paper loading schedule, the states of the first to fourth slots211to214after the remaining amount of the roll paper loaded in the first slot211becomes 0 by the execution of the job J1 are, for example, the states illustrated inFIG.29, and the states of the first to fourth slots211to214after the remaining amount of the roll paper loaded in the second slot212becomes 0 by the execution of the job J2 are, for example, the states illustrated inFIG.30. Note that, in practice, the remaining amount of the roll paper of Type TA loaded in the third slot213is less than 15,000 m at the time when the loading of the roll paper of Type TC into the first slot211is completed, and the remaining amount of the roll paper of Type TB loaded in the fourth slot214is less than 15,000 m at the time when the loading of the roll paper of Type TD into the second slot212is completed. 6. About Control of Roll Shaft Incidentally, the roll paper is manually loaded into each slot of the autochanger20. 
In this regard, a configuration may be employed in which a roll shaft supporting the roll paper is controlled as follows so that the roll paper can be quickly loaded into the loading target slot according to the roll paper loading schedule.

FIG.31is a schematic partial cross-sectional view of one slot in the autochanger20. A roll shaft71, a roll shaft moving path72, and an arm73are provided in a housing70forming the slot. The roll shaft71rotatably supports the roll paper. The position of the roll shaft71can be moved in the roll shaft moving path72, as indicated by an arrow denoted by reference numeral74, by operating the arm73. When the roll paper is being supplied to the printer10, the roll shaft71is placed at a position denoted by reference numeral75(hereinafter referred to as a “normal position”). At the time of replacing the roll paper, the roll shaft71is placed at a position denoted by reference numeral76(hereinafter referred to as an “unmount position”). Note that the normal position corresponds to the first position, and the unmount position corresponds to the second position.

The autochanger20is provided with a roll shaft control unit that controls the position of the roll shaft71by moving the arm73. During the period in which printing is being executed on the basis of the print schedule determined as described above, at the timing when the loading target roll paper is to be loaded into the loading target slot (typically, at the possible loading start time described above), a predetermined instruction signal is provided from the print controller40to the roll shaft control unit in the autochanger20. On the basis of the instruction signal, the roll shaft control unit moves the roll shaft71of the loading target slot from the normal position to the unmount position by operating the arm73(roll shaft moving step). This enables the operator to quickly replace the roll paper.

7. Effects

According to the present embodiment, in the printing system including the autochanger20having the plurality of slots, in a case where it is necessary to load roll paper into a slot during execution of continuous printing, the printing order for the plurality of print jobs subject to continuous printing and the loading target slot (the slot into which the roll paper is to be loaded) are first determined. Then, a possible loading start time, which is the earliest time at which the roll paper can be loaded into the loading target slot, and a loading start deadline time, which is the latest time at which the loading of the roll paper into the loading target slot is to be started, are obtained. Thus, in a case where the operation of loading the roll paper is required, it is possible to present to the operator the time during which the operation is to be performed, and hence the operator can load the roll paper into the loading target slot quickly so that the printing operation is not stopped. As above, according to the present embodiment, in the printing system including the autochanger20, it is possible to prevent a decrease in printing productivity due to switching of the roll paper used for printing and loading of roll paper into a slot.

8. Others

Although the present invention has been described in detail above, the above description is illustrative in all aspects and is not restrictive.
It is understood that numerous other modifications and variations can be devised without departing from the scope of the present invention. For example, although an autochanger having four slots is exemplified in the above embodiment, the present invention can be applied so long as an autochanger having two or more slots is employed. Further, although three modes are prepared as settable priority modes in the above embodiment, not all three modes need be prepared, and modes other than these three may be prepared.

This application claims priority based on Japanese Patent Application No. 2022-049274 entitled “Printing Method, Printing System, And Print Control Program” filed on Mar. 25, 2022, the contents of which are herein incorporated by reference. | 59,551 |
11861246 | DETAILED DESCRIPTION

In general, according to one embodiment, an image forming device includes a storage unit (memory) and a control unit (controller). The storage unit stores, in advance, user information according to a specific device connected to an external network. If an execution instruction of a job is received from the specific device via the external network, the control unit receives it as an execution instruction of the job by the user indicated by the user information according to the specific device, and executes the job according to the execution instruction.

Hereinafter, an image forming device and an image forming method according to the embodiment will be described with reference to the drawings.FIG.1is a diagram illustrating a configuration example of an image forming system600of the embodiment. The image forming system600includes one or a plurality of image forming devices100, a user terminal300, and a server device400. The image forming device100is a device that forms an image on a sheet. The image forming device100is, for example, a multifunction device. The image forming device100is communicably connected to a network500, for example, via a network such as a Local Area Network (LAN) and a gateway (GW)200. For example, the network500may be configured by using the Internet or a mobile communication network.

The user terminal300is used to instruct the server device400to transmit a remote job to be executed by the image forming device100. The server device400instructs the image forming device100to execute a remote job based on an instruction received from a device such as the user terminal300. The image forming device100and the server device400are communicably connected to each other via the network500. The user terminal300and the server device400are also communicably connected to each other via the network500. Hereinafter, the devices will be described in detail.

FIG.2is a hardware block diagram of the image forming device100according to the embodiment. First, the image forming device100will be described in detail with reference toFIGS.1and2. The image forming device100includes an image reading unit10, a display110, a control panel120, an image forming unit130, a sheet containing unit140, a communication unit150, a storage unit160, and a control unit170.

The image forming device100forms an image on a sheet by using a developer, such as toner or ink. If the developer is toner, the developer is heated and fixed to the sheet. If the developer is ink, the developer is dropped onto the sheet to form an image on the sheet. The sheet is, for example, paper or label paper. The sheet may be any material so long as the image forming device100can form an image on the surface thereof.

The image reading unit10reads image information from an object to be read based on brightness and darkness of light. The image reading unit10records the read image information. The recorded image information may be transmitted to another information processing device (for example, the server device400) via the network500. The recorded image information may also be formed as an image on a sheet by the image forming unit130.

The display110is an image display device, such as a liquid crystal display or an organic Electro Luminescence (EL) display. The display110displays various kinds of information relating to the image forming device100. The control panel120includes an operation device, such as a plurality of buttons. The control panel120receives an operation of the user.
For example, the control panel120may receive an input of a number or a character. For example, the control panel120may receive an operation of selecting one or a plurality of jobs from candidates displayed on the display110. The control panel120outputs a signal according to the operation performed by the user to the control unit170. The display110and the control panel120may be configured as an integrated touch panel. The image forming unit130forms an image on the sheet based on the image information generated by the image reading unit10or the image information received via the network500. The image forming unit130includes, for example, a photoconductor drum, an exposure device, a developing device, a transfer device, and a fixing device. A conveyance path of a sheet is formed in the image forming unit130. The sheet to be processed is conveyed by a roller provided in the conveyance path. An image is formed on a sheet in the course of the conveyance. The image forming unit130, for example, forms an image by processes, as described below. The exposure device of the image forming unit130forms an electrostatic latent image on the photoconductor drum based on the image information. The developing device of the image forming unit130forms a visible image by attaching the developer on the electrostatic latent image. The transfer device of the image forming unit130transfers the visible image to the sheet. The fixing device of the image forming unit130fixes the visible image on the sheet by heating and pressurizing the sheet. In addition, the sheet on which the image is formed may be a sheet contained in the sheet containing unit140and conveyed and may be a sheet that is manually fed. The sheet containing unit140contains a sheet used for forming an image by the image forming unit130and conveys the sheet to the image forming unit130by a conveyance roller. The communication unit150is configured by using a communication interface. The communication unit150communicates with the other devices (for example, the server device400) via the network500. The storage unit160is configured by using a storage device, such as a magnetic hard disk device or a semiconductor storage device. The storage unit160stores data required when the image forming device100operates. The storage unit160functions, for example, as a user information storage unit161and a log storage unit162. The user information storage unit161stores information relating to a valid user of the image forming device100. The valid user of the image forming device100refers to a user who is authorized to instruct the image forming device100to execute a job. The user information storage unit161may store identification (ID) information of each valid user (hereinafter, referred to as “user ID”) as the user information. The user information storage unit161may store a user ID and authentication information (for example, a password) in association with each other. The user information storage unit161stores user information (hereinafter, referred to as “remote user information”) according to remote job information described below. The user information includes at least a user ID. As described above, the user information may further include the authentication information in addition to the user ID. The user information storage unit161may further store information indicating the content of the authority in association with the user ID of the valid user. 
The information indicating the content of the authority refers to, for example, the type of a job that the user can instruct, the content of the job, or the content of an option. The log storage unit162stores information indicating the history of the jobs executed by the image forming device100. For example, the log storage unit162may store the content of an executed job, the date and time of the execution, and the user ID of the user who instructed the execution, in association with each other.

The control unit170is configured by using a processor such as a Central Processing Unit (CPU) and a memory. The control unit170reads and executes a program stored in the storage unit160in advance. The control unit170controls an operation of each device included in the image forming device100. The control unit170functions, for example, as a communication control unit171, a job execution instruction unit172, and a job execution control unit173.

The communication control unit171communicates with the server device400. According to the present embodiment, the server device400illustrated inFIG.1is registered in advance in the image forming device100as a valid device. For example, identification information indicating the valid server device400may be registered in the storage unit160in advance. If data is received from the valid server device400, the communication control unit171transmits the received data to the job execution instruction unit172. Meanwhile, if data is received from a server device other than the valid server device400, the communication control unit171executes an error process. For example, the communication control unit171may discard the received data without transmitting it to the job execution instruction unit172.

Examples of the data received by the communication control unit171from the valid server device400include remote job information. The remote job information is information for the server device400to instruct the image forming device100to execute a job. The remote job information is job information that reaches the image forming device100via an external network (for example, the network500) different from a local network (internal network). The job information is information for instructing the image forming device100to execute a job. In the example ofFIG.1, for the image forming device100, the network inside the GW200corresponds to the internal network, and the network outside the GW200corresponds to the external network. In other words, within the network to which the image forming device100is connected, the network of the area whose security is controlled by an administrator is the internal network. Meanwhile, the network whose security is not controlled by the administrator is the external network.

The remote job information may include information indicating the job instructed by the user terminal300(hereinafter, referred to as a “remote job”) and information indicating the own device (the server device400). As the information indicating the own device, information relating to the hardware of the server device400may be used, or information indicating an application that operates on the server device400may be used. Specific examples of such an application include a cloud application. In this case, for example, the user terminal300may access a cloud application, and the cloud application may transmit the remote job information to the image forming device100in response to the operation of the user.
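A minimal sketch of the user information storage unit161described above: user IDs associated with authentication information and authority content, together with remote user information registered in advance. All class and field names are illustrative assumptions, not the patent's data layout.

```python
# Sketch of the user information storage unit161. Field choices are assumed.
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    password: str                                   # authentication information
    authorities: set = field(default_factory=set)   # e.g. {"print", "scan"}

class UserInformationStorage:
    def __init__(self):
        self._users = {}
        self.remote_user_id = None   # remote user information

    def register(self, record, is_remote_user=False):
        self._users[record.user_id] = record
        if is_remote_user:
            self.remote_user_id = record.user_id

    def is_valid_user(self, user_id):
        return user_id in self._users

    def has_authority(self, user_id, job_type):
        rec = self._users.get(user_id)
        return rec is not None and job_type in rec.authorities

storage = UserInformationStorage()
storage.register(UserRecord("remote-user-01", "secret", {"print"}),
                 is_remote_user=True)
print(storage.has_authority(storage.remote_user_id, "print"))  # True
```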
If the job information is received, the job execution instruction unit172instructs the job execution control unit173to execute the job based on the received job information. If the received job information is remote job information, the job execution instruction unit172reads the user information (remote user information) according to the remote job information from the user information storage unit161. The job execution instruction unit172then instructs the job execution control unit173to execute the job according to the remote job information as an instruction of the user indicated by the remote user information (hereinafter, referred to as a “remote user”).

The job execution control unit173executes the job in response to the instruction received from the job execution instruction unit172. If the instruction of the execution of the remote job is received as the instruction of the remote user, the job execution control unit173inquires of the user information storage unit161as to whether the remote user is a valid user and what authority the remote user has. According to the present embodiment, the information of the remote user is registered in the user information storage unit161as the information of a valid user. Therefore, the job execution control unit173determines that the remote job is an instruction from a valid user and executes the remote job. If the execution of the job is completed, the job execution control unit173transmits information indicating the completion (hereinafter, referred to as “execution information”) to the communication control unit171via the job execution instruction unit172. The job execution control unit173also records log information indicating the history of the execution of the job in the log storage unit162.

The user terminal300is an information device that is operated by the user. The user terminal300is configured, for example, by using a device such as a smartphone, a mobile phone, a wearable device, a portable game machine, a stationary game machine, a television receiver, a smart speaker, a home appliance, or a robot. The user terminal300accesses the server device400via the network500. The user terminal300includes a user interface. The user terminal300generates information of an instruction for the image forming device100to execute a job (hereinafter, referred to as “instruction information”) in response to the operation of the user with respect to the user interface. The user terminal300transmits the instruction information to the server device400. The user terminal300may instead be configured to simply transmit information indicating the operation of the user (including information of the voice of the user and utterance (speech) content) to the server device400. In this case, the server device400may generate the instruction information based on the information indicating the operation of the user.

The server device400is configured by using one or a plurality of information processing devices. The server device400generates the remote job information based on the information of the operation received from the user terminal300. The server device400transmits the remote job information to the image forming device100via the network500.

FIG.3is a sequence chart illustrating a specific example of an operation of the image forming system600. First, the user operates the user terminal300and accesses the application that operates on the server device400.
The user terminal300receives the operation of the user on the user interface (ACT101). The user terminal300generates the instruction information in response to the operation of the user. The user terminal300transmits the instruction information to the server device400(ACT102). The server device400generates the remote job information based on the information of the operation received from the user terminal300(ACT103). The server device400transmits the remote job information to the image forming device100via the network500(ACT104).

The communication control unit171of the image forming device100receives the remote job information. The communication control unit171authenticates the server device400that is the transmission source of the remote job information based on the information included in the remote job information (ACT105). If the server device400is authenticated, the communication control unit171transmits the received remote job information to the job execution instruction unit172. If the server device400is not authenticated, the communication control unit171discards the received remote job information without transmitting it to the job execution instruction unit172.

The job execution instruction unit172determines whether the authentication setting is valid (ACT106). If the authentication setting is not valid (ACT106-NO), the job execution instruction unit172instructs the job execution control unit173to execute the remote job (ACT107). In this case, the job execution control unit173executes the instructed remote job (ACT112). Meanwhile, if the authentication setting is valid (ACT106-YES), the job execution instruction unit172reads the remote user information from the user information storage unit161(ACT108). The job execution instruction unit172then instructs the job execution control unit173to execute the remote job as an instruction of job execution by the user indicated by the read remote user information (ACT109).

The job execution control unit173determines whether the authority to execute the instructed remote job is assigned to the remote user (ACT110). If the authority is not assigned (ACT110-NO), the job execution control unit173transmits error information to the server device400(ACT111). Meanwhile, if the authority is assigned (ACT110-YES), the job execution control unit173executes the remote job (ACT112). If the execution of the remote job is completed, the job execution control unit173transmits the execution information to the server device400(ACT113). Also, the job execution control unit173records the log information relating to the executed remote job in the log storage unit162(ACT114).

The image forming system600configured in this manner can improve convenience while maintaining the security of the image forming device100provided by the introduction of user authentication. Details thereof are as follows. In the image forming system600, if the image forming device100receives an instruction to execute a remote job from the server device400via the external network (the network500), the execution of the job is treated as being instructed by the user indicated by the remote user information. That user is registered in the user information storage unit161as a valid user in advance. Therefore, in the image forming device100, the remote job is executed as an instruction from a valid user.
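The device-side half of this sequence (ACT105 to ACT114) can be summarized in a short sketch: authenticate the transmission-source server, branch on the authentication setting, bind the job to the registered remote user, check authority, execute, and then report and log. This is an illustrative skeleton under assumed data shapes, not the patent's implementation.

```python
VALID_SERVER_IDS = {"server-400"}  # valid device registered in advance

class UserInfoStorageStub:
    """Stand-in for the user information storage unit161 (assumed shape)."""
    remote_user_id = "remote-user-01"  # remote user information
    def has_authority(self, user_id, job_type):
        return user_id == self.remote_user_id and job_type == "print"

def handle_remote_job_information(info, storage, auth_setting_valid, log):
    # ACT105: data from a server other than the valid server is discarded.
    if info["server_id"] not in VALID_SERVER_IDS:
        return "discarded"
    # ACT106/ACT107: if the authentication setting is not valid, execute as is.
    if not auth_setting_valid:
        return execute(info["job"], None, log)
    # ACT108/ACT109: read the remote user information and instruct execution
    # as a job execution instruction by that user.
    user_id = storage.remote_user_id
    # ACT110/ACT111: check that the remote user holds the execution authority.
    if not storage.has_authority(user_id, info["job"]["type"]):
        return "error information transmitted (ACT111)"
    return execute(info["job"], user_id, log)

def execute(job, user_id, log):
    # ACT112-ACT114: execute the remote job and record the log information.
    log.append({"job": job["type"], "user": user_id})
    return "execution information transmitted (ACT113)"

log = []
info = {"server_id": "server-400", "job": {"type": "print"}}
print(handle_remote_job_information(info, UserInfoStorageStub(), True, log))
print(log)  # [{'job': 'print', 'user': 'remote-user-01'}]
```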
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the present disclosure. Indeed, the embodiments described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions, and changes in the form of the embodiments described herein may be made without departing from the spirit of the present disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the present disclosure. | 18,056 |
11861247 | DESCRIPTION OF EMBODIMENTS

Terms used in the following embodiments of this application are merely intended to describe specific embodiments, but are not intended to limit this application. The terms “one”, “a”, and “this” of singular forms used in this specification and the appended claims of this application are also intended to include plural forms, unless the context clearly specifies otherwise. It should also be understood that the term “and/or” used in this application indicates and includes any or all possible combinations of one or more listed items.

The following describes an electronic device, a user interface used for such an electronic device, and embodiments for using such an electronic device. In some embodiments, the electronic device may be a portable electronic device that further includes other functions such as a personal digital assistant function and/or a music player function, for example, a mobile phone, a tablet computer, or a wearable electronic device (for example, a smartwatch) having a wireless communication function. An example embodiment of the portable electronic device includes but is not limited to a portable electronic device using iOS®, Android®, Microsoft®, or another operating system. The portable electronic device may alternatively be another portable electronic device, for example, a laptop computer having a touch-sensitive surface or a touch panel. It should be further understood that in some other embodiments, the electronic device may not be a portable electronic device, but a desktop computer having a touch-sensitive surface or a touch panel.

The term “user interface (UI)” in the specification, claims, and accompanying drawings of this application is a medium interface for interaction and information exchange between an application or an operating system and a user, and implements conversion between an internal form of information and a form that can be accepted by the user. A user interface of an application is source code written in a specific computer language such as Java or the extensible markup language (XML). The interface source code is parsed and rendered on the terminal device, and is finally presented as content that can be identified by a user, for example, a control such as a picture, a text, or a button. A control, also referred to as a widget, is a basic element of the user interface. Typical controls include a toolbar, a menu bar, a text box, a button, a scrollbar, a picture, and a text. The attributes and content of a control in an interface are defined by using a tag or a node. For example, XML defines the controls included in an interface by using nodes such as <Textview>, <ImgView>, or <VideoView>. A node corresponds to a control or an attribute in an interface. After being parsed and rendered, the node is presented as content visible to a user. In addition, interfaces of many applications, such as hybrid applications, usually further include a web page. A web page, also referred to as a page, may be understood as a special control embedded in an application interface. A web page is source code written in a specific computer language, for example, the hypertext markup language (HTML), cascading style sheets (CSS), or JavaScript (JS). A browser or a web page display component whose function is similar to that of a browser may load and display the web page source code as content that can be identified by the user.
Specific content included in the web page is also defined by using a label or a node in the web page source code. For example, the HTML defines an element and an attribute of the web page by using <p>, <img>, <video>, or <canvas>. The user interface is usually in a representation form of a graphical user interface (GUI), which is a user interface that is related to a computer operation and that is displayed in a graphical manner. The user interface may be an interface element such as an icon, a window, or a control displayed on a display screen of the electronic device, and the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, or a widget. The following embodiments of this application provide a data sharing method, a graphical user interface, an electronic device, and a system, so that a printing process of printing an object such as a picture, a document, or a web page by using the electronic device, a projection process of performing projection by using the electronic device, a screen mirroring process of performing screen mirroring by using the electronic device, and the like are more intuitive, simple, and effective for a user, and use efficiency of the electronic device is improved. In the following embodiments of this application, if “Moment share” of an electronic device such as a smartphone is enabled, when the electronic device identifies a scenario in which a user shares an object such as a picture, a document, or a web page, the electronic device may automatically discover another device such as a printer, a projector, a display, a mobile phone, or a tablet computer. If the user expects to print data, the user may select the printer discovered by the electronic device for printing. Therefore, an operation is simple and effective. Similarly, a projection process of performing projection by using the electronic device, a screen mirroring process of performing screen mirroring by using the electronic device, and the like are also more intuitive, simple, and effective for the user. In the following embodiments of this application, “Moment share” may be a service or a function provided by the electronic device, and may be used to support the electronic device in transmitting data to another device. In some embodiments, “Moment share” may be used to support the electronic device in transmitting data to a nearby device by using one or more technologies such as Bluetooth, wireless fidelity direct (Wi-Fi direct), and a Wi-Fi software access point (SoftAP). In some other embodiments, “Moment share” may be used to support the electronic device in transmitting, through a local area network (LAN), data to a device (for example, another electronic device) that is located in a same local area network as the electronic device. In some embodiments of this application, a device that is located in a same local area network as the electronic device may alternatively be a device near the electronic device. In some embodiments, “Moment share” may be used to support the electronic device in transmitting, by using a cellular mobile communications technology such as 3G, LTE, or 5G or a wide area network (WAN) technology, data to a cloud device that can be accessed by the electronic device. It may be understood that the nearby device and the cloud device are merely relative concepts. 
The cloud device is a device discovered by the electronic device by using a cellular mobile communications technology or a wide area network communications technology. The nearby device is a device discovered by the electronic device by using one or more technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, and a Wi-Fi LAN. In this application, enabling “Moment share” may include enabling one or more of a Bluetooth module, a WLAN module, and a mobile communications module of the electronic device. In some embodiments, after enabling the foregoing function, the electronic device may discover a device near the electronic device by using one or more technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, and a Wi-Fi LAN. In some other embodiments, after enabling the foregoing function, the electronic device may discover a cloud device by using a cellular mobile communications network technology or a wide area network technology. For an embodiment of discovering the cloud device, in some embodiments, after the electronic device is connected to a server in a network, the server may provide the electronic device with a device list of another device connected to the server, so that the electronic device can discover the another device, and the another device may be the cloud device discovered by the electronic device. It may be understood that “” or “Moment share” is merely a word used in the embodiments, a meaning represented by the word has been recorded in the embodiments, and a name of the word does not constitute any limitation on the embodiments. In addition, in some other embodiments of this application, “Moment share” may also be referred to as another noun such as “Short-distance share”. Similarly, “Moment share” mentioned in the embodiments of this application may also be referred to as another name such as “Shoot share” in some other embodiments. An example electronic device100provided in the following embodiments of this application is first described. FIG.1Ais a schematic diagram of a structure of an electronic device100. The electronic device100may include a processor110, an external memory interface120, an internal memory121, a universal serial bus (USB) interface130, a charging management module140, a power management module141, a battery142, an antenna1, an antenna2, a mobile communications module150, a wireless communications module160, an audio module170, a speaker170A, a receiver170B, a microphone170C, a headset jack170D, a sensor module180, a button190, a motor191, an indicator192, a camera193, a display screen194, a subscriber identification module (SIM) card interface195, and the like. The sensor module180may include a pressure sensor180A, a gyro sensor180B, a barometric pressure sensor180C, a magnetic sensor180D, an acceleration sensor180E, a distance sensor180F, an optical proximity sensor180G, a fingerprint sensor180H, a temperature sensor180J, a touch sensor180K, an ambient light sensor180L, a bone conduction sensor180M, and the like. It may be understood that the structure shown in an embodiment of this application does not constitute a specific limitation on the electronic device100. In some other embodiments of this application, the electronic device100may include more or fewer components than those shown in the figure, or combine some components, or split some components, or have different component arrangements. 
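Returning to the device discovery model of “Moment share” described earlier: nearby devices are found over short-range transports (Bluetooth, Wi-Fi direct, Wi-Fi SoftAP, or a Wi-Fi LAN), while cloud devices come from a device list provided by a server reached over a cellular or wide area network. The sketch below only illustrates how the two lists could be merged; the transport functions are hypothetical placeholders, not real platform APIs.

```python
# Sketch of merging nearby and cloud device discovery. All functions and
# device records are illustrative placeholders.

def scan_nearby():
    # Placeholder for Bluetooth / Wi-Fi direct / SoftAP / Wi-Fi LAN scans.
    return [{"name": "printer-1", "kind": "nearby"}]

def fetch_cloud_device_list():
    # Placeholder for the device list provided by the connected server.
    return [{"name": "office-printer", "kind": "cloud"}]

def moment_share_discover(short_range_enabled, network_connected):
    devices = []
    if short_range_enabled:   # Bluetooth / WLAN modules enabled
        devices += scan_nearby()
    if network_connected:     # cellular or wide area network connectivity
        devices += fetch_cloud_device_list()
    return devices

for d in moment_share_discover(True, True):
    print(d["name"], "-", d["kind"])
```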
The components shown in the figure may be implemented by hardware, software, or a combination of software and hardware. The processor110may include one or more processing units. For example, the processor110may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural network processing unit (NPU). Different processing units may be independent devices, or may be integrated into one or more processors. In some embodiments, the electronic device100may alternatively include one or more processors110. The controller may be a nerve center and a command center of the electronic device100. The controller may generate an operation control signal based on instruction operation code and a time sequence signal, to complete control of instruction fetching and instruction execution. The memory may be further disposed in the processor110, and is configured to store an instruction and data. In some embodiments, the memory in the processor110is a cache memory. The memory may store an instruction or data that is just used or cyclically used by the processor110. If the processor110needs to use the instruction or the data again, the processor110may directly invoke the instruction or the data from the memory, to avoid repeated access and reduce a waiting time of the processor110, thereby improving efficiency of the electronic device100. In some embodiments, the processor110may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identification module (SIM) interface, a universal serial bus (USB) interface, and/or the like. The I2C interface is a two-way synchronization serial bus, and includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor110may include a plurality of groups of I2C buses. The processor110may be separately coupled to the touch sensor180K, a charger, a flash, the camera193, and the like by using different I2C bus interfaces. For example, the processor110may be coupled to the touch sensor180K by using the I2C interface, so that the processor110communicates with the touch sensor180K by using the I2C bus interface to implement a touch function of the electronic device100. The I2S interface may be configured to perform audio communication. In some embodiments, the processor110may include a plurality of groups of I2S buses. The processor110may be coupled to the audio module170by using the I2S bus, to implement communication between the processor110and the audio module170. In some embodiments, the audio module170may transmit an audio signal to the wireless communications module160by using the I2S interface, to implement a function of answering a call by using a Bluetooth headset. The PCM interface may also be configured to: perform audio communication, and sample, quantize, and code an analog signal. In some embodiments, the audio module170may be coupled to the wireless communications module160by using a PCM bus interface. 
In some embodiments, the audio module170may also transmit an audio signal to the wireless communications module160by using the PCM interface, to implement a function of answering a call by using a Bluetooth headset. Both the I2S interface and the PCM interface may be configured to perform audio communication. The UART interface is a universal serial data bus, and is configured to perform asynchronous communication. The bus may be a two-way communications bus, and converts to-be-transmitted data between serial communication and parallel communication. In some embodiments, the UART interface is usually configured to connect the processor110to the wireless communications module160. For example, the processor110communicates with a Bluetooth module in the wireless communications module160by using the UART interface, to implement a Bluetooth function. In some embodiments, the audio module170may transmit an audio signal to the wireless communications module160by using the UART interface, to implement a function of playing music by using a Bluetooth headset. The MIPI interface may be configured to connect the processor110to a peripheral component such as the display screen194or the camera193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor110communicates with the camera193by using the CSI interface, to implement a photographing function of the electronic device100. The processor110communicates with the display screen194by using the DSI interface, to implement a display function of the electronic device100. The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal, or may be configured as a data signal. In some embodiments, the GPIO interface may be configured to connect the processor110to the camera193, the display screen194, the wireless communications module160, the audio module170, the sensor module180, and the like. The GPIO interface may alternatively be configured as the I2C interface, the I2S interface, the UART interface, the MIPI interface, or the like. The USB interface130is an interface that conforms to a USB standard specification, and may be specifically a mini USB interface, a micro USB interface, a USB Type-C interface, or the like. The USB interface130may be configured to connect to a charger to charge the electronic device100, or may be configured to perform data transmission between the electronic device100and a peripheral device, or may be configured to connect to a headset to play audio by using the headset. The interface may alternatively be configured to connect to another electronic device, such as an AR device. It may be understood that an interface connection relationship between the modules that is shown in an embodiment of the present disclosure is merely an example for description, and does not constitute a limitation on the structure of the electronic device100. In some other embodiments, the electronic device100may alternatively use an interface connection manner different from that in the foregoing embodiment, or a combination of a plurality of interface connection manners. The charging management module140is configured to receive a charging input from the charger. The charger may be a wireless charger or a wired charger. In some embodiments of wired charging, the charging management module140may receive a charging input from the wired charger by using the USB interface130. 
In some embodiments of wireless charging, the charging management module140may receive a wireless charging input by using a wireless charging coil of the electronic device100. The charging management module140supplies power to the electronic device by using the power management module141while charging the battery142. The power management module141is configured to connect the battery142, the charging management module140, and the processor110. The power management module141receives an input from the battery142and/or the charging management module140, and supplies power to the processor110, the internal memory121, an external memory, the display screen194, the camera193, the wireless communications module160, and the like. The power management module141may be further configured to monitor parameters such as a battery capacity, a battery cycle count, and a battery health status (electric leakage or impedance). In some other embodiments, the power management module141may alternatively be disposed in the processor110. In some other embodiments, the power management module141and the charging management module140may alternatively be disposed in a same component. A wireless communication function of the electronic device100may be implemented by using the antenna1, the antenna2, the mobile communications module150, the wireless communications module160, a modem processor, a baseband processor, and the like. The antenna1and the antenna2are configured to: transmit and receive an electromagnetic wave signal. Each antenna in the electronic device100may be configured to cover one or more communications frequency bands. Different antennas may be further multiplexed, to improve antenna utilization. For example, the antenna1may be multiplexed as a diversity antenna in a wireless local area network. In some other embodiments, the antenna may be used in combination with a tuning switch. The mobile communications module150may provide a wireless communication solution that includes 2G/3G/4G/5G or the like and that is applied to the electronic device100. The mobile communications module150may include at least one filter, a switch, a power amplifier, a low noise amplifier (low noise amplifier, LNA), and the like. The mobile communications module150may receive an electromagnetic wave by using the antenna1, perform processing such as filtering or amplification on the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communications module150may further amplify a signal modulated by the modem processor, and convert the signal into an electromagnetic wave for radiation by using the antenna1. In some embodiments, at least some function modules in the mobile communications module150may be disposed in the processor110. In some embodiments, at least some function modules in the mobile communications module150and at least some modules in the processor110may be disposed in a same component. The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium or high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. Then, the demodulator transfers the low-frequency baseband signal obtained through demodulation to the baseband processor for processing. The low-frequency baseband signal is processed by the baseband processor, and is then transferred to an application processor. 
The application processor outputs a sound signal by using an audio device (which is not limited to the speaker170A, the receiver170B, or the like), or displays an image or a video by using the display screen194. In some embodiments, the modem processor may be an independent component. In some other embodiments, the modem processor may be independent of the processor110, and is disposed in a same component as the mobile communications module150or another function module. The wireless communications module160may provide a wireless communication solution that includes a wireless local area network (WLAN) (for example, a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), an infrared (IR) technology, or the like and that is applied to the electronic device100. The wireless communications module160may be one or more components integrating at least one communications processor module. The wireless communications module160receives an electromagnetic wave by using the antenna2, performs frequency modulation and filtering processing on an electromagnetic wave signal, and sends a processed signal to the processor110. The wireless communications module160may further receive a to-be-sent signal from the processor110, perform frequency modulation and amplification on the signal, and convert the signal into an electromagnetic wave for radiation by using the antenna2. For example, the wireless communications module160may include a Bluetooth module, a Wi-Fi module, and the like. In some embodiments, the antenna1and the mobile communications module150in the electronic device100are coupled, and the antenna2and the wireless communications module160in the electronic device100are coupled, so that the electronic device100can communicate with a network and another device by using a wireless communications technology. The wireless communications technology may include a global system for mobile communications (GSM), a general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, a GNSS, a WLAN, NFC, FM, an IR technology, and/or the like. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS). In some embodiments, a Bluetooth (BT) module and a WLAN module included in the wireless communications module160may transmit a signal to detect or scan a device near the electronic device100, so that the electronic device100can discover a nearby device by using a wireless communications technology such as Bluetooth or a WLAN, establish a wireless communication connection to the nearby device, and share data with the nearby device by using the connection. The Bluetooth (BT) module may provide a Bluetooth communication solution including one or more of classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE). The WLAN module may provide a WLAN communication solution including one or more of Wi-Fi direct, a Wi-Fi LAN, or Wi-Fi SoftAP. 
In some embodiments, the wireless communication solution provided by the mobile communications module150may enable the electronic device to communicate with a device (for example, a server) in a network, and the WLAN wireless communication solution provided by the wireless communications module160may also enable the electronic device to communicate with a device (for example, a server) in a network, and to communicate with a cloud device by using the device (for example, the server) in the network. In this way, the electronic device can discover the cloud device and transmit data to the cloud device. The electronic device100may implement a display function by using the GPU, the display screen194, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen194and the application processor. The GPU is configured to perform mathematical and geometric calculation, and is configured to render an image. The processor110may include one or more GPUs, which execute an instruction to generate or change display information. The display screen194is configured to display an image, a video, and the like. The display screen194includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini LED, a micro LED, a micro OLED, a quantum dot light emitting diode (QLED), or the like. In some embodiments, the electronic device100may include one or N display screens194, where N is a positive integer greater than 1. The electronic device100may implement a photographing function by using the ISP, the camera193, the video codec, the GPU, the display screen194, the application processor, and the like. The ISP is configured to process data fed back by the camera193. For example, during photographing, a shutter is pressed, a ray of light is transmitted to a light-sensitive element of a camera through a lens, and an optical signal is converted into an electrical signal. The light-sensitive element of the camera transmits the electrical signal to the ISP for processing, and the ISP converts the electrical signal into a visible image. The ISP may further perform algorithm optimization on noise, brightness, and complexion of the image. The ISP may further optimize parameters such as exposure and a color temperature of a photographing scenario. In some embodiments, the ISP may be disposed in the camera193. The camera193is configured to capture a static image or a video. An optical image of an object is generated by using the lens, and is projected to a light-sensitive element. The light-sensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light-sensitive element converts an optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert the electrical signal into a digital image signal. The ISP outputs the digital image signal to a DSP for processing. The DSP converts the digital image signal into an image signal of a standard format such as RGB or YUV. In some embodiments, the electronic device100may include one or N cameras193, where N is a positive integer greater than 1.
The digital signal processor is configured to process a digital signal, and may process another digital signal in addition to the digital image signal. For example, when the electronic device100selects a frequency, the digital signal processor is configured to perform Fourier transform on frequency energy, or the like. The video codec is configured to compress or decompress a digital video. The electronic device100may support one or more video codecs. In this way, the electronic device100can play or record videos in a plurality of coding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4. The NPU is a neural-network (NN) computing processor. By referring to a structure of a biological neural network, for example, a transfer mode between human brain neurons, the NPU quickly processes input information, and may further continuously perform self-learning. Applications such as intelligent cognition of the electronic device100, for example, image recognition, facial recognition, speech recognition, and text understanding, may be implemented by using the NPU. The external memory interface120may be configured to connect to an external memory card, for example, a micro SD card, to extend a storage capability of the electronic device100. The external memory card communicates with the processor110by using the external memory interface120, to implement a data storage function. For example, data such as music, a photo, and a video is stored in the external memory card. The internal memory121may be configured to store one or more computer programs, where the one or more computer programs include an instruction. The processor110may run the instruction stored in the internal memory121, so that the electronic device100performs the data sharing method provided in some embodiments of this application, various function applications, data processing, and the like. The internal memory121may include a program storage area and a data storage area. The program storage area may store an operating system. The program storage area may further store one or more applications (for example, Gallery and Contacts), and the like. The data storage area may store data (for example, Photos and Contacts) created during use of the electronic device100. In addition, the internal memory121may include a high-speed random access memory, and may further include a nonvolatile memory, for example, at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The electronic device100can implement an audio function such as music playback or recording by using the audio module170, the speaker170A, the receiver170B, the microphone170C, the headset jack170D, the application processor, and the like. The audio module170is configured to convert digital audio information into an analog audio signal for output, and is also configured to convert an analog audio input into a digital audio signal. The audio module170may be further configured to: code and decode an audio signal. In some embodiments, the audio module170may be disposed in the processor110, or some function modules in the audio module170are disposed in the processor110. The speaker170A, also referred to as a “horn”, is configured to convert an audio electrical signal into a sound signal. The electronic device100may listen to music or answer a hands-free call by using the speaker170A.
The receiver170B, also referred to as an “earpiece”, is configured to convert an audio electrical signal into a sound signal. When the electronic device100answers a call or receives voice information, the receiver170B may be placed close to a human ear to listen to a voice. The microphone170C, also referred to as a “mike” or a “mic”, is configured to convert a sound signal into an electrical signal. When making a call or sending voice information, a user may move the mouth close to the microphone170C and make a sound, to input a sound signal to the microphone170C. At least one microphone170C may be disposed in the electronic device100. In some other embodiments, two microphones170C may be disposed in the electronic device100, to collect a sound signal and implement a noise reduction function. In some other embodiments, three, four, or more microphones170C may alternatively be disposed in the electronic device100, to collect a sound signal, reduce noise, further identify a sound source, implement a directional recording function, and the like. The headset jack170D is configured to connect to a wired headset. The headset jack170D may be the USB interface130, or may be a 3.5 mm open mobile electronic device platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface. The pressure sensor180A is configured to sense a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor180A may be disposed on the display screen194. There are many types of pressure sensors180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. The capacitive pressure sensor may include at least two parallel plates made of conductive materials. When a force is applied to the pressure sensor180A, capacitance between electrodes changes. The electronic device100determines pressure intensity based on the capacitance change. When a touch operation is performed on the display screen194, the electronic device100detects intensity of the touch operation by using the pressure sensor180A. The electronic device100may also calculate a touch location based on a detection signal of the pressure sensor180A. In some embodiments, touch operations that are performed at a same touch location but have different touch operation intensity may correspond to different operation instructions. For example, when a touch operation whose touch operation intensity is less than a first pressure threshold is performed on an SMS message application icon, an instruction for viewing an SMS message is executed. When a touch operation whose touch operation intensity is greater than or equal to the first pressure threshold is performed on the SMS message application icon, an instruction for creating a new SMS message is executed. The gyro sensor180B may be configured to determine a motion posture of the electronic device100. In some embodiments, an angular velocity of the electronic device100around three axes (namely, x, y, and z axes) may be determined by using the gyro sensor180B. The gyro sensor180B may be configured to perform image stabilization during photographing.
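As an illustration of the threshold-based dispatch described above, the following minimal Java sketch maps touch intensity on an SMS message application icon to an operation instruction. The threshold value and instruction names are hypothetical placeholders, not values defined by the embodiments.

    // Illustrative sketch only: light press views messages, firm press
    // composes a new one. The threshold is an assumed normalized value.
    public class PressureDispatchSketch {
        static final float FIRST_PRESSURE_THRESHOLD = 0.5f; // assumed

        // Returns the instruction triggered by a touch on the SMS app icon.
        static String dispatchSmsIconTouch(float touchIntensity) {
            if (touchIntensity < FIRST_PRESSURE_THRESHOLD) {
                return "VIEW_SMS_MESSAGE";      // light press
            }
            return "CREATE_NEW_SMS_MESSAGE";    // firm press
        }

        public static void main(String[] args) {
            System.out.println(dispatchSmsIconTouch(0.2f)); // VIEW_SMS_MESSAGE
            System.out.println(dispatchSmsIconTouch(0.8f)); // CREATE_NEW_SMS_MESSAGE
        }
    }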
For example, when the shutter is pressed, the gyro sensor180B detects an angle at which the electronic device100jitters, obtains, through calculation based on the angle, a distance for which a lens module needs to compensate, and allows the lens to cancel the jitter of the electronic device100through reverse motion, to implement the image stabilization. The gyro sensor180B may also be used in a navigation scenario and a somatic game scenario. The barometric pressure sensor180C is configured to measure atmospheric pressure. In some embodiments, the electronic device100calculates an altitude based on a value of the atmospheric pressure measured by the barometric pressure sensor180C, to assist in positioning and navigation. The magnetic sensor180D includes a Hall sensor. The electronic device100may detect opening/closing of a flip leather case by using the magnetic sensor180D. In some embodiments, when the electronic device100is a clamshell phone, the electronic device100may detect opening/closing of a flip cover based on the magnetic sensor180D. Further, a feature such as automatic unlocking of the flip cover is set based on a detected opening/closing state of the leather case or a detected opening/closing state of the flip cover. The acceleration sensor180E may detect magnitude of accelerations in various directions (usually on three axes) of the electronic device100, and may detect magnitude and a direction of gravity when the electronic device100is still. The acceleration sensor180E may be further configured to identify a posture of the electronic device, and is applied to an application such as switching between landscape orientation and portrait orientation or a pedometer. The distance sensor180F is configured to measure a distance. The electronic device100may measure the distance by using infrared light or a laser. In some embodiments, in a photographing scenario, the electronic device100may measure the distance by using the distance sensor180F to implement quick focusing. For example, the optical proximity sensor180G may include a light-emitting diode (LED) and an optical detector, for example, a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device100emits infrared light through the light-emitting diode. The electronic device100detects infrared reflected light from a nearby object through the photodiode. When detecting sufficient reflected light, the electronic device100may determine that there is an object near the electronic device100. When detecting insufficient reflected light, the electronic device100may determine that there is no object near the electronic device100. The electronic device100may detect, by using the optical proximity sensor180G, that the user holds the electronic device100close to an ear to make a call, to automatically perform screen-off for power saving. The optical proximity sensor180G may also be used in a flip cover mode or a pocket mode to automatically unlock or lock the screen. The ambient light sensor180L is configured to sense ambient light brightness. The electronic device100may adaptively adjust brightness of the display screen194based on the sensed ambient light brightness. The ambient light sensor180L may also be configured to automatically adjust a white balance during photographing. The ambient light sensor180L may also cooperate with the optical proximity sensor180G to detect whether the electronic device100is in a pocket, to avoid an accidental touch.
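The altitude calculation mentioned above can be pictured with the international barometric formula, a common approximation that converts a pressure reading to an altitude relative to a sea-level reference. The reference pressure and all names in this Java sketch are assumptions for illustration, not the device's actual algorithm.

    // Illustrative sketch only: altitude from barometric pressure using the
    // international barometric formula (a common approximation).
    public class AltitudeSketch {
        static final double SEA_LEVEL_HPA = 1013.25; // standard atmosphere, assumed reference

        static double altitudeMeters(double pressureHpa) {
            return 44_330.0 * (1.0 - Math.pow(pressureHpa / SEA_LEVEL_HPA, 1.0 / 5.255));
        }

        public static void main(String[] args) {
            // A reading of about 988 hPa corresponds to roughly 212 m above sea level.
            System.out.printf("%.1f m%n", altitudeMeters(988.0));
        }
    }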
The fingerprint sensor180H is configured to collect a fingerprint. The electronic device100may use a feature of the collected fingerprint to implement fingerprint-based unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like. The temperature sensor180J is configured to detect a temperature. In some embodiments, the electronic device100executes a temperature processing policy by using the temperature detected by the temperature sensor180J. For example, when the temperature reported by the temperature sensor180J exceeds a threshold, the electronic device100lowers performance of a processor located near the temperature sensor180J, to reduce power consumption and implement thermal protection. In some other embodiments, when the temperature is less than another threshold, the electronic device100heats the battery142to prevent the electronic device100from being shut down abnormally because of a low temperature. In some other embodiments, when the temperature is less than still another threshold, the electronic device100boosts an output voltage of the battery142to avoid abnormal shutdown caused by a low temperature. The touch sensor180K may also be referred to as a touch panel or a touch-sensitive surface. The touch sensor180K may be disposed on the display screen194. The touch sensor180K and the display screen194form a touchscreen. The touch sensor180K is configured to detect a touch operation performed on or near the touch sensor180K. The touch sensor may transfer the detected touch operation to the application processor, to determine a type of a touch event. Visual output related to the touch operation may be provided by using the display screen194. In some other embodiments, the touch sensor180K may alternatively be disposed on a surface of the electronic device100at a location different from that of the display screen194. The bone conduction sensor180M may obtain a vibration signal. In some embodiments, the bone conduction sensor180M may obtain a vibration signal of a vibration bone of a human vocal-cord part. The bone conduction sensor180M may also be in contact with a human pulse to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor180M may also be disposed in a headset, to form a bone conduction headset. The audio module170may obtain a speech signal through parsing based on the vibration signal that is of the vibration bone of the vocal-cord part and that is obtained by the bone conduction sensor180M, to implement a speech function. The application processor may parse heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor180M, to implement a heart rate detection function. The button190includes a power button, a volume button, and the like. The button190may be a mechanical button, or may be a touch button. The electronic device100may receive a button input, and generate a button signal input related to a user setting and function control of the electronic device100. The motor191may generate a vibration prompt. The motor191may be used for an incoming call vibration prompt, or may be used for a touch vibration feedback. For example, touch operations performed on different applications (for example, photographing and audio playing) may correspond to different vibration feedback effects.
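The temperature processing policy described above amounts to comparing the reported temperature against several thresholds. The following Java sketch shows one possible decision order; the threshold values and action names are hypothetical placeholders, since real policies are vendor-defined.

    // Illustrative sketch only: thresholds and actions are invented.
    public class ThermalPolicySketch {
        static final double HOT_LIMIT_C = 45.0;      // assumed upper threshold
        static final double COLD_LIMIT_C = 0.0;      // assumed "heat the battery" threshold
        static final double CRITICAL_COLD_C = -10.0; // assumed "boost voltage" threshold

        static String decide(double reportedTempC) {
            if (reportedTempC > HOT_LIMIT_C) {
                return "THROTTLE_NEARBY_PROCESSOR";    // shed heat, thermal protection
            }
            if (reportedTempC < CRITICAL_COLD_C) {
                return "BOOST_BATTERY_OUTPUT_VOLTAGE"; // avoid abnormal shutdown
            }
            if (reportedTempC < COLD_LIMIT_C) {
                return "HEAT_BATTERY";                 // prevent low-temperature shutdown
            }
            return "NO_ACTION";
        }

        public static void main(String[] args) {
            System.out.println(decide(50.0));  // THROTTLE_NEARBY_PROCESSOR
            System.out.println(decide(-5.0));  // HEAT_BATTERY
            System.out.println(decide(-20.0)); // BOOST_BATTERY_OUTPUT_VOLTAGE
        }
    }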
The motor191may also correspond to different vibration feedback effects for touch operations performed on different areas of the display screen194. Different application scenarios (for example, a time reminder, information receiving, an alarm clock, and a game) may also correspond to different vibration feedback effects. A touch vibration feedback effect may be further customized. The indicator192may be an indicator light, and may be configured to indicate a charging status and a power change, or may be configured to indicate a message, a missed call, a notification, and the like. The SIM card interface195is configured to connect to a SIM card. The SIM card may be inserted into the SIM card interface195or detached from the SIM card interface195, to implement contact with or separation from the electronic device100. The electronic device100may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface195may support a nano-SIM card, a micro-SIM card, a SIM card, and the like. A plurality of cards may be inserted into a same SIM card interface195at the same time. The plurality of cards may be of a same type or different types. The SIM card interface195may also be compatible with different types of SIM cards. The SIM card interface195may also be compatible with an external storage card. The electronic device100interacts with a network by using the SIM card, to implement a call function, a data communication function, and the like. In some embodiments, the electronic device100uses an eSIM, namely, an embedded SIM card. The eSIM card may be embedded into the electronic device100, and cannot be separated from the electronic device100. For example, the electronic device100shown inFIG.1Amay display, by using the display screen194, user interfaces described in the following embodiments. The electronic device100may detect a touch operation in each user interface by using the touch sensor180K, for example, a tap operation (for example, a single-tap operation or a double-tap operation on an icon) in each user interface, or an upward or downward swipe operation or an operation of drawing a circle gesture in each user interface. In some embodiments, the electronic device100may detect, by using the gyro sensor180B, the acceleration sensor180E, or the like, a motion gesture made by the user by holding the electronic device100, for example, shaking the electronic device. In some embodiments, the electronic device100may detect a non-touch gesture operation by using the camera193(for example, a 3D camera or a depth camera). A software system of the electronic device100may use a layered architecture, an event-driven architecture, a microkernel architecture, a micro service architecture, or a cloud architecture. In an embodiment of the present disclosure, an Android system of the layered architecture is used as an example to illustrate a software structure of the electronic device100. FIG.1Bis a block diagram of a software structure of the electronic device100according to an embodiment of the present disclosure. In the layered architecture, software is divided into several layers, and each layer has a clear role and task. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers from top to bottom: an application layer, an application framework layer, an Android runtime and system library layer, and a kernel layer. The application layer may include a series of application packages.
As shown inFIG.1B, the application package may include applications such as Camera, Gallery, Calendar, Phone, Map, Navigation, WLAN, Bluetooth, Music, Videos, and Messages. The application framework layer provides an application programming interface (API) and a programming framework for the application at the application layer. The application framework layer includes some predefined functions. As shown inFIG.1B, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like. The window manager is configured to manage a window program. The window manager may obtain a size of a display screen, determine whether there is a status bar, perform screen locking, take a screenshot, and the like. The content provider is configured to: store and obtain data, and enable the data to be accessed by an application. The data may include a video, an image, audio, calls that are made and received, a browsing history and bookmarks, an address book, and the like. The view system includes a visual control such as a text display control or a picture display control. The view system can be configured to construct an application. A display interface may include one or more views. For example, a display interface including an SMS message notification icon may include a text display view and a picture display view. The phone manager is configured to provide a communication function of the electronic device100, for example, management of a call status (including answering or declining). The resource manager provides various resources such as a localized character string, an icon, a picture, a layout file, and a video file for an application. The notification manager enables an application to display notification information in a status bar, and may be configured to convey a notification message. The displayed notification information may automatically disappear after a short pause without requiring user interaction. For example, the notification manager is configured to notify download completion, give a message notification, and the like. The notification manager may alternatively display a notification on the top of the system status bar in a form of a graph or scroll bar text, for example, a notification of an application running in the background, or display a notification on the screen in a form of a dialog window. For example, text information is prompted in the status bar, an alert sound is produced, the electronic device vibrates, or an indicator light blinks. The Android runtime includes a kernel library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system. The kernel library includes two parts: a function that needs to be invoked by a Java language and a kernel library of Android. The application layer and the application framework layer run on the virtual machine. The virtual machine executes Java files at the application layer and the application framework layer as binary files. The virtual machine is configured to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection. The system library may include a plurality of function modules, for example, a surface manager, a media library, a three-dimensional graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The surface manager is configured to: manage a display subsystem, and provide fusion of 2D and 3D layers for a plurality of applications. The media library supports playback and recording of a plurality of frequently used audio and video formats, static image files, and the like. The media library may support a plurality of audio and video coding formats, for example, MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is configured to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing. The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver. The software system shown inFIG.1Brelates to an application presentation (such as a gallery or a file manager) that uses a sharing capability, a moment share module that provides the sharing capability, a print service that provides a printing capability, and a print spooler. In addition, the application framework layer provides a printing framework, a WLAN service, and a Bluetooth service, and the bottom kernel layer provides a WLAN Bluetooth capability and a basic communications protocol. The following describes working procedures of software and hardware of the electronic device100by using an example with reference to a photographing capture scenario. When the touch sensor180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as touch coordinates and a time stamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer, and identifies a control corresponding to the input event. For example, the touch operation is a single-tap operation, and a control corresponding to the single-tap operation is a control of a camera application icon. The camera application invokes an interface at the application framework layer to enable the camera application, then enables a camera driver by invoking the kernel layer, and captures a static image or a video by using the camera193. FIG.1Cshows an example of a schematic diagram of a structure of a printer101according to this application. As shown inFIG.1C, the printer101may include a processor102, a memory103, a wireless communications processing module104, a power switch105, an RJ11 communications processing module106, a wired LAN communications processing module107, and a mechanical apparatus108. These components may be connected by using a bus. The processor102may be configured to: read and execute a computer readable instruction. In an embodiment, the processor102may mainly include a controller, an arithmetic unit, and a register. The controller is mainly responsible for decoding an instruction, and sends a control signal for an operation corresponding to the instruction. The arithmetic unit is mainly responsible for performing a fixed-point or floating-point arithmetic operation, a shift operation, a logic operation, and the like, or may perform an address operation and an address conversion. The register is mainly responsible for storing register operands, intermediate operation results, and the like that are temporarily stored during instruction execution.
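The working procedure above (the kernel layer packages a touch into a raw input event, the application framework layer identifies the control under the touch coordinates, and the owning application reacts) can be summarized with the following simplified Java sketch. The class and method names are hypothetical; Android's actual input pipeline is considerably more involved.

    // Illustrative sketch only: a toy event path from raw event to control action.
    import java.util.ArrayList;
    import java.util.List;

    public class InputPipelineSketch {
        // What the kernel layer might package from a hardware interrupt.
        record RawInputEvent(int x, int y, long timestampMs) {}

        // A framework-level control with a bounding box and a tap handler.
        record Control(String name, int left, int top, int right, int bottom, Runnable onTap) {
            boolean contains(int x, int y) {
                return x >= left && x < right && y >= top && y < bottom;
            }
        }

        static final List<Control> CONTROLS = new ArrayList<>();

        // Framework layer: find the control under the touch and deliver the event.
        static void dispatch(RawInputEvent event) {
            for (Control c : CONTROLS) {
                if (c.contains(event.x(), event.y())) {
                    c.onTap().run();
                    return;
                }
            }
            System.out.println("No control at (" + event.x() + ", " + event.y() + ")");
        }

        public static void main(String[] args) {
            CONTROLS.add(new Control("camera_app_icon", 0, 0, 100, 100,
                    () -> System.out.println("Camera app started; camera driver enabled")));
            // The kernel layer would produce this from a hardware interrupt.
            dispatch(new RawInputEvent(40, 60, System.currentTimeMillis()));
        }
    }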
In an embodiment, a hardware architecture of the processor102may be an application-specific integrated circuit (ASIC) architecture, an MIPS architecture, an ARM architecture, an NP architecture, or the like. In some embodiments, the processor102may be configured to parse signals/a signal received by the wireless communications processing module104and/or the wired LAN communications processing module107, for example, a probe request that is broadcast by the electronic device100, a print request sent by the electronic device100, and a print instruction sent by a server of a cloud printing service provider. The processor102may be configured to perform a corresponding processing operation based on a parsing result, for example, generate a probe response, or drive, based on the print request or according to the print instruction, the mechanical apparatus108to perform a print operation. In some embodiments, the processor102may be further configured to generate signals/a signal sent by the wireless communications processing module104and/or the wired LAN communications processing module107, for example, a Bluetooth broadcast signal or a beacon signal, or a signal that is sent to the electronic device and that is used to feed back a print status (for example, a print success or a print failure). The memory103is coupled to the processor102, and is configured to store various software programs and/or a plurality of sets of instructions. In an embodiment, the memory103may include a high-speed random access memory, and may further include a nonvolatile memory, for example, one or more magnetic disk storage devices, a flash memory device, or another nonvolatile solid-state storage device. The memory103may store an operating system, for example, an embedded operating system such as uCOS, VxWorks, or RTLinux. The memory103may further store a communication program, and the communication program may be used to communicate with the electronic device100, one or more servers, or an additional device. The wireless communications processing module104may include one or more of a Bluetooth (BT) communications processing module104A and a WLAN communications processing module104B. In some embodiments, the one or more of the Bluetooth (BT) communications processing module and the WLAN communications processing module may obtain, through listening, a signal transmitted by another device (for example, the electronic device100), for example, a probe request or a scan signal; may send a response signal, for example, a probe response or a scan response, so that the another device (for example, the electronic device100) can discover the printer101; establish a wireless communication connection to the another device (for example, the electronic device100); and communicate with the another device (for example, the electronic device100) by using one or more wireless communications technologies such as Bluetooth or a WLAN. In some other embodiments, the one or more of the Bluetooth (BT) communications processing module and the WLAN communications processing module may alternatively transmit a signal, for example, a broadcast Bluetooth signal or a beacon signal, so that another device (for example, the electronic device100) can discover the printer101; establish a wireless communication connection to the another device (for example, the electronic device100); and communicate with the another device (for example, the electronic device100) by using one or more wireless communications technologies such as Bluetooth or a WLAN.
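The probe request/probe response exchange described above for discovering the printer101can be pictured with the following minimal Java sketch. The message shapes and names are invented for illustration; real discovery rides on Bluetooth or WLAN protocol frames rather than application-level objects.

    // Illustrative sketch only: a device answers a probe so a seeker can discover it.
    import java.util.Optional;

    public class DiscoverySketch {
        record ProbeRequest(String seekerName) {}
        record ProbeResponse(String deviceName, String deviceType) {}

        // Printer side: answer any probe with this device's identity.
        static Optional<ProbeResponse> onProbe(ProbeRequest request) {
            System.out.println("Probe received from " + request.seekerName());
            return Optional.of(new ProbeResponse("printer-101", "printer"));
        }

        public static void main(String[] args) {
            // Electronic device side: broadcast a probe and read the answer.
            onProbe(new ProbeRequest("electronic-device-100"))
                    .ifPresent(r -> System.out.println(
                            "Discovered " + r.deviceName() + " (" + r.deviceType() + ")"));
        }
    }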
The wireless communications processing module104may further include a cellular mobile communications processing module (not shown). The cellular mobile communications processing module may communicate with another device (for example, a server) by using a cellular mobile communications technology. The power switch105may be configured to control a power supply to supply power to the printer101. The RJ11 communications processing module106may be configured to process data received or sent through an RJ11 interface. The RJ11 interface is mainly configured to connect to a modem. The wired LAN communications processing module107may be configured to communicate with another device in a same LAN by using a wired LAN, and may be further configured to connect to a WAN by using the wired LAN, and may communicate with a device in the WAN. The mechanical apparatus108may include a print head, a carriage mechanism, a paper feeding mechanism, a ribbon transmission mechanism, an ink (toner) supply mechanism, a toner cartridge transmission mechanism, and the like. The mechanisms are all execution mechanisms of a printer system, and are uniformly coordinated and controlled by the processor102. It may be understood that the structure shown inFIG.1Cdoes not constitute a specific limitation on the printer101. In some other embodiments of this application, the printer101may include more or fewer components than those shown in the figure, or combine some components, or split some components, or have different component arrangements. The components shown in the figure may be implemented by hardware, software, or a combination of software and hardware. FIG.1Dshows an example of a schematic diagram of a structure of a projector111according to this application. As shown inFIG.1D, the projector111may include a processor112, a memory113, a wireless communications processing module114, a power switch115, a wired LAN communications processing module116, an RS-232 communications processing module117, a light source control module118, and an image projection module119. The processor112may be configured to: read and execute a computer readable instruction. In an embodiment, the processor112may mainly include a controller, an arithmetic unit, and a register. The controller is mainly responsible for decoding an instruction, and sends a control signal for an operation corresponding to the instruction. The arithmetic unit is mainly responsible for performing a fixed-point or floating-point arithmetic operation, a shift operation, a logic operation, and the like, or may perform an address operation and an address conversion. The register is mainly responsible for storing register operands, intermediate operation results, and the like that are temporarily stored during instruction execution. In an embodiment, a hardware architecture of the processor112may be an application-specific integrated circuit (ASIC) architecture, an MIPS architecture, an ARM architecture, an NP architecture, or the like. In some embodiments, the processor112may be configured to parse signals/a signal received by the wireless communications processing module114and/or the wired LAN communications processing module116, for example, a probe request that is broadcast by the electronic device100, a projection request sent by the electronic device100, and a projection instruction sent by a server of a cloud projection service provider.
The processor112may be configured to perform a corresponding processing operation based on a parsing result, for example, generate a probe response, or drive, based on the projection request or according to the projection instruction, the light source control module118and the image projection module119to perform a projection operation. In some embodiments, the processor112may be further configured to generate signals/a signal sent by the wireless communications processing module114and/or the wired LAN communications processing module116, for example, a Bluetooth broadcast signal or a beacon signal, or a signal that is sent to the electronic device and that is used to feed back a projection status (for example, a projection success or a projection failure). The memory113is coupled to the processor112, and is configured to store various software programs and/or a plurality of sets of instructions. In an embodiment, the memory113may include a high-speed random access memory, and may further include a nonvolatile memory, for example, one or more magnetic disk storage devices, a flash memory device, or another nonvolatile solid-state storage device. The memory113may store an operating system, for example, an embedded operating system such as uCOS, VxWorks, or RTLinux. The memory113may further store a communication program, and the communication program may be used to communicate with the electronic device100, one or more servers, or an additional device. The wireless communications processing module114may include one or more of a Bluetooth (BT) communications processing module114A and a WLAN communications processing module114B. In some embodiments, the one or more of the Bluetooth (BT) communications processing module and the WLAN communications processing module may obtain, through listening, a signal transmitted by another device (for example, the electronic device100), for example, a probe request or a scan signal; may send a response signal, for example, a probe response or a scan response, so that the another device (for example, the electronic device100) can discover the projector111; establish a wireless communication connection to the another device (for example, the electronic device100); and communicate with the another device (for example, the electronic device100) by using one or more wireless communications technologies such as Bluetooth or a WLAN. In some other embodiments, the one or more of the Bluetooth (BT) communications processing module and the WLAN communications processing module may alternatively transmit a signal, for example, a broadcast Bluetooth signal or a beacon signal, so that another device (for example, the electronic device100) can discover the projector111; establish a wireless communication connection to the another device (for example, the electronic device100); and communicate with the another device (for example, the electronic device100) by using one or more wireless communications technologies such as Bluetooth or a WLAN. The wireless communications processing module114may further include a cellular mobile communications processing module (not shown). The cellular mobile communications processing module may communicate with another device (for example, a server) by using a cellular mobile communications technology. The power switch115may be configured to control a power supply to supply power to the projector111.
The wired LAN communications processing module116may be configured to communicate with another device in a same LAN by using a wired LAN, and may be further configured to connect to a WAN by using the wired LAN, and may communicate with a device in the WAN. The RS-232 communications processing module117may be configured to communicate with another device through an RS-232 interface (not shown). The image projection module119may have a light source (not shown), and may modulate, based on image data, light emitted from the light source and project an image on a screen. The light source control module118may be configured to control lighting of the light source of the image projection module119. It may be understood that the structure shown inFIG.1Ddoes not constitute a specific limitation on the projector111. In some other embodiments of this application, the projector111may include more or fewer components than those shown in the figure, or combine some components, or split some components, or have different component arrangements. The components shown in the figure may be implemented by hardware, software, or a combination of software and hardware. FIG.1Eshows an example of a schematic diagram of a structure of a display121according to this application. As shown inFIG.1E, the display121may include a processor122, a memory123, a wireless communications processing module124, a power switch125, a wired LAN communications processing module126, an HDMI communications processing module127, a USB communications processing module128, and a display screen129. The processor122may be configured to: read and execute a computer readable instruction. In an embodiment, the processor122may mainly include a controller, an arithmetic unit, and a register. The controller is mainly responsible for decoding an instruction, and sends a control signal for an operation corresponding to the instruction. The arithmetic unit is mainly responsible for performing a fixed-point or floating-point arithmetic operation, a shift operation, a logic operation, and the like, and may also perform an address operation and an address conversion. The register is mainly responsible for saving register operands, intermediate operation results, and the like that are temporarily stored during instruction execution. In an embodiment, a hardware architecture of the processor122may be an application-specific integrated circuit (ASIC) architecture, an MIPS architecture, an ARM architecture, an NP architecture, or the like. In some embodiments, the processor122may be configured to parse signals/a signal received by the wireless communications processing module124and/or the wired LAN communications processing module126, for example, a probe request that is broadcast by the electronic device100, a display request sent by the electronic device100, and a display instruction sent by a server of a cloud screen mirroring service provider. The processor122may be configured to perform a corresponding processing operation based on a parsing result, for example, generate a probe response, or drive, based on the display request or according to the display instruction, the display screen129to perform displaying.
In some embodiments, the processor122may be further configured to generate signals/a signal sent by the wireless communications processing module124and/or the wired LAN communications processing module126, for example, a Bluetooth broadcast signal or a beacon signal, or a signal that is sent to the electronic device and that is used to feed back a display status (for example, a display success or a display failure). The memory123is coupled to the processor122, and is configured to store various software programs and/or a plurality of sets of instructions. In an embodiment, the memory123may include a high-speed random access memory, and may further include a nonvolatile memory, for example, one or more magnetic disk storage devices, a flash memory device, or another nonvolatile solid-state storage device. The memory123may store an operating system, for example, an embedded operating system such as uCOS, VxWorks, or RTLinux. The memory123may further store a communication program, and the communication program may be used to communicate with the electronic device100, one or more servers, or an additional device. The wireless communications processing module124may include one or more of a Bluetooth (BT) communications processing module124A and a WLAN communications processing module124B. In some embodiments, the one or more of the Bluetooth (BT) communications processing module and the WLAN communications processing module may obtain, through listening, a signal transmitted by another device (for example, the electronic device100), for example, a probe request or a scan signal; may send a response signal, for example, a probe response or a scan response, so that the another device (for example, the electronic device100) can discover the display121; establish a wireless communication connection to the another device (for example, the electronic device100); and communicate with the another device (for example, the electronic device100) by using one or more wireless communications technologies such as Bluetooth or a WLAN. In some other embodiments, the one or more of the Bluetooth (BT) communications processing module and the WLAN communications processing module may alternatively transmit a signal, for example, a broadcast Bluetooth signal or a beacon signal, so that another device (for example, the electronic device100) can discover the display121; establish a wireless communication connection to the another device (for example, the electronic device100); and communicate with the another device (for example, the electronic device100) by using one or more wireless communications technologies such as Bluetooth or a WLAN. The wireless communications processing module124may further include a cellular mobile communications processing module (not shown). The cellular mobile communications processing module may communicate with another device (for example, a server) by using a cellular mobile communications technology. The power switch125may be configured to control a power supply to supply power to the display121. The wired LAN communications processing module126may be configured to communicate with another device in a same LAN by using a wired LAN, and may be further configured to connect to a WAN by using the wired LAN, and may communicate with a device in the WAN. The HDMI communications processing module127may be configured to communicate with another device through an HDMI interface (not shown). 
The USB communications processing module128may be configured to communicate with another device through a USB interface (not shown). The display screen129may be configured to display an image, a video, and the like. The display screen129may be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display screen, an active-matrix organic light-emitting diode (AMOLED) display screen, a flexible light-emitting diode (FLED) display screen, a quantum dot light emitting diode (QLED) display screen, or the like. In some embodiments, the display121may further include an audio module (not shown). The audio module may be configured to output an audio signal through an audio output interface, so that the display121can support audio playback. The audio module may be further configured to receive audio data through an audio input interface. The display121may be a media playback device such as a television set. In some embodiments, the display121may further include a serial interface such as an RS-232 interface. The serial interface may be connected to another device, for example, an audio speaker device such as a sound box, so that the display collaborates with the audio speaker device to play audio and video. It may be understood that the structure shown inFIG.1Edoes not constitute a specific limitation on the display121. In some other embodiments of this application, the display121may include more or fewer components than those shown in the figure, or combine some components, or split some components, or have different component arrangements. The components shown in the figure may be implemented by hardware, software, or a combination of software and hardware. The following describes an example user interface that is on the electronic device100and that is used for an application menu. FIG.2Ashows an example of a user interface21that is on the electronic device100and that is used for an application menu. The user interface21may include a status bar201, a tray217of frequently used application icons, a calendar indicator213, a weather indicator215, a navigation bar251, and other application icons. The status bar201may include one or more signal strength indicators203of a mobile communication signal (which may also be referred to as a cellular signal), one or more signal strength indicators207of a wireless fidelity (Wi-Fi) signal, a battery status indicator209, and a time indicator211. The calendar indicator213may be used to indicate a current time, for example, a date, a day of a week, or hour and minute information. The weather indicator215may be used to indicate a weather type, for example, Clouds Early/Clearing Late or Light Rain, and may be further used to indicate information such as a temperature. The tray217of frequently used application icons may display a Phone icon219, a Contacts icon221, a Messages icon223, and a Camera icon225. The navigation bar251may include system navigation buttons such as a back button253, a home screen button255, and a historical call-out task button257. When detecting that a user taps the back button253, the electronic device100may display a previous page of a current page. When detecting that a user taps the home screen button255, the electronic device100may display a home screen. When detecting that a user taps the historical call-out task button257, the electronic device100may display a task last opened by the user. Names of the navigation buttons may alternatively be other names.
This is not limited in this application. In addition to a virtual button, each navigation button in the navigation bar251may be further implemented as a physical button. For example, the other application icons may be a WeChat icon227, a QQ icon229, a Twitter icon231, a Facebook icon233, a Mailbox icon235, a Cloud sharing icon237, a Memo icon239, an Alipay icon241, a Gallery icon245, and a Settings icon247. The user interface21may further include a page indicator249. The other application icons may be distributed on a plurality of pages, and the page indicator249may be used to indicate a specific page on which an application is currently browsed by the user. The user may swipe left or right in an area of the other application icons, to browse an application icon on another page. In some embodiments, for example, the user interface21shown inFIG.2Amay be a home screen. In some other embodiments, the electronic device100may further include a home button. The home button may be a physical button or a virtual button. The home button may be used to receive an instruction from the user, and return a currently displayed UI to the home screen, so that the user can view the home screen at any time. The instruction may be specifically an operation instruction of pressing the home button once by the user, or may be an operation instruction of consecutively pressing the home button twice within a short time by the user, or may be an operation instruction of touching and holding the home button within a predetermined time by the user. In some other embodiments of this application, a fingerprint recognizer may be further integrated into the home button, to collect and recognize a fingerprint when the home button is pressed. It may be understood thatFIG.2Amerely shows the example of the user interface on the electronic device100, and should not constitute a limitation on an embodiment of the application. FIG.2B-1andFIG.2B-2show an example of an operation of enabling “Moment share” on the electronic device100. As shown inFIG.2B-1andFIG.2B-2, when detecting a downward swipe gesture performed on the status bar201, the electronic device100may display a window261in the user interface21in response to the gesture. The window261may display an on/off control263of “Moment share”, and may further display on/off controls of other functions (for example, Wi-Fi, Bluetooth, and Flashlight). When detecting an operation (for example, a touch operation performed on the on/off control263) performed on the on/off control263in the window261, the electronic device100may enable “Moment share” in response to the operation. In other words, the user may perform a downward swipe gesture on the status bar201to open the window261, and may tap the on/off control263of “Moment share” in the window261to conveniently enable “Moment share”. In addition to enabling “Moment share” in the window261, the user may further enable “Moment share” when selecting data (such as a picture, a document, or a web page) for sharing. Detailed descriptions are provided in subsequent embodiments, and details are not described herein. The following separately describes application scenarios in this application and some embodiments of the user interface implemented on the electronic device100.

Scenario in which a User Shares a Picture

FIG.3Ashows an example of a user interface31of a first application (for example, “Gallery”) displayed by an electronic device such as a smartphone.
“Gallery” is a picture management application on an electronic device such as a smartphone or a tablet computer, and may also be referred to as “Album”. The name of the application is not limited in this embodiment. The app may support the user in performing various operations on a picture stored in the electronic device, for example, operations such as browsing, editing, deletion, and selection. In other words, an object managed by “Gallery” is the picture. In some other cases, the app may also support the user in performing the various operations on a picture stored in a cloud server. It may be understood that in an embodiment, the picture may be captured by the electronic device by using the camera193, or may be obtained from another application or downloaded from a web page. As shown inFIG.3A, the user interface31may include a status bar301, an application title bar317, a picture area321, and a navigation bar329. For the status bar301, refer to the status bar201in the user interface21shown inFIG.2A. Details are not described herein again. The application title bar317may include a back button313and a current page indicator315. The back button313is an app-level back button, and may be used to go back to an upper-level menu. One of ordinary skill in the art may understand that a logical upper level of a page is fixed, and is determined during application design. The current page indicator315may be used to indicate a current page, for example, may be text information “Gallery”. In addition to the text information, the current page indicator315may further be an icon. One or more pictures may be displayed in the picture area321, for example, a picture319. When the electronic device detects an upward swipe operation or a downward swipe operation in the picture area321, the electronic device may update, in response to the swipe operation, the picture displayed in the picture area321, so that the user browses the picture. To be specific, the user may swipe up or down in the picture area321to browse more pictures. In addition to performing the upward swipe operation or the downward swipe operation, the user may further swipe left or right in the picture area321to browse more pictures. The picture319may be a thumbnail. In this case, an original picture corresponding to the picture319may be stored in the electronic device, or may be stored in a cloud server. Unless otherwise specified, a picture in the following embodiments may be stored in the electronic device, or may be stored in the cloud server. For the navigation bar329, refer to the navigation bar251in the user interface21shown inFIG.2A. Details are not described herein again. FIG.3Bshows an example of an embodiment in which a user shares a picture in “Gallery”. As shown inFIG.3B, the electronic device may detect, in the user interface31, an operation of selecting a picture318and the picture319by the user for sharing. The electronic device may display a “moment share interface” in response to the operation. A device option corresponding to a device such as a printer, a projector, or a display discovered by the electronic device by using the wireless communications module160may be displayed in the “moment share interface”. The device option may be represented by using a device icon, text information, or the like. In this way, the user may select, in the “moment share interface” by performing an operation of tapping a printer option, a printer to print pictures (for example, the picture318and the picture319) selected in a first operation.
The operation is simple, and print efficiency of the electronic device is also improved. Similarly, the user may alternatively select, in the “moment share interface” by performing an operation of tapping a projector option, a projector to project pictures (for example, the picture318and the picture319) selected in a first operation. The user may alternatively select, in the “moment share interface” by performing an operation of tapping a display option, a display to perform screen mirroring on pictures (for example, the picture318and the picture319) selected in a first operation. In other words, the user may select an object such as a picture in “Gallery” for sharing, and may print the object such as the selected picture, or project the object such as the selected picture, or perform screen mirroring on the object such as the selected picture, or the like. In this application, an operation of sharing the object such as the selected picture may be referred to as the first operation. It may be understood that the “moment share interface” is merely a word used in the embodiments of this application, a meaning represented by the word is described in subsequent GUI embodiments, and a name of the word does not constitute any limitation on the embodiments of this application. In some embodiments, an operation of selecting a picture for sharing may be an operation of first selecting one or more pictures and then tapping a control335. In some embodiments, when the electronic device detects, in the user interface31in which one or more pictures are displayed, an operation of selecting one or more pictures (for example, the picture318and the picture319), the electronic device may display a menu333in the user interface31. In some embodiments, the electronic device may further display marks331on the selected picture318and the selected picture319. The mark331may indicate that the picture has been selected by the user. In some embodiments, the electronic device may alternatively initially display the menu333in the user interface31, in other words, may display the menu333without detecting that the user selects the picture. The menu333may include a control335(“Share”), a control337(“Move”), a control339(“Select all”), and a button341(“More”). The control335may be used by the user to share the selected picture. The control337may be used to listen to an operation of moving the selected picture to another storage path. The control339may be used to listen to an operation of selecting all pictures in “Gallery”. The button341may be used to listen to an operation of opening a next-level menu, to provide more functions, for example, renaming and picture editing. In addition to the operation of first selecting the one or more pictures and then tapping the control335, the first operation may be further presented in another form, for example, an operation of first selecting a picture and then drawing a circle gesture in the picture area321, or an operation of selecting a picture in a fixed time (for example, 1 second) after the electronic device is shaken. The first operation may be further a voice control operation, that is, the user only needs to speak out a voice instruction for sharing a picture. An embodiment of the operation of selecting the picture for sharing is not limited in this application. In addition to the picture in “Gallery”, the scenario in which the user shares the picture may further include that the user shares a picture in another application, for example, a picture in an application such as File browser.
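The first operation described above reduces to a simple dispatch: the device option tapped in the “moment share interface” determines whether the selected pictures are printed, projected, or mirrored. The following Java sketch illustrates this mapping; the enum and method names are hypothetical placeholders, not the embodiments' actual interfaces.

    // Illustrative sketch only: map the tapped device option to an action.
    import java.util.List;

    public class ShareDispatchSketch {
        enum DeviceOption { PRINTER, PROJECTOR, DISPLAY }

        static void share(DeviceOption option, List<String> selectedPictures) {
            switch (option) {
                case PRINTER   -> System.out.println("Printing " + selectedPictures);
                case PROJECTOR -> System.out.println("Projecting " + selectedPictures);
                case DISPLAY   -> System.out.println("Mirroring " + selectedPictures);
            }
        }

        public static void main(String[] args) {
            share(DeviceOption.PRINTER, List.of("picture318", "picture319"));
        }
    }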
Moreover, in addition to the picture in the electronic device, the picture shared by the user may further include a picture stored in the cloud server.

UI Embodiments in which Printing is Performed by Using the Electronic Device in the Scenario in which the User Shares the Picture that are Provided in this Application

UI Embodiments Shown as Examples in FIG. 4A to FIG. 4H

In the UI embodiments shown as the examples in FIG. 4A to FIG. 4H, the user may select a printer near the electronic device that is discovered by the electronic device to print a picture. The picture selected by the user may be a picture stored in the electronic device, or may be a picture in a cloud server accessed by the electronic device.

“Moment share” may be used to support the user in sharing data with a device near the electronic device. The nearby device may include a nearby first device, for example, a nearby printer, a nearby projector, or a nearby display, or may include a nearby second device, for example, a nearby mobile phone, a nearby tablet computer, or a nearby personal computer. Enabling “Moment share” may be enabling one or more of a WLAN or Bluetooth. After enabling “Moment share”, the electronic device may discover the device near the electronic device by using a communications technology such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, or a Wi-Fi LAN.

The Following Describes User Interfaces Provided in the Examples of the UI Embodiments Shown in FIG. 4A to FIG. 4H.

“Moment Share Interface”

The “moment share interface” is displayed on a touchscreen of the electronic device when the electronic device detects an operation of selecting a picture for sharing. In some embodiments, the “moment share interface” may be used to display one or more device options, one or more user options, and one or more service options. The device option may correspond to a nearby first device discovered by the electronic device, the user option may correspond to a nearby second device discovered by the electronic device, and the service option may correspond to an application or a protocol used to share data.

The device option may include one or more of the following: a printer option, a projector option, and a display option. The printer option may correspond to a nearby printer discovered by the electronic device, the projector option may correspond to a nearby projector discovered by the electronic device, and the display option may correspond to a nearby display discovered by the electronic device. The electronic device may trigger, in response to a detected operation performed on the device option, a first device corresponding to the device option selected in the operation to process the selected picture. The processing may include one or more of the following: printing, projection, and screen mirroring.

One or more pictures in “Gallery” may be further displayed in the “moment share interface”, and the one or more pictures may include the picture selected by the user. In an embodiment, a user interface 41 shown as an example in FIG. 4A to FIG. 4C may be the “moment share interface”. As shown in FIG. 4A to FIG. 4C, the user interface 41 may include an area 405, an area 431, and an area 421. The area 405 may be used to display one or more pictures in Gallery, and the one or more pictures may include pictures selected by the user, for example, a selected picture 406 and a selected picture 407.
In some embodiments, marks 409 may be displayed on the selected picture 406 and the selected picture 407, and the marks 409 may indicate that the picture 406 and the picture 407 that correspond to the marks 409 are selected by the electronic device (that is, the pictures have been selected by the user). In some other embodiments, a control 411 and a control 413 may further be displayed in the area 405, and the two controls may be used to switch or update the picture displayed in the area 405. In addition to the controls, another interactive element may also be displayed in the area 405 to switch or update the picture displayed in the area 405. In some other embodiments, the electronic device may not need to display the controls 411 and 413 in the area 405, and instead, the user performs a leftward or rightward swipe gesture or the like in the area 405 to switch or update the picture. The picture 407 may be a thumbnail. An original picture corresponding to the picture displayed in the area 405 may be a picture stored in the electronic device, or may be stored in the cloud server.

One or more service options (for example, an icon 433) may be displayed in the area 431. An application or a protocol corresponding to the service option may be used to support sharing the picture selected by the user with a contact or a server. In some embodiments, the electronic device may trigger, in response to an operation (for example, a touch operation performed on the icon 433) that is detected in the area 431 and that is performed on the service option, a process of sharing the selected picture with a cloud contact or a server by using an application or a protocol corresponding to the service option. The process may include: The electronic device opens the application or the protocol, displays a user interface of the application or the protocol, and when detecting, in the user interface, an operation of sharing data by the user, shares the selected picture with the cloud contact or the server by using the application or the protocol in response to the operation. In other words, the user may share the data by using the application or the protocol corresponding to the service option. For example, the user shares the selected picture with one or more contacts in WeChat. For another example, the user shares the selected picture with a dynamic publishing platform (namely, a server) of Facebook.

In some other embodiments, a page indicator 435 may further be displayed in the area 431. When a relatively large quantity of service options needs to be displayed in the area 431, the service options may be displayed on a plurality of pages. The page indicator 435 may indicate a page on which a currently displayed service option is located.

The area 421 may be used to display an option of a nearby device discovered by the electronic device, and one or more user options. The user option corresponds to a nearby second device discovered by the electronic device. The following describes embodiments of the area 421 in the following cases.

When “Moment share” is not enabled, as shown in FIG. 4A, both an icon 423 and prompt information 425 may be displayed in the area 421. The icon 423 may be used to listen to an operation of enabling “Moment share”. The prompt information 425 may be used to prompt the user to enable “Moment share”. The prompt information 425 may be text information, for example, “Tap here to enable Moment share”. In addition to the text information, the prompt information 425 may further be in another form such as a picture or a link.
This is not limited in this embodiment. In some other embodiments, the prompt information 425 in the user interface may not be displayed on the touchscreen, but may be audio played by using the speaker 170A.

It may be understood that in some other embodiments, in addition to the icon 423, the electronic device may further listen to an operation of enabling “Moment share” by using an interactive element (IE) in another form. For example, some or all of the prompt information 425 may also be used to receive the operation of enabling “Moment share”. For example, some characters “Tap here” in the prompt information 425 “Tap here to enable Moment share” may be used to receive the operation of enabling “Moment share”.

As shown in FIG. 4A, the electronic device may detect an operation (for example, an operation performed by the user on the icon 423, such as tapping, heavy pressing, or touching and holding) performed on the icon 423, and in response to the operation, the electronic device may enable “Moment share”, and may further update the area 421. The updated area 421 may be shown in FIG. 4B. The electronic device may further display indicators of related wireless signals of “Moment share” in a status bar, for example, a Wi-Fi indicator 410 and a Bluetooth indicator 408. For details, refer to FIG. 4B.

When “Moment share” is enabled but the electronic device has not discovered a nearby device, as shown in FIG. 4B, both an icon 427 and prompt information 429 may be displayed in the display area 421. The icon 427 may indicate that “Moment share” is enabled. The prompt information 429 may be used to prompt the user that the electronic device is searching for a nearby device. For example, the prompt information 429 may be text information “Searching for a nearby device. Bluetooth or a WLAN needs to be enabled on the other party. If printing is required, ensure that a printer is turned on. Learn more”. The user can tap “Learn more” to view more details beyond the prompt information 429. In addition to the text information, the prompt information 429 may further be in another form such as a picture. This is not limited in this embodiment. In some other embodiments, the prompt information 429 in the user interface may not be displayed on the touchscreen, but may be audio played by using the speaker 170A.

It may be understood that in addition to the interactive elements (the icon 427 and the prompt information 429) shown as an example in FIG. 4B, an interactive element in another form may further be used in the area 421 to indicate that “Moment share” is enabled and to prompt the user that the electronic device is searching for a nearby device.

In some embodiments, when the electronic device does not discover a nearby device, the electronic device may not present any content in the display area 421, that is, the display area 421 is blank. This may indicate that no nearby device is discovered currently. If the electronic device discovers a nearby device after a period of time, the electronic device may update the information in the area 421, where an option (for example, an icon or a text) of the nearby device discovered by the electronic device and/or a user option corresponding to a nearby second device discovered by the electronic device may be displayed in the updated area 421. For details, refer to FIG. 4C.
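The discovery step described above merges devices found over several short-range transports (Bluetooth, Wi-Fi P2P, Wi-Fi SoftAP, a Wi-Fi LAN) into the options shown in the area 421. The following is a minimal, illustrative Kotlin sketch of that aggregation step only; the transport abstraction and all names are hypothetical assumptions and do not correspond to any real platform API.

```kotlin
// Hypothetical device record; the kinds mirror the first and second
// devices named in this section.
enum class DeviceKind { PRINTER, PROJECTOR, DISPLAY, PHONE, TABLET, PC }

data class DiscoveredDevice(val name: String, val kind: DeviceKind)

interface DiscoveryTransport {
    val transportName: String            // e.g. "Bluetooth", "Wi-Fi P2P"
    fun scan(): List<DiscoveredDevice>   // one scan pass on this transport
}

// Merge the results of every enabled transport and de-duplicate by device
// name, so a device reachable over several transports is listed once.
fun discoverNearbyDevices(transports: List<DiscoveryTransport>): List<DiscoveredDevice> =
    transports.flatMap { it.scan() }.distinctBy { it.name }
```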
When “Moment share” is enabled and the electronic device discovers the nearby device, as shown in FIG. 4C, the option of the nearby device discovered by the electronic device, for example, a printer icon 445, and/or the user option corresponding to the nearby second device discovered by the electronic device, for example, a user icon 441 or a user icon 443, may be displayed in the area 421. In other words, the area 421 may be used to display the option of the nearby device discovered by the electronic device, or may be used to display the user option corresponding to the nearby second device discovered by the electronic device. In addition to the device icon (for example, the printer icon 445), the device option may further be represented in another form, for example, text information “Printer”. In addition to the user icon, the user option may further be represented in another form, for example, text information “MAC's mobile phone”, where “MAC” in the text information “MAC's mobile phone” is a user account, or text information “Cindy's tablet computer”, where “Cindy” in the text information “Cindy's tablet computer” is a user account.

The user option (for example, the user icon 441 or the user icon 443) displayed in the area 421 may be used to listen to an operation used to trigger sharing. The electronic device may trigger, in response to a detected operation (for example, a touch operation performed on the user icon) performed on the user option, a process of sharing a selected picture with a second device corresponding to the user option selected in the operation. The process may include: The electronic device establishes a communication connection to the second device corresponding to the selected user option, and then transmits, by using the communication connection, the selected picture to the second device corresponding to the user option.

A printer option (for example, the printer icon 445) displayed in the area 421 may be used to listen to an operation of selecting a printer to trigger printing. For example, the operation may be an operation (for example, a touch operation performed on the printer icon) performed on the printer option. How the electronic device processes the detected operation of selecting the printer to trigger printing is described in detail in the following embodiments.

In some embodiments, the printer corresponding to the printer option displayed in the area 421 is a printer that can support printing the selected picture. Herein, supporting printing may mean that a format supported by the printer includes a format (for example, a picture format of a picture) of data selected by the user. In some embodiments, the electronic device may first determine whether a print format supported by the discovered printer includes a format of the selected picture. If the print format supported by the discovered printer does not include the format of the selected picture, the electronic device may not display, in the area 421, the printer option corresponding to the printer. If the print format supported by the discovered printer includes the format of the selected picture, the electronic device may display, in the area 421, the printer option corresponding to the printer. Therefore, the printer corresponding to the printer option displayed in the area 421 can support printing the picture selected by the user.
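The format check just described is a simple filter over the discovered printers. A minimal Kotlin sketch of one possible form of that filter follows; the Printer type and field names are hypothetical, and "jpeg"/"png" stand in for whatever formats the selected pictures actually use.

```kotlin
// Hypothetical printer record; in practice the supported formats would
// come from the printer's capability advertisement.
data class Printer(val name: String, val supportedFormats: Set<String>)

// Keep only printers whose supported formats cover every selected picture,
// so that every option shown in the area 421 can complete the job.
fun printersSupporting(discovered: List<Printer>, selectedFormats: Set<String>): List<Printer> =
    discovered.filter { printer -> selectedFormats.all { it in printer.supportedFormats } }

fun main() {
    val found = listOf(
        Printer("JIAPUWEI TH880", setOf("jpeg", "png")),
        Printer("Old LaserJet", setOf("pdf")),
    )
    // Only "JIAPUWEI TH880" remains when the user selected JPEG pictures.
    println(printersSupporting(found, setOf("jpeg")).map { it.name })
}
```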
In this way, a problem that data printing fails because the user selects an inappropriate printer can be avoided, thereby avoiding a waste of resources and improving use efficiency of the electronic device.

In some embodiments, the printer corresponding to the printer option displayed in the area 421 is a printer that can work normally. Herein, working normally may include one or more of the following: The printer has sufficient consumables (for example, an ink cartridge and paper), the printer has no abnormality (for example, an abnormal temperature or an extremely low battery level), and the like. In this way, a problem that data printing fails because the user selects a printer that cannot work normally can be avoided, thereby avoiding a waste of resources and improving use efficiency of the electronic device.

In some embodiments, a control 447 or a control 449 may further be displayed in the area 421. The control 447 or the control 449 may be used by the user to switch or update the device option displayed in the area 421, so that more discovered first devices can be viewed. In addition to the controls, another interactive element may also be used by the user to switch the device option displayed in the area 421. In some other cases, the user may further perform a leftward or rightward swipe gesture in the area 421 to switch or update the device option displayed in the area 421.

In some other embodiments, the electronic device may automatically update the information in the area 421, and a device option corresponding to a nearby device currently discovered by the electronic device may be displayed in the updated area 421. A device option corresponding to a nearby device that was once discovered by the electronic device but cannot be discovered currently may no longer be displayed in the area 421. In addition, a device option corresponding to a nearby device that is newly discovered by the electronic device may be displayed in the area 421. In some other cases, a control may further be displayed in the area 421, and the control is used by the user to manually update the device options to those corresponding to currently discovered nearby devices.

In some other embodiments, as shown in FIG. 4C, operation prompt information corresponding to the device option may further be displayed in the area 421. Operation prompt information corresponding to a device option may be used to prompt the user with an operation that can be used to trigger the electronic device to share data with the first device corresponding to the device option, or trigger the first device corresponding to the device option to perform corresponding processing, such as printing, projection, or screen mirroring, on selected data (for example, the selected picture). For example, text information “Tap to print” displayed below the printer icon 445 may be used to prompt the user to tap the icon 445 to trigger the printer to print the selected picture. To be specific, operation prompt information corresponding to the printer option may be used to prompt the user to trigger, through an operation (for example, a touch operation performed on the printer icon) performed on the printer option, the printer corresponding to the printer option to print the selected picture.
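The “can work normally” screening described earlier in this passage can likewise be modeled as a predicate over a printer status report. The sketch below is illustrative only; the status fields and the battery threshold are assumptions, and a real printer would report its status through its own protocol.

```kotlin
// Hypothetical self-reported printer status; field names are illustrative.
data class PrinterStatus(
    val paperRemaining: Int,           // sheets left
    val inkRemaining: Int,             // percent left
    val temperatureAbnormal: Boolean,
    val batteryPercent: Int,
)

// A printer option is shown only when the printer has consumables and no
// abnormal condition, matching the screening described above.
fun canWorkNormally(s: PrinterStatus): Boolean =
    s.paperRemaining > 0 &&
    s.inkRemaining > 0 &&
    !s.temperatureAbnormal &&
    s.batteryPercent > 5   // "extremely low battery" cutoff is an assumption
```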
For another example, operation prompt information corresponding to the projector option may be used to prompt the user to trigger, through an operation (for example, a touch operation performed on the projector icon) performed on the projector option, a projector corresponding to the projector option to project the selected picture. For still another example, operation prompt information corresponding to the display option may be used to prompt the user to trigger, through an operation (for example, a touch operation performed on the display icon) performed on the display option, a display corresponding to the display option to display the selected picture.

The foregoing examples are merely some embodiments provided in an embodiment, and should not be construed as a limitation. In addition to the operation performed on the device option (for example, the device icon), the operation prompt information corresponding to the device option may further be used to prompt the user to trigger, by performing an operation in another form, the first device (for example, the printer) corresponding to the device option to perform different processing on data shared by the electronic device. For example, the operation prompt information may be used to prompt the user to perform a specific gesture of drawing a circle counterclockwise on the printer option (for example, the printer icon) to trigger the printer corresponding to the printer option to print the selected picture. For another example, the operation prompt information may further be used to prompt the user to perform a specific gesture of drawing a circle clockwise on the projector option (for example, the projector icon) to trigger the projector corresponding to the projector option to project the selected picture.

It may be understood that in an embodiment, in addition to the area 405, the area 421, and the area 431 described above, the “moment share interface” may further include an interactive element in another form. As shown in FIG. 4A to FIG. 4C, the user interface 41 may further include a title bar, and both a control 401 and indication information 403 may be displayed in the title bar. The control 401 may be used to cancel an operation of selecting a picture for sharing, that is, the user may tap the control 401 to cancel sharing of the selected picture. The indication information 403 may be used to indicate a quantity of selected pictures.

In the “moment share interface” shown in FIG. 4A to FIG. 4C, an area (for example, the area 405) used to display one or more pictures may be referred to as a first area, an area (for example, the area 431) used to display a service option (for example, a WeChat icon or a Mailbox icon) may be referred to as a second area, and an area (for example, the area 421) used to display a user option and a device option may be referred to as a third area. An interactive element (for example, the icon 423) that is displayed in the third area and that is used to enable “Moment share” may be referred to as a first interactive element.

The device option displayed in the examples of the UI embodiments shown in FIG. 4A to FIG. 4H is specifically the nearby device option, and corresponds to the nearby first device discovered by the electronic device, such as the printer, the projector, or the display. In addition to the page layouts shown in FIG. 4A to FIG. 4C, a page layout of the “moment share interface” may further be presented in another form. This is not limited in this embodiment.
Related User Interface Used to Select a Printer for Printing

As shown in FIG. 4C, the electronic device may detect an operation (for example, tapping) performed on the printer icon in the area 421. In other words, the electronic device may detect, in the area 421, an operation (for example, a touch operation performed on the printer icon) performed on the printer option. The operation is an operation of selecting the printer for printing, and can be used to trigger printing. The printer corresponding to the printer option on which the operation is performed is a selected printer, namely, a printer selected by the user.

In some embodiments, the electronic device may display, in response to the detected operation (for example, a touch operation performed by the user on the printer option), a user interface 43 shown in FIG. 4D. The user interface 43 may be used by the user to perform a print setting. As shown in FIG. 4D, the user interface 43 may include but is not limited to an area 431, an area 453, and a control 457. The area 431 may be used by the user to perform a print setting, for example, set a quantity of to-be-printed copies, a paper size, and a print color. The area 453 may display a selected picture (for example, a picture 455), and may support the user in selecting (for example, selecting by performing a leftward or rightward swipe operation) a picture on which a print setting needs to be performed. It should be understood that the picture 455 in the area 453 may be a thumbnail. An original picture corresponding to the picture 455 may be a picture stored in the electronic device, or may be stored in a cloud server.

In some embodiments, the user interface 43 may alternatively be used for a print preview. For example, a display state (for example, a color or a paper size) of the picture 455 may be determined based on the print setting selected by the user in the area 431. In this way, the user can view a print effect in advance, thereby improving user experience.

The control 457 may be used to listen to an operation used to trigger a selected printer to perform printing based on the existing print setting. Text information “Start printing” may be displayed on the control 457. In addition, prompt information in another form may further be displayed on the control 457 to prompt the user to trigger the selected printer to perform printing based on the existing print setting, for example, text information “Setting completed”. The electronic device may trigger, in response to the operation (for example, a touch operation performed by the user on the control 457) detected on the control 457, the selected printer “JIAPUWEI TH880” to print the selected picture based on the existing print setting (namely, the print setting selected by the user in the area 431).

It can be learned from the foregoing descriptions that in the foregoing embodiment, when the user triggers printing, the user interface 43 used by the user to perform the print setting may be provided, so that the user performs the print setting, for example, sets the quantity of to-be-printed copies, the paper size, and the print color. In this way, the electronic device can provide personalized selection that meets different user requirements for a print service, thereby improving user experience.
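As a concrete illustration of the settings collected in the area 431, the following Kotlin sketch models a print-setting record. The field names are hypothetical; the default values mirror the default print setting described in the next paragraph (1 copy, A4, black and white).

```kotlin
enum class PrintColor { BLACK_AND_WHITE, COLOR }

// Illustrative print-setting record gathered in the area 431. The defaults
// correspond to the default print setting used when no setting UI is shown.
data class PrintSettings(
    val copies: Int = 1,
    val paperSize: String = "A4",
    val color: PrintColor = PrintColor.BLACK_AND_WHITE,
)

// Usage: PrintSettings() is the default job; the user's choices override it.
val customized = PrintSettings(copies = 2, color = PrintColor.COLOR)
```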
In some other embodiments, the electronic device may trigger, in response to a detected operation (for example, a touch operation performed on the printer icon) performed on the printer option, the selected printer (for example, “JIAPUWEI TH880”) to print the selected picture based on a default print setting. For example, the default print setting may include: A quantity of to-be-printed copies is 1, a default paper size is A4, a default print color is black and white, and the like. It can be learned that in the foregoing embodiment, when the user triggers printing, a print service based on the default print setting may be provided, and the user interface 43 shown in FIG. 4D may not need to be displayed. This simplifies an operation of printing a file by using the electronic device, and improves print efficiency of the electronic device.

The following describes an embodiment of selecting a plurality of printers for printing at a time.

In some embodiments, the electronic device may detect, in the “moment share interface”, an operation of dragging selected pictures to a plurality of printer options, and the electronic device may trigger, in response to the operation, printers corresponding to the plurality of printer options to respectively print the pictures allocated to the printers. A picture allocated to a printer corresponding to a printer option may be a picture dragged to the printer option. For example, in the “moment share interface” shown as an example in FIG. 4C, when the selected pictures are the picture 406 and the picture 407, and the electronic device detects an operation of dragging the picture 406 to an icon of “Yunpeng's Canon TS318 . . . ” and an operation of dragging the picture 407 to an icon of “JIAPUWEI TH880”, in response to the two operations, the electronic device may trigger the printer “Yunpeng's Canon TS318 . . . ” to print the picture 406, and may trigger the printer “JIAPUWEI TH880” to print the picture 407. To be specific, the user may drag the selected pictures to different printer options in the “moment share interface”, to allocate the selected pictures to different printers for printing, thereby improving print efficiency and user experience. In addition to the selected pictures, the user may further drag an unselected picture to a printer option in the “moment share interface”. In addition to the drag operation, in the “moment share interface”, the operation used to allocate the pictures to the plurality of printers may further be presented in another form. This is not limited in this application.

In some other embodiments, the electronic device may detect, in the “moment share interface”, an operation of selecting a plurality of printer options, for example, detect an operation of consecutively tapping the plurality of printer options. Herein, the consecutive tapping may be a plurality of tap operations performed in a preset time period (for example, 1 second). Printers corresponding to the plurality of printer options selected in the operation are selected printers. The electronic device may trigger, in response to the operation and according to a preset allocation policy, the selected printers to print the selected pictures, thereby improving print efficiency and user experience. The preset allocation policy may be randomly allocating the selected pictures to the printers corresponding to the plurality of printer options for printing, or evenly allocating the selected pictures to the printers corresponding to the plurality of printer options for printing. The preset allocation policy may alternatively be that each selected printer prints all the selected pictures. The preset allocation policy is not limited in this application.
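The “even allocation” policy described above can be realized as a simple round-robin distribution of pictures over the selected printers. The sketch below is one minimal, illustrative Kotlin form of it; all names are hypothetical. The “random” policy would shuffle the picture list first, and the “every printer prints everything” policy would map each printer to the full list.

```kotlin
// Distribute pictures round-robin over the selected printers, so that each
// printer receives roughly the same number of print tasks.
fun <P, D> allocateEvenly(pictures: List<D>, printers: List<P>): Map<P, List<D>> {
    require(printers.isNotEmpty()) { "at least one printer must be selected" }
    return pictures.withIndex()
        .groupBy({ (i, _) -> printers[i % printers.size] }, { (_, pic) -> pic })
}

fun main() {
    val plan = allocateEvenly(listOf("p406", "p407", "p408"), listOf("TH880", "TS318"))
    println(plan) // {TH880=[p406, p408], TS318=[p407]}
}
```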
In some embodiments, after triggering the printer to perform printing, the electronic device may further display a notification window 471 shown as an example in one or more of FIG. 4E to FIG. 4H, and may display, in the notification window 471, prompt information 475 indicating a print status of selected data (for example, a selected picture). For example, as shown in FIG. 4E to FIG. 4H, the prompt information 475 may be “Print task is queuing . . . ”, “JIAPUWEI TH880 is printing . . . ”, “Printing is completed”, “Printing fails”, or the like. In this way, the user can very intuitively view a current print status, thereby improving user experience. In some other embodiments, the prompt information 475 in the user interface may not be displayed on the touchscreen, but may be audio played by using the speaker 170A.

User Interface Used to Feed Back a Print Status

In some embodiments, after the printer is triggered to perform printing, as shown in one or more of FIG. 4E to FIG. 4H, the electronic device may display the notification window 471 in the user interface 21 shown as an example in FIG. 2A. The user interface 21 may be a home screen. In this way, the user may return to the home screen to perform another transaction, for example, open another application. The notification window 471 may be used to prompt the user with the print status of the selected picture.

As shown in FIG. 4E, the electronic device may display the prompt information 475 in the notification window 471, where the prompt information 475 may be used to indicate that the print status of the selected picture is a first print state. The first print state may indicate that a print task of the selected picture is in a print task queue of a printer, and is waiting in the queue to be processed by the printer. The prompt information 475 may be text information “Print task is queuing . . . ”, and is not limited thereto. The prompt information 475 may alternatively be information in another form such as a picture or an animation.

In some embodiments, as shown in FIG. 4E, when the print status prompted by the prompt information 475 in the notification window 471 is the first print state, the electronic device may further display a control 473 in the notification window 471. Text information “Cancel printing” may be displayed on the control 473. When the electronic device detects an operation (for example, a touch operation performed by the user on the control 473) performed on the control 473, the electronic device may cancel printing of the selected picture in response to the operation. Herein, canceling printing is canceling the print task. To be specific, the print task of the selected picture is deleted from the print task queue of the printer, and therefore the printer does not print the selected picture.

As shown in FIG. 4F, the electronic device may display the prompt information 475 in the notification window 471, where the prompt information 475 may be used to prompt the user with a fact that the print status of the selected picture is a second print state. The second print state may indicate that the printer is printing the selected picture. The prompt information 475 may be text information “Printing . . . ”, and is not limited thereto. The prompt information 475 may alternatively be information in another form such as a picture or an animation.
In some embodiments, as shown in FIG. 4F, when the print status prompted by the prompt information 475 in the notification window 471 is the second print state, the electronic device may further display a control 477 in the notification window 471. Text information “Stop printing” may be displayed on the control 477. When the electronic device detects an operation (for example, a touch operation performed on the control 477) performed on the control 477, the electronic device may stop printing of the selected picture in response to the operation. Herein, stopping printing is stopping the current print task. In other words, the printer may have already printed part of the content. In some other embodiments, when detecting an operation performed on the control 477, the electronic device may further display, in response to the operation, another piece of prompt information (not shown in the figure) in the notification window 471, where the prompt information may indicate which selected pictures have been printed and which selected pictures have not been printed.

As shown in FIG. 4G, the electronic device may display the prompt information 475 in the notification window 471, where the prompt information 475 may be used to indicate that the print status of the selected picture is a third print state. The third print state may indicate that printing of the selected picture is complete. The prompt information 475 may be text information “Printing is completed”, and is not limited thereto. The prompt information 475 may alternatively be information in another form such as a picture or an animation.

As shown in FIG. 4H, the electronic device may display the prompt information 475 in the notification window 471, where the prompt information 475 may be used to indicate that the print status of the selected picture is a fourth print state. The fourth print state may indicate that the printer fails to print the selected picture. The prompt information 475 may be text information “Printing fails”, and is not limited thereto. The prompt information 475 may alternatively be information in another form such as a picture or an animation.

In some embodiments, as shown in FIG. 4H, when the print status prompted by the prompt information 475 in the notification window 471 is the fourth print state, the electronic device may further display a control 479 in the notification window 471. Text information “Tap to view a print failure cause” may be displayed on the control 479. When the electronic device detects an operation (for example, a touch operation performed by the user on the control 479) performed on the control 479, the electronic device may display a detailed print failure cause in response to the operation. In this way, the electronic device can more accurately prompt the user with a specific print failure cause, so that the user can make a correction the next time printing is performed. In addition to the control 479 shown in FIG. 4H, the electronic device may further display the specific print failure cause in the notification window 471, for example, a failure cause such as a paper jam, exhaustion of consumables, an incorrect picture format, a low battery level, or overheating. In some other embodiments, the prompt information 475 may be used to prompt the user with the specific print failure cause.

In this application, the notification window 471 may be referred to as a first notification window.
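The four print states above, together with the prompt text and the state-specific control, form a small state machine. The following illustrative Kotlin sketch summarizes that mapping; the enum and function names are hypothetical, and the strings follow the examples in this section.

```kotlin
// The four print states described for the notification window 471.
enum class PrintState { QUEUING, PRINTING, COMPLETED, FAILED }

// Prompt text shown for each state, following the wording in this section.
fun promptFor(state: PrintState, printerName: String): String = when (state) {
    PrintState.QUEUING   -> "Print task is queuing . . ."
    PrintState.PRINTING  -> "$printerName is printing . . ."
    PrintState.COMPLETED -> "Printing is completed"
    PrintState.FAILED    -> "Printing fails"
}

// Extra control offered alongside the prompt, if any.
fun controlFor(state: PrintState): String? = when (state) {
    PrintState.QUEUING   -> "Cancel printing"
    PrintState.PRINTING  -> "Stop printing"
    PrintState.COMPLETED -> null
    PrintState.FAILED    -> "Tap to view a print failure cause"
}
```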
According to the UI embodiments shown as the examples in FIG. 4A to FIG. 4H, the electronic device may automatically discover the printer when the user needs to share the picture, and intuitively present the discovered printer to the user, so that the user can tap the printer option to trigger the printer to print the picture selected by the user. User experience is intuitive and simple, thereby greatly improving efficiency of performing a print service by using the electronic device.

UI Embodiments Shown as Examples in FIG. 5A to FIG. 5J

In the UI embodiments shown as the examples in FIG. 5A to FIG. 5J, the user may select a printer near the electronic device that is discovered by the electronic device to print a picture. The picture selected by the user may be a picture stored in the electronic device, or may be a picture in a cloud server accessed by the electronic device.

“Moment share” may be used to support the user in sharing data with a device near the electronic device. The nearby device may include a nearby first device, for example, a nearby printer, a nearby projector, or a nearby display, or may include a nearby second device, for example, a nearby mobile phone, a nearby tablet computer, or a nearby personal computer. Enabling “Moment share” may be enabling one or more of a WLAN or Bluetooth. After enabling “Moment share”, the electronic device may discover the device near the electronic device by using a communications technology such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, or a Wi-Fi LAN.

Different from the UI embodiments shown as the examples in FIG. 4A to FIG. 4H, in these embodiments, to improve data security of a print service, authentication may also need to be performed on a print service of a printer. That is, the printer can start to process the print service related to the electronic device only after verifying the validity of the electronic device. In some embodiments, authentication may be performed by the printer requesting the electronic device to pay a print service fee. To be specific, a fee needs to be paid for the printer selected by the user, and the electronic device is authorized for printing only after the fee is successfully paid. In some other embodiments, authentication may be performed in a manner of a whitelist or a blacklist, and only an electronic device in the whitelist is authorized for printing, or only an electronic device that is not in the blacklist is authorized for printing. In some other embodiments, authentication may be performed by entering a dynamic verification code (sent to the electronic device) on the printer, and the electronic device is authorized for printing only after the dynamic verification code is correctly entered.

In some embodiments, the printer may further set a plurality of authentication levels, and the authentication level may be determined based on a print setting selected by the user. For example, a more complex print setting may indicate a higher authentication level of the print service, that is, a more complex authentication process. For another example, a more complex print setting may indicate a lower authentication level of the print service, that is, a simpler authentication process.
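As a concrete illustration of the authentication options just listed, the Kotlin sketch below enumerates the mechanisms and derives a level from the chosen print setting. All names and the scoring rule are hypothetical assumptions for illustration; the section leaves the actual mapping open.

```kotlin
// Authentication mechanisms named in this section.
enum class AuthMethod { PAYMENT, WHITELIST, BLACKLIST, VERIFICATION_CODE }

// Hypothetical rule: every non-default print option raises the level.
// The section allows the opposite rule as well (more complex setting,
// lower level); this sketch shows only the first variant.
fun authLevelFor(copies: Int, paperSize: String, colorPrint: Boolean): Int {
    var level = 0
    if (copies > 1) level++
    if (paperSize != "A4") level++
    if (colorPrint) level++
    return level
}
```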
The Following Describes User Interfaces Provided in the Examples of the UI Embodiments Shown in FIG. 5A to FIG. 5J.

“Moment Share Interface”

A user interface 51 shown as an example in FIG. 5A to FIG. 5C may be the “moment share interface” mentioned in the foregoing content. For an embodiment of the user interface 51, refer to the user interface 41 shown in FIG. 4A to FIG. 4C. Details are not described herein again.

In the “moment share interface” shown in FIG. 5A to FIG. 5C, an area used to display one or more pictures may be referred to as a first area, an area used to display a service option (for example, a WeChat icon or a Mailbox icon) may be referred to as a second area, and an area (for example, an area 521) used to display a user option and a device option may be referred to as a third area. An interactive element (for example, an icon 522) that is displayed in the third area and that is used to enable “Moment share” may be referred to as a first interactive element.

Related User Interface Used to Select a Printer for Printing

As shown in FIG. 5C, the electronic device may detect, in the area 521, a touch operation performed on a printer icon. In other words, the electronic device may detect, in the area 521, an operation (for example, a touch operation performed by the user on the printer icon) performed on the printer option. The operation is an operation of selecting the printer for printing, and can be used to trigger printing. The printer corresponding to the printer option on which the operation is performed is a selected printer, namely, a printer selected by the user.

In some embodiments, for the operation detected in the area 521, if authentication needs to be performed on the printer (for example, “Yunpeng's Canon TS318 . . . ”) selected by the user, in response to the detected operation (for example, a touch operation performed by the user on the printer option), the electronic device may first display a user interface (for example, a user interface 53) used by the user to perform a print setting, and after the print setting is completed, the electronic device displays a user interface used for authentication. After the authentication succeeds, the printer selected by the user may print the selected picture based on the print setting selected by the user.

FIG. 5D shows an example of the user interface 53 that may be used by the user to perform the print setting. For display content in the user interface 53, refer to the user interface 43 described in FIG. 4D. Details are not described herein again. The electronic device may display, in response to an operation (for example, a touch operation performed by the user on a control 557) detected on the control 557, a user interface used for authentication, for example, the related user interfaces used to pay a print fee that are shown as examples in FIG. 5E and FIG. 5F.

In some other embodiments, for the detected operation performed on the printer option in the area 521, if authentication needs to be performed on the printer selected by the user, the electronic device may display, in response to the detected operation, a user interface used for authentication, for example, the related user interfaces used to pay a print fee that are shown as examples in FIG. 5E and FIG. 5F. After the authentication succeeds (for example, the payment succeeds), the electronic device may trigger the printer to print the selected picture based on a default print setting. For example, the print fee may be determined based on the default print setting, where a default quantity of to-be-printed copies is 1, a default paper size is A4, and a default print color is black and white. An embodiment of the user interface used for authentication is not limited in this application.
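The flow described above is a simple gate: the print job is handed to the printer only after the authentication step (here, a payment) reports success. A minimal, illustrative Kotlin sketch of that ordering follows; the service interfaces are hypothetical and no real payment or print API is implied.

```kotlin
// Hypothetical service boundaries, for illustration only.
interface PaymentService { fun pay(orderId: String, amountCents: Long): Boolean }
interface PrintService { fun submit(printerName: String, pictures: List<String>) }

// Submit the job only once payment (the authentication step) succeeds.
fun payThenPrint(
    payment: PaymentService,
    printer: PrintService,
    orderId: String,
    amountCents: Long,
    printerName: String,
    pictures: List<String>,
): Boolean {
    if (!payment.pay(orderId, amountCents)) return false // not authorized
    printer.submit(printerName, pictures)                // authorized to print
    return true
}
```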
The following uses the related user interface used by the user to pay the print fee as an example for description.

FIG. 5E and FIG. 5F show examples of the related user interfaces used by the user to pay the print fee. As shown in FIG. 5E, order information and a control 561 that correspond to a selected picture may be displayed in a user interface 55. The order information may include a print fee that needs to be paid by the user, for example, “¥12.00”. The order information may further include one or more of the following: indication information of a payee, a number of an order, and the like. For example, the indication information of the payee may be “Yunpeng”, and the number of the order may be “2018020312366”. The electronic device may display, in response to an operation (for example, a touch operation performed by the user on the control 561) detected on the control 561, a user interface (not shown) used by the user to enter a payment password. An embodiment of the user interface is not limited in an embodiment.

After the payment succeeds, the electronic device may display a user interface 57 shown in FIG. 5F. As shown in FIG. 5F, one or more of order information, a transaction time (for example, “2018-2-8 08:08”) of an order, indication information (for example, “Payment succeeds”) of a transaction status, indication information (for example, “Balance”) of a payment method, and the like may be displayed in the user interface 57. In some cases, a control 563 may further be displayed in the user interface 57. The control 563 may be used to detect an operation of confirming that payment is completed. The control 563 may be a button or another interactive element. This is not limited in this embodiment.

In some embodiments, the electronic device may trigger, in response to a detected operation (for example, a tap operation performed on the control 563) performed on the control 563, the printer (for example, “Yunpeng's Canon TS318 . . . ”) to print the selected picture. In some other embodiments, the electronic device may display the user interface 57 for only a predetermined period of time (for example, 2 seconds), and when the period of time ends, the printer selected by the user may print the selected picture. In other words, the electronic device may not need to listen to an operation performed on the control 563, and the user interface 57 may not include the control 563. In this way, a quantity of operations is reduced, and user experience is improved.

FIG. 5E and FIG. 5F merely show the examples of the related user interfaces used by the user to pay the print fee. In actual application, the related user interfaces may be different, and should not be construed as a limitation.

According to the UI embodiments shown as the examples in FIG. 5A to FIG. 5J, selecting a plurality of printers for printing at a time may also be supported. For an embodiment of selecting the plurality of printers for printing at a time, refer to the related descriptions in the UI embodiments shown as the examples in FIG. 4A to FIG. 4H. Details are not described herein again.

User Interface Used to Feed Back a Print Status

In some embodiments, after the printer (for example, “Yunpeng's Canon TS318 . . . ”) selected by the user starts to print a picture, the electronic device may further display the user interface used to feed back the print status. As shown in one or more of FIG. 5G to FIG. 5J, the user interface may be the user interface 21 in which a notification window 571 is displayed and that is shown as the example in FIG. 2A.
In other words, after the printer starts to print the picture, the electronic device may further display the notification window 571 shown as an example in one or more of FIG. 5G to FIG. 5J, and may display, in the notification window 571, prompt information 575 indicating a print status. For example, as shown in FIG. 5G to FIG. 5J, the prompt information 575 may be “Print task is queuing . . . ”, “JIAPUWEI TH880 is printing . . . ”, “Printing is completed”, “Printing fails”, or the like. It may be understood that for related descriptions of the notification window 571, the prompt information 575, and the like, reference may be made to the related descriptions of the notification window 471, the prompt information 475, and the like shown in FIG. 4E to FIG. 4H in the foregoing embodiment. Details are not described herein again. In some other embodiments, the prompt information 575 in the user interface may not be displayed on the touchscreen, but may be audio played by using the speaker 170A.

It may be understood that a difference between the UI embodiments shown as the examples in FIG. 5A to FIG. 5J and the UI embodiments shown as the examples in FIG. 4A to FIG. 4H lies in that in the UI embodiments shown as the examples in FIG. 5A to FIG. 5J, authentication may need to be performed on the printer selected by the user. For content that is not mentioned in the UI embodiments shown as the examples in FIG. 5A to FIG. 5J, refer to the UI embodiments shown as the examples in FIG. 4A to FIG. 4H. Details are not described herein again.

According to the UI embodiments shown as the examples in FIG. 5A to FIG. 5J, when the user needs to print the picture, the electronic device may automatically discover a nearby printer, and intuitively present, to the user, the nearby printer discovered by the electronic device. If authentication needs to be performed on (for example, a fee needs to be paid for) the printer selected by the user, the electronic device may display a payment page after the user selects the printer option corresponding to the printer, and trigger the printer to print after the payment succeeds. In this way, the user may select a printer that requires payment and use it for printing, so that the operation is intuitive and simple, and security of performing printing by using the electronic device is also improved.

UI Embodiments Shown as Examples in FIG. 6A to FIG. 6J

In the UI embodiments shown as the examples in FIG. 6A to FIG. 6J, the user may select a printer near the electronic device that is discovered by the electronic device to print a picture, or may select a cloud printer discovered by the electronic device to print a picture. The picture selected by the user may be a picture stored in the electronic device, or may be a picture in a cloud server accessed by the electronic device.

“Moment share” may be used to support the user in sharing data with a device near the electronic device, or may be used to support the user in sharing data with a cloud device. The nearby device may include a nearby first device, for example, a nearby printer, a nearby projector, or a nearby display, or may include a nearby second device, for example, a nearby mobile phone, a nearby tablet computer, or a nearby personal computer. The cloud device may include a cloud first device, for example, a cloud printer, a cloud projector, or a cloud display, or may include a cloud second device, for example, a cloud mobile phone, a cloud tablet computer, or a cloud personal computer.
Enabling “Moment share” may be enabling cellular mobile data, a WLAN, and Bluetooth, or may be enabling cellular mobile data and a WLAN, or may be enabling cellular mobile data and Bluetooth, or may be enabling a WLAN and Bluetooth. After enabling “Moment share”, the electronic device may discover the device near the electronic device by using one or more technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, and a Wi-Fi LAN, or may discover the cloud device by using a cellular mobile communications network technology or a wide area network technology.

The Following Describes User Interfaces Provided in the Examples of the UI Embodiments Shown in FIG. 6A to FIG. 6J.

“Moment Share Interface”

The “moment share interface” is displayed on a touchscreen of the electronic device when the electronic device detects an operation of selecting a picture for sharing. In some embodiments, the “moment share interface” may be used to display one or more device options. The one or more device options may include one or more nearby device options and/or one or more cloud device options. The nearby device option may include one or more of the following: a nearby printer option, a nearby projector option, and a nearby display option. The cloud device option may include one or more of the following: a cloud printer option, a cloud projector option, and a cloud display option. The electronic device may trigger, in response to a detected operation performed on the device option, a first device corresponding to the device option selected in the operation to process the selected picture. The processing may include one or more of the following: printing, projection, and screen mirroring.

The “moment share interface” may be further used to display one or more user options and one or more service options. The service option may correspond to an application or a protocol used to share data. In other words, the user may share the data by using the application or the protocol corresponding to the service option. One or more pictures in “Gallery” may be further displayed in the “moment share interface”, and the one or more pictures may include the picture selected by the user.

In an embodiment, a user interface 61 shown as an example in FIG. 6A to FIG. 6C may be the “moment share interface”. For an embodiment of the user interface 61, refer to the user interface 41 shown in FIG. 4A to FIG. 4C described in the foregoing embodiments. Different from the user interface 41, as shown in FIG. 6C, not only one or more user options and a nearby device option may be displayed in an area 621 in the user interface 61, but also a cloud device option may be displayed in the area 621 in the user interface 61.

In some embodiments, as shown in FIG. 6C, when the electronic device discovers a cloud first device, the electronic device may display the cloud device option in the area 621, for example, a cloud printer icon 623. In some embodiments, the cloud device option and the nearby device option may be presented in different representation forms in the user interface, so that the user can very intuitively distinguish between the cloud device option and the nearby device option, thereby helping the user select a proper printer. For example, as shown in FIG. 6C, the cloud device option may be an icon in a cloud shape, and the nearby device option may be a round icon. The example is merely an embodiment provided in this application.
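To make the nearby/cloud distinction concrete, the following Kotlin sketch models a device option that records how the device was discovered and, for cloud devices, where it is located. The type and field names are hypothetical and purely illustrative.

```kotlin
// How the device option was discovered.
enum class Origin { NEARBY, CLOUD }

// Illustrative device-option record shown in the area 621. Only cloud
// options carry a location string (for example, "KingKey Tower 2F").
data class DeviceOption(
    val name: String,          // e.g. "JIAPUWEI TH880" or "Canon 1"
    val origin: Origin,
    val location: String? = null,
)

// A cloud option can be rendered with a cloud-shaped icon and its location,
// while a nearby option uses a round icon, as the passage above describes.
fun labelFor(option: DeviceOption): String = when (option.origin) {
    Origin.NEARBY -> option.name
    Origin.CLOUD  -> "${option.name} (${option.location ?: "location unknown"})"
}
```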
In actual application, the cloud device option and the nearby device option may alternatively be represented in other forms in the user interface. This is not specifically limited in this embodiment.

In some other embodiments, as shown in FIG. 6C, the electronic device may further display location information of the discovered cloud first device (for example, a cloud printer). In this way, it is convenient for the user to know the location of the cloud printer. For example, location information corresponding to a printer “Canon 1” is “KingKey Tower 2F”. For another example, location information corresponding to “Canon 2” is “50 meters nearby”. The examples are merely some examples provided in this application, and should not be construed as a limitation. There may be different examples in actual application.

In some other embodiments, as shown in FIG. 6D-1 and FIG. 6D-2, the electronic device may further display one or more cloud printing service options in the area 621, for example, an icon 627 of “Huawei cloud printing” and an icon 627 of “HP cloud printing”. The electronic device may automatically update, in response to a detected operation (for example, a touch operation performed by the user on the icon 627 of the cloud printing service) performed on the cloud printing service option, the information displayed in the area 621. A printer option, for example, an icon 629, corresponding to each of one or more cloud printers provided by the cloud printing service (for example, “HUAWEI cloud printing”) selected by the user may be displayed in the updated area 621. In some other embodiments, the cloud printers may be printers that are closest to the electronic device and that are provided by the cloud printing service selected by the user. In addition, a back button 631 may further be displayed in the updated area 621. In this way, the user may tap the back button 631 to go back, and reselect another cloud printing service or select another device discovered by the electronic device.

In some embodiments, the one or more user options displayed in the area 621 may include one or more nearby user options and/or one or more cloud user options. The nearby user option corresponds to a nearby second device discovered by the electronic device, and the cloud user option corresponds to a cloud second device discovered by the electronic device.

In the “moment share interface” shown in FIG. 6A to FIG. 6C, an area used to display one or more pictures may be referred to as a first area, an area used to display a service option (for example, a WeChat icon or a Mailbox icon) may be referred to as a second area, and an area (for example, the area 621) used to display a user option and a device option may be referred to as a third area. An interactive element (for example, an icon 622) that is displayed in the third area and that is used to enable “Moment share” may be referred to as a first interactive element.

Related User Interface Used to Select a Printer for Printing

As shown in FIG. 6C or FIG. 6D-1 and FIG. 6D-2, the electronic device may detect, in the area 621, an operation (for example, a touch operation performed by the user on the printer icon) performed on the printer option corresponding to the nearby printer or the cloud printer. The printer option may be used to listen to an operation used to trigger printing of a selected picture.
For a manner in which the electronic device responds to the operation detected in the area 621, refer specifically to the related descriptions in FIG. 4D and the related embodiments, or refer to the related descriptions in FIG. 5D to FIG. 5F and the related embodiments. Details are not described herein again.

According to the UI embodiments shown as the examples in FIG. 6A to FIG. 6J, selecting a plurality of printers (for example, a nearby printer and a cloud printer that are discovered by the electronic device) for printing at a time may also be supported. For an embodiment of selecting the plurality of printers for printing at a time, refer to the related descriptions in the UI embodiments shown as the examples in FIG. 4A to FIG. 4H. Details are not described herein again.

User Interface Used to Feed Back a Print Status

After triggering the printer (for example, “Canon 1” shown in FIG. 6C or “Printer 2” shown in FIG. 6D-2) selected by the user to print the picture selected by the user, the electronic device may display the user interface used to feed back the print status of the picture selected by the user. If the printer selected by the user is a nearby printer, the user interface may be shown in one or more of FIG. 5G to FIG. 5J. For details, refer to the related descriptions in the embodiments in FIG. 5G to FIG. 5J. Details are not described herein again. If the printer selected by the user is a cloud printer, the user interface may be shown as the example in one or more of FIG. 6E to FIG. 6H. Detailed descriptions are provided below.

As shown in FIG. 6E to FIG. 6H, the user interface may be the user interface 21 in which a notification window 671 is displayed and that is shown as the example in FIG. 2A, and the user interface 21 may be a home screen. The notification window 671 may be used to prompt the user with the print status of the selected picture. For details, refer to the related descriptions of the notification window 471 in the embodiments in FIG. 4E to FIG. 4H. Details are not described herein again.

In some other embodiments, as shown in the example in one or more of FIG. 6E to FIG. 6H, prompt information 673 may be further displayed in the notification window 671, and the prompt information 673 may be used to prompt the user with a location of the cloud printer configured to print the picture selected by the user. The prompt information 673 may be text information, for example, the characters “KingKey Tower 2F”. In addition to the text information, the prompt information 673 may further be information in another form such as a picture or an animation. This is not limited in this embodiment.

In some other embodiments, as shown in one or more of FIG. 6E to FIG. 6H, a control 675 may further be displayed in the notification window 671. The electronic device may display, in response to a detected operation (for example, a touch operation performed by the user on the control 675) performed on the control 675, a user interface (not shown) used to navigate the user to the location of the cloud printer. For example, as shown in FIG. 6E to FIG. 6H, when detecting that the user taps the control 675, the electronic device may display a user interface used to navigate the user to the location “KingKey Tower 2F”, and a route used by the user to go to the location “KingKey Tower 2F” may be displayed in the user interface. In this way, the user can be intuitively and effectively guided to the location of the cloud printer, and user experience is simple and effective.
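The section leaves the navigation screen itself to a map application (possibly a third-party one, as noted below). On Android, one conventional, illustrative way to hand the printer's location off to whatever map app is installed is a geo: URI, as sketched here; this hand-off is an assumption for illustration, not something the section prescribes.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Open an installed map application with the printer's address as the
// search query, so the map app can plot a route to it.
fun navigateToPrinter(context: Context, printerLocation: String) {
    val uri = Uri.parse("geo:0,0?q=" + Uri.encode(printerLocation))
    context.startActivity(Intent(Intent.ACTION_VIEW, uri))
}

// Usage (from an Activity): navigateToPrinter(this, "KingKey Tower 2F")
```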
In some embodiments, as shown in one or more ofFIG.6EtoFIG.6H, the electronic device may display operation prompt information corresponding to the control675, for example, display text information “Go here” below the control675, to prompt the user to tap the control675to open the user interface (not shown) used to navigate the user's way to the location of the cloud printer. The example is merely an embodiment provided in this application, and should not be construed as a limitation. FIG.6EtoFIG.6Hmerely show an example of an embodiment of this application. The prompt information673and the control675may alternatively be implemented in other forms. This is not limited in this application. For example, only one control may be displayed in the notification window671, and text information “Go to ‘KingKey Tower 2F'” may be displayed on the control. When detecting a touch operation performed on the control, the electronic device may display a user interface used to navigate a user's way to the location “KingKey Tower 2F”. In other words, the prompt information673may be displayed on the control675. In some embodiments, the user interface used to navigate the user's way to the location of the cloud printer may be provided by a third-party map application. For example, after the user taps the control675, the electronic device may display a navigation interface of the third-party map application. An embodiment of the navigation interface is not limited in this application. In some embodiments, when the location of the cloud printer that prints the picture selected by the user is the same as a preset location, the prompt information673may be used to prompt the user with the preset location. For example, as shown inFIG.6I, when the location of the cloud printer that prints the picture selected by the user is the same as a home location that is preset by the user, the prompt information673may be text information “Go home”. For another example, as shown inFIG.6J, when the location of the cloud printer that prints the picture selected by the user is the same as an office location that is preset by the user, the prompt information673may be text information “Go to office”. In this way, the location of the cloud printer can be more intuitively indicated. The preset location may be set by the user in the third-party map application.FIG.6IandFIG.6Jmerely show examples of some embodiments of this application. The prompt information673may alternatively be implemented in another form such as a picture or an animation, and the prompt information673may alternatively be displayed in different locations. This is not limited in this application. In some other embodiments, the electronic device may alternatively discover only the cloud first device, for example, the cloud printer. In this case, only the cloud device option may be displayed in the area621. It may be understood that a difference between the UI embodiments shown as the examples inFIG.6AtoFIG.6Jand each of the UI embodiments shown as the examples inFIG.4AtoFIG.4Hand the UI embodiments shown as the examples inFIG.5AtoFIG.5Jlies only in that in the UI embodiments shown as the examples inFIG.6AtoFIG.6J, the “moment share interface” may be further used to display the cloud device discovered by the electronic device, for example, the cloud printer.
For content that is not mentioned in the UI embodiments shown as the examples inFIG.6AtoFIG.6J, refer to the UI embodiments shown as the examples inFIG.4AtoFIG.4Hand the UI embodiments shown as the examples inFIG.5AtoFIG.5J. Details are not described herein again. According to the UI embodiments shown as the examples inFIG.6AtoFIG.6J, when identifying the scenario in which the user shares the picture, the electronic device may automatically discover the nearby printer and/or the cloud printer, and intuitively present, to the user, the nearby printer and/or the cloud printer discovered by the electronic device, so that the user taps the nearby printer option or the cloud printer option (for example, the icon) to trigger the nearby printer or the cloud printer to print the picture selected by the user, and user experience is intuitive and simple. UI Embodiments Shown as Examples inFIG.7AtoFIG.7C In the UI embodiments shown as the examples inFIG.7AtoFIG.7C, the user may select a printer near the electronic device that is discovered by the electronic device to print a picture, or may select a cloud printer discovered by the electronic device to print a picture. The picture selected by the user may be a picture stored in the electronic device, or may be a picture in a cloud server accessed by the electronic device. “Moment share” may be classified into “local moment share (local moment share)” and “cloud moment share (remote moment share)”. “Local moment share” may be used to support the user in sharing data with a device near the electronic device. “Cloud moment share” may be used to support the user in sharing data with a cloud device. The nearby device may include a nearby first device, for example, a nearby printer, a nearby projector, or a nearby display, or may include a nearby second device, for example, a nearby mobile phone, a nearby tablet computer, or a nearby personal computer. The cloud device may include a cloud first device, for example, a cloud printer, a cloud projector, or a cloud display, or may include a cloud second device, for example, a cloud mobile phone, a cloud tablet computer, or a cloud personal computer. Enabling “local moment share” may be enabling any one or more of Bluetooth or a WLAN, and enabling “cloud moment share” may be enabling any one or more of cellular mobile data or a WLAN. After enabling “local moment share”, the electronic device may discover the device near the electronic device by using one or more technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, and a Wi-Fi LAN. After enabling “cloud moment share”, the electronic device may discover the cloud device by using a cellular mobile communications network technology or a wide area network technology. Different from the UI embodiments shown as the examples inFIG.6AtoFIG.6J, in the UI embodiments, the electronic device may separately display a nearby device option and a cloud device option in different areas, and may distinctively present the nearby device option and the cloud device option. This is clearer and more intuitive. The Following Describes a User Interface Provided in the Examples of the UI Embodiments Shown inFIG.7AtoFIG.7C. “Moment Share Interface” The “moment share interface” is displayed on a touchscreen of the electronic device when the electronic device detects an operation of selecting a picture for sharing. In an embodiment, a user interface71shown as an example inFIG.7AtoFIG.7Cmay be the “moment share interface”. 
As shown inFIG.7AtoFIG.7C, the user interface71may include an area701, an area703, an area705, and an area707. The area701may be used to display a picture. For an embodiment of the area701, refer to the related descriptions of the area405inFIG.4AtoFIG.4Cand the embodiments corresponding toFIG.4AtoFIG.4C. Details are not described herein again. One or more service options (for example, an application icon) may be displayed in the area707. An application or a protocol corresponding to the service option may be used to support sharing the picture selected by the user with a cloud contact or a cloud server. For an embodiment of the area707, refer to the related descriptions of the area431inFIG.4AtoFIG.4Cand the embodiments corresponding toFIG.4AtoFIG.4C. Details are not described herein again. The area703may be used to display a nearby device option, and may be further used to display a nearby user option. The nearby user option corresponds to a nearby second device discovered by the electronic device. For example, when “local moment share” is enabled, as shown inFIG.7C, the electronic device may display, in the area703, a device option (for example, a printer icon725) corresponding to the discovered nearby printer, and may further display a user option (for example, a user icon721) corresponding to the discovered nearby mobile phone, and a user option (for example, a user icon723) corresponding to the discovered nearby tablet computer. For an embodiment of the area703, refer to the related descriptions of the area421inFIG.4AtoFIG.4Cand the embodiments corresponding toFIG.4AtoFIG.4C. Details are not described herein again. For example, for an embodiment of the area703(including an icon711and prompt information713) shown inFIG.7A, refer to the related descriptions of the area421in the embodiment inFIG.4A. Details are not described herein again. For an embodiment of the area703(including an icon721and prompt information723) shown inFIG.7B, refer to the related descriptions of the area421in the embodiment inFIG.4B. Details are not described herein again. For an embodiment of the area703(including the option of the discovered nearby device) shown inFIG.7C, refer to the related descriptions of the area421in the embodiment inFIG.4C. Details are not described herein again. The area705may be used to display a cloud device option, and may be further used to display a cloud user option. The following describes embodiments of the area705in the following cases. When “cloud moment share” is not enabled, as shown inFIG.7A, both an icon715and prompt information717may be displayed in the area705. The icon715may be used to receive an operation of enabling “cloud moment share”. The prompt information717may be used to prompt the user to enable “cloud moment share”. The prompt information717may be text information, for example, “Tap here to enable cloud moment share, to discover a cloud device. Traffic is required”. In addition to the text information, the prompt information717may be further in another form such as a picture. This is not limited in this embodiment. It may be understood that in some other embodiments, in addition to the icon715, the electronic device may further listen to an operation of enabling “cloud moment share” by using an interactive element (IE) in another form. This is not limited in this application. For example, some or all of the prompt information717may also be used to receive the operation of enabling “cloud moment share”. 
For example, some characters “Tap here” in the prompt information717“Tap here to enable cloud moment share” may be used to receive the operation of enabling “cloud moment share”. In some other embodiments, the prompt information717in the user interface may not be displayed on the touchscreen, but may be audio played by using the speaker170A. As shown inFIG.7A, the electronic device may detect an operation (for example, an operation performed by the user on the icon715, such as tapping, heavy pressing, or touching and holding) performed on the icon715, and in response to the operation, the electronic device may enable “cloud moment share”, and update the area705. The updated area705may be shown inFIG.7B. When “cloud moment share” is enabled but the electronic device has not discovered a cloud device, as shown inFIG.7B, both an icon725and prompt information727may be displayed in the area705. The icon725may indicate that “cloud moment share” is enabled. In some embodiments, when the electronic device detects an operation performed on the icon725, in response to the operation, the electronic device may disable “cloud moment share”, and the icon715and the prompt information717shown inFIG.7Amay be displayed in the area705. The prompt information727may be used to prompt the user with a fact that the electronic device is searching for a cloud device. For example, the prompt information727may be text information “Searching for a cloud device . . . ” In addition to the text information, the prompt information727may be further in another form such as a picture. This is not limited in this embodiment. It may be understood that in addition to the interactive elements (the icon725and the prompt information727) shown as an example inFIG.7B, an interactive element in another form may be further used in the area705to indicate that “cloud moment share” is enabled and the user is prompted with a fact that the electronic device is searching for a cloud device. In some embodiments, when the electronic device does not discover a cloud device, the electronic device may not present any content in the area705, that is, the area705is blank. Therefore, this may indicate that the cloud device is not discovered currently. If the electronic device discovers the cloud device after a period of time, the electronic device may update the area705, where a cloud device option and/or a cloud user option may be displayed in the updated area705. For details, refer toFIG.7C. When “cloud moment share” is enabled and the electronic device discovers the cloud device, as shown inFIG.7C, the cloud device option, for example, a cloud printer icon731, and/or the cloud user option, for example, a cloud user icon733, may be displayed in the area705. In other words, the area705may be used to display the option of the cloud device discovered by the electronic device, and may be further used to display the cloud user option. In addition to the device icon (for example, the cloud printer icon731), the cloud device option may be further represented in another form, for example, text information “Cloud printer”. In addition to the user icon, the cloud user option may be further represented in another form, for example, a user name “Lisa” or a phone number “18819198800”. A cloud printer option may be used to listen to an operation used to trigger printing. The operation used to trigger printing may be an operation (for example, a touch operation performed on the cloud printer icon) performed on the cloud printer option.
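The three states that the area705cycles through (disabled, searching, and discovered) can be summarized in a small state model. The following Kotlin sketch is merely illustrative; the class and function names are hypothetical and should not be construed as a limitation.

```kotlin
// Illustrative model of the three states of the area705 described above.
sealed class CloudShareState {
    object Disabled : CloudShareState()                       // FIG.7A: icon715 + prompt information717
    object Searching : CloudShareState()                      // FIG.7B: icon725 + prompt information727
    data class Discovered(val devices: List<String>) : CloudShareState() // FIG.7C: device/user options
}

fun renderArea705(state: CloudShareState): String = when (state) {
    CloudShareState.Disabled ->
        "Tap here to enable cloud moment share, to discover a cloud device. Traffic is required"
    CloudShareState.Searching ->
        "Searching for a cloud device ..."
    is CloudShareState.Discovered ->
        state.devices.joinToString() // e.g. "Canon 1 (KingKey Tower 2F), Lisa"
}
```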
For how the electronic device processes the detected operation used to trigger printing, refer to the following related descriptions in a related user interface used to select a printer for printing. In some embodiments, as shown inFIG.7C, location information corresponding to the option of the discovered cloud device may be further displayed in the area705. In this way, it is convenient for the user to know a location of a cloud printer corresponding to the cloud device option. For example, location information corresponding to a printer “Canon 1” is “KingKey Tower 2F”. For another example, location information corresponding to “Canon 2” is “50 meters nearby”. The examples are merely some examples provided in this application, and should not be construed as a limitation. There may be different examples in actual application. In some embodiments, as shown inFIG.7C, operation prompt information corresponding to the cloud device option may be further displayed in the area705. The operation prompt information may be used to prompt the user with an operation that can be used to trigger the electronic device to share data with the discovered cloud first device, or trigger the discovered cloud first device to perform corresponding processing on shared data, for example, printing, projection, or screen mirroring. For an embodiment of the operation prompt information, refer to the related content in the embodiment inFIG.4C. Details are not described herein again. In some embodiments, a page turning arrow may be further displayed in the area705. The user may switch, by using the page turning arrow, the cloud device option displayed in the area705, so that more cloud device options can be browsed. In addition to the page turning arrow, another interactive element may also be used by the user to switch the cloud device option displayed in the area705. In some embodiments, the user may alternatively switch, in the area705by performing a leftward or rightward swipe gesture, the option of the cloud device discovered by the electronic device. In some embodiments, one or more cloud printing service options, for example, an icon of “Huawei cloud printing” and an icon of “HP cloud printing”, may be further displayed in the area705. The electronic device may refresh the area705in response to an operation (for example, a touch operation performed on the icon) performed on the cloud printing service option. An option, for example, an icon, of each of one or more cloud printers provided by a cloud printing service (for example, “HUAWEI cloud printing”) selected by the user may be displayed in the refreshed area705. In some embodiments, the one or more cloud printers may be printers that are closest to the electronic device and that are provided by the cloud printing service selected by the user. When neither “local moment share” nor “cloud moment share” is enabled, in some embodiments, the electronic device may detect both an operation (for example, a touch operation performed on the icon711) used to enable “local moment share” and an operation (for example, a touch operation on the icon715) used to enable “cloud moment share”. The electronic device may enable “local moment share” and “cloud moment share” in response to the two detected operations. After enabling “local moment share”, the electronic device may discover a nearby device such as a nearby printer or a nearby projector, and may display a nearby device option in the area703.
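As a hedged sketch of one of the discovery paths named above, the following Kotlin code discovers printers on a Wi-Fi LAN via DNS-SD using Android's NsdManager; “_ipp._tcp” is the conventional service type for IPP printers. Bluetooth and Wi-Fi P2P discovery would instead use BluetoothAdapter.startDiscovery() and WifiP2pManager.discoverPeers(). This is merely an illustrative example under those assumptions and should not be construed as a limitation.

```kotlin
import android.content.Context
import android.net.nsd.NsdManager
import android.net.nsd.NsdServiceInfo

// Discover IPP printers advertised over the local Wi-Fi LAN via DNS-SD.
fun discoverLanPrinters(context: Context, onFound: (NsdServiceInfo) -> Unit): NsdManager.DiscoveryListener {
    val nsdManager = context.getSystemService(Context.NSD_SERVICE) as NsdManager
    val listener = object : NsdManager.DiscoveryListener {
        override fun onDiscoveryStarted(serviceType: String) {}
        override fun onDiscoveryStopped(serviceType: String) {}
        override fun onServiceFound(serviceInfo: NsdServiceInfo) = onFound(serviceInfo)
        override fun onServiceLost(serviceInfo: NsdServiceInfo) {}
        override fun onStartDiscoveryFailed(serviceType: String, errorCode: Int) {}
        override fun onStopDiscoveryFailed(serviceType: String, errorCode: Int) {}
    }
    nsdManager.discoverServices("_ipp._tcp.", NsdManager.PROTOCOL_DNS_SD, listener)
    return listener // keep the listener so discovery can later be stopped via stopServiceDiscovery()
}
```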
After enabling “cloud moment share”, the electronic device may discover a cloud device such as a cloud printer or a cloud projector, and may display a cloud device option in the area705. In addition to the area701, the area703, the area705, and the area707, the user interface71may further include a title bar. Both an interactive element used by the user to cancel picture sharing and indication information may be displayed in the title bar, and the indication information may be used to indicate a quantity of pictures selected by the user. For details, refer to the user interface41described in the embodiments inFIG.4AtoFIG.4C. Details are not described herein again. In addition to page layouts shown inFIG.7AtoFIG.7C, a page layout of the “moment share interface” may be further presented in another form. This is not limited in this application. It can be learned that the area703and the area705inFIG.7Amay enable the user to separately enable “local moment share” and “cloud moment share”. The user may enable only “local moment share” or “cloud moment share”, or may enable both “local moment share” and “cloud moment share”. In some embodiments, the area703and the area705inFIG.7Amay alternatively be implemented as one area, for example, may be the area621shown inFIG.6AorFIG.6B. To be specific, “local moment share” and “cloud moment share” may not be separately enabled. The user may enable “local moment share” and “cloud moment share” by performing one operation (for example, a tap operation). For details, refer toFIG.6AandFIG.6B. In the “moment share interface” shown inFIG.7AtoFIG.7C, an area (for example, the area701) used to display one or more pictures may be referred to as a first area, an area (for example, the area707) used to display a service option (for example, a WeChat icon or a Mailbox icon) may be referred to as a second area, an area (for example, the area703) used to display a nearby user option and a nearby device option may be referred to as a third area, and an area (for example, the area705) used to display a cloud device option may be referred to as a fourth area. An interactive element (for example, the icon711) that is displayed in the third area and that is used to enable “local moment share” may be referred to as a second interactive element. An interactive element (for example, the icon715) that is displayed in the fourth area and that is used to enable “cloud moment share” may be referred to as a third interactive element. Related User Interface Used to Select a Printer for Printing As shown inFIG.7C, the electronic device may detect, in the area703, an operation (for example, a touch operation performed by the user on the nearby printer icon) performed on the nearby printer option, or may detect, in the area705, an operation (for example, a touch operation performed by the user on the cloud printer icon) performed on the cloud printer option. The printer option may be used to listen to an operation used to trigger printing. For a manner in which the electronic device responds to the operation detected in the area703or the operation detected in the area705, refer to the related descriptions inFIG.4Dand the related embodiments, or refer to the related descriptions inFIG.5DtoFIG.5Fand the related embodiments. Details are not described herein again.
According to the UI embodiments shown as the examples inFIG.7AtoFIG.7C, selecting a plurality of printers (for example, a nearby printer and a cloud printer that are discovered by the electronic device) for printing at a time may also be supported. For an embodiment of selecting the plurality of printers for printing at a time, refer to the related descriptions in the UI embodiments shown as the examples inFIG.4AtoFIG.4H. Details are not described herein again. A difference lies in that in the UI embodiments shown as the examples inFIG.7AtoFIG.7C, the nearby printer option and the cloud printer option may be displayed in different areas in the “moment share interface”. User Interface Used to Feed Back a Print Status After triggering the printer selected by the user to print the picture selected by the user, the electronic device may display the user interface used to feed back the print status of the selected picture (namely, the picture selected by the user). If the printer selected by the user is a nearby printer, the user interface may be shown in one or more ofFIG.5GtoFIG.5J. For details, refer to the related descriptions in the embodiments inFIG.5GtoFIG.5J. Details are not described herein again. If the printer selected by the user is a cloud printer, the user interface may be shown as an example in one or more ofFIG.6EtoFIG.6H. For details, refer to the related descriptions in the embodiments inFIG.6EtoFIG.6H. Details are not described herein again. It may be understood that a difference between the UI embodiments shown as the examples inFIG.7AtoFIG.7Cand the UI embodiments shown as the examples inFIG.6AtoFIG.6Jlies only in that in the UI embodiments shown as the examples inFIG.7AtoFIG.7C, the nearby device and the cloud device that are discovered by the electronic device are displayed in different areas in the “moment share interface”. For content that is not mentioned in the UI embodiments shown as the examples inFIG.7AtoFIG.7C, refer to the UI embodiments shown as the examples inFIG.6AtoFIG.6J. Details are not described herein again. According to the UI embodiments shown as the examples inFIG.7AtoFIG.7C, when identifying the scenario in which the user shares the picture, the electronic device may automatically discover the nearby printer and the cloud printer, and display the nearby printer option and the cloud printer option in different areas, so that a process in which the user selects the nearby printer or the cloud printer for printing is clearer and more intuitive. Related Extensions of the Foregoing UI Embodiments Extension 1: Related Extension of a Notification Window Used to Prompt the User with a Print Status As described in the embodiments shown as the examples inFIG.4AtoFIG.4Hto the embodiments shown as the examples inFIG.7AtoFIG.7C, the notification window may be displayed in a home screen (for example, may be the user interface21). The notification window may be the notification window471shown in one or more ofFIG.4EtoFIG.4H, or may be the notification window571shown in one or more ofFIG.5GtoFIG.5J, or may be the notification window671shown as an example in one or more ofFIG.6EtoFIG.6J. The Notification Window May be Further Displayed in Another User Interface. In some embodiments, as shown in an example in one or more ofFIG.8AtoFIG.8D, the electronic device may display the notification window in a user interface displayed when the electronic device automatically discovers a device. In this way, the user can continue to stay in the user interface and select a printer to print a picture.
The user interface may be the user interface41shown as an example in one or more ofFIG.4AtoFIG.4C, or may be the user interface51shown as an example in one or more ofFIG.5AtoFIG.5C, or may be the user interface61shown as an example in one or more ofFIG.6AtoFIG.6D-1andFIG.6D-2, or may be the user interface71shown as an example in one or more ofFIG.7AtoFIG.7C. In some embodiments, as shown in an example in one or more ofFIG.9AtoFIG.9D, the electronic device may display the notification window in a user interface used by the user to select a picture for sharing. In this way, the user may return to the user interface to perform another picture-related operation. The user interface may be the user interface31shown as an example inFIG.3AorFIG.3B. In addition, the user interface may alternatively be a user interface provided by another application such as File browser or a picture beautification application. The user interface may alternatively be a user interface that is provided by a cloud server and that is used by the user to browse a picture. In some embodiments, as shown in an example in one or more ofFIG.10AtoFIG.10D, the electronic device may display the notification window on a lock screen.FIG.10AtoFIG.10Dmerely show examples of lock screens, and should not constitute a limitation on the lock screen. In some other embodiments, the electronic device may further display the notification window in a screen-off state. Herein, the screen-off state is a state in which the electronic device powers off a display screen. In this way, even in a screen-locked state or the screen-off state, the user can learn of a print status of a picture selected by the user. In some embodiments, as shown in an example in one or more ofFIG.11A-1andFIG.11A-2toFIG.11D-1andFIG.11D-2, when detecting an operation (for example, a downward swipe gesture performed on a status bar) performed on the status bar, in response to the operation, the electronic device may display both the window261shown inFIG.2B-2and a notification window. In some embodiments, prompt information used to indicate a print status may be displayed in the notification window. For example, as shown inFIG.11A-1andFIG.11A-2toFIG.11D-1andFIG.11D-2, the prompt information may be text information “Print task is queuing . . . ”, “Printing . . . ”, “Printing is completed”, “Printing fails”, or the like. For details, refer to the related descriptions of the notification window471in the embodiments inFIG.4EtoFIG.4H. Details are not described herein again. In some other embodiments, if a printer selected by the user is a cloud printer, prompt information used to prompt the user with a location of the cloud printer may be further displayed in the notification window. For details, refer to the related descriptions of the prompt information673in the notification window671in the embodiments inFIG.6EtoFIG.6H. Details are not described herein again. In some other embodiments, if a printer selected by the user is a cloud printer, a control may be further displayed in the notification window. The electronic device may display, in response to a detected operation performed on the control, a user interface used to navigate a user's way to a location of the cloud printer. For an embodiment of the control, refer to the related descriptions of the control675in the notification window671in the embodiments inFIG.6EtoFIG.6H. Details are not described herein again. 
Related Extension of Display Content in the Notification Window In some embodiments, prompt information used to prompt the user with a print progress may be displayed in the notification window. In other words, the second print state mentioned in the foregoing UI embodiments may be further refined to the print progress. For example, as shown in an example inFIG.12, the prompt information may be text information “JIAPUWEI TH880 is printing 2018020335.jpg”. The example is merely an embodiment provided in this application. In addition to a name of a picture that is being printed, the prompt information may be further used to prompt the user with a picture that is being printed, a page (applicable to file printing) that is being printed, a percentage of current printing, or the like. In this way, it may be convenient for the user to learn of the print progress. In some embodiments, if a fee needs to be paid for the printer selected by the user, prompt information used to prompt the user with a print fee may be further displayed in the notification window. For example, as shown in an example inFIG.13, the prompt information may be text information “A total of ¥12.00 is consumed this time”. The embodiment shown as an example inFIG.13is applicable to a scenario in which a print fee is automatically paid. The scenario in which the print fee is automatically paid may mean that a print service provider may provide a recharge service such as “Personal wallet”, and an account-recharged user may perform automatic payment each time the user performs print consumption. In this way, the user does not need to enter a payment password each time. In other words, for the scenario in which the print fee is automatically paid, the related user interfaces used by the user to pay the print fee that are shown as examples inFIG.5EandFIG.5Fmay not be necessary. In this way, an operation can be simplified, and user experience can be improved. In some embodiments, as shown in an example inFIG.14AandFIG.14B, if a printer selected by the user is a printer near the electronic device, a control1401may be further displayed in the notification window. The electronic device may trigger, in response to a detected operation (for example, a touch operation performed on the control1401) performed on the control1401, the printer selected by the user to make a sound, so that the user can find a location of the printer based on the sound made by the printer. Therefore, it is convenient for the user to retrieve printed paper. In some embodiments, operation prompt information, for example, text information “Tap to make a sound for positioning”, corresponding to the control1401may be further displayed in the notification window. In other words, the operation prompt information may be used to prompt the user to trigger, through an operation performed on the control1401, the printer to make a sound.FIG.14AandFIG.14Bmerely show an example of an embodiment provided in this application. The control1401may alternatively be presented in another interface representation form. This is not limited in this application. For example, characters “Tap here” in the text information “Tap to make a sound for positioning” may be used to listen to a tap operation performed by the user. In other words, the control1401may also be in an interface representation form of the characters “Tap here”.
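For the print-progress prompt described above (for example, the text information shown inFIG.12), the following Kotlin sketch posts an Android notification with a progress bar. The channel identifier, notification ID, and icon are assumptions; the NotificationCompat APIs are standard. The sketch is merely illustrative and should not be construed as a limitation.

```kotlin
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

// Hypothetical channel and ID; the channel is assumed to be created elsewhere,
// and POST_NOTIFICATIONS permission may be required on newer Android versions.
const val CHANNEL_ID = "print_status"
const val PRINT_NOTIFICATION_ID = 1001

fun showPrintProgress(context: Context, printerName: String, fileName: String, percent: Int) {
    val notification = NotificationCompat.Builder(context, CHANNEL_ID)
        .setSmallIcon(android.R.drawable.ic_menu_info_details) // placeholder icon
        .setContentTitle("$printerName is printing $fileName")
        .setContentText("$percent% printed")
        .setProgress(100, percent, /* indeterminate = */ false)
        .setOngoing(true) // keep the prompt visible while the print task runs
        .build()
    NotificationManagerCompat.from(context).notify(PRINT_NOTIFICATION_ID, notification)
}
```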
In some embodiments, as shown in an example inFIG.15AandFIG.15B, a control1501may be further displayed in the notification window, and text information “Tap to retrieve paper” may be displayed on the control1501. In response to a detected operation (for example, a touch operation performed on the control1501) performed on the control1501, the electronic device may display a user interface used by the user to enter a paper retrieval password, and after determining that the paper retrieval password entered by the user is correct, trigger the printer to deliver printed paper of the user. In this way, the printed paper of the user can be prevented from being exposed, and data leakage is avoided. In some other embodiments, the paper retrieval password may alternatively be entered on the printer. The electronic device may display prompt information, to prompt the user to enter the paper retrieval password on the printer. The prompt information may be further used to prompt the user with the paper retrieval password entered on the printer. In some other embodiments, as shown in an example inFIG.16AandFIG.16B, when the electronic device is near a printer (for example, within a distance of 2 meters), prompt information1601and a control1603may be further displayed in the notification window. The prompt information1601may be used to prompt the user with a fact that the user is next to the printer, for example, may be text information “Detect that you are next to a printer”. Text information “Tap to retrieve paper” may be displayed on the control1603. The electronic device may trigger, in response to a detected operation (for example, a touch operation performed on the control1603) performed on the control1603, the printer to deliver printed paper of the user. In this way, the printed paper of the user can be prevented from being exposed, and data leakage is avoided. Prompt information1605may be further displayed in the notification window, and the prompt information1605may be used to prompt the user that the printer has delivered the printed paper of the user, for example, may be text information “Paper is delivered. Please retrieve it in time”. In this way, the user can be reminded to retrieve the paper in time to avoid data leakage. With reference to the embodiment inFIG.15AandFIG.15Bor the embodiment inFIG.16AandFIG.16B, the printer may be provided with a paper retrieval apparatus. The paper retrieval apparatus may store the printed paper, and may deliver the paper according to an instruction of the printer.FIG.15AandFIG.15Bmerely show an example of an embodiment provided in this application. The user interface used by the user to enter the paper retrieval password may alternatively be presented in another interface representation form. This is not limited in this application. The control1501may alternatively be presented in another interface representation form. For example, the control1501may be in an interface representation form of an icon indicating paper retrieval. The user may tap the icon to open the user interface used by the user to enter the paper retrieval password.FIG.16AandFIG.16Balso merely show an example of an embodiment provided in this application. The prompt information1601and the control1603each may alternatively be presented in another interface representation form, and should not be construed as a limitation. 
A Manner Used to Prompt the User with a State of the Printer Detected by the Electronic Device (for Example, the Printer is Busy or Consumables are Used Up) As shown in an example inFIG.17A, the electronic device may display a current state of the printer in the “moment share interface” (namely, a user interface1702). For example, the printer is busy or the consumables are used up. The “moment share interface” may be the user interface41shown as an example inFIG.4C, or may be the user interface51shown as an example inFIG.5C, or may be the user interface61shown as an example inFIG.6CorFIG.6D-1andFIG.6D-2, or may be the user interface71shown as an example inFIG.7C. An area1704may be the area421in the user interface41, or may be the area521in the user interface51, or may be the area621in the user interface61, or may be the area703or the area705in the user interface71. In some embodiments, if the printer (a nearby printer or a cloud printer) discovered by the electronic device is busy, the electronic device may display, in the area1704in the user interface1702shown as example inFIG.17A, indication information used to indicate that the printer is busy. Herein, the area1704in the user interface1702may be used to display device options/a device option corresponding to a nearby first device and/or a cloud first device discovered by the electronic device. For example, as shown in an example inFIG.17A, the indication information may be a red circular indicator1703displayed in the upper right corner of a printer icon1701, which indicates that the printer is busy. For another example, as shown in an example inFIG.17A, the indication information may alternatively be text information “Printer is busy” in red font displayed below a printer icon1701, which indicates that the printer is busy. These examples are merely some embodiments provided in this application, and may be different in actual application. For example, a display state of the printer icon1701may alternatively be set to indicate that the printer is busy. For example, the printer icon is presented in red, or the printer icon is presented in an animation effect similar to a heartbeat, and should not be construed as a limitation. In some embodiments, if consumables of the printer (a nearby printer or a cloud printer) discovered by the electronic device are used up, for example, the printer is out of paper or the printer is out of ink, the electronic device may display indication information used to indicate that the consumables of the printer are used up. For example, as shown in an example inFIG.17B-1andFIG.17B-2, the indication information may be an indicator1707displayed in the upper right corner of a printer icon1701, which indicates that consumables of the printer are used up. For another example, as shown in an example inFIG.17B-1andFIG.17B-2, the indication information may alternatively be text information “Printer is out of ink” in red font displayed below a printer icon1701, which indicates that the printer is out of ink. These examples are merely some embodiments provided in this application, and may be different in actual application. For example, a display state of the printer icon1701may alternatively be set to indicate that the consumables of the printer are used up. This should not be construed as a limitation. 
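One minimal way to model the state indicators described above is a plain mapping from a detected printer state to the indicator text shown next to the printer icon. The following Kotlin sketch uses hypothetical names and example strings taken from the text, and should not be construed as a limitation.

```kotlin
// Illustrative mapping from a detected printer state to indicator text.
enum class PrinterState { IDLE, BUSY, OUT_OF_PAPER, OUT_OF_INK, LOW_BATTERY }

fun indicatorTextFor(state: PrinterState): String? = when (state) {
    PrinterState.IDLE -> null                      // no indicator needed
    PrinterState.BUSY -> "Printer is busy"
    PrinterState.OUT_OF_PAPER -> "Printer is out of paper"
    PrinterState.OUT_OF_INK -> "Printer is out of ink"
    PrinterState.LOW_BATTERY -> "Battery level is low"
}
```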
In some other embodiments, as shown in an example inFIG.17B-1andFIG.17B-2, if the consumables of the printer discovered by the electronic device are used up, in response to a detected operation (for example, a double-tap operation performed by the user on the printer icon) performed on a printer option corresponding to the printer, the electronic device may jump to and display a user interface1711used by the user to purchase the consumables of the printer, where the user interface1711may be an interface of a shopping application (for example, Taobao). In this way, it may be convenient for the user to purchase the consumables of the printer. This is simple and convenient. Herein, the operation needs to be different from the operation (for example, the touch operation performed on the printer icon) that is mentioned in the embodiments shown as examples inFIG.4AtoFIG.4Hto the embodiments shown as examples inFIG.7AtoFIG.7Cand that is performed on the printer option.FIG.17B-1andFIG.17B-2merely show examples of some embodiments provided in this application. The user interface used by the user to purchase the consumables of the printer may alternatively be presented in another interface representation form, and should not be construed as a limitation. In some embodiments, if an exception such as an excessively low battery level or an abnormal temperature occurs on the printer discovered by the electronic device, the electronic device may further display indication information used to indicate the exception. An interface representation form of the indication information is not limited in this application. Another Manner Used to Prompt the User with a Print Status (for Example, a Print Progress or a Print Result) of the Selected Picture In the foregoing embodiments shown as examples inFIG.4AtoFIG.4Hto the embodiments shown as examples inFIG.7AtoFIG.7C, the electronic device may display, in the notification window, prompt information used to prompt the user with the print status of the selected picture, for example, “Print task is queuing”, “JIAPUWEI TH880 is printing . . . ”, “Printing is completed”, or “Printing fails”. In addition to the manner mentioned in the foregoing UI embodiments, in some embodiments, as shown in an example in one or more ofFIG.18AandFIG.18B, the electronic device may also display the prompt information in an area1803in a user interface1801, where the prompt information may be used to prompt the user with the print status of the selected picture. Herein, the area1803in the user interface1801may be used to display device options/a device option corresponding to a nearby first device and/or a cloud first device discovered by the electronic device. The user interface1801may be the user interface41shown as an example inFIG.4C, or may be the user interface51shown as an example inFIG.5C, or may be the user interface61shown as an example inFIG.6CorFIG.6D-1andFIG.6D-2, or may be the user interface71shown as an example inFIG.7C. The area1803may be the area421in the user interface41, or may be the area521in the user interface51, or may be the area621in the user interface61, or may be the area703or the area705in the user interface71. In some embodiments, as shown in an example inFIG.18A, the prompt information may be progress information displayed on a ring progress bar1805around a printer icon. The progress information displayed on the ring progress bar1805may be used to prompt the user with print states, for example, “Print task is queuing . . . ”, “Printing . . . 
”, and “Printing is completed”. In some embodiments, as shown in an example inFIG.18A, the prompt information may alternatively be text information1807displayed below a printer icon. The text information1807may be used to describe print states, for example, “Print task is queuing . . . ”, “Printing . . . ”, “Printing is completed”, and “Printing fails”. In some embodiments, as shown in an example inFIG.18B, only a printer option selected by the user, for example, an icon1809and text information “Yunpeng's Canon TS318 . . . ”, and a print status in which the printer prints the picture selected by the user, for example, a progress bar1811and text information “2018030335.jpg is being printed” may be displayed in the area1803. An Existing Printer Application or Service is Opened in Response to a Detected Operation Used to Trigger Printing. As shown inFIG.19A, an option such as a printer icon1905of a device discovered by the electronic device may be displayed in an area1903in a user interface1901. Herein, the area1903in the user interface1901may be used to display device options/a device option corresponding to a nearby first device and/or a cloud first device discovered by the electronic device. The user interface1901may be the “moment share interface” mentioned in the foregoing content. The user interface1901may be the user interface41shown as an example inFIG.4C, or may be the user interface51shown as an example inFIG.5C, or may be the user interface61shown as an example inFIG.6CorFIG.6D-1andFIG.6D-2, or may be the user interface71shown as an example inFIG.7C. The area1903may be the area421in the user interface41, or may be the area521in the user interface51, or may be the area621in the user interface61, or may be the area703or the area705in the user interface71. In some embodiments, as shown in examples inFIG.19BandFIG.19C, in response to a detected operation (for example, a touch operation performed on the icon1905) performed on the printer option, where the operation may be used to trigger a printer corresponding to the printer option to print a picture selected by the user, the electronic device may open an existing printer application or service (for example, a “Mopria” print service). For example, as shown in examples inFIG.19BandFIG.19C, a user interface1907provided by the “Mopria” print service may be used by the user to connect the electronic device to the printer discovered by the electronic device, for example, “Yunpeng's Canon TS318 . . . ”. The user may connect the electronic device to the printer by tapping a control1911in a window1909. The example is merely used to explain this application, and the user interface provided by the existing printer application or service is not limited in this application. It can be learned from the foregoing UI embodiments that the electronic device may automatically discover the printer when identifying the scenario in which the user shares the picture. If the user expects to print data, the user may select the discovered printer for printing, so that a process of printing the picture by using the electronic device is intuitive and simple for the user. UI Embodiments in which Projection or Screen Mirroring is Performed by Using the Electronic Device in the Scenario in which the User Shares the Picture that are Provided in this Application A user interface displayed when the electronic device automatically discovers a device (for example, a projector or a display) is first described. 
As shown in an example inFIG.20A, a projector option (for example, a projector icon2001) and/or a display option (for example, a display icon2003) may be displayed in the user interface. FIG.20Amerely shows an example of an embodiment of the user interface. The user interface is the “moment share interface” mentioned above, and is displayed when the electronic device detects an operation of selecting a picture for sharing. For an embodiment of the user interface, refer to the user interface that is displayed when the electronic device automatically discovers the nearby device and/or the cloud device and that is mentioned in the foregoing UI embodiments. Details are not described herein again. For screen mirroring or projection, the foregoing manner used to prompt the user with the current state of the printer is also applicable to prompting the user with a current state of the projector or the display. Second, a related user interface used by the user to select a projector for projection or used by the user to select a display for screen mirroring is described. For example, the related user interface used by the user to select the projector for projection may be the same as the related user interface used by the user to select the display for screen mirroring. An example in which the user selects the display for screen mirroring is used below for description. In some embodiments, in the user interface shown as an example inFIG.20A, in response to a detected operation (for example, a touch operation performed on the display icon2003) performed on the display option, where the operation may be used to trigger screen mirroring, the electronic device may trigger a display corresponding to the display option to display a picture selected by the user, and may further display a user interface201shown as an example inFIG.20B. The user interface201may be used by the user to perform screen mirroring control, for example, start content, pause content, stop content, play content in a previous page, play content in a next page, turn up a volume, or turn down a volume. As shown inFIG.20B, the user interface201may include but is not limited to an area2005and an area2013. The area2005may support the user in performing screen mirroring control, for example, starting content, pausing content, stopping content, playing content in a previous page, playing content in a next page, turning up a volume, or turning down a volume. A process in which the display plays the picture selected by the user may be displayed in the area2013, for example, a picture that is being currently played is a picture2009, a next to-be-played picture is a picture2011, and a previously played picture is a picture2007.FIG.20Bmerely shows an example of an embodiment provided in this application, and an embodiment of the user interface used by the user to perform screen mirroring control is not limited in this application. In some other embodiments, in response to a detected operation (for example, a touch operation performed on the display icon2003) performed on the display option, where the operation may be used to trigger screen mirroring, the electronic device may trigger a display (for example, a “TCL display”) corresponding to the display option to display, based on a default display setting, a picture selected by the user. For example, a next picture is switched to for playing every 2 seconds by default.
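The default display setting mentioned above (switching to a next picture every 2 seconds) can be sketched as a simple timed loop. In the following Kotlin sketch, sendToDisplay() is a hypothetical placeholder for whatever transport actually pushes a picture to the mirrored display; the sketch is merely illustrative and should not be construed as a limitation.

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Job
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

// Advance through the selected pictures at the default 2-second interval.
fun startSlideshow(scope: CoroutineScope, pictures: List<String>, sendToDisplay: (String) -> Unit): Job =
    scope.launch {
        for (picture in pictures) {
            sendToDisplay(picture) // placeholder for the actual mirroring transport
            delay(2_000L)          // default: switch to the next picture every 2 seconds
        }
    }
```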
Similar to an embodiment of selecting the printer by the user to print the data such as the picture, in an embodiment, in the “moment share interface”, in response to a detected operation performed on the projector option, where the operation may be used to trigger a projector corresponding to the projector option to project data selected by the user, the electronic device may trigger the projector corresponding to the projector option to project the data selected by the user. In addition, similar to the foregoing embodiments shown as examples inFIG.5AtoFIG.5J, in response to a detected operation used to trigger screen mirroring or projection, the electronic device may further display a user interface used to pay a screen mirroring fee or a projection fee, where the user interface may be similar to the user interface used to pay the print fee that is shown as an example inFIG.5EandFIG.5F. An embodiment of the user interface used to pay the screen mirroring fee or the projection fee is not limited in this application. Third, a user interface used to feed back a screen mirroring status or a projection status of the picture selected by the user is described. For example, the user interface is similar to the user interface used to feed back the print status of the selected picture. For screen mirroring or projection, the prompt information in the notification window may be used to prompt the user with the screen mirroring status or the projection status of the selected picture. For example, the prompt information in the notification window may be text information “Screen mirroring task is queuing”, “TCL display is displaying . . . ”, “Screen mirroring is completed”, “Screen mirroring fails”, or the like. In addition to the text information, the prompt information may be further information in another form such as a picture or an animation. For an embodiment of the user interface used to feed back the screen mirroring status or the projection status of the picture selected by the user, refer to the user interface used to feed back the print status of the selected picture. Details are not described herein again. For screen mirroring or projection, related extensions of the notification window may also be used to prompt the user with the screen mirroring status or the projection status. The foregoing embodiment used to prompt the user with the print status is also applicable to prompting the user with the screen mirroring status or the projection status. It may be understood that for content that is not mentioned in the UI embodiments in which projection or screen mirroring is performed by using the electronic device, refer to the foregoing UI embodiments in which printing is performed by using the electronic device. Details are not described herein again. It can be learned that similar to the embodiment of selecting the printer by the user to print the data such as the picture, in this embodiment, when identifying the scenario in which the user shares the picture, the electronic device may automatically discover the nearby projector or display, and intuitively present, to the user, the option of the nearby projector or display discovered by the electronic device, so that the user taps the nearby projector or display option (for example, the icon) to trigger the nearby projector or display to perform projection or screen mirroring on the picture selected by the user, and user experience is intuitive and simple. 
Other Scenarios in this Application Another Scenario in this Application: A Scenario in which a User Shares a File FIG.21Ashows an example of a user interface211of “File browser” displayed by an electronic device such as a smartphone. “File browser” may support the user in viewing a file stored in the electronic device, or may support the user in browsing a file in a cloud server. “File browser” is a file management application on an electronic device such as a smartphone, and may also be referred to as “File manager”. A name of the application is not limited in this application. As shown inFIG.21A, the user interface211may include a status bar2101, an application title bar2103, and a file area2109. For the status bar2101, refer to the status bar201in the user interface21shown inFIG.2A. Details are not described herein again. The application title bar2103may include a back button2105and a current page indicator2107. The back button2105is an app-level back button, and may be used to return to a logical upper level. The current page indicator2107may be used to indicate a current page, for example, may be text information “File browser”. In addition to the text information, the current page indicator2107may be further an icon. One or more files, for example, a file in a WORD format, a file in a PDF format, and a file in a PPT format, may be displayed in the file area2109. When the electronic device detects an upward swipe operation or a downward swipe operation in the file area2109, the electronic device may update the file displayed in the file area2109, so that the user browses the file. For example, the user may swipe up or down in the file area2109to browse the file. In addition to performing the upward swipe operation or the downward swipe operation, the user may further swipe left or right in the file area2109to browse the file. The user interface211may further include a navigation bar (not shown). For the navigation bar, refer to the navigation bar251in the user interface21shown inFIG.2A. Details are not described herein again. As shown inFIG.21A, the electronic device detects, in the user interface211, an operation of selecting one or more files for sharing. In this case, the electronic device may identify that a current scenario is the scenario in which the user shares the file. The electronic device may display a “moment share interface” in response to the operation detected by the electronic device. A device option (for example, information such as an icon or text information) corresponding to a device such as a printer, a projector, or a display discovered by the electronic device may be displayed in the “moment share interface”. In this way, the user may select, in the “moment share interface”, the printer for printing by performing an operation such as tapping a printer option, or may select, in the “moment share interface”, the projector for projection or the display for screen mirroring by performing an operation such as tapping a projector option or a display option. An embodiment of the operation of selecting the file for sharing is not limited in this application. In other words, the user may select an object such as a file in “File browser” for sharing, and may print the object such as the selected file, or project the object such as the selected file, or perform screen mirroring on the object such as the selected file, or the like. In this application, an operation of sharing the object such as the selected file may be referred to as a first operation.
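On stock Android, the first operation of sharing a file is typically expressed as an ACTION_SEND intent carrying a content URI, and a “moment share”-style interface would be surfaced where the system chooser appears. The following Kotlin sketch is merely illustrative (the MIME type and the source of the URI, for example a FileProvider, are assumptions) and should not be construed as a limitation.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Share a document via ACTION_SEND; the MIME type here is an example for PDF.
fun shareFile(context: Context, fileUri: Uri) {
    val intent = Intent(Intent.ACTION_SEND).apply {
        type = "application/pdf"
        putExtra(Intent.EXTRA_STREAM, fileUri)
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
    }
    // The chooser is where a "moment share"-style interface could surface
    // nearby and cloud device options alongside application options.
    context.startActivity(Intent.createChooser(intent, "Share file"))
}
```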
In addition to the file in “File browser”, the scenario in which the user shares the file may further include that the user shares a file in another application, for example, a file in an application such as an e-book. In addition to the file in the electronic device, the scenario in which the user shares the file may further include that the user shares the file in the cloud server. One or more ofFIG.21BtoFIG.21Dshows an example of a “moment share interface”, namely, a user interface212, in the scenario in which the user shares the file. Same as the “moment share interface” in the foregoing scenario in which the user shares the picture, the user interface212may also include an area, namely, an area2115, used to display one or more applications, and may further include an area, namely, an area2113, used to display a nearby device option and/or a cloud device option. A difference lies in that for the scenario in which the user shares the file, an area2111in the user interface212may be used to display one or more files. For an embodiment of the “moment share interface” in the scenario in which the user shares the file, refer to the “moment share interface” in the foregoing scenario in which the user shares the picture. Details are not described herein again. A page layout of the “moment share interface” in the scenario in which the user shares the file is not limited in this application. Still Another Scenario in this Application: A Scenario in which a User Shares a Web Page FIG.22Ashows an example of a user interface221of “Web browser” displayed by an electronic device such as a smartphone. “Web browser” may support the user in browsing a web page in a cloud server, or may support the user in viewing a web page stored in the electronic device. “Web browser” is a web page browsing application on an electronic device such as a smartphone. A name of the application is not limited in this application. As shown inFIG.22A, the user interface221may include a status bar2201and an area2203. For the status bar2201, refer to the status bar201in the user interface21shown inFIG.2A. Details are not described herein again. A web page may be displayed in the area2203. The user interface221may further include a navigation bar. For the navigation bar, refer to the navigation bar251in the user interface21shown inFIG.2A. Details are not described herein again. As shown inFIG.22A, the electronic device detects, in the user interface221, an operation of selecting the web page for sharing. In this case, the electronic device may identify that a current scenario is the scenario in which the user shares the web page. The electronic device may display a “moment share interface” in response to the operation detected by the electronic device. A device option (for example, information such as an icon or text information) corresponding to a device such as a printer, a projector, or a display discovered by the electronic device may be displayed in the “moment share interface”. In this way, the user may select, in the “moment share interface” by performing an operation such as tapping a printer option, the printer to print the web page, or may select, in the “moment share interface” by performing an operation such as tapping a projector option or a display option, the projector to project the web page or the display to perform screen mirroring on the web page. An embodiment of the operation of selecting the web page for sharing is not limited in this application.
In other words, the user may select an object such as a web page in "Web browser" for sharing, and may print the object such as the selected web page, or project the object such as the selected web page, or perform screen mirroring on the object such as the selected web page, or the like. In this application, an operation of sharing the object such as the selected web page may be referred to as a first operation. One or more ofFIG.22BtoFIG.22Dshows an example of a "moment share interface", namely, a user interface222, in the scenario in which the user shares the web page. Same as the "moment share interface" in the foregoing scenario in which the user shares the picture, the user interface222may also include an area, namely, an area2209, used to display one or more applications, and may further include an area, namely, an area2207, used to display a nearby device option and/or a cloud device option. For an embodiment of the area2207in the "moment share interface" in the scenario in which the user shares the web page, refer to the "moment share interface" in the foregoing scenario in which the user shares the picture. Details are not described herein again. The user interface222shown as an example inFIG.22BtoFIG.22Dand the "moment share interface" in the foregoing scenario in which the user shares the picture are different in terms of a page layout. A page layout of the "moment share interface" in the scenario in which the user shares the web page is not limited in this application.

Yet Another Scenario in this Application: A Scenario in which a User Shares Characters

FIG.23Ashows an example of a user interface231provided by an instant messaging application (for example, WeChat or QQ) on an electronic device such as a smartphone. As shown inFIG.23A, the user interface231may include a status bar2301and an area2303. For the status bar2301, refer to the status bar201in the user interface21shown inFIG.2A. Details are not described herein again. One or more text messages2305may be displayed in the area2303. As shown inFIG.23A, the electronic device detects, in the user interface231, an operation of selecting characters2307for sharing. In this case, the electronic device may identify that a current scenario is the scenario in which the user shares the characters. The electronic device may display a "moment share interface" in response to the operation detected by the electronic device. A device option (for example, information such as an icon or text information) corresponding to a device such as a printer, a projector, or a display discovered by the electronic device may be displayed in the "moment share interface". In this way, the user may select, in the "moment share interface" by performing an operation such as tapping a printer option, the printer to print the characters selected by the user, or may select, in the "moment share interface" by performing an operation such as tapping a projector option or a display option, the projector to project the characters selected by the user or the display to perform screen mirroring on the characters selected by the user. An embodiment of the operation of selecting the characters for sharing is not limited in this application. In other words, the user may select an object such as characters for sharing, and may print the object such as the selected characters, or project the object such as the selected characters, or perform screen mirroring on the object such as the selected characters, or the like.
The characters may be characters in various text display interfaces, for example, characters in a chat window of instant messaging, characters on a web page, or characters in an e-book. In this application, an operation of sharing the object such as the selected characters may be referred to as a first operation. In some embodiments, the electronic device may first convert the characters selected by the user into a file (for example, a WORD file or a PDF file) or a file in a format such as a picture, and then transmit the file to the printer, the projector, or the display selected by the user, so that the printer prints the file, the projector projects the file, or the display displays the file. In some other embodiments, the electronic device may first convert the characters selected by the user into an audio file, and then transmit the audio file to an audio playback device such as a sound box selected by the user, so that the audio playback device plays the audio file. One or more ofFIG.23BtoFIG.23Dshows an example of a “moment share interface”, namely, a user interface232, in the scenario in which the user shares the characters. Same as the “moment share interface” in the foregoing scenario in which the user shares the picture, the user interface232may also include an area, namely, an area2315, used to display one or more applications, and may further include an area, namely, an area2313, used to display a nearby device option and/or a cloud device option. For an embodiment of the area2313in the “moment share interface” in the scenario in which the user shares the characters, refer to the “moment share interface” in the foregoing scenario in which the user shares the picture. Details are not described herein again. FIG.23BtoFIG.23Dmerely show an example of an implementation of the user interface232. A page layout of the “moment share interface” in the scenario in which the user shares the characters is not limited in this application. In addition to the characters displayed in the “instant messaging application”, the scenario in which the user shares the characters may further include that the user shares characters displayed in another application, for example, characters displayed in an application such as an e-book or characters displayed on a web page. It can be learned that the electronic device may automatically discover the device such as the printer, the projector, or the display when identifying a scenario in which the user shares an object such as a picture, a document, a web page, or characters. If the user expects to print data such as a picture, a document, a web page, or characters, the user may select the discovered printer for printing, so that a process of printing the data by using the electronic device is intuitive and simple for the user. Similarly, if the user expects to project data such as a picture, a document, a web page, or characters, the user may select the discovered projector for projection, so that a process of projecting the data by using the electronic device is intuitive and simple for the user. If the user expects to perform screen mirroring on data such as a picture, a document, a web page, or characters, the user may select the discovered display for screen mirroring, so that a process of performing screen mirroring on the data by using the electronic device is intuitive and simple for the user. 
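The conversion described above in this scenario (selected characters into a printable file for a printer, a projector, or a display, or into an audio file for an audio playback device) can be pictured as a small dispatch on the type of the target device. The following Python sketch is purely illustrative; the function names, format labels, and the text-to-speech placeholder are assumptions and not part of this application.

```python
# Illustrative sketch: convert user-selected characters into a format the
# target device can consume before transmission. All names are hypothetical.

def synthesize_speech(text: str) -> bytes:
    # Placeholder for a real text-to-speech engine.
    return text.encode("utf-8")

def convert_characters_for_device(characters: str, device_type: str) -> dict:
    """Return a payload holding the selected characters in a device-friendly format."""
    if device_type in ("printer", "projector", "display"):
        # A printer, projector, or display consumes a document or picture;
        # a simple encoded-text payload stands in for PDF or image rendering here.
        return {"format": "pdf", "data": characters.encode("utf-8")}
    if device_type == "speaker":
        # An audio playback device consumes an audio file produced by text-to-speech.
        return {"format": "audio/wav", "data": synthesize_speech(characters)}
    raise ValueError(f"unsupported device type: {device_type}")

# Example: the same selected characters yield different payloads per device.
print(convert_characters_for_device("hello", "printer")["format"])   # pdf
print(convert_characters_for_device("hello", "speaker")["format"])   # audio/wav
```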
In addition to the application scenarios described above, the application scenario in this application may further include a scenario in which the user shares multimedia data such as audio or a video. In this application, when identifying a scenario in which the user shares data (for example, an audio file, a video file, or a voice message obtained during instant messaging), the electronic device may further automatically discover a nearby multimedia playback device, for example, an audio playback device (for example, a sound box) or a video playback device. If the user expects to play multimedia data such as audio or a video, the user may select, in the "moment share interface", the multimedia playback device discovered by the electronic device to play the audio or the video selected by the user. Therefore, an operation is simple and effective. In other words, the user may select the object such as the audio file, the video file, or the voice message obtained during instant messaging for sharing, and may play the selected object, or project the selected object, or perform screen mirroring on the selected object, or the like. In addition to the scenarios described above, the scenario in this application may further include a scenario in which the user shares food preparation information such as a recipe. In this application, when identifying the scenario in which the user shares the data, the electronic device may further automatically discover a nearby smart home device, for example, a smart cooking device. The smart cooking device may identify the recipe selected by the user, or the electronic device may convert the recipe selected by the user into a data format that can be identified by the smart cooking device. If the user expects to prepare the dishes indicated by the recipe, the user may select, in the "moment share interface", the smart cooking device discovered by the electronic device. Therefore, an operation is simple and effective. It can be learned that in the various scenarios in this application, when identifying a scenario in which the user shares an object such as a picture, a document, a web page, characters, audio, or a video, the electronic device may automatically discover the device such as the printer, the projector, the display, the audio playback device, or the video playback device, and display the "moment share interface", where the nearby device option and/or the cloud device option may be displayed in the "moment share interface". In this way, the user may select the device such as the printer discovered by the electronic device to perform processing such as printing on data in the electronic device or cloud data accessed by the electronic device. Therefore, an operation is simple and intuitive. Different from the foregoing UI embodiments, in the following to-be-described UI embodiments, the user may further first select a device discovered by the electronic device, such as a printer, a projector, a display, or a multimedia playback device, and then select data on which the user needs to perform printing, projection, screen mirroring, playback, or the like. An example in which the user prints the picture by using the electronic device is used below for description. FIG.24Ashows an example of a user interface241of "Gallery" displayed by an electronic device such as a smartphone. Same as the user interface31shown inFIG.3A, one or more pictures may also be displayed in the user interface241.
A control2403may be displayed in a menu2401in the user interface241, and the control2403may be used to listen for an operation of enabling "Moment share". In response to an operation that is detected in the user interface241and that is performed on the control2403, the electronic device may enable "Moment share", and may further display a user interface243shown as an example inFIG.24BorFIG.24C. The operation may be used to trigger enabling of "Moment share". In addition, the operation that is detected in the user interface241and that is used to trigger enabling of "Moment share" may alternatively be an operation in another form, for example, a gesture operation of drawing a circle counterclockwise in the user interface241. This is not limited in this application. The user interface243shown as an example inFIG.24Bmay be displayed by the electronic device when the electronic device does not discover a nearby device or a cloud device. The user interface243shown as an example inFIG.24Cmay be displayed by the electronic device when the electronic device discovers a nearby device or a cloud device. The user interface243merely shows an example of an implementation of the "moment share interface". For an embodiment of the "moment share interface", refer to the user interface that is displayed when the electronic device automatically discovers the nearby device and/or the cloud device and that is mentioned in the foregoing UI embodiments. Details are not described herein again. A difference lies in that in the UI embodiments, in the "moment share interface", the user may first select a printer discovered by the electronic device, and then select a picture that needs to be printed by the user. As shown inFIG.24D, when one or more pictures are selected, in response to an operation (for example, a touch operation performed on a printer icon2427) that is detected in an area2413and that is performed on a printer option, the electronic device may determine a printer corresponding to the printer option as the printer selected by the user. The electronic device may further update operation prompt information below the icon2427from text information "Tap to select" to text information "Tap to print". In some embodiments, when determining the printer selected by the user, the electronic device may further update a picture displayed in an area2411. An updated picture displayed in the area2411may be a picture that the printer selected by the user supports printing, for example, a picture whose format is supported by the printer, as illustrated in the sketch below. When the printer is selected by the user, the electronic device may trigger, in response to the detected operation (for example, the touch operation performed on the printer icon2427) performed on the printer option, the printer to print the one or more selected pictures. In some embodiments, the one or more selected pictures may be set by the electronic device. For example, when detecting an operation used to select a printer, the electronic device sets, to a selected state, all pictures supported by the printer selected by the user. In some other embodiments, the one or more selected pictures may be determined by the user. The user can first tap a printer icon to select a printer, and then select a picture in the area2411. Similarly, in the "moment share interface", the user may alternatively first select a projector discovered by the electronic device, and then select data that needs to be projected by the user.
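One possible way to realize the filtering referenced above is to compare each picture's format against the set of formats the selected printer reported, for example, during device information negotiation. The following is a minimal Python sketch under that assumption; the Picture type and the format names are hypothetical.

```python
# Illustrative sketch of the device-first flow: after the user taps a printer
# option, show (and optionally pre-select) only pictures whose format the
# printer supports. The Picture type and format labels are assumptions.
from dataclasses import dataclass

@dataclass
class Picture:
    name: str
    fmt: str          # e.g. "jpeg", "png", "heic"
    selected: bool = False

def filter_for_printer(pictures: list[Picture], supported: set[str]) -> list[Picture]:
    """Keep pictures the selected printer can print and mark them selected."""
    printable = [p for p in pictures if p.fmt in supported]
    for p in printable:
        p.selected = True   # "set to a selected state", as described above
    return printable

# Example: only the JPEG and PNG pictures survive the filtering.
gallery = [Picture("a.jpg", "jpeg"), Picture("b.heic", "heic"), Picture("c.png", "png")]
print(filter_for_printer(gallery, supported={"jpeg", "png"}))
```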
In the “moment share interface”, the user may alternatively first select a display discovered by the electronic device, and then select data on which the user needs to perform screen mirroring. In the “moment share interface”, the user may alternatively first select a multimedia playback device discovered by the electronic device, and then select data that needs to be played by the user. For an embodiment, refer to the embodiment that is of printing the picture by using the electronic device and that is shown as an example inFIG.24AtoFIG.24D. Details are not described again. In conclusion, in this application, “Gallery” displaying a picture, “File browser” displaying a file, “Web browser” displaying a web page, an application displaying characters, or the like may be referred to as a first application. The user may share a selected object (for example, an object such as a picture, a document, a web page, or characters) in the first application with the printer for printing, with the projector for projection, or with the display for screen mirroring. The selected object may be an object selected by the user. In this application, a user interface that is of the first application and that is used to display an object may be referred to as a first user interface, for example, a user interface that is of “Gallery” and that is used to display a picture, a user interface that is of “File browser” and that is used to display a file, or a user interface that is of “Web browser” and that is used to display a web page. In this application, the “moment share interface” may be referred to as a second user interface. For an embodiment of the second user interface, refer to the foregoing UI embodiments. Details are not described herein again. In this application, in the “moment share interface”, an area used to display one or more objects (for example, a picture, a document, a web page, or characters) may be referred to as a first area, and an area used to display a service option (for example, an application icon such as a WeChat icon, a Mailbox icon, or a Messages icon) may be referred to as a second area. In an embodiment (for example, the embodiments inFIG.4AtoFIG.5C), an area that is in the “moment share interface” and that is used to display a nearby device option and/or a cloud device option may be referred to as a third area. In another embodiment (for example, the embodiments inFIG.7AtoFIG.7C), an area that is in the “moment share interface” and that is used to display a nearby device option may be referred to as a third area, and an area that is in the “moment share interface” and that is used to display a cloud device option may be referred to as a fourth area. For embodiments of the first area, the second area, the third area, and the fourth area, refer to the foregoing UI embodiments. Details are not described herein again. In this application, an operation that is detected in the “moment share interface” and that is of selecting a first device to process (for example, print, project, or display) a selected object may be referred to as a second operation, for example, an operation of selecting a printer to print the selected object, an operation of selecting a projector to project the selected object, or an operation of selecting a display to display the selected object. 
The second operation may be an operation that is detected in the “moment share interface” and that is performed on a device option (for example, a printer option, a projector option, or a display option), for example, a touch operation performed on a device icon. For a form of the second operation, refer to the foregoing UI embodiments. Details are not described herein again. In this application, an operation that is detected in the “moment share interface” and that is of selecting an application or a protocol to share data may be referred to as a third operation, for example, an operation of selecting WeChat to share data or an operation of selecting Mailbox to share data. The third operation may be an operation that is detected in the “moment share interface” and that is performed on a service option, for example, a touch operation performed on an application icon. For a form of the third operation, refer to the foregoing UI embodiments. Details are not described herein again. In this application, an operation that is detected in the “moment share interface” and that is of selecting a user option to share data may be referred to as a fourth operation. The fourth operation may be an operation that is detected in the “moment share interface” and that is performed on a user option, for example, a touch operation performed on a user icon. For a form of the fourth operation, refer to the foregoing UI embodiments. Details are not described herein again. In this application, an operation used to enable “Moment share” may be referred to as a fifth operation. In an embodiment, the fifth operation may be an operation that is detected by the electronic device and that is used to enable “Moment share” in the “moment share interface”, and may be an operation performed on a first interactive element. In another embodiment, the fifth operation may be an operation that is detected by the electronic device and that is performed on the interactive element263in the window261shown inFIG.2B-2. In this application, an operation used to enable “local moment share” may be referred to as a sixth operation. In an embodiment, the sixth operation may be an operation that is detected by the electronic device and that is used to enable “local moment share” in the “moment share interface”, and may be an operation performed on a second interactive element. In another embodiment, the window261shown inFIG.2B-2may include an interactive element (similar to the interactive element263) used to enable “local moment share”, and the sixth operation may be an operation that is detected by the electronic device and that is performed on the interactive element. In this application, an operation used to enable “cloud moment share” may be referred to as a seventh operation. In an embodiment, the seventh operation may be an operation that is detected by the electronic device and that is used to enable “cloud moment share” in the “moment share interface”, and may be an operation performed on a second interactive element. In another embodiment, the window261shown inFIG.2B-2may include an interactive element (similar to the interactive element263) used to enable “cloud moment share”, and the seventh operation may be an operation that is detected by the electronic device and that is performed on the interactive element. 
In this application, "Moment share" may be referred to as a first communications service, and the first communications service is used by the electronic device to discover the first device and the second device by using one or more of Bluetooth, a WLAN, or cellular mobile data. "Local moment share" may be referred to as a second communications service, and the second communications service is used by the electronic device to discover the first device and the second device by using one or more of Bluetooth and a WLAN. "Cloud moment share" may be referred to as a third communications service, and the third communications service is used by the electronic device to discover the first device and the second device by using one or more of a WLAN or a cellular network.

A System Architecture and a Data Sharing Method Provided in this Application are Described in the Following Embodiments.

An Overall Procedure of the Data Sharing Method Provided in this Application is First Described. The Procedure May Include the Following Operations.

Operation 1: The electronic device may display a first user interface, and one or more objects are displayed in the first user interface. For descriptions of the first user interface and the object, refer to the foregoing content. Details are not described herein again.

Operation 2: The electronic device may detect a first operation of sharing a selected object, and in response to the first operation, the electronic device may display a second user interface, and discover a first device and a second device. The second user interface may be used to display one or more user options, one or more device options, and one or more service options; the device option corresponds to the first device discovered by the electronic device, and the user option corresponds to the second device discovered by the electronic device. The device option may include one or more of the following: a printer option, a projector option, and a display option. The first device includes one or more of the following: a printer, a projector, a display, and the like. The second device may include a mobile phone, a tablet computer, a personal computer, or the like. Herein, for descriptions of the first operation and the first device, refer to the foregoing content. Details are not described herein again.

Operation 3: The electronic device may detect a second operation performed on the device option, and the electronic device may trigger, in response to the second operation, the first device corresponding to the device option on which the second operation is performed to process the selected object, where the processing includes one or more of the following: printing, projection, and displaying. Details are as follows: If the device option on which the second operation is performed is the printer option, and the printer option corresponds to the printer discovered by the electronic device, the electronic device triggers the printer corresponding to the printer option to print the selected object. If the device option on which the second operation is performed is the projector option, and the projector option corresponds to the projector discovered by the electronic device, the electronic device triggers the projector corresponding to the projector option to project the selected object.
If the device option on which the second operation is performed is the display option, and the display option corresponds to the display discovered by the electronic device, the electronic device triggers the display corresponding to the display option to display the selected object in a screen mirroring manner. In addition, the device option displayed in the second user interface may further include an audio device option, for example, a sound box option, or an option of another device. The user may further perform other processing on the selected object by using another device option. Therefore, an operation is simple and intuitive. It can be learned that when detecting a scenario in which the user shares an object such as a picture, a document, or a web page, the electronic device may automatically provide the printer option, the projector option, the display option, or the like for the user. If the user expects to print the selected object, the user may select, by using the printer option, the printer discovered by the electronic device for printing. Therefore, an operation is simple and effective. Similarly, a projection process of performing projection by using the electronic device, a screen mirroring process of performing screen mirroring by using the electronic device, and the like are also more intuitive, simple, and effective for the user.

The Following Describes, in Detail by Using an Example in which Printing is Performed by Using the Electronic Device, the Data Sharing Method Provided in this Application.

Method Embodiment in FIG.25B-1and FIG.25B-2

In the method embodiment inFIG.25B-1andFIG.25B-2, "Moment share" may be used to support the user in sharing data with a device near the electronic device. The nearby device may include a nearby first device, for example, a printer, a projector, or a display, or may include a nearby second device. In an embodiment, enabling "Moment share" may be enabling one or more of a WLAN or Bluetooth of the electronic device. After enabling "Moment share", the electronic device may discover the device near the electronic device by using one or more wireless communications technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, and a Wi-Fi LAN. The method embodiment inFIG.25B-1andFIG.25B-2corresponds to the embodiments shown as the examples inFIG.4AtoFIG.4H. For example, the user interface displayed by the electronic device in the method embodiment inFIG.25B-1andFIG.25B-2may be each user interface described in the embodiments shown as the examples inFIG.4AtoFIG.4H.

A Communications System2500for Data Sharing is First Described.

As shown in an example inFIG.25A, the communications system2500may include an electronic device2501, a mobile phone2515, and one or more printers, such as a printer2503, a printer2505, a printer2507, and a printer2509. The electronic device2501may be the electronic device mentioned in the foregoing embodiments. The electronic device2501may be implemented as the electronic device100shown as an example inFIG.1A, and may be a portable electronic device such as a mobile phone or a tablet computer. For example, the electronic device2501may include one or more of a Bluetooth (BT) module and a WLAN module.
The electronic device2501may transmit a signal by using one or more of the Bluetooth (BT) module and the WLAN module to detect or scan a device near the electronic device2501, so that the electronic device2501can discover a nearby device (for example, the printer) by using one or more wireless communications technologies such as Bluetooth or a WLAN, establish a wireless communication connection to the nearby device, and share data with the nearby device (for example, the printer) by using the one or more wireless communications technologies such as Bluetooth or the WLAN. The Bluetooth (BT) module may provide a Bluetooth communication solution including one or more of classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE). The WLAN module may provide a WLAN communication solution including one or more of Wi-Fi direct, a Wi-Fi LAN, or Wi-Fi SoftAP. The printer2503may be a printer with a Bluetooth (BT) module. The printer2503may receive or transmit a wireless signal by using the Bluetooth (BT) module. The Bluetooth (BT) module in the printer2503may provide a Bluetooth communication solution including one or more of classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE). The printer2505may be a printer with a WLAN module. The printer2505may receive or transmit a wireless signal by using the WLAN module. The WLAN module in the printer2505may provide a WLAN communication solution including one or more of Wi-Fi direct, a Wi-Fi LAN, or Wi-Fi SoftAP. The printer2507may be a printer with a Bluetooth (BT) module and a WLAN module. The printer2507may receive or transmit a wireless signal by using one or more of the Bluetooth (BT) module and the WLAN module. The Bluetooth (BT) module may provide a Bluetooth communication solution including one or more of classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE). The WLAN module may provide a WLAN communication solution including one or more of Wi-Fi direct, a Wi-Fi LAN, or Wi-Fi SoftAP. Same as the printer2505, the printer2509may also be a printer with a WLAN module. The printer2509and the electronic device2501may be located in a same local area network (LAN) by accessing a Wi-Fi access point2511. As shown inFIG.25A, the electronic device may discover the printer2503by using one or more Bluetooth communications technologies such as classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE), establish a communication connection to the printer2503, and may share data with the printer2503by using the one or more Bluetooth communications technologies such as classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE). As shown inFIG.25A, the electronic device may discover the nearby printer2505by using one or more WLAN communications technologies such as Wi-Fi direct or Wi-Fi SoftAP, establish a communication connection to the printer2505, and may share data with the printer2505by using the one or more WLAN communications technologies such as Wi-Fi direct or Wi-Fi SoftAP. As shown inFIG.25A, the electronic device may discover the printer2507by using one or more wireless communications technologies such as Bluetooth, Wi-Fi direct, or Wi-Fi SoftAP, establish a communication connection to the printer2507, and may share data with the printer2507by using the one or more wireless communications technologies such as Bluetooth, Wi-Fi direct, or Wi-Fi SoftAP.
As shown inFIG.25A, the electronic device may discover, by using a wireless communications technology, namely, a Wi-Fi LAN, the printer2509that is located in the same local area network (LAN) as the electronic device, and may share data with the printer2509by using the local area network (LAN). In some other embodiments, the communications system2500may further include a cloud server2513, and data such as a picture or a video may be stored in the cloud server2513. The electronic device2501may access the cloud server2513, so that the user can use the electronic device2501to browse the data such as the picture stored in the cloud server2513. It may be understood that a structure shown in an embodiment does not constitute a limitation on the communications system2500. In some other embodiments of this application, the communications system2500may include more or fewer devices than those shown in the figure. For example, the communications system2500may further include a projector with one or more of a Bluetooth (BT) module and a WLAN module, a display with one or more of a Bluetooth (BT) module and a WLAN module, and another device with one or more of a Bluetooth (BT) module and a WLAN module, for example, a sound box, and may further include a mobile phone (for example, the mobile phone2515), a tablet computer, a personal computer, and the like.

Second, Based on the Communications System2500Shown inFIG.25A, the Method Embodiment inFIG.25B-1andFIG.25B-2 is Described in Detail by Using an Example in which Printing is Performed by Using the Electronic Device.

FIG.25B-1andFIG.25B-2show an overall procedure of a data sharing method. As shown inFIG.25B-1andFIG.25B-2, the method may include the following operations.

S2501and S2503: Enable "Moment share" in advance.

S2501: The electronic device may detect an operation used to enable "Moment share".

S2503: The electronic device may enable "Moment share" in response to the detected operation used to enable "Moment share". The operation may be the fifth operation. For details, refer to the related descriptions of the fifth operation in the foregoing content. For example, enabling "Moment share" may be enabling one or more of a WLAN or Bluetooth of the electronic device. After enabling "Moment share", the electronic device may discover a first device near the electronic device and a second device near the electronic device by using one or more wireless communications technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, and a Wi-Fi LAN. In some embodiments, it can be learned with reference to the embodiment inFIG.2B-1andFIG.2B-2that the user may perform a downward swipe operation on the status bar201to open the window261, and may tap the on/off control263of "Moment share" in the window261to enable "Moment share". In other words, the operation mentioned herein that is used to enable "Moment share" may be an operation of tapping the on/off control263of "Moment share" in the window261. For example, before sharing data, the user may trigger the electronic device to enable "Moment share". In some embodiments, S2501and S2503may be optional. Alternatively, after opening the "moment share interface", the user may trigger enabling of "Moment share" in the "moment share interface". For details, refer to S2509and S2511. Enabling "Moment share" in advance may be optional.

S2504: The electronic device displays a first user interface. One or more objects may be displayed in the first user interface.
The object may include a picture, a document, a web page, characters, an audio file, a video file, and the like. For example, the first user interface is a user interface of “Gallery”, and the one or more objects are one or more pictures displayed in the user interface of “Gallery”. For another example, the first user interface may be a user interface of “File browser”, and the one or more objects are one or more files displayed in the user interface of “File browser”. For details, refer to the related descriptions in the foregoing content. Details are not described herein again. S2505: The electronic device may detect an operation of sharing a selected object. The operation is the first operation. For details, refer to the related descriptions of the first operation in the foregoing content. Herein, the selected object may include one or more of the following objects: a selected picture, a selected file, a selected web page, selected characters, a selected audio file, a selected video file, and the like. Herein, for the operation of sharing the selected object, refer to the related descriptions in the foregoing UI embodiments. Details are not described herein again. In some embodiments, the selected object may be stored in the electronic device. In some other embodiments, the selected object may alternatively be stored in a cloud server, for example, the cloud server2513in the communications system2500shown as an example inFIG.25A, and an object such as a picture may be stored in the cloud server2513. The electronic device2501may access the cloud server2513, so that the user can use the electronic device2501to browse the object such as the picture stored in the cloud server2513. S2507: The electronic device may display a “moment share interface” in response to the detected first operation. The “moment share interface” includes a first area, a second area, and a third area, the first area is used to display one or more selected objects, the second area is used to display one or more service options, and the third area is used to display one or more user options and one or more device options. For descriptions of the user option, the device option, and the service option, refer to the related descriptions in the foregoing content. Details are not described herein again. The “moment share interface” is a second user interface. For an embodiment of the “moment share interface”, refer to the related descriptions of the “moment share interface” in the foregoing embodiments shown as the examples inFIG.4AtoFIG.4C. Details are not described herein again. S2509: The electronic device may detect, in the “moment share interface”, an operation used to enable “Moment share”. Herein, the operation used to enable “Moment share” is the fifth operation. For details, refer to the related descriptions of the fifth operation in the foregoing content. In some embodiments, the fifth operation may be an operation performed on a first interactive element. For descriptions of the first interactive element, refer to the related descriptions in the foregoing content. Details are not described herein again. S2511: The electronic device may enable “Moment share” in response to the detected operation used to enable “Moment share”. The operation may be the fifth operation. For details, refer to the related descriptions of the fifth operation in the foregoing content. For example, enabling “Moment share” may be enabling one or more of a WLAN or Bluetooth of the electronic device. 
After enabling "Moment share", the electronic device may discover a first device near the electronic device and a second device near the electronic device by using one or more wireless communications technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, and a Wi-Fi LAN. S2509and S2511may be optional. Alternatively, the user may trigger the electronic device in advance to enable "Moment share". For details, refer to S2501and S2503. For example, when "Moment share" is enabled, the user does not need to retrigger the electronic device to enable "Moment share".

S2513: When "Moment share" is enabled, the electronic device may discover nearby first devices, for example, a printer 1, a printer 2, . . . , and a printer n, where n is a positive integer greater than 1; and the electronic device may further discover a nearby second device, for example, a nearby mobile phone or a nearby tablet computer. Referring to the communications system2500shown as an example inFIG.25A, the printer discovered by the electronic device may be the printer2503, the printer2505, the printer2507, or the printer2509. An embodiment of discovering the printer by the electronic device is described in detail in the following content. Details are not described herein. When the nearby first device and/or the nearby second device are/is discovered, the electronic device may refresh the "moment share interface", and specifically, may refresh the third area in the "moment share interface". One or more of a nearby device option and a nearby user option may be displayed in the refreshed third area. The nearby device option corresponds to the nearby first device discovered by the electronic device through "Moment share", and the nearby user option corresponds to the nearby second device discovered by the electronic device through "Moment share". For example, if the electronic device discovers the printer 1, the printer 2, . . . , and the printer n, corresponding device options, namely, a printer option corresponding to the printer 1, a printer option corresponding to the printer 2, . . . , and a printer option corresponding to the printer n, may be displayed in the refreshed third area. The user can select, by using the printer option, the printer to print the selected object. In some embodiments, a nearby device (for example, a printer) may further feed back a current state of the nearby device to the electronic device. For example, the nearby device is busy or consumables are used up. Correspondingly, the electronic device may display indication information indicating the current state. For details about how the electronic device displays the indication information indicating the current state, refer to the foregoing UI embodiments and related extension parts. Details are not described herein again.

S2515to S2531: Trigger, in response to a detected operation of selecting a nearby printer for printing, the printer selected by the user to print the object selected in the first operation. For example, the operation of selecting the nearby printer for printing may be an operation performed on the printer option. The printer option may be displayed in the third area in the "moment share interface". In some embodiments, the electronic device may provide the following manners of responding to the detected operation (for example, a touch operation performed on a printer icon) performed on the printer option.
Manner 1: The electronic device may first display, in response to the detected operation (for example, the touch operation performed on the printer icon) performed on the printer option, a user interface used by the user to perform a print setting. For details, refer to S2517. For the user interface used by the user to perform the print setting, refer to the user interface43shown inFIG.4D. Details are not described herein again. In response to a detected operation of performing a print setting, the electronic device may determine, as a print setting corresponding to an object such as a picture selected by the user, the print setting (such as a color or a paper size) selected by the user. For details, refer to S2517. Then, the electronic device may trigger the printer to print, based on the print setting selected by the user, the object such as the picture selected by the user. For details, refer to S2521to S2531. When triggering the printer to print the object such as the picture selected by the user, the electronic device may indicate, to the printer, the print setting corresponding to the picture selected by the user, so that the printer can perform printing based on the print setting selected by the user. It can be learned that in Manner 1, when the user triggers printing, the user interface43used by the user to perform the print setting may be provided, so that the user performs the print setting, for example, sets a quantity of to-be-printed copies, a paper size, and a print color.

Manner 2: The electronic device may trigger, in response to the detected operation (for example, the touch operation performed on the printer icon) performed on the printer option, the printer (for example, the printer 1) selected by the user to print, based on a default print setting, a picture selected by the user. For details, refer to S2521to S2531. For example, a default quantity of to-be-printed copies is 1, a default paper size is A4, and a default print color is black and white. It can be learned that in Manner 2, when the user triggers printing, a print service based on the default print setting may be provided, and the user does not need to perform the print setting, so that a quantity of operations can be reduced.

With reference to Manner 1 or Manner 2, the following describes an embodiment in which the electronic device triggers the printer (for example, the printer 1) selected by the user to perform printing. An embodiment may include the following operations.

S2521: The electronic device may establish a communication connection to the printer (for example, the printer 1) selected by the user. For example, if the printer selected by the user is the printer2503in the communications system2500shown as an example inFIG.25A, the electronic device may establish a Bluetooth communication connection to the printer. If the printer selected by the user is the printer2505in the communications system2500shown as an example inFIG.25A, the electronic device may establish a Wi-Fi direct communication connection (for example, a P2P connection), a SoftAP connection, or the like to the printer. If the printer selected by the user is the printer2507in the communications system2500shown as an example inFIG.25A, the electronic device may establish a Bluetooth communication connection, a Wi-Fi direct communication connection (for example, a P2P connection), a SoftAP connection, and/or the like to the printer.
For example, the electronic device may send a connection establishment request to the printer, and the printer may return a connection establishment success response to the electronic device. When establishing the communication connection to the printer, the electronic device may further perform device information negotiation with the printer. The device information negotiation may be mainly used by the electronic device to know a file format supported by the printer, whether the printer is currently busy, and the like. In some embodiments, S2521is optional. If the printer selected by the user is the printer2509in the communications system2500, and the printer2509and the electronic device2501are in a same LAN, the electronic device2501may transmit data to the printer, and does not need to re-establish a communication connection. For example, when the electronic device discovers the printer by using one or more wireless communications technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), and Wi-Fi SoftAP, the electronic device needs to establish the communication connection to the printer after discovering the printer. When the electronic device discovers the printer by using a wireless communications technology, namely, a Wi-Fi LAN, data transmission may be performed because the electronic device and the printer are already in the same LAN. S2523to S2527: The electronic device may send a print request to the printer (for example, the printer 1) selected by the user. The print request may be used to request the printer to print an object such as a picture selected by the user. After receiving the print request sent by the electronic device, the printer may return a print request response to the electronic device. For example, the electronic device may send the print request to the printer by using the established communication connection (for example, a Bluetooth communication connection and/or a Wi-Fi direct communication connection), or send the print request to the printer by using the LAN. For example, the printer may send the print request response to the electronic device by using the established communication connection (for example, a Bluetooth communication connection and/or a Wi-Fi direct communication connection), or send the print request response to the electronic device by using the LAN. In some embodiments, the print request may carry indication information, and the indication information may be used to indicate a print setting corresponding to the picture selected by the user, for example, a quantity of to-be-printed copies, a color, or a paper size. The print setting may be selected and set by the user in the user interface (for example, the user interface43shown as an example inFIG.4D) used to perform the print setting, or may be a default print setting. In some embodiments, referring to S2525, after receiving the print request sent by the electronic device, the printer may perform print preparation. In some embodiments, the print preparation may include but is not limited to the following processing: performing pressurization, ink injection, bubble removal, and the like on a pipeline system of the printer. S2525may be optional. The printer does not need to perform print preparation before performing each print task. S2529: After receiving the print request response returned by the printer, the electronic device may transmit, to the printer, the object such as the picture selected by the user. 
For example, the electronic device may transmit, to the printer by using the established communication connection (for example, a Bluetooth communication connection and/or a Wi-Fi direct communication connection), the object such as the picture selected by the user, or transmit, to the printer by using the LAN, the object such as the picture selected by the user. S2531: After receiving the object such as the picture transmitted by the electronic device, the printer may perform printing. In some embodiments, the printer may perform printing based on the print setting corresponding to the object such as the picture. The print setting corresponding to the object such as the picture may be carried in the print request sent by the electronic device. The print setting may be selected and set by the user in the user interface (for example, the user interface43shown as an example inFIG.4D) used to perform the print setting, or may be a default print setting of the electronic device. In some other embodiments, the printer may print the object such as the picture based on a default printer setting on a printer side. S2533to S2539: Feed back a print status. Referring to S2533to S2539, the printer may feed back, to the electronic device in a data printing process, a print status of the object such as the picture selected by the user (refer to S2533), or may feed back, to the electronic device after data printing ends, a print status of the object such as the picture selected by the user (refer to S2537). After receiving the print status fed back by the printer, the electronic device may display prompt information to prompt the user with the print status. In some embodiments, the print status of the object such as the picture selected by the user may include but is not limited to: a first print state, which may indicate that a print task of the object such as the picture selected by the user is in a print task queue of a printer, and is waiting in the queue to be processed by the printer; a second print state, which may indicate that the printer is printing the object such as the picture selected by the user; a third print state, which may indicate that printing of the object such as the picture selected by the user is complete; and a fourth print state, which may indicate that the printer fails to print the object such as the picture selected by the user. In some embodiments, the electronic device may display, in the notification window471shown as an example in one or more ofFIG.4EtoFIG.4H, the prompt information475indicating the print status. As shown inFIG.4EtoFIG.4H, the prompt information475may be text information “Print task is queuing . . . ”, “Printing . . . ”, “Printing is completed”, “Printing fails”, or the like. For a manner of displaying, in the notification window471, the prompt information475indicating the print status, refer to the related content in the foregoing embodiments shown as examples inFIG.4AtoFIG.4Hand the related extensions of the notification window, or refer to the embodiments inFIG.18AandFIG.18B. Details are not described herein again. In addition to the manner of displaying the prompt information that is provided in the foregoing UI embodiments, the electronic device may further display the prompt information in another manner. This is not limited in this application. The following describes several implementations in which the electronic device discovers the nearby first device. 
Manner 1: The electronic device discovers the nearby first device by using a wireless communications technology, namely, Wi-Fi direct. In some embodiments, the electronic device may broadcast a probe request. After obtaining the probe request through listening, the nearby first device (for example, a device such as a printer) may return a probe response to notify the electronic device of existence of the nearby first device. In some other embodiments, the nearby first device (such as a printer, a projector, or a display) may periodically send a beacon frame. The electronic device may discover the nearby first device by listening to the beacon frame sent by the nearby first device. In other words, the electronic device may actively discover the nearby first device, or may passively discover the nearby first device. Manner 2: The electronic device discovers the nearby first device by using a wireless communications technology, namely, Bluetooth. In some embodiments, a Bluetooth device (for example, a printer, a projector, or a display with a Bluetooth module) near the electronic device may perform Bluetooth broadcast. The electronic device may perform Bluetooth scanning to scan a broadcast frame that is broadcast by the nearby Bluetooth device, to discover the nearby Bluetooth device. Manner 3: The electronic device discovers the device in a same Wi-Fi LAN. In some embodiments, the electronic device may determine an IP address range of the LAN based on an IP address and a subnet mask that are of the electronic device in the LAN, and then may discover the device in the LAN in a unicast polling manner. In addition, the electronic device may further discover the device in the LAN by using a broadcast message or a multicast message in the LAN. In addition to the foregoing several manners in which the electronic device discovers the nearby first device, in actual application, the electronic device may further discover the nearby first device based on a wireless communications technology such as Bluetooth, Wi-Fi direct, or a Wi-Fi LAN in another manner. This is not limited in this application. It can be learned that in the embodiment inFIG.25B-1andFIG.25B-2, if “Moment share” is enabled, when detecting the operation of selecting the object such as the picture for sharing, the electronic device automatically discovers the nearby device, updates the “moment share interface”, and displays the device option in the “moment share interface”. In addition, in some embodiments, when detecting that “Moment share” is enabled, the electronic device may further automatically discover the nearby device without waiting until the operation of selecting the object such as the picture for sharing is detected. When detecting the operation of selecting the object such as the picture for sharing, the electronic device may display the nearby device option in the “moment share interface”, so that the user can more quickly see, in the “moment share interface”, a nearby device discovered by the electronic device, thereby improving efficiency of performing printing, projection, displaying, or the like by the user by using the electronic device. 
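Manner 3 above depends on deriving the LAN's host address range from the electronic device's own IP address and subnet mask before polling hosts one by one. The sketch below shows that derivation with Python's standard ipaddress module; the unicast probe itself is only stubbed, since the probe format is not specified here, and the function names are assumptions.

```python
# Illustrative sketch for Manner 3: derive the LAN address range from the
# device's IP address and subnet mask, then poll each host (probe stubbed).
import ipaddress

def lan_hosts(ip: str, netmask: str):
    """Yield every host address in the local subnet, excluding our own."""
    network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    for host in network.hosts():
        if str(host) != ip:
            yield str(host)

def probe(host: str) -> bool:
    # Placeholder for an actual unicast discovery probe sent to one address.
    return False

def poll_for_devices(ip: str, netmask: str) -> list[str]:
    # A real implementation could also use a broadcast or multicast message
    # in the LAN instead of unicast polling, as the text mentions.
    return [host for host in lan_hosts(ip, netmask) if probe(host)]

# Example: a device at 192.168.1.23/255.255.255.0 would poll the 253 other
# addresses 192.168.1.1 through 192.168.1.254.
print(sum(1 for _ in lan_hosts("192.168.1.23", "255.255.255.0")))  # 253
```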
Similar to the method for printing the data by using the nearby printer discovered by the electronic device that is shown inFIG.25B-1andFIG.25B-2, in a method for projecting data by using a nearby projector discovered by the electronic device, the electronic device may discover the nearby projector in a manner of discovering the nearby printer, and then the electronic device may trigger, in response to an operation that is detected in the "moment share interface" and that is of selecting the projector to project an object such as a picture selected by the user, the projector to perform projection. A difference lies in that to trigger the projector to perform projection, the electronic device sends a projection request instead of the print request to the projector selected by the user. The projector may perform projection based on the projection request. Similar to the method for printing the data by using the nearby printer discovered by the electronic device that is shown inFIG.25B-1andFIG.25B-2, in a method for performing screen mirroring on data by using a nearby display discovered by the electronic device, the electronic device may discover the display near the electronic device in a manner of discovering the nearby printer, and then the electronic device may trigger, in response to an operation that is detected in the "moment share interface" and that is of selecting the display to display an object such as a picture selected by the user, the display to perform displaying. A difference lies in that to trigger the display to perform displaying, the electronic device sends a display request instead of the print request to the display selected by the user. The display may perform displaying based on the display request. A method for playing, by using a nearby multimedia device discovered by the electronic device, a multimedia file selected by the user, and the like may also be similar to the method for printing the data by using the nearby printer discovered by the electronic device that is shown inFIG.25B-1andFIG.25B-2. Details are not described again. In some embodiments, when detecting the operation of sharing the selected object, the electronic device may discover only a nearby device and/or a cloud device that are/is suitable for processing the selected object. An embodiment is described below. In some embodiments, when the operation of sharing the selected object is detected, if the selected object (namely, an object selected by the user) is an object that can be printed, such as a picture, a document, a web page, or characters, in response to the operation, the electronic device may discover a nearby printer and/or a cloud printer, and display the discovered nearby printer and/or the discovered cloud printer in the "moment share interface". Otherwise, the electronic device may not discover a nearby printer and/or a cloud printer, or may discover a nearby printer and/or a cloud printer but does not display, in the "moment share interface", a device option corresponding to the nearby printer and/or a device option corresponding to the cloud printer. In this way, an interface area of the "moment share interface" can be saved, and a problem that processing fails because the user selects an inappropriate printer can also be avoided, thereby avoiding unnecessary troubles for the user, and improving use efficiency of the electronic device.
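The suitability filtering just described can be pictured as checking the selected objects' types against the types each device category can process, and hiding device options that fail the check. The following Python sketch is illustrative only; the type names, category names, and suitability sets are assumptions (the projectable and displayable cases are elaborated in the paragraphs that follow), and the device names besides those mentioned in this application are made up for the example.

```python
# Illustrative sketch: show only device options whose category can process
# every selected object's type. Type and category names are assumptions.

PRINTABLE = {"picture", "document", "web_page", "characters"}
PROJECTABLE = PRINTABLE | {"video"}
DISPLAYABLE = PROJECTABLE
PLAYABLE = {"audio", "video"}

SUITABLE_TYPES_BY_CATEGORY = {
    "printer": PRINTABLE,
    "projector": PROJECTABLE,
    "display": DISPLAYABLE,
    "media_player": PLAYABLE,
}

def visible_device_options(selected_types: set[str],
                           discovered: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Keep only (device_name, category) pairs that can process all selected objects."""
    return [
        (name, category)
        for name, category in discovered
        if selected_types <= SUITABLE_TYPES_BY_CATEGORY.get(category, set())
    ]

# Example: when a picture is selected, the printer and projector options
# remain visible and the media player option is hidden.
discovered = [("JIAPUWEI TH880", "printer"),
              ("EPSON projector", "projector"),
              ("sound box", "media_player")]
print(visible_device_options({"picture"}, discovered))
```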
Objects that cannot be printed may include one or more of the following: an audio file, a video file, an installation package of an application, an intermediate file obtained through software compilation, and the like. In some embodiments, the objects that cannot be printed currently may also be converted by the electronic device into file formats that support printing, for example, the audio file is converted into a document. In this case, the objects may also be printed. In some embodiments, when the operation of sharing the selected object is detected, if the selected object (namely, an object selected by the user) is an object that can be projected, such as a video file, a picture, a document, a web page, or characters, in response to the operation, the electronic device may discover a nearby projector and/or a cloud projector, and display, in the “moment share interface”, a nearby projector option corresponding to the discovered nearby projector and/or a cloud projector option corresponding to the cloud projector. Otherwise, the electronic device may not discover a nearby projector and/or a cloud projector, or may discover a nearby projector and/or a cloud projector but does not display, in the “moment share interface”, a device option corresponding to the nearby projector and/or a device option corresponding to the cloud projector. In this way, an interface area of the “moment share interface” can be saved, and a problem that processing fails because the user selects an inappropriate projector can also be avoided, thereby avoiding unnecessary troubles for the user, and improving use efficiency of the electronic device. Objects that cannot be projected may include one or more of the following: an installation package of an application, an intermediate file obtained through software compilation, and the like. In some embodiments, the objects that cannot be projected currently may also be converted by the electronic device into file formats that support projection, for example, the intermediate file is converted into a file such as a picture or a video. In this case, the objects may also be projected. In some embodiments, when the operation of sharing the selected object is detected, if the selected object (namely, an object selected by the user) is an object that can be displayed, such as a video file, a picture, a document, a web page, or characters, in response to the operation, the electronic device may discover a nearby display and/or a cloud display, and display, in the “moment share interface”, a nearby display option corresponding to the discovered nearby display and/or a cloud display option corresponding to the cloud display. Otherwise, the electronic device may not discover a nearby display and/or a cloud display, or may discover a nearby display and/or a cloud display but does not display, in the “moment share interface”, a device option corresponding to the nearby display and/or a device option corresponding to the cloud display. Similarly, when the operation of sharing the selected object is detected, if the selected object (namely, an object selected by the user) can be played by a media device, the electronic device may discover a nearby media playback device and/or a cloud media playback device that can play the object, and display, in the “moment share interface”, a nearby media playback device option corresponding to the discovered nearby media playback device and/or a cloud media playback device option corresponding to the cloud media playback device.
Otherwise, the electronic device may not discover a nearby media playback device and/or a cloud media playback device, or may discover a nearby media playback device and/or a cloud media playback device but does not display, in the “moment share interface”, a device option corresponding to the nearby media playback device and/or a device option corresponding to the cloud media playback device. The following describes an implementation of selecting a plurality of different types of devices at a time to process selected objects. In some embodiments, the electronic device may detect, in the “moment share interface”, an operation of dragging the selected objects to a plurality of different types of device options, and the electronic device may trigger, in response to the operation, the plurality of different types of devices to respectively process data allocated to the plurality of different types of devices. Data allocated to a device may be data dragged to the device option. For example, as shown inFIG.25C, selected objects that need to be shared by the user include a video2503and a picture2504. In the “moment share interface” (namely, the user interface251), when the electronic device detects an operation of dragging the video2503to an icon2508of “EPSON projector”, and detects an operation of dragging the picture2504to an icon2512of “JIAPUWEI TH880”, in response to the two operations, the electronic device may trigger the projector “EPSON projector” to project the video2503, and may trigger the printer “JIAPUWEI TH880” to print the picture2504. In other words, the user may drag the selected objects to the different types of device options in the “moment share interface”, to allocate the selected objects to the different types of devices for different processing, so that an operation of performing printing, projection, and the like in parallel can be implemented, thereby greatly improving office efficiency of the user. In addition to the selected objects, the user may further drag an unselected object to a device option in the “moment share interface”. In addition to the drag operation, in the “moment share interface”, the operation used to allocate the data to the plurality of different types of devices may be further presented in another form. This is not limited in this application. In some other embodiments, the electronic device may detect, in the “moment share interface”, an operation of selecting a plurality of different types of devices, for example, detect an operation of consecutively tapping a plurality of different types of device options. Herein, the consecutive tapping may be a plurality of tap operations performed in a preset time period (for example, 1 second). The plurality of different types of devices selected in the operation are selected devices. The electronic device may trigger, in response to the operation according to a preset allocation policy, the selected different types of devices to process the selected objects, thereby improving use efficiency and user experience. The preset allocation policy may be randomly allocating the selected objects to a plurality of selected printers for printing, or evenly allocating the selected objects to a plurality of selected printers for printing. The preset allocation policy may alternatively be that each selected printer prints all selected objects. The preset allocation policy is not limited in this application. 
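The preset allocation policies named above (random allocation, even allocation, or every selected printer printing all selected objects) might be sketched as follows; the policy labels and function name are assumptions of the sketch.

```python
import random

def allocate(selected_objects, printers, policy="even"):
    """Distribute the selected objects across the selected printers
    according to one of the preset allocation policies."""
    jobs = {printer: [] for printer in printers}
    if policy == "random":
        for obj in selected_objects:
            jobs[random.choice(printers)].append(obj)
    elif policy == "even":
        for index, obj in enumerate(selected_objects):
            jobs[printers[index % len(printers)]].append(obj)  # round robin
    elif policy == "all":
        for printer in printers:
            jobs[printer] = list(selected_objects)  # each prints everything
    return jobs

# Example: allocate(["picture 1", "picture 2", "picture 3"],
#                   ["printer A", "printer B"], policy="even")
# -> {"printer A": ["picture 1", "picture 3"], "printer B": ["picture 2"]}
```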
In an embodiment, the selected objects that need to be shared by the user may include a plurality of types of data, such as a picture, a video, and audio. In this case, the preset allocation policy may be allocating the selected object to a selected device that can process the selected object. For example, as shown inFIG.25C, selected objects that need to be shared by the user include a video2503and a picture2504. In the “moment share interface” (namely, the user interface251), when the electronic device detects an operation of consecutively tapping an icon2508and an icon2512, in response to the operation, the electronic device may allocate the video2503to a projector “EPSON projector” and trigger the projector “EPSON projector” to project the video2503, and further allocate the picture2504to a printer “JIAPUWEI TH880” and trigger the printer “JIAPUWEI TH880” to print the picture2504. In this way, the user may select different types of discovered devices at a time to process a plurality of objects (including a picture, a video, and the like) of different data types, thereby greatly improving use efficiency and improving user experience. It may be understood that for content that is not mentioned in the method embodiment inFIG.25B-1andFIG.25B-2, reference may be made to the embodiments shown as examples inFIG.4AtoFIG.4Hand the related extensions. Details are not described herein again. According to the method embodiment inFIG.25B-1andFIG.25B-2, the electronic device may automatically discover the device such as the printer, the projector, or the display when identifying a scenario in which the user shares an object such as a picture, a document, a web page, or characters. If the user expects to print data such as a picture, a document, a web page, or characters, the user may select the discovered printer for printing, so that a process of printing the data by using the electronic device is intuitive and simple for the user. Similarly, if the user expects to project data such as a picture, a document, a web page, or characters, the user may select the discovered projector for projection, so that a process of projecting the data by using the electronic device is intuitive and simple for the user. If the user expects to perform screen mirroring on data such as a picture, a document, a web page, or characters, the user may select the discovered display for screen mirroring, so that a process of performing screen mirroring on the data by using the electronic device is intuitive and simple for the user. Method Embodiment in FIG.26B-1and FIG.26B-2 In the method embodiment inFIG.26B-1andFIG.26B-2, “Moment share” may be used to support the user in sharing data with a device near the electronic device. The nearby device may include a nearby first device, for example, a printer, a projector, or a display, or may include a nearby second device. In an embodiment, enabling “Moment share” may be enabling one or more of a WLAN or Bluetooth of the electronic device. After enabling “Moment share”, the electronic device may discover the device near the electronic device by using one or more wireless communications technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, and a Wi-Fi LAN. The method embodiment inFIG.26B-1andFIG.26B-2corresponds to the embodiments shown as the examples inFIG.5AtoFIG.5J.
For example, the user interface displayed by the electronic device in the method embodiment inFIG.26B-1andFIG.26B-2may be each user interface described in the embodiments shown as the examples inFIG.5AtoFIG.5J. A Communications System2600for Data Sharing is First Described. As shown in an example inFIG.26A, the communications system2600may include a server2615, a server2617, an electronic device2601, a mobile phone2619, and one or more printers, such as a printer2603, a printer2605, a printer2607, and a printer2609. The server2615may be configured to control a printer connected to the server2615to provide a print service, and the server2617may be configured to provide a payment settlement service between the user and a print service provider. The server2615may be a server of the print service provider. The server2617may be a payment server of the print service provider, or may be a payment server of a third-party payment service provider. The server2615and the server2617may be connected by using a LAN or WAN communications technology, so that the server2615initiates payment to the server2617, and the server2617sends a payment result to the server2615. For details, refer to the related content in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. The electronic device2601may be the electronic device mentioned in the foregoing UI embodiments. For details, refer to the electronic device2501in the communications system2500shown inFIG.25A. Details are not described herein again. The electronic device may be connected to the server2617by using a cellular mobile communications technology or a WAN communications technology, to facilitate payment interaction with the server2617. For details, refer to the related content in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. The printer2603may be a printer with a Bluetooth (BT) module. For details, refer to the printer2503in the communications system2500shown inFIG.25A. Details are not described herein again. The printer2605may be a printer with a WLAN module. For details, refer to the printer2505in the communications system2500shown inFIG.25A. Details are not described herein again. As shown inFIG.26A, the printer2605may be a printer for which a fee needs to be paid. The printer2605may further include a cellular mobile communications module (for example, a 3G/LTE/5G communications module). The printer2605may be connected to the server2615and the server2617by using one or more of a cellular mobile communications technology or a WAN communications technology. The printer2607may be a printer with a Bluetooth (BT) module and a WLAN module. For details, refer to the printer2507in the communications system2500shown inFIG.25A. Details are not described herein again. As shown inFIG.26A, the printer2607may also be a printer for which a fee needs to be paid. The printer2607may be connected to the server2615and the server2617by using one or more of a WAN communications technology and Bluetooth. Same as the printer2605, the printer2609may also be a printer with a WLAN module. The printer2609and the electronic device2601may be located in a same local area network (LAN) by accessing a Wi-Fi access point2611. 
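The module inventory above determines which technologies can discover each printer, as the next paragraphs describe. A sketch follows; the mapping of module to discovery technology is an assumption consistent with the text, and the identifiers are illustrative.

```python
# Module inventory of the printers in communications system 2600,
# taken from the description above.
PRINTER_MODULES = {
    "printer2603": {"bt"},
    "printer2605": {"wlan"},
    "printer2607": {"bt", "wlan"},
    "printer2609": {"wlan", "same_lan"},  # reachable via access point 2611
}

# Assumed correspondence between a module and the usable technologies.
DISCOVERY_BY_MODULE = {
    "bt": {"classic_bluetooth", "ble"},
    "wlan": {"wifi_direct", "wifi_softap"},
    "same_lan": {"wifi_lan"},
}

def discovery_technologies(printer):
    """Technologies the electronic device could use to discover a printer."""
    technologies = set()
    for module in PRINTER_MODULES[printer]:
        technologies |= DISCOVERY_BY_MODULE[module]
    return technologies

# Example: discovery_technologies("printer2607")
# -> {"classic_bluetooth", "ble", "wifi_direct", "wifi_softap"}
```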
As shown inFIG.26A, the electronic device2601may discover the printer2603by using one or more Bluetooth communications technologies such as classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE), establish a communication connection to the printer2603, and may share data with the printer2603by using the one or more Bluetooth communications technologies such as classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE). As shown inFIG.26A, the electronic device2601may discover the nearby printer2605by using one or more WLAN communications technologies such as Wi-Fi direct or Wi-Fi SoftAP, establish a communication connection to the printer2605, and may share data with the printer2605by using the one or more WLAN communications technologies such as Wi-Fi direct or Wi-Fi SoftAP. As shown inFIG.26A, the electronic device2601may discover the printer2607by using one or more wireless communications technologies such as Bluetooth, Wi-Fi direct, or Wi-Fi SoftAP, establish a communication connection to the printer2607, and may share data with the printer2607by using the one or more wireless communications technologies such as Bluetooth, Wi-Fi direct, or Wi-Fi SoftAP. As shown inFIG.26A, the electronic device2601may discover, by using a wireless communications technology, namely, a Wi-Fi LAN, the printer2609that is located in the same local area network (LAN) as the electronic device, and may share data with the printer2609by using the local area network (LAN). In some embodiments, the communications system2600may further include a cloud server2613, and data such as a picture may be stored in the cloud server2613. The electronic device2601may access the cloud server2613, so that the user can use the electronic device2601to browse the data such as the picture stored in the cloud server2613. It may be understood that a structure shown in an embodiment does not constitute a limitation on the communications system2600. In some other embodiments of this application, the communications system2600may include more or fewer devices than those shown in the figure. For example, in addition to the devices shown inFIG.26A, the communications system2600may further include a projector with one or more of a Bluetooth (BT) module and a WLAN module, a display with one or more of a Bluetooth (BT) module and a WLAN module, and another device with one or more of a Bluetooth (BT) module and a WLAN module, for example, a sound box, and may further include a mobile phone (for example, the mobile phone2619), a tablet computer, a personal computer, and the like. Second, Based on the Communications System2600Shown inFIG.26A, the Method Embodiment inFIG.26B-1andFIG.26B-2 is Described in Detail by Using an Example in which Printing is Performed by Using the Electronic Device. FIG.26B-1andFIG.26B-2show an overall procedure of another data sharing method. As shown inFIG.26B-1andFIG.26B-2, the method may include the following operations. S2601and S2603: Enable “Moment share” in advance. For details, refer to S2501and S2503in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. S2604: The electronic device displays a first user interface. For details, refer to S2504in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. S2605: The electronic device may detect an operation of sharing a selected object. For details, refer to S2505in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again.
S2607: The electronic device may display a “moment share interface” in response to the detected first operation. For details, refer to S2507in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. S2609: The electronic device may detect, in the “moment share interface”, an operation used to enable “Moment share”. For details, refer to S2509in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. S2611: The electronic device may enable “Moment share” in response to the detected operation used to enable “Moment share”. For details, refer to S2511in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. S2613: When “Moment share” is enabled, the electronic device may discover nearby first devices, for example, a printer 1, a printer 2, . . . , and a printer n, where n is a positive integer and n≥2; and the electronic device may further discover a nearby second device, for example, a nearby mobile phone or a nearby tablet computer. For details, refer to S2513in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. S2615to S2631: Trigger, in response to a detected operation of selecting a nearby printer for printing, the printer selected by the user to print the object selected in the first operation. For example, the operation of selecting the nearby printer for printing may be an operation performed on a printer option. The printer option may be displayed in a third area in the “moment share interface”. Different from S2515to S2531in the method embodiment inFIG.25B-1andFIG.25B-2, if a fee needs to be paid for the printer (for example, the printer 1) selected by the user, as shown inFIG.26B-1andFIG.26B-2, the printer may wait for an instruction from the server2615in the communications system2600shown as an example inFIG.26Abefore performing printing. After determining that the user successfully pays the print fee, the server2615may send a print instruction to the printer. A process may include but is not limited to the following operations. S2623: The electronic device may send a print request to the printer (for example, the printer 1) selected by the user. S2623-1: After receiving the print request, the printer may report the print request of the user to the server2615, where the print request may carry identification information of the user and indication information indicating a print setting. The identification information of the user may be information that can be used to uniquely identify a user identity, such as an international mobile subscriber identity (IMSI). This is not limited in this application. The print setting may be used to determine a print fee. The print setting may be selected and set by the user in a user interface (for example, the user interface43shown as an example inFIG.4D) used to perform the print setting, or may be a default print setting. S2623-2: After receiving the print request of the user that is reported by the printer, the server2615may initiate a payment request to the server2617in the communications system2600shown as an example inFIG.26A. The payment request may carry identification information of the user and order information. For example, the order information may include a print fee that needs to be paid by the user, for example, “¥12.00”. The order information may further include identification information of a payee, and the like.
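For illustration only, the messages of S2623to S2623-2might be modeled as follows. The description states only what each message carries (identification information of the user such as an IMSI, indication information indicating the print setting, and order information including the fee and the payee); every field name in this sketch is an assumption.

```python
def build_print_request(user_imsi, print_setting):
    """S2623/S2623-1: the print request reported to server 2615 carries the
    user's identification information and an indication of the print
    setting, which the server uses to determine the print fee."""
    return {"user_id": user_imsi, "print_setting": print_setting}

def build_payment_request(user_imsi, fee, payee_id):
    """S2623-2: server 2615 asks payment server 2617 to collect the fee;
    the order information includes the amount to pay and the payee."""
    return {"user_id": user_imsi, "order": {"fee": fee, "payee": payee_id}}

# Example: a two-copy color A4 job priced at "¥12.00".
# print_request = build_print_request(
#     "460001234567890", {"color": True, "paper": "A4", "copies": 2})
# payment_request = build_payment_request(
#     "460001234567890", "¥12.00", "print-service-provider")
```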
S2624: After receiving the payment request initiated by the server2615, the server2617may perform payment interaction with the electronic device. In some embodiments, the payment interaction may include but is not limited to: The server2617may send, to the electronic device, the user interface used by the user to perform payment. The electronic device may display the user interface used by the user to perform payment. The payment interfaces may be shown as examples inFIG.5EandFIG.5F. However, an embodiment of the user interface is not limited in this application. The electronic device may send, to the server2617, a payment password entered by the user in the user interface used by the user to perform payment, so that the server2617confirms the payment password. The server2617may return a payment result to the electronic device, for example, a payment success or failure. S2624-1: After the payment is completed, the server2617may feed back a payment result to the server2615. S2624-2: After determining that the user successfully pays the print fee, the server2615may send the print instruction to the printer (for example, the printer 1) selected by the user. For content that is not mentioned in Phase 5, refer to Phase 5 in the embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. In addition to S2623to S2624-2shown inFIG.26B-1andFIG.26B-2, if the server2617is a payment server of a print service provider to which the server2615belongs, the print service provider may provide a recharge service such as “Personal wallet”. If the balance of a user's recharged account is sufficient, the server2617may perform automatic payment each time the user performs print consumption. If the payment succeeds, the server2617may feed back, to the server2615, a payment result that the payment succeeds. In this way, the user does not need to enter the payment password each time. S2633to S2639: Feed back a print status. For details, refer to S2533to S2539in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. It may be understood that for content that is not mentioned in the method embodiment inFIG.26B-1andFIG.26B-2, reference may be made to the embodiments shown as examples inFIG.5AtoFIG.5Jand the related extensions, or reference may be made to the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. According to the method embodiment inFIG.26B-1andFIG.26B-2, when identifying a scenario in which the user shares a picture, the electronic device may automatically discover a nearby printer, and intuitively present, to the user, the nearby printer discovered by the electronic device. If a fee needs to be paid for the printer selected by the user, the electronic device may display a payment page after the user selects the printer, and trigger the printer for printing after the payment succeeds. In this way, the user may select the printer for which the fee needs to be paid for printing, so that an operation is intuitive and simple. In addition, a method for projecting data by using a nearby projector for which a fee needs to be paid, a method for displaying data by using a nearby display for which a fee needs to be paid, a method for playing data by using a nearby multimedia device for which a fee needs to be paid, and the like may be similar to the method for printing the data by using the nearby printer for which the fee needs to be paid that is shown inFIG.26B-1andFIG.26B-2. Details are not described again.
In this way, the user may select the projector for which the fee needs to be paid for projection, may select the display for which the fee needs to be paid for screen mirroring, and the like, so that an operation is intuitive and simple. Method Embodiment in FIG.27B-1and FIG.27B-2 In the method embodiment inFIG.27B-1andFIG.27B-2, “Moment share” may be used to support the user in sharing data with a device near the electronic device, for example, a nearby printer, or may be used to support the user in sharing data with a cloud device, for example, a cloud printer. In an embodiment, enabling “Moment share” may be enabling cellular mobile data, a WLAN, and Bluetooth, or may be enabling cellular mobile data and a WLAN, or may be enabling cellular mobile data and Bluetooth, or may be enabling a WLAN and Bluetooth. After enabling “Moment share”, the electronic device may discover the device near the electronic device by using one or more technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, and a Wi-Fi LAN, or may discover the cloud device by using a cellular mobile communications network technology or a WAN communications technology. The method embodiment inFIG.27B-1andFIG.27B-2corresponds to the embodiments shown as the examples inFIG.6AtoFIG.6Jor the embodiments shown as the examples inFIG.7AtoFIG.7C. For example, the user interface displayed by the electronic device in the method embodiment inFIG.27B-1andFIG.27B-2may be each user interface described in the embodiments shown as the examples inFIG.6AtoFIG.6Jor the embodiments shown as the examples inFIG.7AtoFIG.7C. A Communications System2700for Data Sharing is First Described. As shown in an example inFIG.27A-1andFIG.27A-2, the communications system2700may include a server2715, a server2717, an electronic device2701, a mobile phone2731, and one or more printers, such as a printer2703, a printer2705, a printer2707, and a printer2709, and the communications system may further include a server2719, a server2721, a printer2723, a printer2725, and a printer2727. The server2715may be a server that provides one or more services such as a cloud printing service, a cloud projection service, or a cloud screen mirroring service. The server2715may be configured to control a printer connected to the server2715to provide a print service, may be configured to control a projector connected to the server2715to provide a projection service, may be configured to control a display connected to the server2715to provide a screen mirroring service, or may be configured to control a media playback device or the like connected to the server2715to provide a media data playback service. The server2717may be configured to provide a payment settlement service between the user and a print service provider. For the server2715, refer to the server2615in the communications system2600shown inFIG.26A. For the server2717, refer to the server2617in the communications system2600shown inFIG.26A. Details are not described herein again. The printer2703may be a printer with a Bluetooth (BT) module. The printer2705may be a printer with a WLAN module. The printer2707may be a printer with a Bluetooth (BT) module and a WLAN module. Same as the printer2705, the printer2709may also be a printer with a WLAN module. For the printer2703, refer to the printer2603in the communications system2600shown inFIG.26A. For the printer2705, refer to the printer2605in the communications system2600shown inFIG.26A. 
For the printer2707, refer to the printer2607in the communications system2600shown inFIG.26A. For the printer2709, refer to the printer2609in the communications system2600shown inFIG.26A. Details are not described herein again. The server2721may be a server of a cloud printing service provider, and is configured to control a cloud printer to provide a print service. The server2719may be configured to provide a payment settlement service between the user and the cloud printing service provider. The server2719may be a payment server of the cloud printing service provider, or may be a payment server of a third-party payment service provider. The server2721and the server2719may be connected by using a LAN or WAN communications technology, so that the server2721initiates payment to the server2719, and the server2719sends a payment result to the server2721. For details, refer to the related content in the method embodiment inFIG.26B-1andFIG.26B-2. Details are not described herein again. The electronic device2701may be the electronic device mentioned in the foregoing UI embodiments. For details, refer to the electronic device2501in the communications system2500shown inFIG.25A. Details are not described herein again. The electronic device may be connected to the server2721by using a cellular mobile communications technology or a WAN communications technology, may discover a cloud printer by using the server2721, and may communicate with the cloud printer by using the server2721, for example, transmit data to the cloud printer and receive feedback from the cloud printer. The electronic device may be connected to the server2719by using a cellular mobile communications technology or a WAN communications technology, to facilitate payment interaction with the server2719. For details, refer to the related content in the method embodiment inFIG.26B-1andFIG.26B-2. Details are not described herein again. The printer2723may be a printer with a WLAN module. The printer2723may be connected to the server2721by using a WAN communications technology, to communicate with the server2721. The printer2725may be a printer with a cellular mobile communications (such as 3G, LTE, or 5G) processing module. The printer2725may be connected to the server2721by using a cellular mobile communications (such as 3G, LTE, or 5G) technology, to communicate with the server2721. The printer2727may be a printer with a WLAN module and a cellular mobile communications (such as 3G, LTE, or 5G) processing module. The printer2727may be connected to the server2721by using a WAN communications technology and/or a cellular mobile communications (such as 3G, LTE, or 5G) technology, to communicate with the server2721. In some embodiments, the electronic device2701may discover the printer2703by using one or more Bluetooth communications technologies such as classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE), establish a communication connection to the printer2703, and may share data with the printer2703by using the one or more Bluetooth communications technologies such as classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE). In some embodiments, the electronic device2701may discover the nearby printer2705by using one or more WLAN communications technologies such as Wi-Fi direct or Wi-Fi SoftAP, establish a communication connection to the printer2705, and may share data with the printer2705by using the one or more WLAN communications technologies such as Wi-Fi direct or Wi-Fi SoftAP.
In some embodiments, the electronic device2701may discover the printer2707by using one or more wireless communications technologies such as Bluetooth, Wi-Fi direct, or Wi-Fi SoftAP, establish a communication connection to the printer2707, and may share data with the printer2707by using the one or more wireless communications technologies such as Bluetooth, Wi-Fi direct, or Wi-Fi SoftAP. In some embodiments, the electronic device2701may discover, by using a wireless communications technology, namely, a Wi-Fi LAN, the printer2709that is located in the same local area network (LAN) as the electronic device, and may share data with the printer2709by using the local area network (LAN). In some embodiments, after the electronic device2701is connected to the server2721in a network, the server2721may provide the electronic device2701with a device list of cloud devices (such as the printer2723, the printer2725, and the printer2727) connected to the server2721, so that the electronic device2701can discover the cloud devices. In some embodiments, the communications system2700may further include a cloud server2713, and data such as a picture may be stored in the cloud server2713. The electronic device2701may access the cloud server2713, so that the user can use the electronic device2701to browse the data such as the picture stored in the cloud server2713. It may be understood that a structure shown in an embodiment does not constitute a limitation on the communications system2700. In some other embodiments of this application, the communications system2700may include more or fewer devices than those shown in the figure. For example, in addition to the devices shown inFIG.27A-1andFIG.27A-2, the communications system2700may further include a projector with one or more of a cellular mobile communications processing module, a Bluetooth (BT) module, and a WLAN module, a display with one or more of a cellular mobile communications processing module, a Bluetooth (BT) module, and a WLAN module, and another device with one or more of a cellular mobile communications processing module, a Bluetooth (BT) module, and a WLAN module, for example, a sound box, and may further include a mobile phone (for example, the mobile phone2731), a tablet computer, a personal computer, and the like. Second, Based on the Communications System2700Shown inFIG.27A-1andFIG.27A-2, the Method Embodiment inFIG.27B-1andFIG.27B-2is Described in Detail by Using an Example in which Printing is Performed by Using the Electronic Device. FIG.27B-1andFIG.27B-2show an overall procedure of still another data sharing method. As shown inFIG.27B-1andFIG.27B-2, the method may include the following operations. S2701: The electronic device may detect an operation used to enable “Moment share”. S2703: The electronic device may enable “Moment share” in response to the detected operation used to enable “Moment share”. The operation may be the fifth operation. For details, refer to the related descriptions of the fifth operation in the foregoing content. In an embodiment, enabling “Moment share” may be enabling cellular mobile data, a WLAN, and Bluetooth, or may be enabling cellular mobile data and a WLAN, or may be enabling cellular mobile data and Bluetooth, or may be enabling one or more of a WLAN or Bluetooth.
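A sketch of the relation just described between what the user enables and what the electronic device turns on; the mode and radio labels are assumptions consistent with the description (nearby discovery uses Bluetooth, Wi-Fi direct, Wi-Fi SoftAP, or a Wi-Fi LAN, while cloud discovery uses a cellular mobile communications technology or a WAN communications technology).

```python
# Hypothetical mapping from an enabled capability to the radios turned on.
RADIOS_BY_MODE = {
    "local_moment_share": {"bluetooth", "wlan"},      # nearby discovery
    "cloud_moment_share": {"cellular_data", "wlan"},  # reach cloud servers
}

def radios_to_enable(enabled_modes):
    """Union of the radios needed by every enabled share mode."""
    radios = set()
    for mode in enabled_modes:
        radios |= RADIOS_BY_MODE[mode]
    return radios

# Example: enabling both modes turns on Bluetooth, the WLAN, and cellular
# mobile data, matching one of the combinations listed above.
# radios_to_enable({"local_moment_share", "cloud_moment_share"})
```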
After enabling “Moment share”, the electronic device may discover the device near the electronic device by using one or more technologies such as Bluetooth, Wi-Fi direct (such as Wi-Fi P2P), Wi-Fi SoftAP, and a Wi-Fi LAN, or may discover the cloud device by using a cellular mobile communications network technology or a WAN communications technology. S2704: The electronic device displays a first user interface. For details, refer to S2504in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. S2705: The electronic device may detect an operation of sharing a selected object. For details, refer to S2505in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. S2707: The electronic device may display a “moment share interface” in response to the detected first operation. In some embodiments, for an embodiment of the “moment share interface”, refer to the related descriptions of the “moment share interface” shown as an example inFIG.6AtoFIG.6C. Details are not described herein again. In some other embodiments, for an embodiment of the “moment share interface”, refer to the related descriptions of the “moment share interface” shown as an example inFIG.7AtoFIG.7C. Details are not described herein again. S2709and S2711: The electronic device may detect, in the “moment share interface”, an operation used to enable “Moment share”. In one case, the “moment share interface” may be the “moment share interface” shown as an example inFIG.6AtoFIG.6C. In this case, the operation is the fifth operation. For details, refer to the related descriptions of the fifth operation in the foregoing content. The fifth operation may be an operation performed on a first interactive element. For descriptions of the first interactive element, refer to the related descriptions in the foregoing content. Details are not described herein again. The electronic device may enable “Moment share” in response to the fifth operation. In another case, the “moment share interface” may be the “moment share interface” shown as an example inFIG.7AtoFIG.7C. In this case, “Moment share” may be classified into “local moment share” and “cloud moment share”. Embodiments of S2709and S2711may be described below. In some embodiments, the electronic device may detect, in the “moment share interface”, an operation used to enable “local moment share”. The operation is the sixth operation. For details, refer to the related descriptions of the sixth operation in the foregoing content. The electronic device may enable “local moment share” in response to the sixth operation. Enabling “local moment share” may be enabling one or more of Bluetooth or a WLAN. In some embodiments, the electronic device may detect, in the “moment share interface”, an operation used to enable “cloud moment share”. The operation is the seventh operation. For details, refer to the related descriptions of the seventh operation in the foregoing content. The electronic device may enable “cloud moment share” in response to the detected seventh operation. S2713: When “Moment share” is enabled, the electronic device may discover a nearby device and/or a cloud device. Herein, the nearby device may include a nearby first device and a nearby second device. The cloud device may include a cloud first device. The cloud device may further include a cloud second device. 
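For the cloud case of S2713, a minimal sketch of fetching the device list that the server provides to a connected electronic device. The endpoint path, authorization header, and JSON shape are assumptions of the sketch; the description only says that the server provides a list whose entries carry indication information such as a network identifier (for example, an IP address).

```python
import json
import urllib.request

def discover_cloud_devices(server_url, user_token):
    """Fetch the device list from the cloud service's server; each entry
    is assumed to carry a name and a network identifier."""
    request = urllib.request.Request(
        f"{server_url}/devices",
        headers={"Authorization": f"Bearer {user_token}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # e.g. [{"name": "printer 1", "ip": ...}]

# Example (hypothetical server URL):
# cloud_printers = discover_cloud_devices("https://cloud-print.example", token)
```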
For an embodiment of discovering the nearby device (for example, a printer n) by the electronic device, refer to S2513in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. An embodiment of discovering the cloud device (for example, a printer 1 or a printer 2) by the electronic device is described below. In some embodiments, after the electronic device is connected to a server of a cloud printing service provider in a network by using a cellular mobile communications technology or a WAN communications technology, the server may provide the electronic device with a device list of another device connected to the server, so that the electronic device can discover the other device, and the other device may be a cloud device discovered by the electronic device. For example, if the device list includes indication information of the printer 1 and the printer 2, for example, a network identifier (for example, an IP address), the electronic device can discover the printer 1 and the printer 2. In addition, the electronic device may further discover the cloud device in another manner, for example, discover another device in a WAN according to a WAN discovery protocol. This is not limited in this application. The electronic device may refresh the “moment share interface” when “Moment share” is enabled. Details are as follows: In one case, the “moment share interface” may be the “moment share interface” shown as an example inFIG.6AtoFIG.6C. In this case, the electronic device may refresh a third area in the “moment share interface”. One or more of a device option and a user option may be displayed in the refreshed third area. The device option corresponds to the nearby first device and/or the cloud first device discovered by the electronic device through “Moment share”, and the user option corresponds to the nearby second device and/or the cloud second device discovered by the electronic device through “Moment share”. For example, if the electronic device discovers the printer 1, the printer 2, . . . , and the printer n, corresponding device options, that is, a printer option corresponding to the printer 1, a printer option corresponding to the printer 2, . . . , and a printer option corresponding to the printer n, may be displayed in the refreshed third area. The user can select, by using the printer option, the printer to print the selected object. In another case, the “moment share interface” may be the “moment share interface” shown as an example inFIG.7AtoFIG.7C. In this case, the electronic device may refresh a third area and a fourth area in the “moment share interface”. One or more of a nearby device option and a nearby user option may be displayed in the refreshed third area. The nearby device option corresponds to the nearby first device discovered by the electronic device through “Moment share”, and the nearby user option corresponds to the nearby second device discovered by the electronic device through “Moment share”. Both a cloud device option and a cloud user option may be displayed in the refreshed fourth area. The cloud device option corresponds to the cloud first device discovered by the electronic device through “cloud moment share”, and the cloud user option corresponds to the cloud second device discovered by the electronic device through “cloud moment share”. For example, if the electronic device discovers the printer 1, the printer 2, . . .
, and the printer n, a printer option corresponding to the printer n may be displayed in the refreshed third area, and a printer option corresponding to the printer 1 and a printer option corresponding to the printer 2 may be displayed in the refreshed fourth area. The user can select, by using the printer option, the printer to print the selected object. S2715to S2728: Trigger, in response to a detected operation of selecting a printer for printing, the printer selected by the user to print the object selected in the first operation. The printer option may include a nearby printer option and a cloud printer option. For example, the operation of selecting the printer for printing may be an operation performed on the printer option. In one case, the “moment share interface” may be the “moment share interface” shown as an example inFIG.6AtoFIG.6C. In this case, the nearby printer option and the cloud printer option may be displayed in a third area in the “moment share interface”. In another case, the “moment share interface” may be the “moment share interface” shown as an example inFIG.7AtoFIG.7C. In this case, the nearby printer option may be displayed in a third area in the “moment share interface”, and the cloud printer option may be displayed in a fourth area in the “moment share interface”. In some embodiments, the electronic device may detect an operation that the user selects a nearby printer to perform printing. The operation may be an operation that is detected by the electronic device in the “moment share interface” and that is performed on a nearby printer option, for example, a touch operation performed on a nearby printer icon. The “moment share interface” may be the user interface61described in the embodiments shown as examples inFIG.6AtoFIG.6C, or may be the user interface71described in the embodiments shown as examples inFIG.7AtoFIG.7C. For a manner in which the electronic device responds to the detected operation that the user selects the nearby printer to perform printing, refer to S2515to S2531in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. If a fee needs to be paid for the nearby printer selected by the user, for a manner in which the electronic device responds to the detected operation, reference may be further made to S2615to S2631in the method embodiment inFIG.26B-1andFIG.26B-2. Details are not described herein again. The following describes a case in which a cloud printer is selected for printing. In some other embodiments, the electronic device may detect an operation that the user selects the cloud printer to perform printing. The operation may be an operation that is detected by the electronic device in the “moment share interface” and that is performed on a cloud printer option, for example, a touch operation performed on a cloud printer icon. The “moment share interface” may be the user interface61described in the embodiments shown as examples inFIG.6AtoFIG.6C, or may be the user interface71described in the embodiments shown as examples inFIG.7AtoFIG.7C. The electronic device may provide the following manners of responding to the detected operation that the user selects the cloud printer (for example, a printer 1) to perform printing. Manner 1: The electronic device may first display, in response to the detected operation (for example, a touch operation performed on an icon of the printer 1) performed on a device option corresponding to the cloud printer (for example, the printer 1), a user interface used by the user to perform a print setting.
For details, refer to S2717. In response to a detected operation of performing a print setting, the electronic device may determine, as a print setting corresponding to data such as a picture selected by the user, the print setting (such as a color or a paper size) selected by the user. For details, refer to S2719. Then, the electronic device may trigger the cloud printer (for example, the printer 1) to print, based on the print setting selected by the user, the data such as the picture selected by the user. For details, refer to S2720to S2728. For an embodiment of the user interface used by the user to perform the print setting, refer to the user interface43shown inFIG.4D. Details are not described herein again. Manner 2: The electronic device may trigger, in response to the detected operation (for example, a touch operation performed on a printer icon) performed on a device option corresponding to the cloud printer (for example, the printer 1), the cloud printer (for example, the printer 1) selected by the user to print, based on a default print setting, a picture selected by the user. For details, refer to S2720to S2728. For example, a default quantity of to-be-printed copies is 1, a default paper size is A4, and a default print color is black and white. It can be learned that in Manner 2, when the user triggers printing, a print service based on the default print setting may be provided, and the user does not need to perform the print setting, so that a quantity of operations can be reduced. Different from triggering the nearby printer to perform printing, if the printer (for example, the printer 1) selected by the user is the cloud printer, as shown inFIG.27B-1andFIG.27B-2, the printer may wait for a print instruction from the server2721in the communications system2700shown as an example inFIG.27A-1andFIG.27A-2before performing printing. After determining that the user successfully pays a print fee, the server2721may send the print instruction to the cloud printer. A process may include but is not limited to the following operations. S2720: The electronic device may send a print request to the server2721in the communications system2700shown as an example inFIG.27A-1andFIG.27A-2. The print request may be used to request the cloud printer selected by the user to print data such as a picture selected by the user. In some embodiments, the print request may carry a printer option, identification information of the user, and indication information indicating a print setting corresponding to data such as a picture selected by the user. The printer option may be a device identifier of the printer, or may be a network identifier of the printer, for example, an IP address. The identification information of the user may be information that can be used to uniquely identify a user identity, such as an international mobile subscriber identity (IMSI). This is not limited in this application. The print setting may be used to determine a print fee. The print setting may be selected and set by the user in a user interface (for example, the user interface43shown as an example inFIG.4D) used to perform the print setting, or may be a default print setting. S2721: After receiving the print request of the user that is sent by the electronic device, the server2721may initiate a payment request to the server2719in the communications system2700shown as an example inFIG.27A-1andFIG.27A-2. The payment request may carry identification information of the user and order information.
For example, the order information may include a print fee that needs to be paid by the user, for example, “¥12.00”. The order information may further include identification information of a payee, and the like. S2722: After receiving the payment request initiated by the server2721, the server2719may perform payment interaction with the electronic device. For an embodiment of the payment interaction, refer to the related content in the method embodiment inFIG.26B-1andFIG.26B-2. Details are not described herein again. S2723: After the payment is completed, the server2719may feed back a payment result to the server2721. S2724to S2726: After determining that the user successfully pays the print fee, the electronic device may transmit the data such as the picture selected by the user to the server2721. Correspondingly, the server2721may transmit the data such as the picture from the electronic device to the cloud printer (for example, the printer 1) selected by the user, and may send, based on the printer option carried in the print request, the print instruction to the cloud printer selected by the user. The print instruction may carry the indication information indicating the print setting, to instruct the printer to perform printing based on the print setting. S2728: After receiving the print instruction sent by the server2721, the cloud printer (for example, the printer 1) selected by the user may print the data such as the picture selected by the user. In some embodiments, the printer may perform printing based on the print setting corresponding to the data such as the picture. The print setting corresponding to the data such as the picture may be carried in the print request sent by the electronic device. The print setting may be selected and set by the user in the user interface (for example, the user interface43shown as an example inFIG.4D) used to perform the print setting, or may be a default print setting of the electronic device. In some other embodiments, the printer may print the data such as the picture based on a default printer setting on a printer side. In some embodiments, the cloud printer may print data while receiving the data from the server2721without waiting until all data is received before printing starts, and may delete the printed data in time. In this way, storage load of the cloud printer can be reduced, and print efficiency can also be improved. In some embodiments, referring to S2527, after receiving the print instruction, the printer may perform print preparation. S2528may be optional. The printer does not need to perform print preparation before performing each print task. S2729to S2731: Feed back a print status. In some embodiments, the printer may feed back, to the server2721in a data printing process, a print status of the data such as the picture selected by the user, and then the server2721may feed back the print status to the electronic device. Alternatively, the printer may feed back, to the server2721after data printing ends, a print status of the data such as the picture selected by the user, and then the server2721may feed back the print status to the electronic device. In some embodiments, after receiving the print status fed back by the server2721, the electronic device may display prompt information to prompt the user with the print status. For descriptions of the print status, refer to S2533to S2539in the method embodiment inFIG.25B-1andFIG.25B-2. Details are not described herein again. 
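Two sketches for the cloud printing flow described above. The first models the S2720print request (the description says it may carry the printer option, which is a device identifier or a network identifier such as an IP address, identification information of the user, and indication information indicating the print setting, with a default of one copy, A4 paper, and black and white). The second sketches the print-while-receiving behavior mentioned for S2728, in which the cloud printer prints data as it arrives and deletes printed data in time. All field names and interfaces are assumptions of the sketch.

```python
def build_cloud_print_request(printer_option, user_imsi, print_setting=None):
    """S2720: request that server 2721 have the selected cloud printer
    print the selected data; a missing setting falls back to the default
    print setting named in Manner 2."""
    return {
        "printer": printer_option,  # device identifier or e.g. an IP address
        "user_id": user_imsi,
        "print_setting": print_setting
        or {"copies": 1, "paper": "A4", "color": "black_and_white"},
    }

def stream_print(connection, print_chunk, chunk_size=64 * 1024):
    """S2728 variant: print each received chunk immediately instead of
    buffering the whole job, then discard it, which reduces the cloud
    printer's storage load and improves print efficiency."""
    while True:
        chunk = connection.recv(chunk_size)
        if not chunk:        # an empty read marks the end of the job data
            break
        print_chunk(chunk)   # render this part of the job now
        del chunk            # delete the printed data in time
```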
It may be understood that for content that is not mentioned in the method embodiment inFIG.27B-1andFIG.27B-2, reference may be made to the embodiments shown as examples inFIG.6AtoFIG.6J, the embodiments shown as examples inFIG.7AtoFIG.7C, and the related extensions, or reference may be made to the method embodiment inFIG.25B-1andFIG.25B-2and the method embodiment inFIG.26B-1andFIG.26B-2. Details are not described herein again. According to the method embodiment inFIG.27B-1andFIG.27B-2, when identifying the scenario in which the user shares the picture, the electronic device may automatically discover the nearby printer and/or the cloud printer, and intuitively present, to the user, the nearby printer and/or the cloud printer discovered by the electronic device, so that the user taps the nearby printer option or the cloud printer option (for example, the icon) to trigger the nearby printer or the cloud printer to print the picture selected by the user, and user experience is intuitive and simple. Similar to the method for printing the data by using the cloud printer discovered by the electronic device that is shown inFIG.27B-1andFIG.27B-2, in a method for projecting data by using a cloud projector discovered by the electronic device, the electronic device may discover the cloud projector in a manner of discovering the cloud printer, and then the electronic device may trigger, in response to an operation that is detected in the “moment share interface” and that is of selecting the cloud projector to project data such as a picture selected by the user, the cloud projector to perform projection. A difference lies in that to trigger the cloud projector to perform projection, the electronic device sends a projection request instead of the print request to the server2721. The projector may perform projection according to a projection instruction sent by the server2721. Similar to the method for printing the data by using the cloud printer discovered by the electronic device that is shown inFIG.27B-1andFIG.27B-2, in a method for performing screen mirroring on data by using a cloud display discovered by the electronic device, the electronic device may discover the cloud display in a manner of discovering the cloud printer, and then the electronic device may trigger, in response to an operation that is detected in the “moment share interface” and that is of selecting the cloud display to perform screen mirroring on data such as a picture selected by the user, the cloud display to perform displaying. A difference lies in that to trigger the cloud display to perform displaying, the electronic device sends a display request instead of the print request to the server2721. The display may perform displaying according to a display instruction sent by the server2721. In addition, a method for playing data by using a cloud multimedia device and the like may be similar to the method for printing the data by using the cloud printer that is shown inFIG.27B-1andFIG.27B-2. Details are not described again. According to the context, the term “when” used in the foregoing embodiments may be interpreted as a meaning of “if” or “after” or “in response to determining” or “in response to detecting”.
Similarly, according to the context, the phrase “when it is determined that” or “if (a stated condition or event) is detected” may be interpreted as a meaning of “if it is determined that” or “in response to determining” or “when (a stated condition or event) is detected” or “in response to detecting (a stated condition or event)”. All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used to implement the embodiments, the embodiments may be implemented completely or partially in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are completely or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer readable storage medium, or may be transmitted from a computer readable storage medium to another computer readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line) or wireless (for example, infrared, radio, or microwave) manner. The computer readable storage medium may be any usable medium accessible by the computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk), or the like. One of ordinary skill in the art may understand that all or some of the procedures of the methods in the embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer readable storage medium. When the program is executed, the procedures of the methods in the embodiments may be performed. The foregoing storage medium includes any medium that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc. | 316,034 |
11861248 | DETAILED DESCRIPTION At least one embodiment provides an image forming apparatus that can prevent printed matter produced by a print job from being handed over to a user different from a user associated with the print job, and a control method thereof. In general, according to at least one embodiment, an image forming apparatus including a stop unit (e.g., a stop device), a setting unit (e.g., a setting device), and a restart unit (e.g., a restart device) is provided. The stop unit is configured to stop a print job in progress if there is an occurrence of an abnormality that makes the print job unable to continue. The setting unit is configured to set the print job stopped by the stop unit to a skip state. The restart unit is configured to restart the print job set to the skip state by the setting unit according to a restart instruction by a user associated with the print job and not to receive the restart instruction by a second user different from the user associated with the print job. Hereinafter, an example of at least one embodiment will be described with reference to the accompanying drawings. In at least one embodiment, a multifunction peripheral (MFP) having a function as an image forming apparatus will be described as an example. FIG.1is a block diagram illustrating a circuit configuration of a main part of an MFP1according to at least one embodiment. The MFP1includes a processor10, a main memory11, an auxiliary storage unit12(e.g., an auxiliary memory), an operation and display unit13(e.g., an operation and display device), a scan unit14(e.g., a scanner), a print unit15(e.g., a printer), a facsimile unit16(e.g., a facsimile machine), a communication unit17(e.g., a communication device or interface), a transmission line18, and the like. The processor10, the main memory11, the auxiliary storage unit12, the operation and display unit13, the scan unit14, the print unit15, the facsimile unit16, and the communication unit17are connected through the transmission line18. A computer that performs information processing for controlling the MFP1is configured by connecting the processor10, the main memory11, and the auxiliary storage unit12through the transmission line18. The processor10corresponds to a central part of the computer. The processor10executes information processing (e.g., by executing instructions stored in the main memory11) for controlling one or more parts of the computer in order to realize various functions as the MFP1according to an information processing program such as an operating system and an application program. The main memory11corresponds to a main memory portion of the computer. The main memory11includes a non-volatile memory area and a volatile memory area. The main memory11stores the information processing program described above in the non-volatile memory area. The main memory11may store data necessary for the processor10to execute a process for controlling each part in the non-volatile or volatile memory area. In the main memory11, the volatile memory area is used as a work area where data is appropriately rewritten by the processor10. The auxiliary storage unit12corresponds to an auxiliary storage portion of the computer. As the auxiliary storage unit12, for example, an electrically erasable programmable read-only memory (EEPROM), a hard disk drive (HDD), a solid state drive (SSD), or various other well-known storage devices can be used.
The auxiliary storage unit12stores data used by the processor10for performing various processes and data generated by the processes in the processor10. The auxiliary storage unit12may store the information processing program described above. A part of the storage area of the auxiliary storage unit12is used as an area for storing a job management table TAA. The job management table TAA is a data table for managing a print job (hereinafter referred to as an uncompleted job) that is not completed. The uncompleted job is a print job in a start waiting state. Alternatively, the uncompleted job is a print job that was started once but skipped. FIG.2is a table schematically illustrating a configuration of a data record DRA included in the job management table TAA. The job management table TAA includes the data record DRA with which the uncompleted job is associated. The data record DRA includes fields FAA, FAB, FAC, FAD, FAE, FAF, and FAG. In the field FAA, a job identifier (ID) as an identifier of the uncompleted job is set. In the field FAB, the date and time when the uncompleted job was generated is set. In the field FAC, a user ID of the user associated with the uncompleted job is set. In the field FAD, a file path of an image file representing an image to be printed by the uncompleted job is set. In the field FAE, data representing a state of the uncompleted job is set. In the field FAF, the date and time when the uncompleted job was skipped is set. In the field FAG, data indicating in which state the uncompleted job is skipped is set. The operation and display unit13inputs an operation by the user and displays various information for presenting to the user. The operation and display unit13may appropriately include various operation devices and display devices such as a touch panel, a keyboard, a key switch, an LED lamp, or a liquid crystal display panel. The scan unit14reads a document and generates image data of an image displayed on the document. The print unit15prints the image represented by the image data on recording paper. The print unit15includes a well-known print device such as an electrophotographic image forming device. The facsimile unit16performs various well-known processes for performing image communication conforming to a facsimile standard through a communication network (not illustrated) such as a public switched telephone network (PSTN). The communication unit17executes a communication process for performing data communication through a communication network2. For example, an existing communication device for a local area network (LAN) can be used as the communication unit17. The communication network2may be the Internet, virtual private network (VPN), LAN, public communication network, mobile communication network, and the like, used alone or in an appropriate combination. For example, the LAN is used as the communication network2. A computer terminal3is an information processing device having a function of data communication through the communication network2. The computer terminal3is, for example, an information terminal for requesting the MFP1to execute the print job through the communication network2. Next, an operation of the MFP1configured as described above will be described. The contents of the processes described below are examples, and it is possible to appropriately change the order of some processes, omit some processes, or add another process.
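Before following those processes, the data record DRA laid out above can be pictured concretely. The following is a minimal sketch, assuming Python dataclass types for the fields FAA to FAG; the patent does not prescribe any concrete representation, and using None to stand in for the "invalid data" of the fields FAC, FAF, and FAG is an assumption.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class DataRecordDRA:
    job_id: str                     # FAA: identifier of the uncompleted job
    created_at: datetime            # FAB: date and time the job was generated
    user_id: Optional[str]          # FAC: associated user ID (None = invalid data)
    image_path: str                 # FAD: file path of the image file to be printed
    state: str                      # FAE: state of the job ("waiting" or "skipped")
    skipped_at: Optional[datetime]  # FAF: date and time the job was skipped
    progress: Optional[int]         # FAG: how far the job had progressed when skipped

# The job management table TAA is then simply a collection of such records.
job_management_table_TAA: List[DataRecordDRA] = []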
In the MFP1, the processor10controls each part of the MFP1in order to control a print function, a copy function, a scan function, a facsimile function, and the like in the same manner as those performed by the existing MFP of the same type. The description of information processing for this control will be omitted. In the following, the management of print jobs will be described. When the processor10is started in an operation mode that enables the print job to be executed, the processor10executes information processing (hereinafter referred to as a management process) based on the information processing program stored in the main memory11or the auxiliary storage unit12. FIGS.3and4are flowcharts of the management process. As ACT1inFIG.3, the processor10waits for a login request by a user. When a user intends to request the MFP1to execute some function accompanied by a print job, the user requests login by, for example, a predetermined operation in the operation and display unit13. In response to this request, the processor10determines that the result in ACT1is YES and proceeds to ACT2. The functions accompanied by the print job include, for example, the print function, the facsimile function, and the copy function. The print function is a function of printing by the print unit15in response to a print request through the communication network2. The facsimile function is a function of printing an image, which is acquired by facsimile communication by the facsimile unit16, by the print unit15. The copy function is a function of printing an image, which is obtained by scanning a document with the scan unit14, by the print unit15. As ACT2, the processor10performs an authentication process for authenticating the user who requested the login. For example, the processor10allows the user to input the user ID and a password, and authenticates the user as a user who is associated with a combination of the user ID and the password. Alternatively, the processor10authenticates the user, for example, based on authentication information read from an ID card by a card reader (not illustrated). The user authentication method may be any method, including other well-known methods. In ACT3, the processor10checks whether or not the authentication in ACT2is successful. Then, if the authentication fails, the processor10determines that the result in ACT3is NO and returns to a waiting state of ACT1. In this case, the processor10may execute a notification operation such as causing the operation and display unit13to display a screen for notifying the user that the authentication failed. The processor10may also return to ACT2. If the authentication is successful, the processor10determines that the result in ACT3is YES and proceeds to ACT4. As ACT4, the processor10checks whether or not the print unit15can start print. Then, if print cannot be started, the processor10determines that the result in ACT4is NO and proceeds to ACT5. As ACT5, the processor10checks whether or not a logout is instructed. Then, if the corresponding instruction cannot be checked, the processor10determines that the result in ACT5is NO and returns to ACT4. Thus, as ACT4and ACT5, the processor10waits for the start of print to be possible or logout to be instructed. In this case, the processor10may execute the notification operation such as causing the operation and display unit13to display a screen for notifying the user that print cannot be started.
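The ACT4/ACT5 wait just described is a simple polling loop. The following is a minimal sketch, assuming the two conditions are exposed as callables; all names and the polling interval are hypothetical.

import time

def wait_print_ready_or_logout(can_start_print, logout_requested):
    # ACT4/ACT5: loop until print can be started or a logout is instructed.
    while True:
        if can_start_print():    # ACT4 result YES -> proceed (to ACT7 in FIG.4)
            return "ready"
        if logout_requested():   # ACT5 result YES -> log out (ACT6)
            return "logout"
        time.sleep(0.5)          # assumed polling interval; not specified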
If the user gives up the execution of the print job, the user instructs the logout by a predetermined operation or the like in the operation and display unit13. Then, the processor10determines that the result in ACT5is YES and proceeds to ACT6. As ACT6, the processor10logs out the logged-in user. Then, the processor10returns to the waiting state of ACT1. Now, if the print unit15can start print, the processor10determines that the result in ACT4is YES and proceeds to ACT7inFIG.4. As ACT7, the processor10checks whether or not an uncompleted job is present. Then, if the uncompleted job is not present, the processor10determines that the result in ACT7is NO and proceeds to ACT8. As ACT8, the processor10checks whether or not the start of a new print job is instructed. Then, if the corresponding instruction cannot be checked, the processor10determines that the result in ACT8is NO and proceeds to ACT9. As ACT9, the processor10checks whether or not logout is instructed. Then, if the corresponding instruction cannot be checked, the processor10determines that the result in ACT9is NO and returns to ACT8. Thus, as ACT8and ACT9, the processor10waits for a start instruction or a logout instruction to be made. Now, when a new function accompanied by a print job is requested, the processor10executes an update process for updating the job management table TAA for managing a print job (hereinafter referred to as a new job) for the function separately from the management process. Then, in the update process, the processor10adds a new data record DRA associated with the requested print job to the job management table TAA. The processor10sets each data in each field of this new data record DRA as follows. The processor10determines a new job ID and sets the new job ID in the field FAA so that the new job can be distinguished from one or more other print jobs. The processor10sets, for example, the date and time when the data record DRA is generated in the field FAB. The processor10may set the date and time at another optional time point, such as the time point when an event that triggered generation of a new job occurs, in the field FAB. When the user ID can be specified, the processor10sets the user ID in the field FAC. For example, when the new job is related to the print function, the processor10sets the user ID of a requester in the field FAC when the user ID of the requester can be acquired. For example, when the new job is related to the facsimile function, the processor10sets a user ID of a recipient in the field FAC when the recipient is designated. For example, when the new job is related to the copy function, the processor10sets the user ID of the logged-in user in the field FAC. When the user ID cannot be specified, the processor10sets predetermined invalid data in the field FAC. Alternatively, when the user ID cannot be specified, the processor10may leave the field FAC in a blank state. The processor10sets a file path of the image file to be printed in the new job in the field FAD. The processor10stores, for example, an image file including image data transmitted along with a request for the print function or the facsimile function and received by the facsimile unit16or the communication unit17in the auxiliary storage unit12, and sets the file path of the image file in the field FAD. For example, in the case of a request for the copy function, the processor10determines the file path for the image file including the image data obtained by the scan unit14and sets the file path in the field FAD.
The processor10sets data indicating that the print job is in the start waiting state in the field FAE. The processor10sets predetermined invalid data in the fields FAF and FAG. The processor10may leave the fields FAF and FAG in the blank state. The processor10may not include the fields FAF and FAG in the data record DRA generated here. The processor10adds the data record DRA to the job management table TAA whenever a new job occurs even in the state where an uncompleted job exists. Thus, the job management table TAA does not include any data record DRA if there is no uncompleted job. The job management table TAA includes only one data record DRA if only one uncompleted job is present. The job management table TAA includes a plurality of data records DRAs associated with a plurality of uncompleted jobs, respectively, when the plurality of uncompleted jobs exist. For example, if the job management table TAA includes at least one data record DRA, the processor10determines that an uncompleted job is present and determines that the result in ACT7inFIG.4is YES and proceeds to ACT10. As ACT10, the processor10causes a list screen, for example, to be displayed on the operation and display unit13. The list screen represents a list of uncompleted jobs, and is a screen for allowing the user to designate a print job to be executed from among the uncompleted jobs. For example, the processor10extracts a data record DRA in which the user ID set in the field FAC matches the user ID of the logged-in user, and a data record DRA in which invalid data is set in the field FAC, from the data records DRA included in the job management table TAA. Then, the processor10shows the uncompleted jobs with which the data record DRA extracted in this way is associated in the list. The user determines one of the uncompleted jobs displayed on the list screen as a print job to be executed, and designates the uncompleted job by, for example, a predetermined operation in the operation and display unit13. Alternatively, if the user instructs the start of a new print job that is not an uncompleted job in order to use the copy function or the like, the user instructs the start of the job by, for example, a predetermined operation in the operation and display unit13. As ACT11, the processor10checks whether or not an uncompleted job to be executed is designated. Then, if the corresponding designation cannot be checked, the processor10determines that the result in ACT11is NO and proceeds to ACT12. As ACT12, the processor10checks whether or not the start of the new print job is instructed. Then, if the corresponding instruction cannot be checked, the processor10determines that the result in ACT12is NO and proceeds to ACT13. As ACT13, the processor10checks whether or not the logout is instructed. Then, if the corresponding instruction cannot be checked, the processor10determines that the result in ACT13is NO and returns to ACT11. Thus, as ACT11to ACT13, the processor10waits for a print job to be designated, or to be instructed to start or log out. If the user designates one of the uncompleted jobs as described above, the processor10determines that the result in ACT11is YES and proceeds to ACT14. If the start of the print job is instructed as described above, the processor10determines that the result in ACT8or ACT12is YES, and proceeds to ACT14. As ACT14, the processor10starts execution of the designated uncompleted job or the instructed new print job. 
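Before following the execution further, two of the steps described above can be sketched compactly: the update process that appends a new data record DRA for a new job, and the extraction rule used for the list screen in ACT10. The sketch reuses the DataRecordDRA class from the earlier sketch; the uuid-based job ID and the None-as-invalid-data convention are assumptions.

import uuid
from datetime import datetime

def add_new_job(table, image_path, user_id=None):
    # Update process: add a new DRA for the requested print job to the table TAA.
    record = DataRecordDRA(
        job_id=uuid.uuid4().hex,    # FAA: distinguishes the new job from other jobs
        created_at=datetime.now(),  # FAB: date and time the record is generated
        user_id=user_id,            # FAC: None stands in for the invalid data
        image_path=image_path,      # FAD: file path of the image file to be printed
        state="waiting",            # FAE: the start waiting state
        skipped_at=None,            # FAF: not yet skipped
        progress=None,              # FAG: not yet skipped
    )
    table.append(record)
    return record

def jobs_for_list_screen(table, logged_in_user_id):
    # ACT10: extract the records whose FAC matches the logged-in user
    # and the records whose FAC holds invalid data.
    return [r for r in table
            if r.user_id == logged_in_user_id or r.user_id is None]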
The processor10instructs the print unit15to start print, for example, accompanied by designation of a target image file. The processor10designates the image file, for example, by notifying the file path set in the field FAD of the data record DRA associated with the uncompleted job or the instructed new print job. When the copy function is requested, the processor10instructs the scan unit14to start scanning accompanied by the notification of the above file path. When the print function or facsimile function is requested, the print unit15prints an image based on the image data included in the image file specified by the notified file path. When the copy function is requested, the scan unit14scans the set document to generate image data and stores the image data in the auxiliary storage unit12as an image file specified by the notified file path described above. The print unit15prints an image based on the image data included in the image file specified by the notified file path, that is, the image file stored in the auxiliary storage unit12as described above. As ACT15, the processor10checks whether or not the print job under execution is completed. Then, if the completion of the print job cannot be checked, the processor10determines that the result in ACT15is NO and proceeds to ACT16. As ACT16, the processor10checks whether or not the print unit15is stopped abnormally. Then, if the abnormal stop cannot be checked, the processor10determines that the result in ACT16is NO and returns to ACT15. Thus, as ACT15and ACT16, the processor10waits for the completion or abnormal stop of the print job. When the print unit15completes print based on all the image data included in the image file specified by the notified file path, the print unit15notifies the processor10of the completion of print. However, the print unit15stops the print operation when print cannot be continued due to some abnormality such as a paper jam or running out of paper. Then, in this case, the print unit15notifies the processor10of the abnormal stop. In this way, the print unit15has a function as a stop unit. When the completion of print is notified as described above, the processor10determines that the result in ACT15is YES, returns to ACT7, and repeats the subsequent actions in the same manner as described above. When the abnormal stop is notified as described above, the processor10determines that the result in ACT16is YES and proceeds to ACT17. As ACT17, the processor10interrupts the print job under execution. As ACT18, the processor10sets the interrupted print job to a skip state. The processor10rewrites, for example, the field FAE of the data record DRA associated with the interrupted print job with data representing the skip state. The processor10rewrites, for example, the field FAF of the data record DRA associated with the interrupted print job with the current date and time. For example, the processor10rewrites the field FAG of the data record DRA associated with the interrupted print job with data indicating how far the interrupted print job is completed. Thus, the computer having the processor10as a central part functions as a skipping unit (e.g., a skipping device) by executing information processing based on the information processing program by the processor10. The processor10then proceeds to ACT19.
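A minimal sketch of the skip-state setting of ACT18, over the same DataRecordDRA sketch as before; representing the FAG progress data as a page count is an assumption.

from datetime import datetime

def set_skip_state(record, pages_completed):
    # ACT18: rewrite the record of the interrupted print job to the skip state.
    record.state = "skipped"            # FAE: data representing the skip state
    record.skipped_at = datetime.now()  # FAF: the current date and time
    record.progress = pages_completed   # FAG: how far the job is completed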
When the processor10is in the waiting state of ACT8or ACT9, or in the waiting state of ACT11to ACT13, if the logout is instructed, for example, by a predetermined operation in the operation and display unit13by the user, the processor10determines that the result in ACT9or ACT13is YES, and proceeds to ACT19. As ACT19, the processor10logs out. Then, the processor10returns to the waiting state of ACT1inFIG.3. When the processor10proceeds to ACT19from ACT18, the processor10releases an operated state in which the user authenticated by the authentication process in ACT2inFIG.3is the operator according to the abnormal stop of the print job. Then, the processor10sets this operated state according to the success of the authentication process in ACT2inFIG.3. Thus, the computer having the processor10as the central part functions as an operator management unit (e.g., an operator management device) by executing information processing based on the information processing program by the processor10. When the processor10returns to the waiting state of ACT1inFIG.3through ACT18inFIG.4, the interrupted print job is left as the uncompleted job in the skip state. In this case, the processor10logs out without a logout instruction by the user, and shifts to a state of waiting for a new login request. Thus, even if the abnormality that caused the abnormal stop is resolved and the print unit15is in a state where print can be started, the processor10does not print the interrupted print job. Then, the print job in the skip state is included as one of the uncompleted jobs in the list displayed on the list screen displayed by ACT10inFIG.4, and the processor10enables the print job in the skip state to be designated as the print job to be executed. Then, when the print job in the skip state is designated by the user, the processor10instructs the print unit15to perform the print operation following the previous interruption based on the data which is set in the field FAG of the data record DRA with which the print job is associated. When the processor10executes information processing based on the information processing program in this way, the computer having the processor10as the central part functions as a restart unit. The processor10executes information processing (hereafter referred to as a deletion process) based on the information processing program stored in the main memory11or the auxiliary storage unit12separately from the update process and management process described above every time a predetermined execution timing is reached. The execution timing may be optionally determined by the designer or administrator of the MFP1. As an example, the execution timing is assumed to be the timing at regular time intervals such as every 24 hours, the timing when a free capacity of the auxiliary storage unit12becomes equal to or less than a predetermined threshold value, and the like.
As ACT23, the processor10checks whether or not the selected uncompleted job is in the skip state. For example, if the data, which is set in the field FAE of the data record DRA with which the selected uncompleted job is associated, represents the skip state, the processor10determines that the selected uncompleted job is in the skip state. Then, if the selected uncompleted job is in the skip state, the processor10determines that the result in ACT23is YES and proceeds to ACT24. As ACT24, the processor10checks whether or not a holding period for the selected uncompleted job is ended. For example, if the elapsed time from the date and time, which is set in the field FAF of the data record DRA associated with the selected uncompleted job, is equal to or longer than a predetermined time limit for the selected uncompleted job, the processor10determines that the holding period is ended. Then, if the holding period is ended, the processor10determines that the result in ACT24is YES and proceeds to ACT25. As ACT25, the processor10erases the selected uncompleted job. The processor10deletes, for example, the data record DRA associated with the uncompleted job from the job management table TAA. The processor10then returns to ACT21. Thus, by executing information processing based on the information processing program by the processor10, the computer having the processor10as the central part functions as an exclusion unit (e.g., an exclusion device). The processor10determines that the result in ACT23is NO if the selected uncompleted job is not in the skip state and the result in ACT24is NO if the holding period for the selected uncompleted job is not ended, and in either case, the processor10skips ACT25and returns to ACT21. That is, the processor10does not erase the uncompleted jobs that are not in the skip state and the uncompleted jobs whose holding periods are not ended. Now, when the processor10returns to ACT21from any of ACT23, ACT24, and ACT25, the processor10checks whether or not there are uncompleted jobs excluding the uncompleted jobs already selected when ACT22was executed so far in this deletion process. Then, when the processor10proceeds to ACT22because there are corresponding uncompleted jobs, the processor10selects one of the uncompleted jobs excluding the uncompleted jobs already selected when ACT22was executed so far. That is, the processor10executes ACT23to ACT25in the same manner as described above while sequentially targeting each of the uncompleted jobs. With this configuration, the processor10excludes the uncompleted job in the skipped state whose holding period is ended from the uncompleted jobs. After executing ACT23to ACT25for all uncompleted jobs as described above, the processor10determines that the result in ACT21is NO and ends the deletion process. As described above, according to the MFP1, the print job stopped abnormally is set to the skip state, and even if the abnormality is resolved, the print job is not automatically restarted. Then, the print job in the skip state is restarted according to an execution instruction by the user associated with the print job. The print job in the skipped state is not targeted for an execution instruction by a user other than the user associated with the print job, and is not restarted according to an instruction by such another user.
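The deletion process of ACT21 to ACT25 amounts to a single pass over the table TAA. The following is a minimal sketch, assuming a 24-hour holding period; the patent leaves the time limit to the designer or administrator of the MFP1.

from datetime import datetime, timedelta

def delete_expired_skipped_jobs(table, holding_period=timedelta(hours=24)):
    now = datetime.now()
    # Keep every record except the skipped jobs whose holding period is ended
    # (ACT23/ACT24 decide; ACT25 erases the record from the table TAA).
    table[:] = [r for r in table
                if not (r.state == "skipped"
                        and r.skipped_at is not None
                        and now - r.skipped_at >= holding_period)]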
Accordingly, the print job cannot be restarted without the involvement of the user associated with the print job, and the printed matter produced by the print job can be prevented from being handed over to another user. According to the MFP1, after the abnormally stopped print job is in the skip state, an execution instruction for another print job is received and the other print job can be started according to such an instruction. Accordingly, another job can be executed without waiting for the abnormally stopped print job to be restarted and completed. According to the MFP1, the print job stopped abnormally is set to the skip state, and the interrupted state of the print job is promptly resolved. With this configuration, even if power saving control is performed so as not to shift to a power saving state in the interrupted state of the print job, it is possible to shift to the power saving state without waiting for the abnormally stopped print job to be restarted and completed. In the MFP1, the user is not involved in setting the abnormally stopped print job to the skip state. For that reason, there is a concern that the print job in the skip state may be left unattended for a long time. However, the MFP1excludes the print job in the skip state whose holding period is ended from the uncompleted jobs. With this configuration, the print job left unattended as described above can be prevented from being accumulated in the uncompleted jobs. At least one embodiment can be implemented by being modified in various ways as follows. Similar implementation is possible in various devices with print functions, such as printers, facsimile machines, or copiers. The processor10may restart an uncompleted job associated with a user other than the instructor according to an instruction from an administrator or the like having special authority. The processor10does not need to perform the deletion process. Alternatively, the processor10may perform the deletion process when the deletion process is set to be performed by the administrator of the MFP1or the like. The processor10may automatically cause the job with which the user is not associated to be restarted after the abnormality is resolved without setting the job to the skip state. After setting the job to the skip state in ACT18inFIG.4, the processor10may not shift directly to ACT19, but may shift to ACT19upon receiving a logout instruction or upon the lapse of a predetermined wait time. Each function realized by the processor10by information processing can also be partly or entirely realized by hardware that executes information processing that is not based on a program such as a logic circuit. Each of the functions described above can be realized by combining software control with the hardware such as the logic circuit described above. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. | 31,071 |
11861249 | DETAILED DESCRIPTION Hereafter, a file transfer system according to an embodiment of the disclosure will be described, with reference to the drawings.FIG.1is a block diagram showing a structure and an electrical configuration of a file transfer system according to an embodiment of the disclosure. The file transfer system100includes a plurality of image forming apparatuses1a,1b,and1c(hereinafter, collectively "image forming apparatus1" where appropriate), and a cloud server2. The image forming apparatus1and the cloud server2are connected to a network N, so as to transmit and receive files to and from each other. The number of image forming apparatuses constituting the file transfer system100is not specifically limited. The image forming apparatus1may be a multifunction peripheral having a plurality of functions, such as copying, printing, scanning, and facsimile transmission, or a single-function apparatus such as a printer or a copier. The image forming apparatus1includes a control device11, an input device12, an image reading device13, an image forming device14, a storage device15, a communication device16, and a plurality of human sensors17. The input device12is located on the front face of an apparatus main body. The input device12includes hard keys, such as an enter key for confirming operations and settings and a start key, and a display device121. The input device12receives various instructions, according to user's operation performed on the mentioned keys. The display device121serves to display operation screens and messages, and is unified with a touch panel. The image reading device13reads an image of a document, and acquires a document image. The image forming device14prints the document image read by the image reading device13, and data stored in the storage device15, on a sheet. The storage device15, exemplifying the storage device in the disclosure, is a large-capacity memory unit, for example constituted of an SSD or HDD, for storing image data, various programs, and data tables. The storage device15contains information necessary for user authentication (e.g., user ID and password), and a user box (part of the memory region of the storage device15) in which the user of the image forming apparatus1can store data. The communication device16includes a communication module, to perform data communication with an external device, via the network N. The plurality of human sensors17are provided on the front face of the image forming apparatus1. The human sensor17is, for example, an infrared sensor, or a reflective sensor that emits light, such as infrared light, to detect presence of a person, for example standing in front of the image forming apparatus1, by receiving the light reflected by the person. When the user transfers a file from an image forming apparatus1bset at a different location (remote apparatus) to an image forming apparatus1a(in-hand apparatus), the image forming apparatus1bsuspends accepting an input from another user, while the file transfer operation is being performed. When the image forming apparatus1bbecomes unusable for a long time, other users feel inconvenience. The human sensor17detects whether any user is about to utilize the image forming apparatus1b.
When the human sensor17detects the presence of a person for a predetermined time (e.g., 1 to 3 minutes) or longer, while the image forming apparatus1bis restricted from accepting an input, the communication device16of the image forming apparatus1btransmits a signal indicating that there is a waiting user, to the image forming apparatus1a.Upon receipt of the signal from the image forming apparatus1b,the image forming apparatus1adisplays a message to the effect that there is a user intending to use the image forming apparatus1b,on the display device121. The control device11includes a processor, a random-access memory (RAM), and a read-only memory (ROM). The processor is, for example, a central processing unit (CPU), an MPU, or an ASIC. The control device11acts as a controller111, an authenticator112, and a file name changer113, when the processor executes a control program stored in the ROM or the like. Here, the cited components of the control device11may each be constituted in the form of a hardware circuit, instead of being realized by the operation according to the control program. The controller111serves to control the overall operation of the image forming apparatus1. The authenticator112decides whether the user is permitted to make access to the image forming apparatus1, on the basis of the identification information of the user (e.g., user ID) and the password. The file name changer113adds information proper to the image forming apparatus1(e.g., model name) to name of the file, after retrieving the file designated by the user from the storage device15. The controller111exemplifies the apparatus controller in the disclosure. The cloud server2includes a control device21, a storage device22, and a data transmission/reception device23. The control device21includes a processor, a RAM, and a ROM. The processor is, for example, a CPU, an MPU, or an ASIC. The control device21acts as a controller211and an authenticator212, when the processor executes a control program stored, for example, in the ROM. Here, the cited components of the control device21may each be constituted in the form of a hardware circuit, instead of being realized by the operation according to the control program. The controller211serves to control the overall operation of the cloud server2. The authenticator212distinguishes whether the user is permitted to make access to the cloud server2, on the basis of the identification information of the user (e.g., user ID) and the password. The controller211exemplifies the server controller in the disclosure. The storage device22is a large-capacity memory unit, for example constituted of an SSD or HDD, for storing the files. The file is, for example, transmitted from an external device connected to the network N. The data transmission/reception device23includes a communication module, to transmit and receive various files, to and from an external device, via the network N. The storage device22exemplifies the server storage device in the disclosure. FIG.2is a flowchart showing a process of the file transfer operation according to this embodiment.FIG.3toFIG.14each illustrate an example of the screen displayed by the display device121. Hereinafter, the image forming apparatus1being directly operated by the user will be referred to as in-hand apparatus, and the image forming apparatus1set at a different location from the in-hand apparatus will be referred to as remote apparatus. 
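The waiting-user notification described above can be sketched compactly. This is a minimal sketch, assuming the sensor reading, the input-suspension state, and the notification channel are exposed as callables; the concrete threshold and polling interval are assumptions within the 1-to-3-minute range given above, and all names are hypothetical.

import time

PRESENCE_THRESHOLD_SEC = 120  # assumed; the passage gives 1 to 3 minutes

def watch_for_waiting_user(sensor_detects, input_suspended, notify_in_hand):
    detected_since = None
    while input_suspended():
        if sensor_detects():
            if detected_since is None:
                detected_since = time.monotonic()   # presence just began
            elif time.monotonic() - detected_since >= PRESENCE_THRESHOLD_SEC:
                # Presence has persisted for the predetermined time:
                # signal the in-hand apparatus that a user is waiting.
                notify_in_hand("A user is waiting to use this apparatus.")
                return
        else:
            detected_since = None                   # presence interrupted; reset
        time.sleep(1.0)                             # assumed polling interval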
When the user wishes to transfer his/her own file stored in the remote apparatus to the in-hand apparatus, the user logs in in the cloud, from the input device12of the in-hand apparatus (S10).FIG.3illustrates a log-in screen to the cloud server2, displayed by the display device121of the in-hand apparatus. When the user inputs the user ID and the password through the touch panel, and presses an OK button51to input a transmission instruction to the input device12, while the display device121is displaying the log-in screen D1under the control of the controller111, the communication device16transmits the user ID and the password to the cloud server2. In the cloud server2, the data transmission/reception device23receives the user ID and the password, and the authenticator212decides whether the user is permitted to make access to the cloud server2, on the basis of the user ID and the password. To be more specific, the authenticator212decides that the user is permitted to make access to the cloud server2, when the user ID and the password accord with the legitimate user ID and password stored in advance. When the authenticator212of the cloud server2permits the user to log in, the data transmission/reception device23of the cloud server2transmits an image showing a screen for cloud operation (hereinafter, cloud operation screen), to the in-hand apparatus.FIG.4illustrates an example of a cloud operation screen D2, displayed by the display device121of the in-hand apparatus. When the user presses down a device check button52, the input device12receives an instruction to transmit an apparatus list, containing the image forming apparatuses1that can make access to the cloud server2. The data transmission/reception device23of the cloud server2transmits the apparatus list to the in-hand apparatus, according to the transmission instruction (S21). When the communication device16of the in-hand apparatus receives the apparatus list, the controller111causes the display device121to display the apparatus list (S11). FIG.5illustrates an example of the screen showing the apparatus list containing the image forming apparatuses1that can make access to the cloud server2, displayed by the display device121of the in-hand apparatus. In view of a list display screen D3, the user selects the image forming apparatus1that is the transfer source of the file, out of the apparatus list displayed on the list display screen D3. For example, when the user selects the apparatus named as “AAA1000”, and touches a checkbox53for the apparatus name “AAA1000”, the input device12receives an apparatus selection instruction, and the controller111causes the display device121to display a check mark, at the position where the checkbox53is displayed, on the list display screen D3. When the user presses down a connection button54, the input device12receives an access instruction, and the communication device16of the in-hand apparatus makes an access request, to the image forming apparatus1(remote apparatus) designated by the selection instruction (S12). In the remote apparatus, when the communication device16receives the access request from the in-hand apparatus, the input device12suspends accepting an input from another user (S30). At this point, the controller111causes the display device121to display a screen D4announcing that the apparatus is temporarily unusable.FIG.6illustrates an example of the screen displayed by the display device121of the remote apparatus, while the in-hand apparatus is in contact with the remote apparatus. 
Further, the communication device16of the remote apparatus transmits a log-in screen D5to the in-hand apparatus (S31). In the in-hand apparatus, when the communication device16receives the log-in screen D5, the controller111causes the display device121to display the log-in screen D5(S13).FIG.7illustrates an example of the log-in screen D5for logging in in the remote apparatus, displayed by the display device121of the in-hand apparatus. The user inputs the user ID and the password for logging in in the remote apparatus, through the input device12. When the user presses down an OK button55after inputting the user ID and the password, the input device12receives an authentication request. The communication device16of the in-hand apparatus transmits the user ID and the password that have been inputted, and also the authentication request, to the remote apparatus (S13). In the remote apparatus, when the communication device16receives the user ID, the password, and the authentication request, the authenticator112decides whether the user is permitted to log in in the remote apparatus, on the basis of the user ID and the password. The authenticator112decides that the user is permitted to log in in the remote apparatus, when the user ID and the password accord with the legitimate user ID and password stored in advance. When the authenticator112of the remote apparatus decides that the user is permitted to log in in the remote apparatus, on the basis of the user ID and the password, and permits the user to log in in the remote apparatus, the communication device16transmits a user box screen D6to the in-hand apparatus (S33). The user box screen D6includes information indicating the file name and the file format, as accompanying information. In the in-hand apparatus, when the communication device16receives the user box screen D6, the controller111causes the display device121to display the user box screen D6(S14).FIG.8illustrates an example of the user box screen D6corresponding to the remote apparatus. The user box refers to a memory region prepared in the storage device15of the image forming apparatus1with respect to each of the users, and in which a scanned image read by the image reading device13or facsimile reception data is temporarily stored. The user box screen D6exhibits a list of the files stored in the user box, with respect to the authenticated user. In view of the user box screen D6, the user selects the desired file out of the user box, and touches a checkbox for the desired file, for example a checkbox56for the file named as “doc0001”. When the user touches the checkbox56, the input device12receives the file selection instruction, and the controller111causes the display device121to display a check mark in the checkbox56. Then, when the user presses down an upload button57, the input device12receives a file transmission instruction. The communication device16transmits the file transmission instruction, and the name of the file designated by the file selection instruction, to the remote apparatus. In the remote apparatus, when the communication device16receives the file transmission instruction and the file name, the file name changer113retrieves the file corresponding to the file name from the storage device15, and changes the file name by adding thereto a character string proper to the apparatus (identification information indicating the remote apparatus). Then the communication device16transmits the file including the file name changed as above, to the cloud server2(S36). 
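A minimal sketch of the renaming performed by the file name changer113, assuming an underscore-separated naming pattern; the patent specifies only that a character string proper to the apparatus (such as the model name) is added and, as noted further below, that the retrieval date and time may be added as well.

from datetime import datetime
from pathlib import Path

def add_apparatus_name(file_name, model_name, with_timestamp=False):
    # Insert the string proper to the transfer-source apparatus (and optionally
    # the retrieval date and time) before the file extension.
    p = Path(file_name)
    suffix = f"_{model_name}"
    if with_timestamp:
        suffix += datetime.now().strftime("_%Y%m%d%H%M%S")
    return f"{p.stem}{suffix}{p.suffix}"

# e.g., add_apparatus_name("doc0001.prn", "AAA1000") -> "doc0001_AAA1000.prn"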
Here, when the controller111of the in-hand apparatus causes the display device121to display the user box screen D6at S14, the controller111may display the user box screen D6in such a form that rejects the file selection instruction (e.g., gray-out display), for the files other than the files of a predetermined file format, out of the files contained in the file list on the user box screen D6. The predetermined file format may be, for example, a PRN format. In this case, the input device12accepts the file selection instruction, only with respect to the files of the predetermined file format, out of the files contained in the file list. In the cloud server2, when the data transmission/reception device23receives the file having the file name changed as above, the controller211stores the file in the storage device22(S22). While the communication device16of the remote apparatus is transmitting the file designated by the user to the cloud, the controller111of the in-hand apparatus causes the display device121to display a window D7, notifying that the file is being uploaded.FIG.9illustrates an example of the window D7displayed by the display device121of the in-hand apparatus. The character string proper to the apparatus refers to, for example, the model name or machine name of the image forming apparatus1(e.g., AAA1000), determined so as to enable the apparatus that is the transfer source, to be identified. In addition, the file name changer113may add the date and time that the file has been retrieved from the storage device15, to the file name, in addition to the character string proper to the apparatus. In this case, the user can be made aware, upon acquiring the file in the in-hand apparatus, from which remote apparatus the file has been transmitted, and also at which time point the file was retrieved from the storage device15of the remote apparatus, in view of the file name. In this embodiment, as described above, the remote apparatus includes the file name changer113, which changes the file name by adding thereto the character string proper to the apparatus, and the communication device16transmits the file with the changed file name to the cloud server2(S36). Instead, the remote apparatus may be without the file name changer113, but the cloud server2may include a file name changer similar to the file name changer113. In this case, the remote apparatus may transmit, from the communication device16, the retrieved file and the file name to the cloud server2, and the file name changer of the cloud server2may add, when the data transmission/reception device23of the cloud server2receives the file and the file name, the information proper to the remote apparatus that has transmitted the file to the file name of the received file, and then the controller211may store the file and the file name to which the information proper to the remote apparatus has been added, in the storage device22. In this case, for example, the control device21acts as the controller211, the authenticator212, and the file name changer, by executing a control program including changing the file name. After the user has logged in in the remote apparatus via the in-hand apparatus, the human sensor17of the remote apparatus detects whether a person is present in front of the remote apparatus. 
When the predetermined time has elapsed after the human sensor17of the remote apparatus detected the presence of the person (YES at S34), the communication device16of the remote apparatus transmits information for notifying that there is a waiting user intending to use the remote apparatus, to the in-hand apparatus (S35). In the in-hand apparatus, when the communication device16receives the mentioned information, the controller111causes the display device121to display a message D8, for the user operating the in-hand apparatus.FIG.10illustrates an example of the screen showing the message D8, displayed by the display device121of the in-hand apparatus. In view of the message D8, the user operating the in-hand apparatus can be made aware that there is another user wishing to use the remote apparatus. Here, in the cloud server2, when the communication device16receives the file having the changed file name, and the controller211stores the file in the storage device22, the communication device16transmits a user box screen D9, showing the file list stored in the user box of the cloud server2at this point, to the in-hand apparatus (S22). In the in-hand apparatus, when the communication device16receives the user box screen D9, the controller111causes the display device121to display the user box screen D9.FIG.11illustrates an example of the user box screen D9, displayed by the display device121of the in-hand apparatus, after the file designated by the user is uploaded to the cloud server2from the remote apparatus, and showing the file list stored in the cloud server2. Icons56to58each represent the file uploaded from the remote apparatus and stored in the storage device22of the cloud server2. Then the user selects the icon of the file to be transferred to the in-hand apparatus, touches the icon of the selected file, and presses down a download button59. In the example shown inFIG.11, the file represented by the icon58is selected by the user. When the user presses down the download button59in this state, the input device12receives a file designation instruction, and the communication device16transmits the file designation instruction to the cloud server2. When the data transmission/reception device23of the cloud server2receives the file designation instruction, the controller211retrieves the file designated by the file designation instruction from the storage device22, and the data transmission/reception device23transmits the retrieved file to the in-hand apparatus (S22). In the in-hand apparatus, when the communication device16receives the file from the cloud server2, the controller111stores the received file, in the region of the user box for the user who has logged in, in the storage device15(S15). As an example shown inFIG.12, while the data transmission/reception device23of the cloud server2is transmitting the file designated by the user to the in-hand apparatus, the controller111of the in-hand apparatus causes the display device121to display a screen D10indicating that the file is being downloaded. The controller111of the in-hand apparatus causes the display device121to display, upon storing the file in the storage device15, a file list D11showing the files stored in the user box for the corresponding user, in the storage device15of the in-hand apparatus.FIG.13illustrates an example of the screen containing the file list D11, showing the files stored in the user box for the corresponding user, in the storage device15of the in-hand apparatus. 
As shown in the file list D11inFIG.13, the file name of the transferred file includes the name of the apparatus that is the transfer source (e.g., AAA1000 or BBB1500). Accordingly, the user can immediately identify the apparatus that is the transfer source of each of the files. In addition, as shown inFIG.14, the controller111may cause the display device121to display the file names in different background colors or in different character colors according to the apparatus that is the transfer source, to thereby allow the user to easily distinguish the apparatuses that are the transfer source. With the arrangement according to the foregoing embodiment, as described thus far, the user operating the in-hand apparatus can log in in the remote apparatus set at a different location, and acquire the file in the user box of the remote apparatus, via the cloud server2. In this process, the file name changer113of the remote apparatus adds the character string proper to the apparatus that is the transfer source to the file name, and therefore the user can easily identify the apparatus that is the transfer source of the file, on the in-hand apparatus. Further, while the user is performing the file transfer operation, after logging in in the remote apparatus from the in-hand apparatus, the input device12of the remote apparatus suspends accepting an input. However, since the image forming apparatus1includes the human sensor17, it can be detected that there is another user intending to use the remote apparatus. In addition, when the human sensor17of the remote apparatus keeps detecting the presence of a person for a predetermined time or longer, the communication device16of the remote apparatus outputs the information for notifying that another user wishing to use the remote apparatus is waiting, to the in-hand apparatus. Then the message based on such information is displayed by the display device121of the in-hand apparatus, and therefore the user can be made aware, in view of the message, that there is another person intending to use the remote apparatus. For example, in the case of the aforementioned existing file transfer system, although the file can be transferred from another image forming apparatus to the in-hand image forming apparatus by remote operation, there may be cases where the image forming apparatus that is the transfer source of the file is unable to be identified, which may incur confusion. With the arrangement according to the foregoing embodiment, in contrast, the file stored in the image forming apparatus set at a distant location can be transferred to the in-hand image forming apparatus, in such a manner that enables the image forming apparatus that is the transfer source of the file to be easily identified. The disclosure may be modified in various manners, without limitation to the foregoing embodiment. The configurations and processes of the foregoing embodiment, described with reference toFIG.1toFIG.14, are merely exemplary, and in no way intended to limit the disclosure to those configurations and processes. While the present disclosure has been described in detail with reference to the embodiments thereof, it would be apparent to those skilled in the art that various changes and modifications may be made therein within the scope defined by the appended claims. | 23,973 |
11861250 | DETAILED DESCRIPTION Embodiment [Configuration of Industrial Printing System X] Firstly, with reference toFIG.1, the overall system configuration of the industrial printing system X according to the present embodiment is described. The industrial printing system X according to the present embodiment is a system that executes design and printing in industrial printing (production printing). Here, in the industrial printing system X according to the present embodiment, the final product such as an output book is set as an "order", and each component of the order is set as a job. In the industrial printing system X according to the present embodiment, each job for outputting the order is assigned to a component apparatus2and managed by the workflow. The industrial printing system X according to the present embodiment includes a server1, the component apparatus2, and an administrator terminal3, and each apparatus is connected by a network5. The server1is a server for designing variable printing in industrial printing, managing a workflow, and executing process management. The server1is a PC (Personal Computer) server, a dedicated machine, a general-purpose machine, or the like, located on a so-called cloud or at a user's site. On this basis, the server1designs a variable document by dedicated design application software (hereinafter, simply referred to as "application"). Further, the server1manages each process of the industrial printing workflow by executing the printing process management application. Specifically, the server1sends and receives various instructions and information to and from the component apparatus2for each process in printing, and it manages the status and requests processing for each component apparatus2. In addition, the server1may be a server that executes a common platform that performs user management, tenant management, security management, notification service for maintenance, prepress management, storage management of each document, management of printing apparatuses, and the like. The above application may run on this server. The component apparatus2is a component that executes various jobs of production printing, and is each apparatus managed by the server1. The component apparatus2includes, for example, a terminal for submission, a terminal for design proofreading, a prepress apparatus, a printing apparatus for production printing, a post-processing apparatus, a shipping management server, and the like. In this embodiment, one of these apparatuses is simply referred to as a component apparatus2. Of the component apparatuses2, each terminal or server can be connected to the server1via a web browser on a PC, a smartphone, or the like, or via a dedicated application. The administrator terminal3is a terminal used by a printing process administrator, or the like, among users. The administrator terminal3allows the user to access the server1to design a variable document by GUI, check the progress status, and request processing. Next, with reference toFIG.2, the control configuration of the server1is described. The server1includes a control unit10, a network transmitting and receiving unit15, a storage unit19, and the like. Each unit is connected to the control unit10and its operation is controlled by the control unit10.
The control unit10is an information processing unit that includes a GPP (General Purpose Processor), a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a DSP (Digital Signal Processor), a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit, a processor for a specific application), or the like. The control unit10reads out the control program stored in the ROM or HDD of the storage unit19, expands the control program in the RAM, and executes the control program, so that the control unit10can be operated as each part of the functional block as described later. Further, the control unit10controls the entire apparatus according to the instruction information input from the administrator terminal3or the console. The network transmitting and receiving unit15is a network connection unit including a LAN board, a wireless transceiver, and the like, for connecting to the network5. The network5according to the present embodiment is, for example, a LAN (Local Area Network), Wi-Fi, WAN (Wide Area Network), a mobile phone network, a voice telephone network, or the like. The network transmitting and receiving unit15transmits/receives data on a data communication line, and it transmits/receives a voice signal on a voice telephone line. The storage unit19is a non-transitory recording medium such as a semiconductor memory (a ROM (Read Only Memory), a RAM (Random Access Memory), or the like), an HDD (Hard Disk Drive), or the like. A control program for controlling the operation of the server1is stored in the ROM or HDD of the storage unit19. The control program includes an OS (Operating System), middleware on the OS, services (daemons), various applications, database data, and the like. Among these, various applications include the above-mentioned printing process management application. [Functional Configuration of Server1] Here, with reference toFIG.3, the functional configuration of the server1is described. The control unit10of the server1includes a status management unit100, a process control unit110, a process management unit120, and a post-processing unit130. The storage unit19stores the variable document data300, the job ticket330, and the workflow setting data360. The status management unit100manages the design of the variable document data300according to the workflow setting data360. Specifically, the status management unit100manages the completion status of a plurality of records or a plurality of pages for variable printing. At this time, the status management unit100also manages the completion status of the part data400in which a “part” forming the page is stored. The process control unit110collectively acquires the completed records or completed pages managed by the status management unit100, and it creates the job ticket330. The process control unit110is able to set the job ticket330to a record priority mode or a page priority mode for the output of the variable document data300. Here, in the present embodiment, the record priority mode is a mode in which the collected records are advanced to the next step and output first. The page priority mode is a mode in which the collected pages are advanced to the next process and output first. In the record priority mode, the process control unit110records the record order of the completed records in the plurality of records to the job ticket330. On the other hand, in the page priority mode, the process control unit110acquires, as completed, a page in which all of the plurality of parts have been completed.
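Although the disclosure defines these structures only in prose, the bookkeeping performed by the status management unit100and the mode choice made by the process control unit110can be pictured with a short sketch. The following Python is a minimal illustration only; the names Mode, Part, and Page and the field layout are assumptions of this sketch, not structures defined by the present disclosure.

from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    RECORD_PRIORITY = "record"  # completed records advance to the next step first
    PAGE_PRIORITY = "page"      # completed pages advance to the next step first

@dataclass
class Part:
    name: str
    proofread_done: bool = False  # the "completion information"
    is_substitute: bool = False   # stands in for an unfinished part

@dataclass
class Page:
    number: int
    parts: list = field(default_factory=list)

    def completed(self, allow_substitute=False):
        # A page counts as complete when every referenced part is
        # proofread; substitute parts may optionally count as complete.
        return all(p.proofread_done or (allow_substitute and p.is_substitute)
                   for p in self.parts)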
At this time, the process control unit110creates the job ticket330by using the substitute part401for an uncompleted part. Then, the process control unit110saves the substitute part401as linking data (hereinafter, “link data”) that refers to external data. The external data may be a file stored in the storage unit19, a file stored in an external terminal or server, or the like. The process control unit110creates the job ticket330, for which prepress, printing, post-processing, and an output destination are specified, according to the modes. The process management unit120performs processing according to the job ticket330. In the present embodiment, the process management unit120performs a prepress process or a print process by using the job ticket330created by the process control unit110. Here, in the record priority mode, the process management unit120uses the information of the record order recorded in the job ticket330in the prepress process. On the other hand, in the page priority mode, the process management unit120records the page order of the completed pages in the job ticket330. In addition, the process management unit120uses information of the page order recorded in the job ticket330at the time of page insertion. Further, the process management unit120also manages the use of the substitute part401of the part data400. Specifically, when the job ticket330has at least one page including the substitute part401, the process management unit120stops the process before the prepress or RIP (Raster Image Processor) process of printing. After that, when the completed part data400corresponding to the substitute part401can be acquired, the process management unit120replaces the link data with the completed part data400and proceeds to the RIP process. In the present embodiment, the process management unit120causes the component apparatus2to execute each process according to the job ticket330. This process includes prepressing and printing. For printing, the output destination may be an e-mail output or an electronic document output. The post-processing unit130performs post-processing according to the job ticket330for the record or the page that has undergone the prepress process or printing process performed by the process management unit120. This post-processing includes instructions for collating processing and sorting processing. The variable document data300is a file, a database, or the like, which summarizes variable documents used at the time of variable printing and various data related thereto. The variable document data300may be described in, for example, JDF (Job Description Format) and/or JMF (Job Messaging Format). In this embodiment, the variable document data300includes form data310and variable data320. These data may be included in the variable document data300as attribute data. The form data310is data including a common form, or the like, which is used in variable printing. The form data310basically does not change at the time of printing, although the substitute part401may be replaced. The form data310may be, for example, data such as PDF (Portable Document Format), PDL (Page Description Language), PPML (Personalized Print Markup Language), which is a format based on XML (Extensible Markup Language), or the like.
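The link-data idea, that a substitute part401is stored as a reference to external data and later resolved to the completed part, can be sketched as follows. The file-based resolution and the class name are assumptions for illustration only, continuing the sketch above.

from pathlib import Path

class LinkData:
    """Reference to external part data, resolved once the real part exists."""

    def __init__(self, uri):
        self.uri = uri  # e.g. a path in the storage unit or on an external server

    def resolve(self):
        # Return the completed part data once available, else None.
        p = Path(self.uri)
        return p.read_bytes() if p.exists() else None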
Among these, the PDF may be PDF/X, which is a subset of the standard PDF defined by the International Organization for Standardization (ISO15930), a simpler PDF, or the like. The form data310may include one or more part data400. The part data400may be, for example, a design file that is image data such as jpg, gif, BMP, PNG, TIFF, or PS (Postscript), text or other document data, data of other types, or the like. In the present embodiment, the part data400is arranged on each page and RIP processed by prepress to form a page. In the present embodiment, the part data400is directly included in the form data310as data or stored as link data. That is, in the case of link data, the data body of the part data400may be a separate file. The part data400also includes information on whether or not the substitute part401is used. As for the part data400, when the substitute part401is used, the link data may be saved. Further, each part data400may include information indicating whether or not proofreading has been completed (hereinafter referred to as “completion information”), and information indicating an estimated time or delay time until the proofreading is completed (hereinafter referred to as “delay information”), or the like. In addition, the form data310may include layout information that defines the layout on the page, and the like. The layout information may include format information such as position (coordinates) and size on the page of the form, font size of variable data320, left alignment, center alignment, right alignment, and the like. Further, the form data310may include data for explaining the definition and items of the variable data320, or the like. In the present embodiment, the form data310may be different for each page, or it may be a collection of data divided into page units (hereinafter, referred to as “page data”). In this case, different sets of form data310may be prepared according to the page order, or different ones may be prepared according to the contents of the variable data320. In addition, the form data310may include proofreading completion information, delay information, and the like, for each piece of data on the page. Among these, the proofreading completion information may be, for example, information indicating a completion level such as first proofreading completed, re-proofreading completed, third-proofreading completed, fourth-proofreading completed, . . . , color proofreading completed, totally-completed. The delay information may be, for example, information calculated from the completion level of the proofreading completion information or the time when the completion level changed. The variable data320is data for variable output that changes the print content at the time of printing. The variable data320may be, for example, data where printing changes for each copy. Therefore, the variable data320may be embedded in the variable document data300in a tabular format including a plurality of records, a database format such as XML, or the like. Alternatively, the variable data320may be separately added as a file in a format that is easy to manage as a database. In this case, the variable data320may be a database such as a tab-separated or comma-separated file, a spreadsheet application file, another type of database file, a list file, or the like. In the present embodiment, the variable data320may include information indicating whether or not the proofreading has been completed (proofreading completion information), delay information, and the like, for each record.
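Since the variable data320may be kept as a comma-separated database carrying per-record proofreading completion information, reading it back can be sketched as below. The column names are invented for illustration; the patent does not prescribe a schema.

import csv

def load_variable_records(path):
    # Each row is one record of the variable data; "proofread" marks the
    # per-record completion information (assumed column name).
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def completed_records(records):
    return [r for r in records if r.get("proofread") == "done"]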
Further, the variable data320may also include link data of the part data400, proofreading completion information, delay information, information on whether or not the substitute part401is used, and the like. The job ticket330is job data for causing the server1or each component apparatus2to execute processing by using the created variable document data300. The job ticket330may be described in JDF (Job Description Format) and/or JMF (Job Messaging Format) as attribute data of processing, for example. For this job ticket330, for example, prepress, printing, post-processing, and an output destination are specified, and commands, data, and the like, to be transmitted to the output destination are set. In the present embodiment, the job ticket330includes collective data340and additional information350. The collective data340is data obtained by collecting data of completed records or completed pages that can be output from the form data310and the variable data320. In the case of the record priority mode, the collective data340may mainly include the record data. In addition, in the page priority mode, the collective data340may also include page data. Even in this case, the collective data340may include record data. In addition, the collective data340may include the link data of the part data400. The additional information350is data including information on the record order of the records included in the collective data340and/or information on the page order of the pages. The record order may be information indicating the position of the record in the variable data, the number of entries, or the like. Further, the page order information may be page number information, page position information in RIP-processed print data, and the like. Further, the additional information350may include proofreading completion information, delay information, or the like, for other records and/or pages. In addition, the additional information350may include proofreading completion information, delay information, and the like, for the part data400in each record or page. Further, the additional information350may include setting information regarding whether or not the record order is specified. In addition to this, the additional information350may include data created by the prepress process, correction contents from the workflow, processing results by offset printing, and the like. Further, the job ticket330may also include changed information according to the prepress process, the print process, or the post process. The workflow setting data360is data for setting a workflow for creating an order which is a final product by combining job templates. In the present embodiment, the workflow setting data360includes setting data for suppressing a delay when performing variable printing by the variable document data300. The setting data includes the setting information of the record priority mode or the page priority mode. In addition, the workflow setting data360may include default data (hereinafter, referred to as “template”). This template contains settings that determine what kind of job ticket330is generated. The template can also be shared. That is, changes and the like may be managed centrally for setting management. This may be done in the same way as globally using an instance of a “class” in an object-oriented language.
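The relation between the collective data340and the additional information350can be summarized in a sketch; again, the field names are illustrative assumptions, and a real ticket would be serialized as JDF and/or JMF rather than kept as Python objects.

from dataclasses import dataclass, field

@dataclass
class AdditionalInfo:
    record_order: list = field(default_factory=list)  # positions of completed records in the variable data
    page_order: list = field(default_factory=list)    # page numbers of completed pages
    insert_pages: list = field(default_factory=list)  # unfinished pages to be inserted later
    has_substitute: bool = False                      # whether a substitute part is included

@dataclass
class JobTicket:
    collective_data: list = field(default_factory=list)  # completed records or pages
    additional_info: AdditionalInfo = field(default_factory=AdditionalInfo)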
Here, the control unit10of the server1is made to function as a status management unit100, a process control unit110, a process management unit120, and a post-processing unit130by executing the control program stored in the storage unit19. Further, each part of the server1described above becomes a hardware resource for executing the image forming method of the present disclosure. In addition, a part or any combination of the above-mentioned functional configurations may be configured as hardware or a circuit using an IC, programmable logic, an FPGA (Field-Programmable Gate Array), or the like. [Variable Data Process by Industrial Printing System X] Next, with reference toFIGS.4to7, the variable data process by the industrial printing system X according to the embodiment of the present disclosure is described. In the variable data process of the present embodiment, first, the variable document data300is created. Then, from a plurality of records, a job ticket330obtained by collectively acquiring the completed records or completed pages of variable data is created. Then, the prepress process or the print process is performed by the created job ticket330. In the variable data process of the present embodiment, the control unit10of the server1mainly executes the control program stored in the storage unit19in cooperation with each unit and uses the hardware resources. In the following, with reference to the flowchart ofFIG.4, mainly, the details of the processing by the server1are described step by step. (Step S101) Firstly, the process control unit110performs the priority selection process. Specifically, the process control unit110starts creating a variable document by using a template, or the like, included in the workflow setting data360according to an instruction of a user who is an administrator, or the like, and manages the creation of the variable document. Therefore, the process control unit110may provide a GUI (Graphical User Interface) or a CUI (Character-based User Interface) via a design application. On this basis, the process control unit110stores the created variable document as the variable document data300in the storage unit19. At the time of creating the variable document data300, the process control unit110acquires an instruction for the record priority mode or the page priority mode from the user via the GUI or CUI, and it sets this in the variable document data300. At this time, the process control unit110can also set the number of specific records, the number of specific pages, and the like, for advancing to the next process according to the instruction from the user. For example, the number of specific records can be set to a value of several tens to several thousand, and the number of specific pages can be set to a value of one page to several hundred pages. In addition, the process control unit110can also set a value indicating the presence or absence of a designation to print the records in the order of the records (record order) in the variable data320(hereinafter referred to as “record order indication”). When the record order indication is present, as is described later, the collected records are also printed in order after waiting for the acquisition of other records. Further, the process control unit110can also set whether or not the substitute part401can be used. At the time of these settings, the process control unit110may use the template so that the settings can be commonly used even when other conditions are specified.
In addition, when the user does not specify the above-mentioned set values, the process control unit110can set default values by using the template. Further, the status management unit100can also create the template itself by using the GUI or CUI. In addition, the status management unit100can also accept direct specification in JDF and/or JMF, that is, a programmatic description using a so-called “macro” language, or the like. (Step S102) Next, the status management unit100performs the variable document design process. The status management unit100creates a variable document in which conditions are set for each record. The status management unit100designs a variable manuscript with the submitted data. Specifically, the status management unit100acquires and designs a variable document including each record and each part from the submission terminal. At the time of these designs, the status management unit100manages whether or not the proofreading is completed for each record and each page. In the present embodiment, the status management unit100acquires and sets the completion information and the delay information for each part data400of the form data310of the variable document data300and each record of the variable data320. At this time, the substitute part401may also be used. (Step S103) Next, the status management unit100performs the substitute part recording process. In the present embodiment, when the substitute part401is used for the purpose of suppressing delay, or the like, the status management unit100records in the form data310or the variable data320that the substitute part401is included in the record or the page. That is, the status management unit100can store the record and the page including the substitute part401. Further, the status management unit100may allow the design to be submitted for the other parts of the variable document on the condition that the substitute part401is used. (Step S104) Next, the process control unit110determines whether or not the number of records or the number of pages is complete. The process control unit110refers to the form data310and the variable data320of the variable document data300, and it confirms whether or not each part data400in each record or each page has been completed. In the record priority mode, if the number of completed records is equal to or greater than the set specific number of records, the process control unit110determines Yes. Specifically, for each record in the variable data320, the process control unit110may determine that a record where the contents and the referenced part data400have all been completed is a completed record. Alternatively, in the page priority mode, the process control unit110determines Yes if the number of completed pages is equal to or greater than the specific number of pages. At this time, on a page-by-page basis, the process control unit110may determine that a page where all the referenced part data400have been completed (all parts are completed) is a completed page. Here, when the setting allows the substitute part401to be used, the process control unit110may count a page as completed even if the substitute part401is used, provided that all the other part data400have been completed. In other cases, the process control unit110determines No. In the case of Yes, the process control unit110advances the process to step S105. In the case of No, the process control unit110returns the process to step S102and continues designing the variable document.
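Steps S101 and S104 together amount to a thresholded completion check. The sketch below continues the structures assumed earlier (Mode, Page); the setting keys and default values are invented for illustration, since the disclosure only names the items.

DEFAULT_TEMPLATE = {
    "mode": Mode.RECORD_PRIORITY,
    "specific_record_count": 100,     # e.g. several tens to several thousand
    "specific_page_count": 10,        # e.g. one page to several hundred pages
    "record_order_indication": False,
    "allow_substitute": True,
}

def select_priority(user_settings):
    # Step S101: unspecified values fall back to the template defaults.
    return {**DEFAULT_TEMPLATE, **user_settings}

def enough_completed(settings, records, pages):
    # Step S104: records are assumed to expose a completed() check
    # analogous to Page.completed().
    if settings["mode"] is Mode.RECORD_PRIORITY:
        done = sum(1 for r in records if r.completed())
        return done >= settings["specific_record_count"]
    done = sum(1 for pg in pages
               if pg.completed(allow_substitute=settings["allow_substitute"]))
    return done >= settings["specific_page_count"]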
(Step S105) If the required number of records or pages has been reached, the process control unit110performs the job ticket creation process. The process control unit110collectively acquires the completed records or pages managed by the status management unit100, and it creates a job ticket330. Specifically, the process control unit110may generate a job ticket330for which prepress, printing, post-processing, and an output destination are specified according to the attributes of the record and the page. At this time, the process control unit110stores the data of the collected records or pages as the collective data340in the job ticket330. Referring to the example ofFIG.5, in the record priority mode, the process control unit110collectively acquires a specific number of completed records. In this example, the process control unit110stores the acquired record data for the specific number of records as collective data340-1. Referring to the example ofFIG.6, in the case of the page priority mode, the process control unit110collectively acquires the specific number of completed pages. In this example, the process control unit110collects the acquired page data for the specific number of pages and stores them as collective data340-2and collective data340-3. Referring to the example ofFIG.7, the status management unit100may store the substitute part401of each page as link data in the job ticket330. In this example, the status management unit100may save the substitute part401in the collective data340-2as data that refers to external data, that is, link data. (Step S106) Next, the process control unit110performs the additional information addition process. In the record priority mode, the process control unit110records the record order of the completed records in the plurality of records in the additional information350of the job ticket330. That is, assuming data for which the record order is meaningful, the record order in the variable data320of the variable document data300is recorded as the additional information350. Alternatively, in the page priority mode, the process control unit110records the page order of the completed pages in the additional information350in the job ticket330. At this time, the process control unit110also records, in the additional information350, information on whether or not the substitute part401is included. Further, the process control unit110may record the information of the unfinished pages in the additional information350as target pages to be inserted later. Further, even in the page priority mode, the process control unit110may record the record order information in the additional information350. As a result, the information in the order of records can be held and used in the job ticket330, and the records can be prevented from being inserted in a misaligned manner. That is, it is possible to check that the record order is the same between the insert page and the page to be inserted. As described above, the process control unit110records the record order information, the page order information, the insertion page information, the information on whether or not the substitute part401is included, and the like, in the additional information350. In addition, the process control unit110may add matching information according to the post-processing process to the job ticket330in any mode. This matching information may be, for example, the part data400of a bar code in which the record order or the page order is recorded for each page.
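Steps S105 and S106 can then be pictured as collecting the completed items into the ticket and recording their order. The sketch continues the assumed structures, and records are assumed to carry an index into the variable data; none of these names come from the disclosure itself.

def create_job_ticket(settings, records, pages):
    # Step S105: collect completed records or pages into the ticket.
    ticket = JobTicket()
    if settings["mode"] is Mode.RECORD_PRIORITY:
        done = [r for r in records if r.completed()]
        ticket.collective_data = done[:settings["specific_record_count"]]
        # Step S106: keep the record order for later collation.
        ticket.additional_info.record_order = [r.index for r in ticket.collective_data]
    else:
        done = [pg for pg in pages
                if pg.completed(allow_substitute=settings["allow_substitute"])]
        ticket.collective_data = done[:settings["specific_page_count"]]
        ticket.additional_info.page_order = [pg.number for pg in ticket.collective_data]
        ticket.additional_info.has_substitute = any(
            p.is_substitute for pg in ticket.collective_data for p in pg.parts)
    return ticket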
That is, the process control unit110may set the job ticket330so that the record order and page order are printed as barcodes, or the like, so that they can be used in the post-processing process, and it may add the dedicated part data400. The following is an example of the job ticket330set in this way.

<insert-page syntax=“xxx”>
 <value syntax=“xxx”>
  <insert-page-number syntax=“integer”>Insert page number</insert-page-number>
  <insert-page-recode-id syntax=“integer”>record number</insert-page-recode-id>
 </value>
</insert-page>

(Step S107) Next, the process control unit110determines whether or not the page priority mode is set. The process control unit110determines Yes if the page priority mode is set in the variable document data300. In other cases, that is, if the record priority mode is set, the process control unit110determines No. In the case of Yes, the process control unit110advances the process to step S108. In the case of No, the process control unit110advances the process to step S110. (Step S108) In the page priority mode, the process control unit110determines whether or not the substitute part is included. The process control unit110determines Yes if the target page to be prepressed or printed on the job ticket330includes the substitute part401. In other cases, the process control unit110determines No. In the case of Yes, the process control unit110advances the process to step S109. In the case of No, the process control unit110advances the process to step S110. (Step S109) If the page includes the substitute part401, the process management unit120performs the substitute part waiting stop process. Specifically, the process management unit120stops the execution of the processing of the page including the substitute part401before the RIP process of the prepress process or the printing process. Here, the process management unit120replaces the substitute part401when the completed data is received. Specifically, when the actual part data400for the substitute part401has been completed and can be acquired from the submission terminal or the design proofreading terminal of the component apparatus2, the process management unit120can acquire this part data400in the job ticket330via the link data. On this basis, the process management unit120determines that a page for which all the actual part data400have been acquired is a completed page, acquires it, and proceeds with the process. That is, the process management unit120causes the substitute part401in the page to be replaced with the corresponding completed part data400, and then the RIP process is executed (a sketch of this wait-and-replace loop follows step S110below). On the other hand, the process management unit120may proceed to the RIP processing of the following prepress process or printing process for the page that does not include the substitute part401. (Step S110) Here, the process management unit120performs RIP processing. The process management unit120causes each component apparatus2to execute a job by using the created job ticket330. Therefore, firstly, the process management unit120causes the job ticket330to be RIP-processed by the prepress apparatus of the component apparatus2. As a result, print data is generated and transmitted to the printing apparatus. Alternatively, the process management unit120may perform the RIP process by using the job ticket330that has been proofread after printing. At this time, the process management unit120can use the record order information recorded in the job ticket330in the prepress process.
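The substitute part waiting stop process of step S109 is, in essence, a hold-and-swap check in front of the RIP process. A hedged sketch, reusing the LinkData reference assumed earlier together with an assumed link attribute on each part:

def ready_for_rip(page):
    # Step S109: hold the page while any substitute part remains; once the
    # completed data can be resolved via the link, swap it in.
    for part in page.parts:
        if part.is_substitute:
            data = part.link.resolve()  # part.link is a LinkData (assumed attribute)
            if data is None:
                return False            # completed part not yet available: keep waiting
            part.content = data
            part.is_substitute = False
    return True                         # all parts are real: the page may proceed to RIP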
(Step S111) Next, the process management unit120performs the printing process. The process management unit120causes the printing apparatus of the component apparatus2to perform printing based on the print data generated by the RIP process. The printed printing paper is conveyed to the post-processing apparatus of the component apparatus2. (Step S112) Next, the post-processing unit130determines whether or not printing is completed. The post-processing unit130determines Yes if printing is completed for at least the number of pages that can be post-processed. In other cases, the post-processing unit130determines No. In the case of Yes, the post-processing unit130advances the processing to step S113. In the case of No, the post-processing unit130returns the processing to step S102and continues the processing. (Step S113) If printing is completed, the post-processing unit130performs the post-processing execution process. The post-processing unit130causes the post-processing apparatus, the shipping management server, and the like, of the component apparatus2to perform each process. The post-processing unit130performs post-processing according to the job ticket330for the record or the page that has undergone the prepress process or printing process performed by the process management unit120. Specifically, the post-processing unit130causes the post-processing apparatus of the component apparatus2to execute the collating process and the rearrangement (reordering) process. At this time, the post-processing unit130can also refer to the record order and page order information recorded in the additional information350of the job ticket330and use them for sorting and page insertion. In the present embodiment, in the record priority mode, when the additional information350of the job ticket330has the record order designation, the post-processing unit130may wait until the other record manuscripts are printed and perform the collating process according to the designation. Alternatively, in the case of the page priority mode, the post-processing unit130may wait until the other page manuscripts are printed and perform the collating process according to the insertion page designation. More specifically, for example, the post-processing unit130refers to the record order information and the page order information from the additional information350of the job ticket330. According to this, the post-processing unit130causes the collating machine or sorter of the post-processing apparatus to execute the process of rearranging (sorting) in the order of records and the order of pages. Alternatively, the post-processing unit130may perform a process of reading the barcode, or the like, on the printed page with a camera or a scanner and rearranging the printed pages in the order of records and the order of pages. Further, the post-processing unit130may perform sorting under conditions different from the record order and execute the collating process. For example, it is possible to perform a process of “collating” the address in each record and collecting only records with a specific address. In addition, the post-processing unit130may cause the shipping management server to process the printed matter for which the post-processing has been completed. This completes the variable data process according to the embodiment of the present disclosure. As configured in this way, the following effects can be obtained. Printed matter in typical production printing is composed of multiple parts, each of which is designed and manufactured separately.
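The rearrangement in step S113 boils down to sorting the printed output by the order stored in the additional information350. A minimal sketch, assuming each printed sheet carries back the record identifier it was printed from (for example, read from a bar code; the record_id attribute is invented for illustration):

def collate(printed_sheets, ticket):
    # Sort sheets back into record order using the ticket's additional info;
    # sheets with an unknown id are placed last.
    order = {rec_id: i for i, rec_id in
             enumerate(ticket.additional_info.record_order)}
    return sorted(printed_sheets,
                  key=lambda s: order.get(s.record_id, len(order)))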
In general, there are the following processes before the printed matter is completed:
(1) Ordering
(2) Specification confirmation
 Product contents
 Schedule
 Cost estimation
(3) A) Typesetting, design, and proofreading (repeated until the end of proofreading)
 B) Complete data submission
(4) Completion (of proofreading)
(5) Printing
(6) Post-processing (bookbinding)
(7) Delivery
Here, in typical variable printing, in process (3), the printing process cannot proceed unless all the pages have been completed or the complete data has been submitted. That is, it was not possible to proceed to the work of the next process, such as prepressing and printing, until all the parts were prepared. For this reason, the delivery date may be delayed due to the delay of a specific part. In particular, in variable printing, since it is possible to design differently for each record, the number of parts is increased and the risk of delay is further increased. The server1of the industrial printing system X according to the present embodiment is a server of an industrial printing system that performs variable printing by production printing, including: a status management unit100that manages completion status of a plurality of records or a plurality of pages for the variable printing; a process control unit110that collectively acquires a completed record or a completed page managed by the status management unit100and creates a job ticket330; a process management unit120that performs a prepress process or printing process by using the job ticket330created by the process control unit110; and a post-processing unit130that performs post-processing according to the job ticket330for a record or a page that has undergone the prepress process or printing process performed by the process management unit120. With this configuration, in variable printing, the risk of delay can be suppressed by proceeding with processing from record manuscripts and page manuscripts that can be processed, without waiting for the completion of all records or pages. That is, the overall delay can be minimized by proceeding to the prepressing or printing work of the next process without waiting for the part data400of the record or the page. Therefore, it can be expected that the total cost up to the output of printed matter can be suppressed. In the server1according to the embodiment of the present disclosure, the process control unit110can be set to a record priority mode in which the collected records are output first, or a page priority mode in which the collected pages are output first. With this configuration, in the process control of variable printing, it is possible to select whether to proceed by records or by pages when a production delay is expected, and to manage the printing process reasonably. As a result, the risk of delay can be reduced. In the server1according to the embodiment of the present disclosure, in the case of the record priority mode, the process control unit110records the record order of the completed records in the plurality of records in the job ticket330, and the process management unit120uses the information on the record order recorded in the job ticket330in the prepress process. With this configuration, the record order information of the records that have been advanced can be collected and used in the prepress process. As a result, even when the record order is meaningful for the data, the output can be collated later. Therefore, the labor of rearranging can be reduced.
In the server1according to the embodiment of the present disclosure, in the case of the page priority mode, the process control unit110records the page order of the completed pages in the job ticket330, and the post-processing unit130uses the page order information recorded in the job ticket330when inserting a page. With this configuration, even if the pages are printed in a different order in the page priority mode, they can be rearranged in post-processing and output in the correct page order. Therefore, the labor of rearranging can be reduced. In the server1according to the embodiment of the present disclosure, the process control unit110creates the job ticket330for an unfinished part data400by using a substitute part401and saves the substitute part401as link data, and when the job ticket330has a page containing the substitute part401, the process management unit120stops the process before the prepress process or the RIP process of printing, and when the completed part data400can be obtained, replaces the link data with it and advances to the RIP process. With this configuration, in variable printing, it is possible to proceed with processing by using the substitute part401without waiting for the completion of all the manuscripts. As a result, the risk of delay on a page-by-page basis can be suppressed. Other Embodiments In the above-described embodiment, an example of printing and performing post-processing has been described. However, the process management unit120may output an e-mail or an electronic document instead of printing. Alternatively, the process management unit120may send an e-mail to the shipping management server of the component apparatus2and manage the e-mail together with the printed output. Further, a job ticket330for other processes, such as a process changed for inspection after output, may be created. Further, in the above-described embodiment, an example is described in which a job ticket330described in JDF and/or JMF is created and each processing of a variable document is performed. However, it may be configured to perform the same control as the job ticket330, such as directly controlling each apparatus without creating a JDF and/or a JMF. By configuring in this way, various configurations can be supported. Further, in the above-described embodiment, the selective use of the prepress apparatus and the printing apparatus of the component apparatus2has not been described. However, the output destination may be selected from a plurality of apparatuses according to the proofreading status of the record, the page, or the part data400in the page, or the like. That is, it is also possible to select a high-speed prepress apparatus or a digital printing apparatus for output according to the completion information or the delay information, for example, when there is a delay. Further, when the number of records is small, when printing is performed in a small lot, or the like, it is possible to specify a condition that printing is performed only by a digital printing apparatus without using an offset printing apparatus. Further, the offset printing apparatus and the digital printing apparatus may be used selectively depending on the record priority mode and the page priority mode or depending on the completion information and the delay information.
Alternatively, a digital printing apparatus may be used for a job ticket330in which the specific number of records or the specific number of pages is smaller than a specific threshold value, and an offset printing apparatus may be used when it is equal to or greater than the specific threshold value. With this configuration, various conditions can be set to perform variable printing that is actually required in an industrial printing system. Further, in the above-described embodiment, it is described that the user selects the record priority mode or the page priority mode. However, it may be automatically set according to the number of records, the attributes and types of the template and the variable document data300, the number of part data400and substitute parts401, and the like. For example, if there are more records than a certain number, the record priority mode may be used. Alternatively, when the number of part data400and substitute parts401commonly used on the pages is large, when the number of part data400itself is small, or the like, the page priority mode may be used. With this configuration, the user does not have to set the record priority mode or the page priority mode, and the usability can be improved. In the above-described embodiment, an example of automatically generating a job ticket330by the process management unit120has been described. However, it may be possible for the user to directly create the job ticket330according to the condition settings made by the process control unit110. Further, in the above-described embodiment, an example in which the job ticket330is attribute data of processing such as JDF and/or JMF is described. However, the job ticket330may also use data in a format such as a macro language or a programming language. Further, in the above-described embodiment, as the production printing, an example of variable printing on paper, sending an e-mail, and outputting an electronic document is described, but the present embodiment can also be applied to other production printing. For example, it is also applicable to variable book printing, on-demand printing, and other printing. Further, for example, it can be used for split printing of large-format posters, sheet printing of exteriors and interiors of aircraft and automobiles, manufacturing of electronic parts such as flat displays and electronic substrates, printing of cultured cells, and the like. In this case, as the component apparatus2, an industrial inkjet printer, an industrial robot, various reaction apparatuses, a culture apparatus, and the like can also be used. With this configuration, it can be used for various purposes. Further, in the above-described embodiment, an example in which various processes are performed on the server1has been described. However, a dedicated terminal for creating variable data320may be used, another server for managing workflow may be used, prepress processing may be performed via the administrator terminal3, or an e-mail transmission server may be used. Further, the configuration may be such that the job ticket330is created and controlled by another apparatus. Further, it is needless to say that the configuration and operation of the above-described embodiment are examples and can be appropriately modified and executed without departing from the aim of the present disclosure. | 45,177
11861251 | DETAILED DESCRIPTION First Embodiment Hereunder, a first embodiment of the disclosure will be described, with reference to the drawings.FIG.1is a cross-sectional view showing a structure of an image forming apparatus10and a delivery device50, constituting an image forming system100according to the first embodiment of the disclosure.FIG.2is a block diagram showing an internal configuration of the image forming apparatus10and the delivery device50. As shown inFIG.1, the image forming system100includes the image forming apparatus10, and the delivery device50connected to the image forming apparatus10. The image forming apparatus10forms an image on a recording sheet, an example of the recording medium in the disclosure. The delivery device50includes a reversing mechanism for reversing the recording sheet P transported from the image forming apparatus10. The delivery device50delivers the recording sheet P. [Configuration of Image Forming Apparatus10] As shown inFIG.1, the image forming apparatus10is an ink jet recording apparatus. The image forming apparatus10includes an image reading device11, an image forming device12, a conveying unit13, a paper feeding device14, a transport mechanism15, a display device16, and an operation device17. The image reading device11is constituted as an automatic document feeder (ADF), including a document feeding device2that transports a source document placed on a document tray1, and a scanner4that optically reads the source document transported by the document feeding device2, or placed on a platen glass3. The image reading device11emits light to the source document from a light emitter of the scanner4, and receives the reflected light with a charge-coupled device (CCD) sensor, to thereby read the source document and generate image data representing the source image. The image forming device12includes line heads5Y,5M,5C, and5K, respectively corresponding to yellow, magenta, cyan, and black colors. The image forming device12ejects ink droplets of the respective colors from the line heads5Y,5M,5C, and5K, onto a recording sheet P transported by the conveying unit13, according to the image data generated by the image reading device11, thereby forming a color image. The conveying unit13includes a drive roller6A, a follower roller6B, a tension roller6C, a transport belt7, and an adsorption roller8. The drive roller6A is connected to a drive motor. The drive roller6A is driven by the drive motor, to rotate counterclockwise. The transport belt7is an endless belt stretched around the drive roller6A, the follower roller6B, and the tension roller6C. The transport belt7rotates counterclockwise, so as to follow up the rotation of the drive roller6A. The follower roller6B and the tension roller6C rotate counterclockwise, so as to follow up the rotation of the transport belt7. The adsorption roller8is opposed to the follower roller6B, in contact with the transport belt7. The adsorption roller8electrically charges the transport belt7, to thereby electrostatically adsorb the recording sheet P delivered from the paper feeding device14, to the transport belt7. The paper feeding device14includes a paper cassette9A and a manual bypass tray9B. The paper feeding device14draws out the recording sheets P stored in the paper cassette9A or the manual bypass tray9B one by one, with a pickup roller rotated by a paper feeding motor, and delivers the recording sheet P to the transport route T1. 
The transport mechanism15includes a delivery roller pair31, a transport route T1extending from the paper feeding device14to the delivery roller pair31via the conveying unit13, a transport route T2formed between the delivery roller pair31and the conveying unit13, a plurality of transport roller pairs provided on the transport route T1and the transport route T2, a branch guide32provided at the branch point between the transport route T1and the transport route T2, a transport motor, and an actuator. The transport mechanism15causes the transport roller pair and the delivery roller pair31to rotate, by driving the transport motor, thereby transporting the recording sheet P along the transport route T1or the transport route T2. The branch guide32is made to switch the position by being driven by the actuator, so as to guide the recording sheet P transported along the transport route T1to the delivery roller pair31, or guide the recording sheet P delivered from the delivery device50to the transport route T2. The display device16is constituted of, for example, a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display. The display device16displays various types of screens related to the functions that the image forming apparatus10is configured to perform. The operation device17includes a plurality of hard keys, such as a start key for instructing the start of various operations, relevant to the functions that the image forming apparatus10is configured to perform. The operation device17also includes a touch panel overlaid on the display device16. The user can input, through the operation device17, various types of information, such as the instruction relevant to the functions that the image forming apparatus10is configured to perform. As shown inFIG.2, the image forming apparatus10further includes a control device18, a storage device20, an image processing device21, an image memory22, a communication device23, and an interface (I/F)24. The control device18includes a processor, a random-access memory (RAM), and a read-only memory (ROM). The processor is, for example, a central processing unit (CPU), a micro processing unit (MPU), or an application specific integrated circuit (ASIC). The control device18is electrically connected to the image reading device11, the image forming device12, the conveying unit13, the paper feeding device14, the transport mechanism15, the display device16, the operation device17, the storage device20, the image processing device21, the image memory22, the communication device23, and the I/F24. The control device18acts as a controller19, when the processor executes a control program stored in the ROM or the storage device20. Here, the controller19may be constituted in the form of a logic circuit, instead of being realized by the operation according to the control program. The controller19controls the operation of each component of the image forming apparatus10. The storage device20is a large-capacity memory unit such as a solid state drive (SSD) or a hard disk drive (HDD). The storage device20stores therein various types of data, and various control programs for realizing the basic functions of the image forming apparatus10. The storage device20contains, as an example of the control programs, an adjustment program for executing a raster image processing (RIP) order adjusting operation, according to this embodiment.
The image processing device21executes image processing as necessary, with respect to the image data generated by the image reading device11. The image processing device21also includes a raster image processor. The image processing device21executes the RIP operation including rasterizing the print data represented by page description language (PDL) data, and generating the raster image with respect to each page. The image processing device21stores the raster image of each page generated as above, in the image memory22. The communication device23includes a communication module such as a local area network (LAN) board. The image forming apparatus10can perform data communication, for example with an external device, such as a PC25connected via the network, through the communication device23. The communication device23exemplifies the input device in the disclosure. To the I/F24, the delivery device50is connected. The I/F24includes a plurality of terminals for electrical connection to the delivery device50. A power source is provided for each of the components of the image forming apparatus10, so that those components are activated with the power from the power source. In this embodiment, the controller19of the image forming apparatus10causes, by operating according to the adjustment program, the image processing device21to execute the RIP operation in forward order, with respect to the print data represented by the PDL data, upon receipt, through the communication device23, of the PDL data indicating one of a combination of reverse order and face-up, and a combination of forward order and face-down, as the combination of the sorting order and the face orientation at delivery of the recording sheet P on which the image has been formed. In addition, the controller19executes the RIP order adjusting operation, including causing the image processing device21to execute the RIP operation in the reverse order, with respect to the print data represented by the PDL data, upon receipt, through the communication device23, of the PDL data indicating one of a combination of forward order and face-up, and a combination of reverse order and face-down. [Configuration of Delivery Device50] As shown inFIG.1, the delivery device50includes a casing, having a sheet inlet51A formed on a first side face on the side of the image forming apparatus10, and a delivery port51B formed on a second side face on the opposite side of the first side face. On the second side face of the casing, an output tray60is provided, at a position under the delivery port51B. The delivery device50includes, inside the casing, a delivery roller pair61, a conveying unit51, a branch guide55, transport routes T3to T6, a plurality of transport roller pairs56, rotary drums57A and57B, and sensors58A and58B. The transport routes T4to T6, the branch guide55, the transport roller pair56on the transport routes T4to T6, the rotary drums57A and57B, and the sensor58B constitute the reversing mechanism. The delivery roller pair61is driven by a motor of the drive device62to rotate, to thereby deliver the recording sheet P to the output tray60, through the delivery port51B. The conveying unit51includes a drive roller52A, a follower roller52B, and a transport belt53. The drive roller52A is driven by a motor of the drive device62, so as to rotate. The transport belt53is an endless belt stretched around the drive roller52A and the follower roller52B. The transport belt53is made to rotate by the rotation of the drive roller52A.
The follower roller52B rotates so as to follow up the rotation of the transport belt53. The branch guide55is located at the branch point between the transport route T3and the transport route T4. The branch guide55is made to switch the position by being driven by an actuator of the drive device62, so as to guide the recording sheet P transported by the conveying unit51, to the transport route T3or the transport route T4. The transport route T3extends from the follower roller52B to the delivery roller pair61. The transport route T4extends from the follower roller52B to a position on the lower side of the rotary drum57A. The transport route T5extends from a position on the upper side of the rotary drum57A, to a position on the upper side of the rotary drum57B. The transport route T6extends from the position on the upper side of the rotary drum57B, to the delivery roller pair61. The plurality of transport roller pairs56are located along the transport routes T3to T6. The plurality of transport roller pairs56are each driven to rotate by a motor of the drive device62. The rotary drums57A and57B are located side by side on the lower side of the conveying unit51, such that the respective axial lines of the rotary drums57A and57B become parallel to the axial line of the drive roller52A. The rotary drums57A and57B are driven to rotate by a motor of the drive device62. The sensor58A is located close to an end portion of the transport route T4on the side of the rotary drum57A. The sensor58B is located close to an end portion of the transport route T5on the side of the rotary drum57A. The sensors58A and58B each detect whether the recording sheet P is present, at a predetermined position on the rotary drum57A. Although the type of the sensors58A and58B is not specifically limited, a reflective photo sensor or a transmissive photo sensor is generally employed. The sensors58A and58B each output an ON signal upon detecting the recording sheet P, and output an OFF signal when the recording sheet P is undetected. As shown inFIG.2, the delivery device50also includes a control device70, the drive device62, a storage device63, and an I/F64. The control device70includes a processor, a RAM, and a ROM. The processor is, for example, a CPU, an MPU, or an ASIC. The control device70is electrically connected to the drive device62, the storage device63, the I/F64, and the sensors58A and58B. The control device70acts as a controller65, when the processor executes a control program stored in the ROM or the storage device63. Here, the controller65may be constituted in the form of a logic circuit, instead of being realized by the operation according to the control program. The controller65controls the operation of each component of the delivery device50. The drive device62includes a plurality of motors, respectively connected to the drive roller52A, the transport roller pair56, the rotary drum57A and57B, and the delivery roller pair61. The drive device62causes the drive roller52A, the transport roller pair56, the rotary drums57A and57B, and the delivery roller pair61to rotate, by driving the corresponding motors. The drive device62also includes the actuator connected to the branch guide55. The drive device62drives the actuator, to thereby switch the position of the branch guide55. The storage device63is a large-capacity memory unit such as an SSD or an HDD. The storage device63stores therein various types of data, and various control programs for realizing the basic functions of the delivery device50.
To the I/F64, the image forming apparatus10is connected. The I/F64includes a plurality of terminals for electrical connection to the image forming apparatus10. A power source is provided for each of the components of the delivery device50, so that those components are activated with the power from the power source. [Operation] Referring first toFIG.1andFIG.2, the operation of the image forming system100, performed when executing simplex printing, duplex printing, non-reversing delivery, and reversed delivery, will be described hereunder. In the operation described hereunder, the image forming device12forms an image represented by a raster image generated by the image processing device21, on the recording sheet P. [Operation for Simplex Printing] When executing the simplex printing, the controller19of the image forming apparatus10causes the image forming device12to form the image represented by the raster image, on the upper face (in this case, first face) of the recording sheet P transported by the conveying unit13, according to the order of generation of the raster image, by the image processing device21. In the case where a plurality of pages of raster images are to be generated, the controller19causes the image forming device12to sequentially form the images represented by the respective raster images, without standing by for the completion of the generation of the raster images of all the pages. The controller19also deletes the raster image from the image memory22, each time the image formation of that raster image is finished. The controller19causes the transport roller pair on the transport route T1to rotate in a predetermined direction, by driving the transport motor of the transport mechanism15, and switches the position of the branch guide32so as to guide the recording sheet P toward the delivery roller pair31, by driving the actuator of the transport mechanism15. Thus, the controller19causes the transport mechanism15to transport the recording sheet P, having the image formed on the upper face (in this case, first face) thereof, along the transport route T1. Then the controller19causes the delivery roller pair31to deliver the recording sheet P to the delivery device50. At this point, the controller19transmits a first signal requesting to execute the non-reversing delivery, in other words to deliver the recording sheet P without reversing, or a second signal requesting to execute the reversed delivery, in other words to deliver the recording sheet P in a reversed orientation, to the delivery device50through the I/F24. [Operation for Duplex Printing] When executing the duplex printing, the controller19of the image forming apparatus10causes the image forming device12to form the image, for example represented by a first raster image according to the order of generation of the raster image by the image processing device21, on the upper face (in this case, first face) of the recording sheet P transported by the conveying unit13. The controller19causes the transport mechanism15to transport the recording sheet P, having the image formed on the upper face (in this case, first face) thereof, toward the delivery device50along the transport route T1, and causes the delivery roller pair31to deliver the recording sheet P to the delivery device50. At this point, the controller19transmits a third signal requesting to return the recording sheet P, to the delivery device50through the I/F24.
Upon receipt of the third signal through the I/F64, the controller65of the delivery device50causes the drive roller52A and the transport roller pair56on the transport route T4to rotate in a predetermined direction, by driving the motor of the drive device62, and causes the rotary drum57A to rotate counterclockwise. The controller65also switches the position of the branch guide55so as to guide the recording sheet P to the transport route T4, by driving the actuator of the drive device62. Accordingly, the recording sheet P, delivered from the image forming apparatus10through the sheet inlet51A, is transported by the transport belt53toward the follower roller52B, and then guided to the transport route T4by the branch guide55. The recording sheet P guided to the transport route T4is transported by the transport roller pair56on the transport route T4, to be picked up by the rotary drum57A. When the sensor58A detects the trailing edge of the recording sheet P picked up by the rotary drum57A, the controller65causes the drive roller52A and the transport roller pair56on the transport route T4to rotate in the direction opposite to the predetermined direction, and causes the rotary drum57A to rotate clockwise, by controlling the motor of the drive device62. As a result, the recording sheet P picked up by the rotary drum57A is transported by the transport roller pair56toward the conveying unit51along the transport route T4, and guided to the conveying unit51by the branch guide55. The recording sheet P guided to the conveying unit51is transported by the transport belt53toward the sheet inlet51A, and then delivered to the image forming apparatus10, through the sheet inlet51A. After transmitting the third signal, the controller19of the image forming apparatus10causes the transport roller pair on the transport route T2to rotate in the predetermined direction, by driving the transport motor of the transport mechanism15, and switches the position of the branch guide32, so as to guide the recording sheet P to the transport route T2, by driving the actuator of the transport mechanism15. Therefore, the recording sheet P transported from the delivery device50is guided to the transport route T2by the branch guide32, and transported toward the conveying unit13along the transport route T2. In this case, the recording sheet P is transported to the conveying unit13, with upper and lower faces reversed. The controller19causes the image forming device12to form the image represented by a second raster image, according to the order of generation of the raster images by the image processing device21, on the upper face (in this case, second face) of the recording sheet P transported by the conveying unit13. The controller19then causes the transport mechanism15to transport the recording sheet P, having the image formed on the upper face (second face) thereof, along the transport route T1, and causes the delivery roller pair31to deliver the recording sheet P to the delivery device50. At this point, the controller19transmits the first signal or the second signal, to the delivery device50through the I/F24.
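The sheet-return (switchback) control just described can be pictured compactly. The following is a minimal sketch in Python, under stated assumptions: the DriveDevice and Sensor stubs merely stand in for the drive device62and the sensor58A, and all names (rotate, switch_branch_guide_55, and so on) are hypothetical illustrations, not an actual firmware API.

```python
FORWARD, REVERSE = +1, -1

class DriveDevice:
    """Stub standing in for the drive device 62; real hardware would drive motors."""
    def rotate(self, part, direction):
        print(f"rotate {part}: {direction}")
    def switch_branch_guide_55(self, route):
        print(f"branch guide 55 -> route {route}")

class Sensor:
    """Simulated photo sensor: ON while the recording sheet P is detected."""
    def __init__(self, readings):
        self._readings = iter(readings)
    def is_on(self):
        return next(self._readings, False)

def wait_trailing_edge(sensor):
    """Block until the ON signal falls back to OFF, i.e. the trailing edge has passed."""
    while not sensor.is_on():       # wait for the leading edge to arrive
        pass
    while sensor.is_on():           # wait until the sheet has passed the sensor
        pass

def return_sheet_for_duplex(drive, sensor_58a):
    # Third signal received: feed the sheet through route T4 onto rotary drum 57A.
    drive.rotate("drive_roller_52A", FORWARD)
    drive.rotate("transport_rollers_T4", FORWARD)
    drive.rotate("rotary_drum_57A", "counterclockwise")
    drive.switch_branch_guide_55("T4")

    wait_trailing_edge(sensor_58a)  # sensor 58A: sheet fully picked up by drum 57A

    # Reverse the transport so the sheet travels back along T4 to the inlet 51A,
    # returning to the image forming apparatus with its faces reversed.
    drive.rotate("drive_roller_52A", REVERSE)
    drive.rotate("transport_rollers_T4", REVERSE)
    drive.rotate("rotary_drum_57A", "clockwise")

return_sheet_for_duplex(DriveDevice(), Sensor([True, True, True, False]))
```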
[Operation for Non-Reversing Delivery] Upon receipt of the first signal through the I/F64, the controller65of the delivery device50causes the drive roller52A, the transport roller pair56on the transport route T3, and the delivery roller pair61to rotate in the predetermined direction, by driving the motor of the drive device62, and switches the position of the branch guide55so as to guide the recording sheet P to the transport route T3, by driving the actuator of the drive device62. Accordingly, the recording sheet P, delivered from the image forming apparatus10through the sheet inlet51A, is transported by the transport belt53toward the follower roller52B, and guided to the transport route T3by the branch guide55. The recording sheet P guided to the transport route T3is transported along the transport route T3by the transport roller pair56, and then delivered to the output tray60by the delivery roller pair61, through the delivery port51B. [Operation for Reversed Delivery] Upon receipt of the second signal through the I/F64, the controller65of the delivery device50causes the drive roller52A and the transport roller pair56on the transport route T4to rotate in the predetermined direction, and causes the rotary drum57A to rotate counterclockwise, by driving the motor of the drive device62. The controller65also switches the position of the branch guide55so as to guide the recording sheet P to the transport route T4, by driving the actuator of the drive device62. Accordingly, the recording sheet P, delivered from the image forming apparatus10through the sheet inlet51A, is transported by the transport belt53toward the follower roller52B, and guided to the transport route T4by the branch guide55. The recording sheet P guided to the transport route T4is transported along the transport route T4by the transport roller pair56, and then picked up by the rotary drum57A. When the sensor58B detects the trailing edge of the recording sheet P on the rotary drum57A, the controller65causes the transport roller pairs56on the transport routes T5and T6, and the delivery roller pair61to rotate in the predetermined direction, and causes the rotary drums57A and57B to rotate clockwise, by controlling the motor of the drive device62. As a result, the recording sheet P picked up by the rotary drum57A is transported by the transport roller pair56on the transport route T5toward the rotary drum57B along the transport route T5, and picked up by the rotary drum57B. The recording sheet P picked up by the rotary drum57B is transported along the transport route T6by the transport roller pair56on the transport route T6, and delivered to the output tray60by the delivery roller pair61through the delivery port51B, with the upper and lower faces reversed. [Operation for RIP Order Adjusting Operation] FIG.3AtoFIG.3Care flowcharts for explaining the RIP order adjusting operation.FIG.4is a table for explaining details of the RIP order adjusting operation.FIG.5AtoFIG.5Hare schematic drawings each showing the status of the recording sheets delivered to the output tray60. Referring toFIG.3AtoFIG.5H, the operation of the image forming system100, performed when executing the RIP order adjusting operation, will be described hereunder. For the following description, it will be assumed that plain paper is employed as the recording sheet P, when the simplex printing is to be executed, and a single-sided glossy paper, only a first face of which is glossy, is employed as the recording sheet P, when the duplex printing is to be executed.
InFIG.5AtoFIG.5D, the hatched faces represent the second faces on which the image has not been formed. It is assumed here that, for example, the user designates, through the PC25, the image data of the portable document format (PDF) representing images of the first to fifth pages, and inputs a printing instruction specifying, as print settings, one of the simplex printing and duplex printing, one of forward order and reverse order indicating the sorting order of the recording sheet P, and one of face-down and face-up, indicating the face orientation at delivery. Upon receipt of the printing instruction, the controller of the PC25generates the PDL data indicating the designated image data and the print setting, using a printer driver stored in the storage device of the PC25, and transmits the generated PDL data to the image forming apparatus10, through the communication device of the PC25. Upon receipt of the PDL data through the communication device23, the controller19of the image forming apparatus10starts to execute the RIP order adjusting operation shown inFIG.3AtoFIG.3C. In the RIP order adjusting operation, first, the controller19decides whether the PDL data is indicating the simplex printing (step S10). (1) When Simplex Printing is Designated Upon deciding that the PDL data is indicating the simplex printing (YES at step S10), the controller19decides whether the trueness values agree with each other, on the basis of a combination of sorting order and face orientation at delivery indicated by the PDL data (step S11). Here, it is assumed that the controller19defines in advance, with respect to the sorting order, the forward order as "true" and the reverse order as "false". The controller19also defines in advance, with respect to the face orientation at delivery, the face-down orientation as "true" and the face-up orientation as "false". (1-1) When Combination of Reverse Order and Face-Up is Designated When the combination of sorting order and face orientation at delivery indicated by the PDL data is the reverse order which is "false", and the face-up orientation which is also "false", the controller19decides that the trueness values agree with each other (YES at step S11), and causes the image processing device21to start to execute the RIP operation, as shown in a table40ofFIG.4, in forward order (in this case, in the order from the first page to the fifth page) with respect to the print data indicated by the PDL data (step S12). After step S12, the controller19causes the image forming device12to start to execute the simplex printing (step S13). In the simplex printing of this example, the controller19causes the image forming device12to form the image represented by the raster image on the upper face (in this case, first face) of the recording sheets P, sequentially transported by the conveying unit13, according to the order of generation of the raster images (in this case, in the order from the first page to the fifth page). After step S13, the controller19decides whether the PDL data is indicating the face-up orientation (step S14). In this example, the controller19decides that the PDL data is indicating the face-up orientation (YES at step S14), and transmits the first signal requesting to execute the non-reversing delivery, to the delivery device50through the I/F24(step S15). After step S15, the controller19finishes the RIP order adjusting operation.
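To make the decision in step S11 concrete, the following is a minimal Python sketch of the trueness comparison and of the signal selected in the simplex case; it merely restates the logic of the table40, and the function names are illustrative only.

```python
def rip_order(sort_order, delivery_face):
    """Step S11/S18: RIP in forward order when the two trueness values agree."""
    forward_is_true = (sort_order == "forward")         # forward = true, reverse = false
    face_down_is_true = (delivery_face == "face-down")  # face-down = true, face-up = false
    return "forward" if forward_is_true == face_down_is_true else "reverse"

def delivery_signal_simplex(delivery_face):
    """Steps S14-S16: face-up -> first signal, face-down -> second signal."""
    return "first (non-reversing)" if delivery_face == "face-up" else "second (reversed)"

for sort_order in ("forward", "reverse"):
    for face in ("face-up", "face-down"):
        print(f"{sort_order:7s} + {face:9s} -> RIP {rip_order(sort_order, face):7s}, "
              f"{delivery_signal_simplex(face)} delivery")
```

For the duplex case described later, the same rip_order() decision applies, while the mapping to the signals is inverted: face-up leads to the second signal, and face-down to the first.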
Upon receipt of the first signal transmitted from the image forming apparatus10, through the I/F64, the controller65of the delivery device50executes the non-reversing delivery operation. Accordingly, as shown inFIG.5A, the plurality of recording sheets P, sorted in reverse order (in this case, from the fifth to the first page) from the side of the leading page (uppermost page, in the case of face-up orientation), are delivered to the output tray60in the face-up orientation, with the printed image oriented upward. (1-2) When Combination of Forward Order and Face-Down is Designated When the combination of sorting order and face orientation at delivery indicated by the PDL data is the forward order which is "true" and the face-down orientation which is also "true", the controller19decides that the trueness values agree with each other (YES at step S11), and executes the operation of step S12and step S13, as described above. After step S13, the controller19decides that the PDL data is indicating the face-down orientation (NO at step S14), and transmits the second signal requesting to execute the reversed delivery, to the delivery device50through the I/F24(step S16). After step S16, the controller19finishes the RIP order adjusting operation. Upon receipt of the second signal transmitted from the image forming apparatus10, through the I/F64, the controller65of the delivery device50executes the reversed delivery operation. Accordingly, as shown inFIG.5B, the plurality of recording sheets P, sorted in forward order (in this case, from the first to the fifth page) from the side of the leading page (lowermost page, in the case of face-down orientation), are delivered to the output tray60in the face-down orientation, with the printed image oriented downward. (1-3) When Combination of Forward Order and Face-Up is Designated When the combination of sorting order and face orientation at delivery indicated by the PDL data is the forward order which is "true" and the face-up orientation which is "false", the controller19decides that the trueness values do not agree with each other (NO at step S11), and causes the image processing device21to start to execute the RIP operation, as shown in the table40, in reverse order (in this case, in the order from the fifth page to the first page) with respect to the print data indicated by the PDL data (step S17). After step S17, the controller19executes the operation of step S13. In the simplex printing of this example, the controller19causes the image forming device12to form the image represented by the raster image on the upper face (in this case, first face) of the recording sheets P, sequentially transported by the conveying unit13, according to the order of generation of the raster images (in this case, in the order from the fifth page to the first page). After step S13, the controller19decides that the PDL data is indicating the face-up orientation (YES at step S14), and executes the operation of step S15. After step S15, the controller19finishes the RIP order adjusting operation. Upon receipt of the first signal through the I/F64, the controller65of the delivery device50executes the non-reversing delivery operation. Accordingly, as shown inFIG.5C, the plurality of recording sheets P, sorted in forward order (in this case, from the first to the fifth page) from the side of the leading page (uppermost page, in the case of face-up orientation), are delivered to the output tray60in the face-up orientation, with the printed image oriented upward.
(1-4) When Combination of Reverse Order and Face-Down is Designated When the combination of sorting order and face orientation at delivery indicated by the PDL data is the reverse order which is "false" and the face-down orientation which is "true", the controller19decides that the trueness values do not agree with each other (NO at step S11), and executes the operation of step S17and step S13as described above. After step S13, the controller19decides that the PDL data is indicating the face-down orientation (NO at step S14), and executes the operation of step S16. After step S16, the controller19finishes the RIP order adjusting operation. Upon receipt of the second signal through the I/F64, the controller65of the delivery device50executes the reversed delivery operation. Accordingly, as shown inFIG.5D, the plurality of recording sheets P, sorted in reverse order (in this case, from the fifth to the first page) from the side of the leading page (lowermost page, in the case of face-down orientation), are delivered to the output tray60in the face-down orientation, with the printed image oriented downward. (2) When Duplex Printing is Designated Upon deciding that the PDL data is indicating the duplex printing (NO at step S10), the controller19decides whether the trueness values agree with each other, on the basis of the combination of sorting order and face orientation at delivery, indicated by the PDL data (step S18). (2-1) When Combination of Reverse Order and Face-Up is Designated When the combination of sorting order and face orientation at delivery indicated by the PDL data is the reverse order, which is "false", and the face-up orientation, which is also "false", the controller19decides that the trueness values agree with each other (YES at step S18), and causes the image processing device21to start to execute the RIP operation, as shown in the table40, in forward order (in this case, in the order from the first page to the fifth page) with respect to the print data indicated by the PDL data (step S19). After step S19, the controller19causes the image forming device12to start to execute a first duplex printing (step S20). In the first duplex printing, the controller19causes the image forming device12to form the image represented by the raster image generated through an odd-numbered process on the first face, and the image represented by the raster image generated through an even-numbered process on the second face, of each of the recording sheets P sequentially transported by the conveying unit13. In this case, as shown in the table40, the odd-numbered images (in this case, images of first page, third page, and fifth page) are formed in this order on the respective first faces, and the even-numbered images (in this case, images of second page and fourth page) are formed in this order on the respective second faces, of the first to third recording sheets P. Here, the second face of the third recording sheet P remains blank. After step S20, the controller19decides whether the PDL data is indicating the face-up orientation (step S21). Upon deciding that the PDL data is indicating the face-up orientation (YES at step S21), the controller19transmits the second signal to the delivery device50through the I/F24(step S22). After step S22, the controller19finishes the RIP order adjusting operation. Upon receipt of the second signal through the I/F64, the controller65of the delivery device50executes the reversed delivery operation.
Accordingly, as shown inFIG.5E, the plurality of recording sheets P, on which the pages are sorted in reverse order (in this case, from the fifth to the first page) from the side of the leading page (uppermost page, in the case of face-up orientation), are delivered to the output tray60in the face-up orientation, with the first face, having the image of the odd-numbered page formed thereon, oriented upward. (2-2) When Combination of Forward Order and Face-Down is Designated When the combination of sorting order and face orientation at delivery indicated by the PDL data is the forward order which is "true" and the face-down orientation which is also "true", the controller19decides that the trueness values agree with each other (YES at step S18), and executes the operation of step S19and step S20, as described above. Upon deciding, after step S20, that the PDL data is indicating the face-down orientation (NO at step S21), the controller19transmits the first signal to the delivery device50through the I/F24(step S23). After step S23, the controller19finishes the RIP order adjusting operation. Upon receipt of the first signal through the I/F64, the controller65of the delivery device50executes the non-reversing delivery operation. Accordingly, as shown inFIG.5F, the plurality of recording sheets P, on which the pages are sorted in forward order (in this case, from the first to the fifth page) from the side of the leading page (lowermost page, in the case of face-down orientation), are delivered to the output tray60in the face-down orientation, with the first face, having the image of the odd-numbered page formed thereon, oriented downward. (2-3) When Combination of Forward Order and Face-Up is Designated When the combination of sorting order and face orientation at delivery indicated by the PDL data is the forward order which is "true" and the face-up orientation which is "false", the controller19decides that the trueness values do not agree with each other (NO at step S18), and causes the image processing device21to start to execute the RIP operation, as shown in the table40, in reverse order with respect to the print data indicated by the PDL data (step S24). At this point, in the case where the print data consists of an even number of pages, the controller19causes the image processing device21to execute the RIP operation only in reverse order. When the print data consists of an odd number of pages, the controller19causes the image processing device21to execute the RIP operation in reverse order, along with insertion of a blank page at the leading position. In this case, the print data consists of five pages, which is an odd number, and therefore the controller19causes the image processing device21, as shown in the table40, to generate the blank page as the first page, and then generate the raster images in the order from the fifth page to the first page. After step S24, the controller19causes the image forming device12to start to execute a second duplex printing (step S25). In the second duplex printing, the controller19causes the image forming device12to form the image represented by the raster image generated through an even-numbered process on the first face, and the image represented by the raster image generated through an odd-numbered process on the second face, of each of the recording sheets P sequentially transported by the conveying unit13.
In this case, as shown in the table40, the images generated through the even-numbered processes (in this case, the images of the fifth, third, and first pages) are formed in this order on the respective first faces, and the images generated through the odd-numbered processes (in this case, the blank page and the images of the fourth and second pages) are formed in this order on the respective second faces, of the first to third recording sheets P. After step S25, the controller19decides whether the PDL data is indicating the face-up orientation (step S26). Upon deciding that the PDL data is indicating the face-up orientation (YES at step S26), the controller19transmits the second signal to the delivery device50through the I/F24(step S27). After step S27, the controller19finishes the RIP order adjusting operation. Upon receipt of the second signal through the I/F64, the controller65of the delivery device50executes the reversed delivery operation. Accordingly, as shown inFIG.5G, three recording sheets P, on which the pages are sorted in forward order (in this case, from the first to the fifth page) from the side of the leading page (uppermost page, in the case of face-up orientation), are delivered to the output tray60in the face-up orientation, with the first face, having the image of the odd-numbered page formed thereon, oriented upward. (2-4) When Combination of Reverse Order and Face-Down is Designated When the combination of sorting order and face orientation at delivery indicated by the PDL data is the reverse order which is "false" and the face-down orientation which is "true", the controller19decides that the trueness values do not agree with each other (NO at step S18), and executes the operation of step S24and step S25, as described above. Upon deciding, after step S25, that the PDL data is indicating the face-down orientation (NO at step S26), the controller19transmits the first signal to the delivery device50through the I/F24(step S28). After step S28, the controller19finishes the RIP order adjusting operation. Upon receipt of the first signal through the I/F64, the controller65of the delivery device50executes the non-reversing delivery operation. Accordingly, as shown inFIG.5H, three recording sheets P, on which the pages are sorted generally in reverse order (in this case, fifth page, third page, fourth page, first page, and second page) from the side of the leading page (lowermost page, in the case of face-down orientation), are delivered to the output tray60in the face-down orientation, with the first face, having the image of the odd-numbered page formed thereon, oriented downward. As described above, in the case where the combination of reverse order and face-down is designated to execute the duplex printing, the image of the odd-numbered page is formed on the first face (i.e., glossy face) of the recording sheet P, like the case where other combinations are designated, although the pages on the recording sheets P are not sorted in perfect reverse order. Now, with the aforementioned known technique, in the case where the recording sheets have to be delivered in reverse order, the print data is subjected to the RIP operation in forward order, and the printing is sequentially started from the last page at the time when the RIP operation of the last page is finished. Accordingly, since the printing is unable to be started until the RIP operation of all the pages is finished, the time required for the printing is prolonged.
In addition, the printing is started after the raster images of all the pages are stored in the image memory, and therefore the memory consumption is increased. According to the foregoing embodiment, in contrast, the controller19causes the image processing device21, upon receipt, through the communication device23, of the PDL data indicating one of the combination of reverse order and face-up, and the combination of forward order and face-down, to execute the RIP operation in forward order, with respect to the print data indicated by the PDL data. In addition, the controller19causes the image processing device21, upon receipt, through the communication device23, of the PDL data indicating one of the combination of forward order and face-up, and the combination of reverse order and face-down, to execute the RIP operation in reverse order, with respect to the print data indicated by the PDL data. As described above, when the recording sheets P have to be delivered in reverse order, the RIP operation is executed in reverse order, with respect to the print data. Accordingly, even when the recording sheets P have to be delivered in reverse order, the printing can be started sequentially, in the order of the generation of the pages, without the need to stand by for the completion of the RIP operation of all the pages. Therefore, the time required for the printing can be shortened, compared with the case where the printing is started after the RIP operation of all the pages is finished. In addition, since the raster images are sequentially deleted from the image memory, each time the printing of the corresponding raster image is finished, the increase in memory consumption can be suppressed, compared with the case where the printing is started after the raster images of all the pages are stored in the image memory. According to the foregoing embodiment, when the image processing device21is executing the RIP operation in forward order, and the instruction to execute the duplex printing is received through the communication device23, the controller19causes the image forming device12to form the odd-numbered image on the first face of the recording sheet P, and the even-numbered image on the second face of the recording sheet P. In addition, when the image processing device21is executing the RIP operation in reverse order, and the instruction to execute the duplex printing is received through the communication device23, the controller19causes the image forming device12to form the even-numbered image on the first face of the recording sheet P, and the odd-numbered image on the second face of the recording sheet P. Accordingly, whichever order the RIP operation is executed in, the same images can be printed on the first face and the second face. In other words, the image to be formed on the first face can be correctly printed on the first face, and the image to be formed on the second face can be correctly printed on the second face. Therefore, even when the recording sheet P, the finish of which is different between the first face and the second face, is employed, such as a preprinted paper, a single-sided glossy paper, or a single-sided coated paper, a uniform finish quality can be attained.
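The page-to-face assignment summarized above can also be expressed compactly. Below is a minimal Python sketch that derives the RIP sequence of the table40for duplex printing, including the blank page inserted when the reverse-order RIP is applied to an odd number of pages; it is an illustration of the described behavior, not the actual controller code.

```python
def duplex_rip_sequence(n_pages, order):
    """RIP generation order for duplex printing of pages 1..n_pages."""
    pages = list(range(1, n_pages + 1))
    if order == "forward":
        return pages                          # first duplex printing
    seq = pages[::-1]                         # second duplex printing
    if n_pages % 2 == 1:
        seq = ["blank"] + seq                 # blank page at the leading position
    return seq

def assign_faces(seq, order):
    """Pair consecutive RIP results and assign them to the faces of each sheet."""
    sheets = []
    for i in range(0, len(seq), 2):
        odd_proc = seq[i]                                        # odd-numbered process
        even_proc = seq[i + 1] if i + 1 < len(seq) else "blank"  # even-numbered process
        if order == "forward":   # first duplex printing: odd process -> first face
            sheets.append((odd_proc, even_proc))
        else:                    # second duplex printing: even process -> first face
            sheets.append((even_proc, odd_proc))
    return sheets                # list of (first face, second face) per sheet

print(assign_faces(duplex_rip_sequence(5, "forward"), "forward"))
# -> [(1, 2), (3, 4), (5, 'blank')]   (cases 2-1 and 2-2)
print(assign_faces(duplex_rip_sequence(5, "reverse"), "reverse"))
# -> [(5, 'blank'), (3, 4), (1, 2)]   (cases 2-3 and 2-4)
```

In both orders the odd page numbers (here 1, 3, 5) land on the first faces, which is exactly why the glossy first face receives the intended image regardless of the RIP order.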
According to the foregoing embodiment, when the PDL data, indicating the combination of sorting order and face orientation at delivery in which the trueness values do not agree with each other, and the duplex printing, is received through the communication device23, and the print data consists of an odd number of pages, the controller19causes the image processing device21to execute the RIP operation in reverse order, along with the insertion of a blank page at the leading position. Therefore, even when the print data consists of an odd number of pages, a uniform finish quality can be surely attained, irrespective of the combination of sorting order and face orientation at delivery. According to the foregoing embodiment, further, upon receipt, through the communication device23, of the instruction to execute the duplex printing and the instruction to execute the face-up delivery, the controller19causes the delivery device50to execute the reversed delivery. In addition, upon receipt, through the communication device23, of the instruction to execute the duplex printing and the instruction to execute the face-down delivery, the controller19causes the delivery device50to execute the non-reversing delivery. Therefore, the user can surely acquire the printed materials, in the desired combination of sorting order and face orientation at delivery of the recording sheets, even when the duplex printing is executed. [Other Variation] Although the image forming device12is configured to form an image on the recording sheet P in the foregoing embodiment, the disclosure is not limited to such an embodiment. The image forming device12may form an image on a different recording medium, other than the recording sheet P. For example, an overhead projector (OHP) sheet may be employed, to form an image. The disclosure may be modified in various manners, without limitation to the configuration according to the foregoing embodiment. For example, although the image forming apparatus10is exemplified by the color multifunction peripheral in the embodiments, other types of image forming apparatus, such as a monochrome multifunction peripheral, a copier, or a facsimile machine may be employed instead. In addition, a laser-based image forming apparatus may be employed as the image forming apparatus10, in place of the ink jet recording apparatus. The configurations and processings of the foregoing embodiment, described with reference toFIG.1toFIG.5H, are merely exemplary, and in no way intended to limit the disclosure to those configurations and processings. While the present disclosure has been described in detail with reference to the embodiments thereof, it would be apparent to those skilled in the art that various changes and modifications may be made therein within the scope defined by the appended claims. | 47,392 |
11861252 | DETAILED DESCRIPTION OF THE EMBODIMENTS Hereinafter, referring to the accompanying drawings, a personal computer (hereinafter referred to as a "PC") using a supporting program according to the present disclosures will be described in detail. The present specification discloses a supporting program executed by a PC connected with a printer having a printing function. First Embodiment A PC1according to a first embodiment includes, as shown inFIG.1, a controller10including a CPU11and a memory12. The PC1is an example of an information processing device. The CPU11is an example of a computer. The PC1is also equipped with a user interface (hereinafter referred to as a "user IF")13and a communication interface (hereinafter referred to as a "communication IF")14, which are electrically connected to the controller10. It is noted that the "controller"10indicated inFIG.1is a generic term for hardware and software used to control the PC1, and does not necessarily represent a single piece of hardware that actually exists in the PC1. The CPU11executes various processes in accordance with programs read from the memory12and/or based on user operations. Various programs, including various application programs (hereinafter simply referred to as "applications"), and various data are stored in the memory12. The memory12is also used as a work area when various processes are executed. It is noted that a buffer provided in the CPU11is also an example of memory. Examples of the memory12are not limited to a ROM, a RAM, an HDD, and the like included in the PC1, but also include a storage medium such as a CD-ROM, a DVD-ROM, and the like that is readable and writable by the CPU11. The user IF13includes hardware for displaying a screen for informing the user of information and hardware for receiving operations by the user. The user IF13may be a combination of a display configured to display information and a mouse, keyboard, and the like having an input receiving function, or a touchscreen panel having both a display function and an input receiving function. The communication IF14includes hardware for communicating with external devices such as the printer2, a management device3and the like. The communication standard of the communication IF14is, for example, Ethernet (registered trademark), Wi-Fi (registered trademark), USB, or the like. The PC1may include multiple communication IFs14respectively corresponding to multiple communication standards. The memory12of the PC1stores an operating system (hereinafter referred to as an "OS")21including a general-use printing program41, a supporting program42, and an editing application43, as shown inFIG.1. The supporting program42is an example of a supporting program. The editing application43is an example of an application program. The OS21is, for example, Windows (registered trademark), macOS (registered trademark), Linux (registered trademark), iOS (registered trademark), or Android (registered trademark). The general-use printing program41is an OS-standard program for executing printing on various printers, such as the printer2, based on the user's instructions. The general-use printing program41supports functions that can be commonly used by multiple models of printers provided by various printer vendors.
The general-use printing program41, however, does not support all of the functions that are inherent to the multiple models of printers, and the functions the general-use printing program41supports are limited to generic ones. The supporting program42is a program or group of programs that accompanies the processing of the general-use printing program41and executes processing based on instructions from the OS21, and is an application that supports the control of target hardware. The supporting program42in the present embodiment corresponds to the model of the printer2connected to the PC1. For example, the supporting program42is launched by the general-use printing program41when an instruction to execute printing on the printer2is received using the general-use printing program41. The supporting program42is called, for example, a hardware support application (abbreviated as HSA). The supporting program42is capable of receiving multiple types of instructions from the general-use printing program41, and executes various processes based on the received instructions. The supporting program42may be a combination of multiple programs each receiving an execution instruction, or a single program that can execute different processes depending on execution instructions. The supporting program may be a program prepared for each type of printer by the vendor of the printer. For example, a supporting program for an inkjet printer and another supporting program for a laser printer may be prepared. When, for example, a new printer is connected to the PC1, the OS21of the PC1downloads an appropriate supporting program from a server or the like according to the type of the connected printer, and incorporates the downloaded supporting program into the device. Then, the OS21stores the identification information of the embedded supporting program in the memory12, associating it with the printer information of the newly connected printer. It is noted that the supporting program may not necessarily be prepared for each printer type, but may be prepared for each of the printer models or series of printer models. The editing application43is, for example, an application for generating and editing image data and document data. The editing application43may be, for example, Word or PowerPoint provided by Microsoft (registered trademark), or an application provided by a vendor of the printer2. The editing application43is configured to receive user operations including instructions to cause the printer2to perform a particular operation. Concretely, the editing application43is configured to, for example, receive, via the user IF13, a print execution instruction to cause the printer2to perform printing. The printer2in the present embodiment is a device having a printing function. The PC1can communicate with the printer2via the communication IF14. The printer2is configured to receive print data, for example, from the PC1or other devices and execute printing based on the received print data. Further, the printer2according to the present embodiment has a restriction function of determining whether printing can be performed based on a usage condition set for each user. The restriction function can be enabled or disabled through the management device3or through the operation panel of the printer2. The memory of the printer2stores function information21that indicates the enablement or disablement of the restriction function.
The function information21may include information on the enablement and disablement of other functions, such as the scanning function, the facsimile transmission function, and the like. The printer2according to the present disclosures is also provided with a usage condition DB23storing usage conditions set for respective users. FIG.2shows an example of a data structure of the usage condition DB23. The usage condition DB23stores the usage conditions, each associated with a user name, which indicates a name of the user, and a user ID, which identifies the user. The user ID is an example of identification information. The usage condition is a condition that determines whether printing can be executed on the printer2. The usage condition in the present embodiment restricts the use of specific print settings. The specific print settings are, for example, print settings that contribute to saving consumables such as toner, paper or the like. The usage condition includes, for example, a condition on the number of prints, a condition on color/monochrome, and a condition on toner saving. The condition on the number of prints defines an upper limit of the number of sheets that a user can print in one print job. The "number of prints" condition may be set to "unlimited" to indicate that the number of prints is not limited. The number of prints may be the number of times of printing which is counted for each print job. The number of prints may be the total number of sheets of paper that can be printed within a given period of time, or the total number of times the print job can be executed. The condition on "color/monochrome" indicates whether color printing is restricted or not. In other words, when the use of color printing is restricted and only the use of monochrome printing is permitted, only "monochrome" is set in the "color/monochrome" setting. The condition on toner saving indicates whether or not printing for which toner saving has not been set is restricted. In other words, when printing for which the toner saving is not set is restricted and printing for which the toner saving is set is allowed, "on" is set as the "toner saving" condition, and when printing for which toner saving is not set is allowed, "on" and "off" are set as the "toner saving" condition. Further, the usage condition may include a condition on single-sided/double-sided, a condition on aggregate printing (e.g., 2-in-1), and a condition on printing of specific paper types. The "single-sided/double-sided" condition indicates whether or not single-sided printing is restricted. The "aggregate printing" condition indicates whether printing without aggregation set is restricted or not. The "print on specific paper type" condition indicates whether printing on a sheet of a specific paper type, such as postcards, is restricted or not. In the present embodiment, a target user of the identification information is regarded as an individual, but a group or a company (corporation) may be regarded as the target user of the identification information. In the latter case, the usage condition is associated with group identification information that identifies a group as the user or company identification information that identifies a company (corporation) as the user, and is stored in the usage condition DB23.
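As a concrete illustration of the record structure ofFIG.2, the following is a minimal Python sketch of one entry of the usage condition DB23; the field names and the user ID string are assumptions made for the example, not values defined by the present embodiment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UsageCondition:
    """One record of the usage condition DB 23 (see FIG. 2)."""
    user_name: str
    user_id: str
    max_prints: Optional[int]      # upper limit per print job; None = "unlimited"
    color_allowed: bool            # False: only "monochrome" is set
    toner_save_required: bool      # True: only "on" is set for toner saving

# Example modeled on user C, described later: at most 80 sheets, monochrome
# only, and toner saving may be either on or off.
user_c = UsageCondition(user_name="C", user_id="c-0001",
                        max_prints=80, color_allowed=False,
                        toner_save_required=False)
```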
Returning toFIG.1, the management device3is a device having a communication function and a data storage function. The management device3is communicatively connected, for example, to a plurality of printers, including the printer2that has the restriction function and a printer200that does not have the restriction function. The management device3is configured to collectively manage the connected printers. The management device3has, for example, a function information management DB31and a usage condition management DB33. The management device3is, for example, a PC used by an administrator or a server set on a network. The function information management DB31is a database for managing the function information of each printer connected to the management device3. The function information management DB31stores function information including, for example, information indicating the enablement and disablement of the restriction function, in association with the identification information of the printer. The usage condition management DB33is a database for managing the usage conditions set for each user, for each printer connected to the management device3. The usage condition management DB33stores the usage condition DB in association with the identification information of the printer. The management device3periodically communicates with the printers connected to the management device3, and synchronizes the data regarding the function information and the usage conditions so that the data represent the latest updates. Therefore, the function information21and the usage condition DB23of the printer2are the same as the function information and the usage condition DB for the printer2that are stored in the function information management DB31and the usage condition management DB33. Next, a procedure of printing including an operation of the supporting program42according to the present embodiment will be described with reference to a sequence diagram shown inFIG.3.FIG.3shows an operation when a print execution instruction to print with the printer2using the general-use printing program41is received by an application that receives print instructions, such as the editing application43, and when the supporting program42corresponding to the printer2has been incorporated in the PC1. Each processing step in the processes and flowcharts in the present embodiment basically indicates processing performed by the CPU11in accordance with instructions described in a program such as the supporting program42. The processing by the CPU11also includes hardware control using an API of the OS21. In this specification, a detailed description of the OS21is omitted and the operation of each program is described. In addition, the term "obtain" is used in a sense that does not necessarily involve a request. As shown inFIG.3, when the editing application43receives a print instruction (A01) with the printer2and various print settings being selected on the print screen after receiving the editing of text, graphics, and the like, the editing application43passes the information about the received print instruction to the OS21. When the print instruction is received, the OS21executes the general-use printing program41and passes the image data, print settings, and other information about the print instruction to the general-use printing program41(A02).
The general-use printing program41generates intermediate image data by converting the format of the image data contained in the information about the received print instruction into the format of the intermediate image data, and generates a print job including the intermediate image data (A03). The image data passed from the editing application43can be of various types, and the general-use printing program41converts the received image data into the intermediate image data suitable for generating the print data. It is noted that when the image data included in the print instruction is suitable for generating the print data, the generating of the intermediate image data may be omitted and the image data included in the print instruction may be used as the intermediate image data as is. The intermediate image data generated by the general-use printing program41is, for example, XPS data. The general-use printing program41is configured to output an execution instruction to the supporting program42, because the device selected in the print instruction is the printer2and the supporting program42corresponding to the printer2is stored in the memory12(A04). The general-use printing program41causes the supporting program42to operate by the execution instruction and passes the generated intermediate image data to the supporting program42. It is noted that, in A04, the information on the print settings is also passed to the supporting program42along with the intermediate image data. The general-use printing program41may cause the supporting program42to be executed before generating the intermediate image data. The supporting program42may, for example, receive information indicating the print settings included in the print instruction from the general-use printing program41, edit some of the information, and return the same to the general-use printing program41. When the supporting program42receives an execution instruction from the general-use printing program41in A04, the supporting program42executes a print execution determination process (A05). The print execution determination process is for determining whether to execute or cancel the printing for which the execution instruction is received from the general-use printing program41. The procedure of the print execution determination process executed in A05will be described with reference to the flowchart shown inFIGS.4A and4B. This print execution determination process is realized by the supporting program42and is executed by the CPU11of the PC1. In the print execution determination process, the CPU11first obtains the function information21that contains the information on the enablement/disablement of the restriction function (S1). It is noted that the process in S1is an example of the management information obtaining process. For example, the CPU11requests, via the communication IF14, the printer2selected in the print instruction to transmit the function information21. When the CPU11receives the function information21output by the printer2in response to the request via the communication IF14, the CPU11stores the function information in the memory12. It is noted that the CPU11may obtain the function information21from the management device3. For example, the CPU11may transmit the identification information of the printer2to the management device3via the communication IF14.
When the management device3receives the identification information of the printer2from the PC1, the management device3extracts the function information associated with the printer2from the function information management DB31and transmits the extracted function information21to the PC1. The CPU11receives the function information21transmitted from the management device3via the communication IF14and stores the same in the memory12. The CPU11determines whether the function information21was successfully obtained in S1(S3). A printer that does not have any restriction function, such as the printer200, does not have the function information21. If such a printer200is selected, the CPU11fails to obtain the function information21(S3: NO). In such a case, since no usage restriction is made and it is assumed that anyone can use any function, the CPU11determines that printing is to be executed (S25) and returns to the process ofFIG.3. On the other hand, as shown inFIGS.4A and4B, when the CPU11obtains the function information21successfully (S3: YES), the CPU11determines whether the information indicating the enablement/disablement of the restriction function, which is contained in the function information21obtained in S1, indicates enablement (S5). For example, when the obtained function information21includes information indicating the disablement of the restriction function (S5: NO), the CPU11determines that printing is to be executed (S25), since no usage restrictions are made and printing can be performed unconditionally. Thereafter, the CPU11returns to the process ofFIG.3. As shown inFIGS.4A and4B, when the obtained function information21includes information indicating that the restriction function is enabled (S5: YES), the CPU11obtains the user ID (S7). The process in S7is an example of the identification information acquisition process. For example, the CPU11displays, via the user IF13, an identification information input screen for inputting a user ID, and receives an input operation of the user ID. In other words, the CPU11obtains the user ID by manual input by the user. It is noted that the CPU11may automatically obtain the account of the login user registered in the OS21as the user ID from the OS21. The CPU11may cancel printing when the user ID cannot be obtained. Further, when the CPU11cannot obtain the user ID automatically from the OS21, the CPU11may display the identification information input screen on the user IF13and switch the user ID input method from automatic input to manual input. In addition, when the user ID is automatically obtained from the OS21, the CPU11may have the user confirm the automatically obtained user ID. In this case, it may be possible to change the user ID at the timing when the user ID is confirmed. After obtaining the user ID, the CPU11displays the print setting screen110as shown inFIG.5via the user IF13(S9). The print setting screen110is a screen for receiving input operations of print settings, and includes, for example, a setting area111, a confirmation button112, and a cancel button113. In the setting area111, there is a setting field for setting a value for each print setting item. In the respective items, the print settings received from the general-use printing program41with the execution instruction in A04ofFIG.3are displayed. The CPU11receives operations to manually change the setting values of the respective items displayed on the print setting screen110via the user IF13.
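Steps S1through S7described above amount to a short decision chain before the print setting screen110is shown. The sketch below is a minimal Python rendering of that chain, assuming hypothetical fetch_function_info() and login_account() helpers standing in for the printer, the management device3, and the OS21; it is not the actual supporting program code.

```python
def obtain_function_info(printer, management_device):
    """S1: ask the printer first; the management device is an alternative source."""
    info = printer.fetch_function_info()
    if info is None and management_device is not None:
        info = management_device.fetch_function_info(printer.printer_id)
    return info    # None corresponds to S3: NO (e.g. a printer like printer 200)

def precheck(printer, management_device, os_session):
    info = obtain_function_info(printer, management_device)
    if info is None or not info.get("restriction_enabled", False):
        return ("execute", None)    # S3: NO or S5: NO -> print unconditionally (S25)
    # S7: try the OS login account first, then fall back to manual input.
    user_id = os_session.login_account() or input("Enter user ID: ") or None
    if user_id is None:
        return ("cancel", None)     # printing may be cancelled without a user ID
    return ("continue", user_id)    # proceed to the print setting screen (S9)
```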
The items displayed on the print setting screen110include items corresponding to the usage condition and may include items that cannot be supported by the general-use printing program41. By displaying the print setting screen110before the judgment (S17) described later, print settings specific to the printer2that cannot be supported by the general-use printing program41can be received, and furthermore, functional restrictions can be set for such specific print settings. As shown inFIGS.4A and4B, the CPU11determines whether to confirm or cancel the print settings (S11). When the cancel button113of the print setting screen110is operated via the user IF13(S11: cancel), the CPU11determines that the printing is canceled (S27) and returns to the process ofFIG.3. On the other hand, when the confirmation button112of the print setting screen110shown inFIG.5is operated via the user IF13, the CPU11determines that the print settings are to be confirmed (S11: confirm) and obtains the usage condition (S13). It is noted that the process of S13is an example of the usage condition obtaining process. For example, the CPU11requests the printer2selected in the confirmed print settings to transmit the usage condition. The user ID obtained in S7is attached to the request. The printer2that receives the request extracts the usage condition associated with the user ID received together with the request from the usage condition DB23and transmits the same to the PC1. The CPU11receives the usage condition from the printer2via the communication IF14and stores the same in the memory12. The CPU11may obtain the usage condition from the management device3. For example, the CPU11extracts the identification information of the printer2from the confirmed print settings, and transmits the extracted identification information of the printer2and the user ID obtained in S7to the management device3via the communication IF14. When the management device3receives the identification information of the printer2, the management device3identifies the usage condition DB23associated with the identification information of the printer2by referring to the usage condition management DB33. Then, the management device3checks the received user ID against the identified usage condition DB23and extracts the usage condition associated with the user ID. Then, the management device3transmits the extracted usage condition to the PC1. The CPU11receives the usage condition transmitted from the management device3via the communication IF14and stores the same in the memory12. The CPU11determines whether the usage condition has been successfully obtained (S15). When the CPU11fails to obtain the usage condition (S15: NO), since whether printing is possible or not cannot be determined using the usage condition, the CPU11determines that printing is cancelled (S27) and returns to the process ofFIG.3. As shown inFIGS.4A and4B, when the CPU11obtains the usage condition successfully (S15: YES), the CPU11determines whether printing can be performed on the printer2(S17). It is noted that the process in S17is an example of the determination process. In S17, the CPU11checks the usage condition obtained in S13against the print settings confirmed in S11, and determines whether printing on the printer2can be performed according to the confirmed print settings. The CPU11then determines whether printing on the printer2has been determined to be executable or not (S19).
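A minimal sketch of the check performed in S17, reusing the UsageCondition record from the earlier sketch, is shown below; the keys of the settings dictionary are assumptions made for the example.

```python
def check_printable(settings, cond):
    """S17: return the reasons why printing cannot be executed (empty list = S19: YES)."""
    reasons = []
    if cond.max_prints is not None and settings["pages"] > cond.max_prints:
        reasons.append(f"The number of sheets exceeds the setting "
                       f"({settings['pages']} > {cond.max_prints}).")
    if settings["color"] == "color" and not cond.color_allowed:
        reasons.append("Color printing cannot be performed.")
    if not settings["toner_save"] and cond.toner_save_required:
        reasons.append("Printing without toner saving is not allowed.")
    return reasons

# User C confirms 100 pages in color with toner saving on: two conditions fail,
# so the determination is S19: NO and the unprintable notification follows (S21).
print(check_printable({"pages": 100, "color": "color", "toner_save": True}, user_c))
```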
When the confirmed print settings satisfy all the usage conditions and printing on the printer2is determined to be executable (S19: YES), the CPU11determines that printing is to be executed (S25) and returns to the process ofFIG.3. For example, when the print settings received from the general-use printing program41along with the execution instruction (i.e., the print settings that have not been changed through the print setting screen110displayed in S9) satisfy all the usage conditions, the CPU11determines that printing is to be executed. Alternatively, when the print settings received from the general-use printing program41together with the execution instruction are edited via the print setting screen110before the determination process of S17is performed and the edited print settings satisfy all the usage conditions, the CPU11also determines that printing is to be executed. On the other hand, as shown inFIGS.4A and4B, when the CPU11determines that the confirmed print settings do not satisfy one or more of the usage conditions and printing on the printer2cannot be executed (S19: NO), the CPU11displays an unprintable notification screen on the user IF13(S21). It is noted that the process in S21is an example of the notification process. As shown inFIG.6, the unprintable notification screen130displayed on the user IF13includes, for example, a first display field131that displays information based on the usage condition, and a second display field133that displays information about items that do not satisfy the usage condition. Further, the unprintable notification screen130includes a setting change button135and a cancel button136. Since the unprintable notification screen130does not include a print execution button, printing can be avoided from being executed while the usage condition is not satisfied. It is assumed, for example, that, as shown inFIG.5, the user C intends to use the printer2to print a 100-page original document, and operates the confirm button112in a state where "all" is set in the item of the print range, "color" is set in the item of color/monochrome, and "ON" is set in the item of the toner save. As shown inFIG.2, the usage condition of the user C is limited such that "monochrome" is set for the item of color/monochrome, and usage of color printing is restricted. In addition, the number of printable pages for the user C is limited to 80 pages, and therefore the user C cannot print 100 sheets. Furthermore, for the user C, the item of the toner save included in the usage condition is set to "ON" and "OFF", and thus the user can use printing for which the toner save is not set. Therefore, since the print settings the user C has set do not satisfy the usage condition for the user C in terms of the number of pages to be printed and the color/monochrome setting, the printing using the confirmed print settings cannot be executed on the printer2. Based on the determination result, the CPU11generates the unprintable notification screen130and displays the same on the user IF13. In other words, the CPU11displays the user name "C" corresponding to the user ID obtained in S7in the login name field of the first display field131to notify that the information about the user C is displayed. Then, the CPU11displays, in a "PC print" field of the first display field131, an indication that printing is not possible on the printer2with the current settings.
Further, the CPU 11 displays the information based on the usage condition obtained in S13 in the first display field 131 to show how the user C is restricted from using the printer 2. For example, the CPU 11 displays "ON" and "Max: 80 sheets" in the field of the limit number of sheets to notify the user that the number of sheets to be printed is limited and that a maximum of 80 pages can be printed. In addition, the CPU 11 displays "0 pages" in the remaining printable pages field to notify that the number of pages to be printed has exceeded the limit and no more pages can be printed. It is noted, for example, that when the number of sheets to be printed is limited to 80 and printing of 50 pages is to be performed, 30 more pages can still be printed, so the CPU 11 displays "30 pages" in the remaining printable pages field. The CPU 11 may, for example, display "not allowed" in the color printing field to notify that color printing is not available. Further, for example, the CPU 11 may display "allowed" in the toner save OFF print field of the first display field 131 to notify that printing without the toner save setting is available. In addition, the CPU 11 may display, in the second display field 133, the number of sheets printed and the color/monochrome setting that do not meet the usage conditions. For example, messages that notify that the usage conditions are not satisfied, such as "The number of sheets exceeds the setting." or "Color printing cannot be performed.", are displayed in the second display field 133. Therefore, the user C can visually recognize the items that need to be changed from the contents displayed in the second display field 133. In addition, the user C can visually recognize, from the contents of the first display field 131, what settings satisfy the usage condition and enable printing on the printer 2. For example, when the user C does not intend to print in monochrome, the user C operates the cancel button 136 of the unprintable notification screen 130 shown in FIG. 6 via the user IF 13. In this case, the CPU 11 receives the cancellation instruction (S23: cancellation instruction), determines that the printing is cancelled (S27), and returns to the process of FIG. 3. Thus, printing against the intention of the user C is avoided. On the other hand, for example, when the user C intends to change the current print settings, the user C operates the setting change button 135 of the unprintable notification screen 130 shown in FIG. 6 via the user IF 13. When the CPU 11 receives the setting change instruction (S23: setting change instruction) as shown in FIGS. 4A and 4B, the CPU 11 returns to S9 and displays the print setting screen 110 again. It is noted that the processes in S23 and S9 are examples of a changing process. The redisplayed print setting screen 110 shows the most recently confirmed print settings, as shown in FIG. 5. For example, the user C may follow the indications in the first display field 131 and the second display field 133 of the unprintable notification screen 130, change the print range setting from "all" to "designated range", and change the designated range to "1-50" via the user IF 13 as shown in FIG. 7. Further, the user C may also change the color/monochrome setting from "color" to "monochrome" via the user IF 13. Then, the user C operates the confirmation button 112 via the user IF 13. Then, as shown in FIGS. 4A and 4B, the CPU 11 receives the confirmation instruction (S11: confirm), and checks the changed print settings against the usage condition to determine whether printing on the printer 2 is executable (S17).
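The contents of the unprintable notification screen 130 described above can be illustrated with a minimal sketch. It assumes the dictionary layout of the earlier sketch; all field names and message strings are hypothetical, since the patent shows them only as screen examples.

```python
# A minimal sketch of assembling the display contents of the unprintable
# notification screen 130 (S21) from the usage condition, the confirmed
# settings, and the violations found in S17. Names are hypothetical.

def build_notification(condition, settings, violations):
    # Remaining printable pages: the limit minus the pages requested,
    # floored at zero (e.g. 80 - 100 -> "0 pages", 80 - 50 -> "30 pages").
    remaining = max(0, condition["page_limit"] - settings["page_count"])
    first_display_field = {
        "limit_number_of_sheets": f"ON / Max: {condition['page_limit']} sheets",
        "remaining_printable_pages": f"{remaining} pages",
        "color_printing": ("allowed" if "color" in condition["allowed_color_modes"]
                           else "not allowed"),
        "toner_save_off_print": ("allowed" if "OFF" in condition["allowed_toner_save"]
                                 else "not allowed"),
    }
    messages = {
        "page_count": "The number of sheets exceeds the setting.",
        "color_mode": "Color printing cannot be performed.",
    }
    second_display_field = [messages[v] for v in violations if v in messages]
    return first_display_field, second_display_field

# User C: 100 pages requested against a limit of 80, color not allowed.
first, second = build_notification(
    {"page_limit": 80, "allowed_color_modes": ["monochrome"],
     "allowed_toner_save": ["ON", "OFF"]},
    {"page_count": 100},
    ["color_mode", "page_count"])
print(first["remaining_printable_pages"])  # -> 0 pages
print(second)
```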
It should be noted that the process of S13 after the print settings have been changed may be omitted since the user is the same. When the changed print settings satisfy all the usage conditions and printing is determined in S17 to be executable (S19: YES), the CPU 11 determines that printing is to be executed (S25) and returns to the process of FIG. 3. On the other hand, when the changed print settings do not satisfy the usage condition and printing is determined in S17 to be unexecutable (S19: NO), the CPU 11 redisplays the unprintable notification screen reflecting the determination (S21). When the CPU 11 receives an instruction to change the settings (S23: setting change instruction) as the user operates the setting change button on the unprintable notification screen, the CPU 11 redisplays the print setting screen 110 (S9) and receives changes to the print settings again. In this way, the supporting program 42 can increase the possibility of using the printer 2 by accepting changes to the print settings until the print settings satisfy the usage condition. As shown in FIG. 3, when the supporting program 42 determines that printing is to be executed in the print execution determination process of A05 (alt: print execution), the supporting program 42 generates print data (A11). Concretely, the supporting program 42 performs rasterization of the intermediate image data received along with the execution instruction from the general-use printing program 41 in A04 to generate the print data representing the image to be printed. The print data is generated by rasterizing the intermediate image data using the print settings which are received from the general-use printing program 41 in A04, or the print settings which have been changed via the print setting screen 110 displayed on the user IF 13 in S23 and S9 of FIGS. 4A and 4B and which satisfy the usage condition set for the user. The print data generated here is data in a format that can be used for printing by the printer 2. The print data is, for example, PDL data dedicated to the model of the printer 2. Instead of the supporting program 42 generating the print data in A11, the general-use printing program 41 may generate the print data. In other words, the general-use printing program 41 may rasterize the intermediate image data generated in A03 to generate the print data. The supporting program 42 may receive the print data generated by the general-use printing program 41 and edit the print data based on the changed print settings. The print data generated by the general-use printing program 41 is print data in a format that can be used for printing on various printers. The print data is, for example, PWG Raster data or PDF data. When the rasterization is performed by the general-use printing program 41, the processing by the supporting program 42 is reduced, an increase in the processing load can be avoided, and the program size of the supporting program 42 can be reduced. It is noted that the general-use printing program 41 does not need to generate intermediate image data when the print data can be generated from the image data included in the print instruction without using the intermediate image data. When the supporting program 42 generates the print data in A11, the supporting program 42 transmits, to the printer 2, the generated print data, a command to instruct printing, the user ID, and the print settings used to generate the print data (A12).
In this case, the supporting program 42 passes a termination notification indicating that the print job has been transmitted to the general-use printing program 41 (A13). The printer 2 that has received the print data obtains the user ID and the print settings which have been received with the print data (A21), and performs the determination process (A22). That is, the printer 2 extracts the usage condition corresponding to the obtained user ID from the usage condition DB 23, and checks the obtained print settings against the extracted usage condition to determine whether or not printing can be performed with the obtained print settings. The printer 2 executes printing (A23) when the received print settings satisfy all the extracted usage conditions and printing can be performed in the determination process of A22. In this way, the printer 2 can perform, by itself, printing according to the usage condition even when print data is received from another PC which does not have the print execution determination process shown, for example, in A05 of FIG. 3 or in FIGS. 4A and 4B. The transmission of the print data to the printer 2 may be performed by the general-use printing program 41. In other words, the supporting program 42 may pass the generated print data to the general-use printing program 41 so that the print data is transmitted from the PC 1 to the printer 2 set as the destination. In this case, the general-use printing program 41 transmits the print data received from the supporting program 42 to the printer 2. In the present embodiment, both transmitting the print data to the printer 2 by the supporting program 42 and passing the print data to the general-use printing program 41 by the supporting program 42 so that the general-use printing program 41 transmits the print data to the printer 2 are examples of "processing for transmitting a print job regarding the print instruction to the printer." When it is determined that printing is to be cancelled (alt: cancel printing) in the print execution determination process in A05, that is, when the CPU 11 determines that printing is to be cancelled (S27) because printing is cancelled via the print setting screen as shown in FIGS. 4A and 4B (S11: cancelled), obtaining of the usage condition fails (S15: NO), or printing is cancelled via the unprintable notification screen (S23: cancellation instruction), the supporting program 42 passes information indicating that the print job is cancelled to the general-use printing program 41 (A31). By cancelling the print job when the obtaining of the usage condition fails, unnecessary processes on the printer 2 (e.g., further restriction, by the printer 2, of printing with respect to a print job whose execution has already been restricted by the supporting program 42) can be avoided. It is noted that the processes performed in S15, S27, and A31 are examples of the cancellation process. The supporting program 42 outputs a print job only when it is determined that the printing is to be performed in the print execution determination process in A05, that is, only when it is determined in S17 of FIGS. 4A and 4B that the printing is executable by the printer 2. In other words, availability of the functions of the printer 2 has already been determined by the supporting program 42 through the process in A05 of FIG. 3 or S17 of FIGS. 4A and 4B. Therefore, the processes of A21 to A22 in FIG. 3 may be omitted. In such a case, the supporting program 42 may attach, to the print data transmitted to the printer 2, an omission command that causes the execution of the restriction function to be omitted. The printer 2 that receives the omission command immediately executes the printing shown in A23.
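A minimal sketch of the printer-side determination (A21 to A23) follows, including the omission command just described. The "omit_check" flag, the DB layout, and the rejection of unknown user IDs are all assumptions; the patent does not specify what the printer does when a check fails.

```python
# A minimal sketch of the printer-side flow A21 to A23: the printer 2
# re-checks the received print settings against its own usage condition
# DB 23 unless the job carries an omission command. All names are
# hypothetical; check() is a callable like the earlier sketch.

USAGE_CONDITION_DB = {
    "user-c": {"allowed_color_modes": ["monochrome"], "page_limit": 80},
}

def handle_print_job(job, check):
    if job.get("omit_check"):            # omission command attached by the PC
        return "print"                   # A23, executed immediately
    condition = USAGE_CONDITION_DB.get(job["user_id"])               # A21
    if condition is not None and not check(job["settings"], condition):  # A22
        return "print"                   # A23
    return "reject"                      # assumption: violating jobs are refused

job = {"user_id": "user-c", "omit_check": True, "settings": {}}
print(handle_print_job(job, check=lambda s, c: []))  # -> print
```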
As described in detail above, the supporting program 42 according to the present embodiment is configured such that, when there is a print instruction to the general-use printing program 41, the supporting program 42 obtains the user ID and the usage condition corresponding to the user ID, determines whether or not printing can be performed on the printer 2 based on both the user ID and the usage condition corresponding to the user ID, and notifies the user of the determination result via the user IF 13. Thus, before printing is performed on the printer 2, the user can recognize on the PC 1 whether the printing on the printer 2 can be performed. It is noted that the embodiments disclosed herein are merely examples and do not limit the invention in any way. Therefore, the technology disclosed herein can naturally be improved and/or modified in various ways within the scope of the present disclosure. For example, the device connected to the PC 1 is not necessarily limited to a printer, but can be a multifunction peripheral, a copier, a facsimile machine, or any other device having a printing function. Further, the number of printers connected to the PC 1 is not necessarily limited to the number shown in the embodiments but can be two or more. In S17 of FIGS. 4A and 4B, when printing on the printer 2 is determined to be executable (alt: executable), the CPU 11 may notify the user that there are no setting items that do not satisfy the usage condition set for the user, and that printing will be executed based on the print settings. The notification may be made via the user IF 13 or by voice or other means. For example, the processes of S1 to S5 in FIGS. 4A and 4B may be omitted, and the supporting program 42 may perform S7 onward whenever the execution instruction is received from the general-use printing program 41. However, in the case where the function information 21 including the information indicating the enablement of the restriction function is not obtained, there is no need to check the print settings against the usage condition to determine whether printing is executable on the printer 2, and thus the supporting program 42 can reduce the processing load of the PC 1 by omitting the processes in S7 onward. The setting change button 135 may be provided on the unprintable notification screen 130 shown in FIG. 6, and the print setting screen 110 may be redisplayed on the user IF 13 to accept changes in the print settings when printing is determined to be unexecutable (S23: setting change instruction; S9 in FIGS. 4A and 4B). Thus, it is expected that the print settings will be changed so as to satisfy the usage condition, and the possibility that the printer 2 is used will be increased. In addition, by making the determination again based on the changed print settings and by receiving changes to the print settings until it is determined that printing on the printer 2 is executable (S17 in FIGS. 4A and 4B), the possibility of using the printer 2 is further increased. The supporting program 42 may execute a storing process to obtain the usage conditions of multiple users (users A to D in FIG. 2) set for the printer 2 and store the same in the memory 12. The timing for executing the storing process may be any time before the process of S17 shown in FIGS. 4A and 4B, for example, before the supporting program 42 receives the execution instruction from the general-use printing program 41 in A04 of FIG. 3, before the user ID is obtained in S7 of FIGS. 4A and 4B, or before the print settings are confirmed in S11.
After obtaining the user ID in S7, the supporting program 42 may check the obtained user ID against the usage conditions stored in the memory 12, and obtain the usage condition associated with the obtained user ID. Then, the process of S17 is performed based on the obtained usage condition. In this way, the supporting program 42 obtains the usage conditions of multiple users in advance, and when a print instruction is received, the supporting program 42 extracts the appropriate usage condition from the usage conditions obtained in advance. Compared to the case where the usage condition is obtained after the print instruction is received, the usage condition can be obtained earlier. In addition, the time period from when the PC 1 receives a print instruction to when the printer 2 completes printing can be shortened. The process in S15 of FIGS. 4A and 4B may be omitted, and printing based on the print instruction may not be cancelled even when the obtaining of the usage condition fails. After receiving the execution instruction from the general-use printing program 41 and before performing the first determination process, the supporting program 42 may perform the determination process based on the print settings attached to the execution instruction without displaying the print setting screen. In the above-described embodiment, information identical to the function information 21 and information identical to the usage condition DB 23 that the printer 2 has are stored in the function information management DB 31 and the usage condition management DB 33 of the management device 3 in association with the identification information of the printer 2. The configuration may be modified such that only the printer 2 has the function information 21 and the usage condition DB 23, and the printer 2 is not connected to the management device 3, or the management device 3 does not have the function information management DB 31 or the usage condition management DB 33. In this case, the supporting program 42 always obtains, from the printer 2, the function information 21 including the information indicating the enablement/disablement of the restriction function and the usage condition set for the user. Accordingly, the supporting program 42 is expected to obtain the latest usage condition. Alternatively, the printer 2 may not have the function information 21 including the enablement/disablement of the restriction function or the usage condition set for the user, and the management may be performed only by the management device 3. In such a case, a common usage condition can be used by multiple printers. Even for a printer that does not have the restriction function, the supporting program 42 can perform the determination process in S17 of FIGS. 4A and 4B using the usage condition obtained from the management device 3, and the supporting program 42 can restrict the functions that the user can use when performing printing according to the usage condition. Further, one of the printer 2 and the management device 3 may have the function information 21, and the other of the printer 2 and the management device 3 may have the usage condition. It is noted that, in the embodiments, only the printing operation is described in detail as the operation of the supporting program 42, but the supporting program 42 may have other roles in addition.
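The storing process described above (fetch the usage conditions of all users once, then look them up by user ID) amounts to a simple cache. The following is a minimal sketch under that reading; fetch_all_usage_conditions() and the class name are hypothetical.

```python
# A minimal sketch of the storing process: the usage conditions of all
# users (users A to D in FIG. 2) are fetched once and cached in the
# memory 12, then looked up by user ID when a print instruction arrives.

class UsageConditionCache:
    def __init__(self, fetch_all_usage_conditions):
        # Executed at any time before S17, e.g. before A04, S7, or S11.
        self._cache = fetch_all_usage_conditions()

    def lookup(self, user_id):
        # Replaces the per-job request of S13; returns None when the user
        # ID is unknown (i.e. obtaining the usage condition failed).
        return self._cache.get(user_id)

cache = UsageConditionCache(lambda: {"user-a": {"page_limit": 50}})
print(cache.lookup("user-a"))  # -> {'page_limit': 50}
```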
The program that executes the processing according to the present disclosure is not necessarily limited to the supporting program 42, but can be any program that is configured to receive instructions from the OS 21 or the general-use printing program 41 when printing is to be performed using the general-use printing program 41. It is further noted that the program may be a print workflow application for which Microsoft Corporation has released specifications. The execution timing of the supporting program 42 is not necessarily limited to the example of the embodiment. For example, the supporting program 42 may accept execution instructions directly from the OS 21, or may be a resident program. In the case where the supporting program 42 is a resident program, it performs the aforementioned operation upon receiving an execution instruction. In any flowchart disclosed in the embodiments, a plurality of processes in any plurality of steps can be executed in any order, or can be executed in parallel, to the extent that there is no inconsistency in the processing content. The processes disclosed in the embodiments may be executed by a single CPU, multiple CPUs, hardware such as an ASIC, or a combination thereof. In addition, the processes disclosed in the embodiments may be realized in various forms, such as a non-transitory computer-readable recording medium in which a program for executing the processes is recorded as computer-executable instructions, or a method.

Second Embodiment

The procedure of the print execution determination process executed in A05 according to a second embodiment of the present disclosure will be described with reference to the flowchart shown in FIGS. 8A and 8B. This print execution determination process is a process realized by the supporting program 42 and is executed by the CPU 11 of the PC 1. In the print execution determination process, the CPU 11 first obtains the function information 21 that contains the information on the enablement/disablement of the restriction function (S101). It is noted that the process in S101 is an example of the management information obtaining process. The CPU 11 determines whether the function information 21 was successfully obtained in S101 (S103). The printer 200, which does not have any restriction functions, does not have the function information 21. If such a printer 200 is selected, the CPU 11 fails to obtain the function information 21 (S103: NO). In such a case, since no usage restriction is made and it is assumed that anyone can use any function, the CPU 11 determines that printing is to be executed (S125) and returns to the process of FIG. 3. On the other hand, as shown in FIGS. 8A and 8B, when the CPU 11 obtains the function information 21 successfully (S103: YES), the CPU 11 determines whether the information indicating the enablement/disablement of the restriction function, which is contained in the function information 21 obtained in S101, indicates enablement (S105). For example, when the obtained function information 21 includes information indicating the disablement of the restriction function (S105: NO), the CPU 11 determines that printing is to be executed (S125), since no usage restrictions are made and printing can be performed unconditionally. Thereafter, the CPU 11 returns to the process of FIG. 3. As shown in FIGS. 8A and 8B, when the obtained function information 21 includes information indicating that the restriction function is enabled (S105: YES), the CPU 11 obtains the user ID (S107).
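The gating steps S101 to S105 just described can be summarized in a short sketch: printing proceeds unconditionally whenever the function information 21 is absent or the restriction function is disabled. fetch_function_information() and the key names are hypothetical.

```python
# A minimal sketch of the gating steps S101 to S105 in FIGS. 8A and 8B.

def gate_by_function_information(fetch_function_information):
    info = fetch_function_information()        # S101
    if info is None:                           # S103: NO (e.g. printer 200)
        return "execute"                       # S125: print unconditionally
    if not info.get("restriction_enabled"):    # S105: NO
        return "execute"                       # S125: print unconditionally
    return "obtain_user_id"                    # S105: YES -> proceed to S107

print(gate_by_function_information(lambda: {"restriction_enabled": True}))
# -> obtain_user_id
```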
The process in S107 is an example of the identification information obtaining process. For example, the CPU 11 displays, via the user IF 13, an identification information input screen for inputting a user ID, and receives an input operation of the user ID. In other words, the CPU 11 obtains the user ID through manual input by the user. It is noted that the CPU 11 may automatically obtain, from the OS 21, the account of the login user registered in the OS 21 as the user ID. The CPU 11 may cancel printing when the user ID cannot be obtained. Further, when the CPU 11 cannot obtain the user ID automatically from the OS 21, the CPU 11 may display the identification information input screen on the user IF 13 and switch the user ID input method from automatic input to manual input. In addition, when the user ID is automatically obtained from the OS 21, the CPU 11 may have the user confirm the automatically obtained user ID. In this case, the user ID may be changeable at the timing when the user ID is confirmed. After obtaining the user ID, the CPU 11 displays the print setting screen 110 shown in FIG. 5 via the user IF 13 (S109). The print setting screen 110 is a screen for receiving input operations of print settings, and includes, for example, a setting area 111, a confirmation button 112, and a cancel button 113. In the setting area 111, there is a setting field for setting values of each print setting item. In the respective items, the print settings received from the general-use printing program 41 with the execution instruction in A04 of FIG. 3 are displayed. The CPU 11 receives operations to manually change the setting values of the respective items displayed on the print setting screen 110 via the user IF 13. The items displayed on the print setting screen 110 include items corresponding to the usage condition and may include items that cannot be supported by the general-use printing program 41. By displaying the print setting screen 110 before the judgment (S115) described later, print settings specific to the printer 2 that cannot be supported by the general-use printing program 41 can be received, and furthermore, functional restrictions can be set for such specific print settings. As shown in FIGS. 8A and 8B, the CPU 11 determines whether to confirm or cancel the print settings (S111). When the cancel button 113 of the print setting screen 110 is operated via the user IF 13 (S111: cancelled), the CPU 11 determines that the printing is cancelled (S127) and returns to the process of FIG. 3. On the other hand, when the confirmation button 112 of the print setting screen 110 shown in FIG. 9 is operated via the user IF 13, the CPU 11 determines that the print settings are to be confirmed (S111: confirmed) and obtains the usage condition (S113). It is noted that the process of S113 is an example of the usage condition obtaining process. For example, the CPU 11 requests the printer 2 selected in the confirmed print settings to transmit the usage condition. The user ID obtained in S107 is attached to the request. The printer 2 that receives the request extracts, from the usage condition DB 23, the usage condition associated with the user ID received together with the request, and transmits the same to the PC 1. The CPU 11 receives the usage condition from the printer 2 via the communication IF 14 and stores the same in the memory 12. The CPU 11 may obtain the usage condition from the management device 3.
For example, the CPU 11 extracts the identification information of the printer 2 from the confirmed print settings, and transmits the extracted identification information of the printer 2 and the user ID obtained in S107 to the management device 3 via the communication IF 14. When the management device 3 receives the identification information of the printer 2, the management device 3 identifies the usage condition DB 23 associated with the identification information of the printer 2 by referring to the usage condition management DB 33. Then, the management device 3 checks the received user ID against the identified usage condition DB 23 and extracts the usage condition associated with the user ID. Then, the management device 3 transmits the extracted usage condition to the PC 1. The CPU 11 receives the usage condition transmitted from the management device 3 via the communication IF 14 and stores the same in the memory 12. After obtaining the usage condition, the CPU 11 determines whether or not printing is executable on the printer 2 (S115). It is noted that the process in S115 is an example of a determination process. In S115, the CPU 11 checks the usage condition obtained in S113 against the print settings confirmed in S111, and determines whether printing according to the confirmed print settings is executable or not. The CPU 11 determines whether printing on the printer 2 has been determined to be executable (S117). When the confirmed print settings satisfy all the usage conditions and printing on the printer 2 is thus determined to be executable (S117: YES), the CPU 11 determines that printing is to be executed (S125) and returns to the process of FIG. 3. For example, when the print settings received from the general-use printing program 41 along with the execution instruction (i.e., the print settings which have not been changed through the print setting screen 110 displayed in S109) satisfy all the usage conditions, the CPU 11 determines that the printing is to be executed. Alternatively, when the print settings received together with the execution instruction from the general-use printing program 41 are first edited via the print setting screen 110 before the determination process of S115 is performed, and the edited print settings satisfy all the usage conditions, the CPU 11 determines that printing is to be executed. On the other hand, as shown in FIGS. 8A and 8B, when the confirmed print settings do not satisfy at least one of the usage conditions and printing on the printer 2 is determined to be unexecutable (S117: NO), the print settings are changed (S119). In other words, the CPU 11 changes the print settings confirmed in S111 to print settings that satisfy the usage conditions. It is noted that the process in S119 is an example of a change process. As shown in FIG. 10, for example, the CPU 11 displays a change notification screen 120 on the user IF 13 to notify the user of the contents of the change (S121). It is noted that the process in S121 is an example of a notification process. It is assumed, for example, that, as shown in FIG. 9, the user A intends to use the printer 2 to print a 100-page original document, and operates the confirm button 112 in a state where "all" is set to the item of the print range, "color" is set to the item of color/monochrome, and "OFF" is set to the item of the toner save. As shown in FIG. 2, the usage condition of the user A is limited such that "monochrome" is set for the item of color/monochrome, and usage of color printing is restricted. In addition, the number of printable pages for the user A is limited to 50 pages, and therefore the user A cannot print 100 pages of sheets.
Furthermore, for the user A, the item of the toner save included in the usage condition is set to "ON", and printing for which the toner save is not set is restricted. Therefore, the printing using the confirmed print settings cannot be executed on the printer 2. Accordingly, the CPU 11 changes the color/monochrome setting from "color" to "monochrome" according to the usage conditions. Further, the CPU 11 changes, for example, the setting of the print range from "all" to "current page." Furthermore, the CPU 11 changes the toner save setting from "OFF" to "ON." After changing the print settings in this way, the CPU 11 displays, for example, the change notification screen 120 shown in FIG. 10 on the user IF 13. The change notification screen 120 displays a preview image 121 of the image after the change. The change notification screen 120 also displays a change contents display field 123 indicating the contents of the changes, a print execution button 124, a setting change button 125, and a cancel button 126. The CPU 11 generates the preview image 121 by editing the intermediate image data, received from the general-use printing program 41 together with the execution instruction in A04 of FIG. 3, based on the changed print settings, and displays the generated preview image 121 on the user IF 13. As described above, the preview image 121 is displayed in monochrome since the color/monochrome setting has been changed from "color" to "monochrome." By looking at the preview image 121, the user A can recognize that the print setting has been changed from the color setting to the monochrome setting. When the print range, color/monochrome, and toner save settings are changed as described above, the CPU 11 displays, in the change contents display field 123, messages such as "Since the number of sheets exceeds the maximum of 50 sheets, the print range has been changed from 'all' to 'current page'.", "Color/monochrome setting has been changed from 'color' to 'monochrome'.", and "Toner save setting has been changed from 'OFF' to 'ON'." This allows the user A to recognize the items of the print settings that have been changed and the concrete details of the changes. In particular, even for changes in the print settings that are difficult to grasp from the preview image 121, such as the toner save setting, the user can recognize the changes from the notification displayed in the change contents display field 123. Further, it is assumed that a usage condition of a user is restricted such that only "aggregation" is set for the aggregate print setting included in the usage conditions and thus the use of non-aggregated printing is restricted, and that the user selects "no aggregation" in the item of the aggregate print setting on the print setting screen. In this case, the CPU 11 changes the aggregation setting from "no aggregation" to "aggregation" and displays a preview image 121 of two pages of images printed on one sheet of paper and/or a notice indicating the change on the user IF 13. For another example, when only "double-sided" is set for the duplex print setting included in the usage conditions and the use of single-sided printing is restricted in a usage condition of a user, and the user selects "single-sided" in the duplex print setting item displayed on the print setting screen, the CPU 11 changes the print setting to "double-sided" and notifies the user of the change.
Furthermore, when, for example, a user who is restricted from printing on postcards, since only "plain paper" is set for the paper type item included in the usage conditions, sets "postcard" for the paper type item on the print setting screen, the CPU 11 changes the paper type setting from "postcard" to "plain paper" and displays, on the user IF 13, the preview image 121 with the image printed on plain paper and the details of the change. The CPU 11 determines which instruction is received via the change notification screen 120 (S123). The process in S123 is an example of a selection process. When the CPU 11 receives the operation of the setting change button 125 via the user IF 13, the CPU 11 receives the setting change instruction (S123: setting change instruction). In this case, the CPU 11 returns to S109 and redisplays the print setting screen 110. As shown in FIG. 11, the redisplayed print setting screen 110 reflects the changed print settings. When the user A looks at the display of the change contents display field 123 of the change notification screen 120 shown in FIG. 10 and recognizes that the maximum number of printable pages is 50 pages, the user A can change the print range setting from "current page" to "designated range" via the user IF 13, enter "1-50" as the designated range, and then operate the confirmation button 112 (S111: confirmed). This allows the supporting program 42 to receive further changes to the print settings which have been automatically changed by the supporting program 42 itself. By allowing manual change of the automatically changed print settings, the user A can change the print settings to other print settings that allow printing without having to redo the print settings with use of the editing application 43, and thus usability is improved. The CPU 11 determines again whether printing is executable based on the re-set print settings (S115). In this case, the CPU 11 may omit S113 to reduce the processing load. The CPU 11 checks the re-set print settings against the usage conditions obtained for the user A, and determines whether printing on the printer 2 is executable or not. When the determination result indicates that printing is unexecutable (S117: NO), the CPU 11 changes the re-set print settings to print settings that enable printing on the printer 2 (S119), and notifies the user of the changed settings (S121). The description of the subsequent processes is omitted since they have been described above. In this way, the supporting program 42 does not cause the CPU 11 to send the print job to the printer 2 until the print settings satisfy the usage conditions set for the user A. When the user A does not intend to perform printing in monochrome, the user A operates the cancel button 126 of the change notification screen 120 shown in FIG. 10 via the user IF 13. In this case, as shown in FIGS. 8A and 8B, the CPU 11 receives the cancellation instruction (S123: cancellation instruction), determines that the printing is cancelled (S127), and returns to the process of FIG. 3. Thus, the supporting program 42 can cancel printing even after the print settings have been changed. When the user A accepts printing with the changed print settings, the user A operates the print execution button 124 of the change notification screen 120 shown in FIG. 10 via the user IF 13. When the CPU 11 receives the print execution instruction (S123: print execution instruction), the CPU 11 determines that printing is to be executed (S125) and returns to the process of FIG. 3.
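The automatic change (S119), the notification (S121), and the selection (S123) described above can be illustrated with a minimal sketch. The replacement values, message strings, and function names are hypothetical; they mirror the user A example (color to monochrome, print range to "current page", toner save OFF to ON).

```python
# A minimal sketch of S119 to S127 in the second embodiment.

def change_to_satisfy(settings, condition):
    """S119: replace each violating setting with one that satisfies the
    usage condition, collecting messages for the change notification
    screen 120 (S121)."""
    changed, notes = dict(settings), []
    if changed["color_mode"] not in condition["allowed_color_modes"]:
        new = condition["allowed_color_modes"][0]
        notes.append(f"Color/monochrome changed from '{changed['color_mode']}' to '{new}'.")
        changed["color_mode"] = new
    if changed["page_count"] > condition["page_limit"]:
        notes.append("Print range changed from 'all' to 'current page'.")
        changed["print_range"], changed["page_count"] = "current page", 1
    if changed["toner_save"] not in condition["allowed_toner_save"]:
        new = condition["allowed_toner_save"][0]
        notes.append(f"Toner save changed from '{changed['toner_save']}' to '{new}'.")
        changed["toner_save"] = new
    return changed, notes

def selection_step(choice):
    """S123: map the button pressed on screen 120 to the next action."""
    return {"print": "execute (S125)",
            "change": "redisplay screen 110 (S109)",
            "cancel": "cancel printing (S127)"}[choice]

# User A: color, 100 pages, toner save OFF, against monochrome-only,
# 50-page, toner-save-ON usage conditions.
settings = {"color_mode": "color", "page_count": 100, "toner_save": "OFF"}
condition = {"allowed_color_modes": ["monochrome"], "page_limit": 50,
             "allowed_toner_save": ["ON"]}
changed, notes = change_to_satisfy(settings, condition)
print(changed)
print(notes)
print(selection_step("print"))  # -> execute (S125)
```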
As shown in FIG. 3, when determining that the printing is to be executed in the print execution determination process in A05 (alt: execute printing), the supporting program 42 generates the print data (A11). Concretely, the supporting program 42 performs rasterization of the intermediate image data in response to the execution instruction received from the general-use printing program 41 in A04, and generates print data representing the image to be printed. The print data is generated by rasterizing the intermediate image data using the print settings received from the general-use printing program 41 in A04, or the print settings changed (including the re-changed print settings) in S119 of FIG. 8B. The print data generated here is data in a format that can be used for printing by the printer 2. The print data is, for example, PDL data dedicated to the model of the printer 2. As described in detail above, when there is a print instruction to the general-use printing program 41, the supporting program 42 of the present embodiment obtains the user ID and the usage condition corresponding to the user ID, and determines, based on both pieces of the obtained information, whether the printer 2 can execute printing according to the print settings for the print instruction. When the result of the determination indicates that printing on the printer 2 is unexecutable, the supporting program 42 changes the print settings for the print instruction to other print settings with which the printing by the printer 2 can be performed. This makes it possible to use the printer 2 in accordance with the usage condition set for the user concerned, and reduces the possibility that a print job is sent to the printer 2 with print settings with which printing is unexecutable. It is noted, for example, that S121 of FIGS. 8A and 8B may be omitted, and the supporting program 42 may not provide a notification indicating the changes of the print settings before executing the process to transmit the print job to the printer 2. However, by providing a notification indicating the changes of the print settings, the user can recognize the items of the print settings that have been changed. When the print settings are changed, the supporting program 42 may execute the process to transmit the print job to the printer 2 without receiving any cancellation instruction or any instruction to further change the print settings. By providing the print execution button 124 and the cancel button 126 on the change notification screen 120 as shown in FIG. 10, and by receiving, via the user IF 13, the selection of whether to perform printing based on the changed print settings, the user can confirm in advance whether to print when the print settings are changed, and printing that the user does not intend can be avoided. Further, by providing the print execution button 124 and the setting change button 125 on the change notification screen 120 shown in FIG. 10, and by receiving, via the user IF 13, the selection of whether to perform printing based on the changed print settings, it is possible, when the selection not to perform printing based on the changed print settings is received, to further manually change the changed print settings. By allowing the manual change of the print settings in this way, the print settings can be changed to other settings with which the printing can be performed. The notification shown in FIG. 10 is an example and is not necessarily limited to this configuration. The preview image 121 generated based on the changed print settings may not be displayed on the user IF 13.
It is noted, however, that by displaying the preview image 121, the user can recognize in advance what kind of printing will be performed and can easily determine whether to perform printing. Optionally, the preview image 121 may be displayed when print settings whose changes can be visually recognized by the user, such as color printing or aggregated printing, are changed. On the other hand, when print settings whose changes can hardly be visually recognized by the user, such as the toner save setting, are changed, the preview image 121 may not be displayed. This may reduce the processing load on the supporting program 42. | 66,097
11861253 | DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present disclosure will now be described in more detail with reference to the accompanying drawings. The following description is illustrative in all respects and should not be construed as limiting the present disclosure.

Embodiment 1

Configuration Example of Image Processing Apparatus

FIG. 1 is a perspective view illustrating the outer appearance of a multifunction peripheral 10 as an example of an image processing apparatus according to the present disclosure. As illustrated in FIG. 1, the multifunction peripheral 10 is provided with an operator 30 which receives an operation by a user, a scanner portion 32 including a document feeder 31, and an engine portion 34 which performs printing. The engine portion 34 is provided with four paper feed trays 35A to 35D for accommodating printing sheets at a lower part of the engine portion 34, and also a paper discharge tray 33 on an upper part of the engine portion 34 at a position below the scanner portion 32. FIG. 2 is a block diagram illustrating the configuration of the multifunction peripheral 10 illustrated in FIG. 1. As illustrated in FIG. 2, in addition to the operator 30, the scanner portion 32, and the engine portion 34 illustrated in FIG. 1, the multifunction peripheral 10 is provided with a controller 12 and a storage device 36. The operator 30 is provided with a display 30D which includes a liquid crystal display device and an LED lamp to provide information to the user. In addition, the operator 30 is provided with an inputter 30E which includes hardware operation keys (also referred to as hard keys) and a touch panel to receive an operation by the user. The scanner portion 32 is provided with the document feeder 31 which conveys a document, a scanning mechanism which scans the document, and an image sensor which reads an image of the document. The engine portion 34 includes: the paper feed trays 35A to 35D for accommodating printing sheets; a sheet conveyance mechanism which feeds the printing sheet from the paper feed trays 35A to 35D and guides the printing sheet to the paper discharge tray 33; an image former which forms a toner image by an electrophotographic method; and a transfer mechanism which transfers the formed toner image to the printing sheet that has been conveyed. The storage device 36 includes a hard disk drive (HDD), a solid state drive (SSD), a dynamic random access memory (DRAM), or a combination thereof. The storage device 36 includes a basic setting storage 36B which stores basic settings of the multifunction peripheral 10 and a job setting storage 36S which stores data pertaining to job settings. The basic settings are settings that are not related to individual jobs but are applied to processing of the multifunction peripheral 10 continuously. The setting to allow or prohibit an OCR function belongs to the basic settings. The data pertaining to job settings corresponds to data pertaining to settings related to individual jobs, in other words, settings of each item of a setting menu. As will be described later, a file format employed in transmitting, to an external device, data of a document which has been read by a scan transmission job is an example of the data pertaining to job settings. In addition, the storage device 36 includes an initial state storage 36D which stores the initial state of a setting menu. Further, the storage device 36 arbitrarily includes a filing data storage 36F which stores data of the read document image.
The controller 12 employs an electronic circuit including a processor and a memory as the main elements of its hardware resources. As the processor executes a control program stored in the memory, the functions of the controller 12 are implemented. The controller 12 functionally includes a character recognition processor 14, a setting manager 16, a job controller 18, an operation controller 20, an image processor 22, and a file controller 24. Further, the controller 12 arbitrarily includes a communication controller 26 and a user authenticator 28. The character recognition processor 14 performs character recognition processing of extracting text information from an image of a document by using a well-known technique. In the present embodiment, it is assumed that the character recognition processor 14 alone performs the character recognition processing. However, the function of the character recognition processing may be provided through communication with a device such as an external server which provides the service of character recognition processing. The setting manager 16 manages data (basic setting data) pertaining to the basic settings to be set by a user operation received via the operator 30. The basic setting data may be set by using the setting menu. The basic setting data includes data pertaining to the setting of allowing or prohibiting the OCR function. The job controller 18 controls execution of a series of processes (jobs) related to image processing, such as causing the scanner portion 32 to read a document or causing the engine portion 34 to form a toner image and transfer (print) the toner image onto a printing sheet, on the basis of the settings and instructions received via the operator 30. Also, the job controller 18 registers or deletes those jobs in a queue (a run queue), starts execution of a job registered in the queue, or stops a job being executed. The operation controller 20 causes the display 30D to perform a display related to the state of the multifunction peripheral 10 or the job settings, and recognizes the user operation received by the inputter 30E provided in the operator 30. If the operation is related to setting of the basic setting data, the operation controller 20 updates the basic setting data, which is managed by the setting manager 16 and stored in the basic setting storage 36B, accordingly. Further, if the operation is related to setting of a job, the operation controller 20 updates the data (job setting data) pertaining to job settings stored in the job setting storage 36S accordingly. The operation controller 20 provides a setting menu to the user via the display 30D. Then, the operation controller 20 receives the settings from the user. That is, the operation controller 20 carries out processing related to a user interface. Note that the operation controller 20 may display a setting menu not only on the display 30D and the inputter 30E but also on a screen of an external device 42 (a PC, for example) or an external device 44 (a smartphone, for example) which is connected via a network, and receive a remote operation using the setting menu. The image processor 22 performs processing on an image according to the substance of the image processing requested by the job controller 18. Examples of the processing on an image include magnification varying processing, cropping of an image, determination of whether paper is blank or not, and determination of a document area. The file controller 24 saves a data file to, or reads the same from, the storage device 36.
The communication controller 26 controls communication with an external device connected via a communication circuit of the multifunction peripheral 10 not illustrated in FIGS. 1 and 2. The communication may be either wireless or wired. FIG. 2 illustrates the state in which the external devices 42 and 44 are connected via a network 40. The type of the external device is not limited as long as the device can communicate with the multifunction peripheral 10. An example of the external device is a PC. Alternatively, the external device is a portable information device such as a smartphone or a tablet terminal connected wirelessly. In the above, the configuration of the multifunction peripheral 10 has been described as an example of the image processing apparatus in the present embodiment.

Example of Setting Menu to Allow or Prohibit Character Recognition Function

Next, an example of control related to a setting menu executed by the controller 12 of the present embodiment will be described. First, an example of an operation of the setting to allow or prohibit the function of character recognition will be described. FIG. 3 is an explanatory diagram illustrating an example of a device setting screen 50, which is one of the setting menus displayed on the display 30D by the operation controller 20, in the present embodiment. The device setting screen 50 is a screen which receives the setting to allow or prohibit the function of OCR processing of the multifunction peripheral 10. In the example illustrated in FIG. 3, an item that allows/prohibits the function of the OCR processing is provided together with other items (allowance/prohibition of a remote PC scan, allowance/prohibition of a save to an external memory device, and allowance/prohibition of transmission from a PC-Fax). Since the other items are merely displayed on the same screen as the OCR function and are not substantially related to the OCR function, a description of the other items is omitted. In the setting menu illustrated in FIG. 3, the operation controller 20 causes a check box to be displayed on the left side of each item. When the user touches a position of the check box with his/her finger, the inputter 30E detects the touch and the position of the touch. In response to that detection, the operation controller 20 alternately turns off and on the check mark, by which the state of being allowed or prohibited is represented, each time the check box is touched. As illustrated in FIG. 3, the state in which a check mark is displayed for the item "Prohibit OCR" corresponds to the state in which the function of the OCR processing is prohibited. While the screen illustrated in FIG. 3 is an example of the display on the display 30D, a similar screen and function may be provided as a screen of the external devices 42 and 44 via the network. That is, the function of the OCR processing may be set to be allowed or prohibited by a remote operation using the external devices 42 and 44.

Example of Reflecting Character Recognition Function Allowance/Prohibition Setting Status in Job Setting

Next, an example of reflecting the setting of allowing/prohibiting the character recognition function in the items of the setting menu related to the settings of a job will be described. Here, as an example of the job, a scan transmission job of sending, in a file format set by the user, an image of a document that has been read to an external device set by the user, i.e., the external device 42 or 44, for example, will be described.
When the scan transmission job can be executed, it is assumed that the multifunction peripheral 10 is provided with the communication controller 26 for communicating with the external device 42 or 44. FIGS. 4A and 4B show an example of a screen for selecting a transmission format of a scan transmission job in the multifunction peripheral 10 capable of performing the OCR processing, more specifically, a file format to be employed in the transmission to the external device. FIG. 4A is an example of a scan transmission screen 52, which is one of the setting menus displayed on the display 30D by the operation controller 20. On the scan transmission screen 52, operation keys (also referred to as soft keys) on the screen are arranged correspondingly to items such as "color mode", "document", "resolution", "density", "format", and "other functions" related to document reading. The operation controller 20 displays each of the soft keys along with the name corresponding to the item. When the user touches the soft key of each item with his/her finger, the operation controller 20 receives the operation via the touch panel. The key [Color Mode] receives the setting of color, grayscale, or black-and-white (bitonal). The key [Document] receives specification of the document size and whether the document is single-sided or double-sided. [Other Functions] is a soft key that displays a setting menu, not illustrated in FIG. 4A, to receive settings of other functions not accommodated in FIG. 4A. The other functions include, for example, an image orientation detection function, a file name extraction function, a thin paper read function, a card scan function, and a business card scan function. Since the keys other than [Format] have low relevance to the setting of allowing/prohibiting the OCR function, further explanation of these functions is omitted here. Further, calling of a job program already registered and an address book can also be operated from the same screen. The job program and the address book will be described later. When the key [Format] is operated on the scan transmission screen 52, the operation controller 20 displays, on the display 30D, a transmission format screen 54 illustrated in FIG. 4B in response to the operation. In the example illustrated in FIG. 4B, as file formats which can be employed for transmission in the scan transmission job, a total of 14 types of file formats, including ones that require the OCR function and ones that do not, can be specified. The operation controller 20 displays soft keys corresponding to the respective file formats on the transmission format screen 54. Among those file formats, there are seven types of file formats, i.e., TIFF, XPS, JPEG, PDF, encrypted PDF, highly-compressed PDF, and PDF/A-1b, that do not require the OCR function. Meanwhile, there are seven types of file formats, i.e., PDF/A-1a, searchable PDF, DOCX, XLSX, PPTX, RTF, and TXT, that require the OCR function. When a scan transmission job is to be executed, one of these file formats must be set by the user. When the OCR function is set to be allowed, the operation controller 20 displays, on the display 30D, the transmission format screen 54 for setting any one of the 14 types of file formats illustrated in FIG. 4B. In contrast, FIG. 5 is an explanatory diagram illustrating an example of the transmission format screen 54 displayed on the display 30D by the operation controller 20 when the OCR function is set to be prohibited.
A difference from the transmission format screen 54 illustrated in FIG. 4B is that the soft keys corresponding to the above-mentioned seven types of file formats that require the OCR function are not displayed, in other words, are hidden. The transmission format screen 54 of FIG. 5 displays only the operation keys corresponding to the seven types of file formats that do not require the OCR function. In the example illustrated in FIG. 5, the setting items corresponding to the file formats that require the OCR function are hidden. However, these setting items may instead be grayed out, for example, to indicate that they cannot be set (i.e., cannot be selected). FIG. 6 is a flowchart illustrating the processing executed by the controller 12, serving mainly as the operation controller 20, in the examples shown in FIGS. 4A, 4B, and 5. As illustrated in FIG. 6, the controller 12 serving as the operation controller 20 monitors whether the touch panel or any of the hard keys arranged on the display 30D has been operated (i.e., the loop when No applies in step S11). Further, if the touch panel or a hard key has been operated (Yes in step S11), it is determined whether or not the operation corresponds to an operation whereby the transmission format screen 54 illustrated in FIG. 4B or 5 is to be displayed (step S13). Specifically, it is determined whether or not the key [Format] has been pressed on the scan transmission screen illustrated in FIG. 4A. If an operation whereby the transmission format screen 54 is to be displayed has been performed (Yes in step S13), the operation controller 20 determines whether or not the OCR function is set to be prohibited (step S15). If the OCR function is set to be prohibited, the operation controller 20 displays, as illustrated in FIG. 5, only the file formats that do not require the OCR function on the transmission format screen 54, and receives the setting, in other words, the selection of the file format (step S17). Accordingly, file formats that require the OCR function cannot be set. Further, the data stored in the job setting storage 36S is updated in accordance with the selection of the file format that has been received. Meanwhile, if the OCR function is set to be allowed (No in step S15), the operation controller 20 displays, as illustrated in FIG. 4B, both the file formats that require the OCR function and the file formats that do not require the OCR function on the transmission format screen 54, and receives the setting of the file format (step S19). Further, the data stored in the job setting storage 36S is updated in accordance with the selection of the file format that has been received. After receiving the setting of the file format for the data to be transmitted on the transmission format screen 54 in this way, the operation controller 20 returns the processing to step S11 described above, and waits for any of the operation keys to be operated next. In step S13 described above, if the substance of the operation is not one that requests display of the transmission format screen 54 (No in step S13), the operation controller 20 subsequently determines whether or not an instruction to start a scan transmission job has been received (step S21). If an instruction to start a scan transmission job has been received (Yes in step S21), the operation controller 20 notifies the job controller 18 that the instruction to start the scan transmission job with the current settings has been received (step S23). In response to that notification, the job controller 18 starts the scan transmission job.
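The format filtering of steps S15 to S19 described above reduces to selecting from a table of formats flagged by whether they require OCR. The following is a minimal sketch of that filtering; the table is taken from the format lists of FIGS. 4B and 5, and the function name is hypothetical.

```python
# A minimal sketch of the filtering in steps S15 to S19 of FIG. 6: when
# the OCR function is prohibited, only file formats that do not require
# OCR are offered on the transmission format screen 54.

FORMATS = {
    # format name: requires OCR?
    "TIFF": False, "XPS": False, "JPEG": False, "PDF": False,
    "encrypted PDF": False, "highly-compressed PDF": False, "PDF/A-1b": False,
    "PDF/A-1a": True, "searchable PDF": True, "DOCX": True,
    "XLSX": True, "PPTX": True, "RTF": True, "TXT": True,
}

def selectable_formats(ocr_prohibited):
    if ocr_prohibited:                 # step S15: YES -> step S17
        return [f for f, needs_ocr in FORMATS.items() if not needs_ocr]
    return list(FORMATS)               # step S15: NO -> step S19

print(len(selectable_formats(True)), len(selectable_formats(False)))  # 7 14
```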
Then, according to the settings stored in the job setting storage 36S, the scan transmission job of reading and sending a document is executed. Meanwhile, the operation controller 20 returns the processing to step S11 described above, and waits for any of the operation keys to be operated next. In the determination of step S21 described above, if the substance of the operation is not an instruction to start a scan transmission job (No in step S21), the operation controller 20 performs the processing according to the received operation (step S25). Further, if a setting related to a job is received, the data stored in the job setting storage 36S is updated in accordance with the received setting related to the job. Moreover, the processing is returned to step S11 described above to wait for any of the operation keys to be operated next. In the above, the transmission format screen 54 has been described, but the present disclosure is not limited thereto. That is, also for the other items of a setting menu affected by the setting status of allowance/prohibition of the OCR function, the operation controller 20 applies a similar technique to the display of the setting menu.

Example of Reflecting Character Recognition Function Allowance/Prohibition Setting Status in Initial State of Setting Menu

Next, an example in which the operation controller 20 reflects the allowance/prohibition setting status of the character recognition function in the initial state of the setting menu will be described. FIG. 7A is an explanatory diagram illustrating an example of a scan transmission setting screen 56 in which some of the items indicated on the scan transmission screen 52 illustrated in FIG. 4A and the transmission format screen 54 illustrated in FIG. 4B are assembled on one screen. Strictly speaking, a basic setting screen 56S, which constitutes a part of the scan transmission setting screen 56, is shown. As in FIG. 4A, settings regarding the items corresponding to the color mode and the format related to document reading are received. Items such as the document, resolution, density, and other functions that are indicated in FIG. 4A are not included in the scan transmission setting screen 56 illustrated in FIG. 7A. Those items may be included, but since they are not affected by the setting to allow/prohibit the OCR function, the presence or absence of those items is not important in the description of the present embodiment. In addition, in place of the transmission format screen 54 of FIG. 4B, the operation controller 20 displays a drop-down list 58 to set a file format (a format) for transmission on the scan transmission setting screen 56. The user sets the file format by using the drop-down list 58 on the scan transmission setting screen 56, instead of using the soft keys illustrated in FIG. 4B. The drop-down list is used for an operation of selecting one item from a plurality of options. In the example illustrated in FIG. 7A, 10 types of file formats, i.e., TIFF, XPS, JPEG, PDF, highly-compressed PDF, PDF/A-1a, searchable PDF, DOCX, XLSX, and PPTX, can be set. Searchable PDF is set as the default (the setting of the initial state before the user performs an operation). The default of each item is stored in the initial state storage 36D. Among the above file formats, there are five types of file formats, i.e., TIFF, XPS, JPEG, PDF, and highly-compressed PDF, that do not require the OCR function.
Meanwhile, there are five types of file formats, i.e., PDF/A-1a, searchable PDF, DOCX, XLSX, and PPTX, that require the OCR function. In addition to those setting items, the scan transmission setting screen 56 illustrated in FIG. 7A includes a check box 59 to allow or prohibit the function of image orientation detection and a check box 60 to allow or prohibit the function of file name extraction. The functions of image orientation detection and file name extraction are both functions that use the OCR function. The image orientation detection corresponds to the function of determining that the top-bottom directions of characters extracted from a document by the OCR processing represent the top-bottom direction of an image when the top-bottom directions of the characters are aligned in one direction beyond a predetermined ratio. Further, the file name extraction corresponds to the function of adopting any of the characters extracted from the document by the OCR processing as the file name.

FIG. 7B illustrates an initial state setting screen 56D, which constitutes a part of the scan transmission setting screen 56. The screen is for setting the initial state, which is the state before the user operates the basic setting screen 56S illustrated in FIG. 7A. That is, the screen is for setting the initial state used when the scan transmission setting screen 56 is to be displayed on the display 30D. The arrangement of the screen corresponds to that of the basic setting screen 56S, and the user sets or changes each item on the initial state setting screen 56D. The initial state that has been set or changed is stored in the initial state storage 36D. When displaying the basic setting screen 56S on the display 30D, the operation controller 20 refers to the initial state of each item stored in the initial state storage 36D and reflects the referenced initial state in the display.

FIG. 7A is an example of the scan transmission setting screen 56 displayed on the display 30D by the operation controller 20 when the OCR function is set to be allowed. In contrast, FIG. 8A is an example of the scan transmission setting screen 56 displayed on the display 30D by the operation controller 20 when the OCR function is set to be prohibited. A difference from the scan transmission setting screen 56 illustrated in FIG. 7A is that the initial state of the drop-down list 58 is PDF in FIG. 8A, instead of searchable PDF as in FIG. 7A. Further, FIG. 8A differs in that, as compared to the options of the drop-down list 58 illustrated in FIG. 7A, the five types of file formats described above that require the OCR function are excluded from the options (i.e., not displayed), in other words, hidden. The scan transmission setting screen 56 of FIG. 8A displays only the five types of file formats, i.e., TIFF, XPS, JPEG, PDF, and highly-compressed PDF, which do not require the OCR function. In the example illustrated in FIG. 8A, setting items corresponding to the file formats that require the OCR function are hidden. However, such options may be grayed out, for example, to indicate that they are not to be set (i.e., cannot be selected). Further, in FIG. 8A, the items of the check boxes 59 and 60 illustrated in FIG. 7A are displayed grayed out with their check marks removed. That is, it is indicated that the functions of image orientation detection and file name extraction, which require the OCR function, cannot be set. Further, the initial state is a setting which makes these functions unselectable (disabled).
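The display behavior just described lends itself to a compact illustration. The following is a minimal Python sketch, not the embodiment's actual implementation: the FORMATS table and the function name are assumptions introduced here, and only the hide-versus-gray-out behavior is taken from the description above.

# Minimal sketch of the option display behavior (FIG. 5 / FIG. 8A).
# FORMATS and build_format_options are hypothetical names; only the
# hide-versus-gray-out behavior follows the description in the text.

# File formats and whether generating each requires the OCR function.
FORMATS = {
    "TIFF": False, "XPS": False, "JPEG": False,
    "PDF": False, "highly-compressed PDF": False,
    "PDF/A-1a": True, "searchable PDF": True,
    "DOCX": True, "XLSX": True, "PPTX": True,
}

def build_format_options(ocr_prohibited: bool, gray_out: bool = False):
    """Return (label, selectable) pairs for a format selection screen.

    When the OCR function is prohibited, OCR-dependent formats are
    either hidden entirely (default) or shown grayed out so that the
    user can see that they cannot be selected.
    """
    options = []
    for label, needs_ocr in FORMATS.items():
        if ocr_prohibited and needs_ocr:
            if gray_out:
                options.append((label, False))  # visible but unselectable
            continue  # hidden entirely
        options.append((label, True))
    return options

# With OCR prohibited and gray_out=False, only the five non-OCR formats
# remain, matching the drop-down list of FIG. 8A.
assert len(build_format_options(ocr_prohibited=True)) == 5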
When the device setting screen 50 illustrated in FIG. 3 is operated and the setting is changed to prohibit the OCR function, which has been allowed until then, the operation controller 20 causes, if an option which requires the OCR processing is set as the initial state as illustrated in FIG. 7B, this initial state to be replaced by an option which does not require the OCR processing. As a specific example, in the drop-down list 58 illustrated in FIG. 7A, searchable PDF, which requires the OCR processing, is set as the initial state. When the setting is changed to prohibit the OCR function, the operation controller 20 replaces the setting of the initial state so that "PDF", which does not require the OCR processing, is set, as illustrated in FIG. 8A. The initial state used for replacement may be determined in advance from among those that do not require the OCR processing. Alternatively, the user may be able to select one from among those that do not require the OCR processing.

Furthermore, as illustrated in FIG. 8B, the operation controller 20 reflects the replaced default in the initial state of the drop-down list 58 on the initial state setting screen 56D. In addition to the above, the options of the initial state which are to be displayed in the drop-down list 58 are changed to only TIFF, XPS, JPEG, PDF, and highly-compressed PDF, which are the five types of file formats that do not require the OCR function. Further, the items of the check boxes 59 and 60 illustrated in FIG. 8B are displayed grayed out with their check marks removed. By doing so, the functions of image orientation detection and file name extraction that require the OCR function are prevented from being set in the initial state.

FIG. 9 is a flowchart illustrating the processing executed by the controller 12, serving mainly as the operation controller 20, in the examples shown in FIGS. 7A and 8A. As illustrated in FIG. 9, when the controller 12 serving as the operation controller 20 displays any of the screens of the setting menu on the display 30D, the controller 12 refers to the initial state storage 36D and acquires data pertaining to the initial state of the setting menu which should be displayed (step S31). Then, the operation controller 20 determines whether the OCR function is set to be prohibited (step S33). If the OCR function is set to be prohibited, the operation controller 20 determines whether or not a setting that requires the OCR function has already been set as a default (step S35). As regards an item for which a setting that requires the OCR function has already been set as the default, the default of that item is replaced by an alternative which does not require the OCR function, and the alternative is displayed (step S37), as illustrated in FIG. 8A. In the example shown in FIG. 8A, "searchable PDF" registered as the default of the drop-down list 58 on the scan transmission setting screen 56 is replaced by "PDF", and "PDF" is displayed. In addition, the check mark in the check box 59 for the image orientation detection is removed in the display. Further, by not displaying, or by graying out, the options of each item which require the OCR function to indicate that such options are not selectable, only the options which do not require the OCR function are provided, and an operation of setting is received (step S39). In the example illustrated in FIG. 8A, the options which require the OCR function among the options of the drop-down list 58 on the scan transmission setting screen 56 are hidden from the display.
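The default-resolution flow of FIG. 9 (steps S31 to S37) can likewise be sketched. Again, this is only an illustration under assumptions: INITIAL_STATE, OCR_FORMATS, and FALLBACK_FORMAT are hypothetical names, and only the replacement behavior follows the description above.

# Minimal sketch of the default-resolution flow of FIG. 9 (steps S31 to S37).
# All names are hypothetical; only the behavior follows the description.

OCR_FORMATS = {"PDF/A-1a", "searchable PDF", "DOCX", "XLSX", "PPTX"}
FALLBACK_FORMAT = "PDF"  # predetermined non-OCR replacement (could also be user-selected)

# Step S31: defaults as read from the initial state storage 36D.
INITIAL_STATE = {
    "format": "searchable PDF",            # requires the OCR function
    "image_orientation_detection": True,   # requires the OCR function
    "file_name_extraction": False,         # requires the OCR function
}

def resolve_defaults(initial_state: dict, ocr_prohibited: bool) -> dict:
    """Return the defaults actually reflected on the basic setting screen."""
    resolved = dict(initial_state)
    if not ocr_prohibited:              # No in step S33: show defaults as-is
        return resolved
    # Steps S35/S37: an OCR-dependent default is replaced by an alternative.
    if resolved["format"] in OCR_FORMATS:
        resolved["format"] = FALLBACK_FORMAT
    # Check marks of OCR-dependent functions are removed (displayed grayed out).
    resolved["image_orientation_detection"] = False
    resolved["file_name_extraction"] = False
    return resolved

assert resolve_defaults(INITIAL_STATE, ocr_prohibited=True)["format"] == "PDF"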
In addition, the check box 59 for the image orientation detection and the check box 60 for the file name extraction are grayed out to indicate that they are not selectable. In this way, the options which require the OCR function are prevented from being selected, and such functions are also prevented from being set. Meanwhile, if the OCR function is set to be allowed (No in step S33), the operation controller 20 displays, as illustrated in FIG. 7A, both the options which require the OCR function and the options which do not require the OCR function on the scan transmission setting screen 56 as the defaults and options, and receives the setting (step S41). The above is an example of the processing executed by the controller 12 regarding display of the setting menu and reception of an operation.

Embodiment 2

In Embodiment 1, the job setting storage 36S stores job setting data pertaining to individual jobs, and the operation controller 20 changes the job setting data stored in the job setting storage 36S in accordance with an operation of setting received in the setting menu. However, the job setting data stored in the job setting storage 36S is data pertaining to individual jobs and is basically deleted when execution of the target job is completed. In the present embodiment, the function of having job setting data registered and calling the registered job setting data, thereby allowing a job to be executed with the called settings, will be described. The function of having job setting data registered such that it can be called in this way will be referred to as a job program in the present specification. The job program makes it possible to call registered data; this is enabled by using a setting menu for registering job setting data and having the job setting data registered in advance.

As a function similar to the job program, another function that can be employed is that of retaining job setting data pertaining to a job which has already been executed, even after execution of the job, and calling the retained job setting data, thereby allowing a job to be executed with the called settings. In the present specification, such a function will be referred to as a job history. Another example of a similar function is an address book related to a scan transmission job. Generally, an address book corresponds to the function of saving the time and effort of setting, which is enabled by having a transmission destination of data and an attribute of each transmission destination registered, and calling the registered transmission destination. In the present specification, it is assumed that an address book registers therein not only the transmission destination and the attributes pertaining to the transmission destination, but also the settings related to transmission, including the file format employed in transmitting document data to an external device. That is, the address book in a scan transmission job can be considered as a function similar to a job program that includes the transmission destination.

FIG. 10A is an explanatory diagram illustrating one example of data registered as a job program 62 in the multifunction peripheral 10. The same applies to the job history. Further, FIG. 10B is an explanatory diagram illustrating one example of data registered as an address book 63 in the multifunction peripheral 10. The job program 62 and the address book 63 are not settings related to individual jobs.
Therefore, it is assumed that the job program 62 and the address book 63 belong to the basic setting data stored in the basic setting storage 36B. An operation of calling the registration data of the job program 62 (the same applies to the job history) or the registration data of the address book 63 may be rephrased as copying the called data into the job setting data (data pertaining to the job to be set) that is stored in the job setting storage 36S.

The registration data of the job program 62 illustrated in FIG. 10A pertains to a scan transmission job. For the job program, when the operation controller 20 receives a user operation related to registration of the job program, the set contents are handed over to the setting manager 16. The setting manager 16 stores the setting data that has been handed over in the basic setting storage 36B. When the operation controller 20 receives an operation of calling a registered job program, the setting manager 16 calls the registration data of the target job program and hands it over to the operation controller 20. The operation controller 20 applies the registration data of the called job program to the job settings of the job to be set.

The job history corresponds to the function of saving the settings of jobs executed by the user for a certain period of time. When execution of a job is completed, the job controller 18 notifies the setting manager 16 of the completion of the job. In response to the notification, the setting manager 16 registers the job setting data pertaining to the job whose execution is completed in the basic setting storage 36B as the registration data of the job history. When the operation controller 20 receives an operation of calling the registered job history, the setting manager 16 calls the registration data of the target job history and hands it over to the operation controller 20. The operation controller 20 applies the registration data of the called job history to the job settings of the job to be set.

The registration data of the address book 63 illustrated in FIG. 10B includes, in addition to information on the transmission destination, items for a default of the file format to be employed in transmission and a default of the color mode to be employed in transmission. The operation controller 20 receives, in a state in which the OCR function is allowed, registration of an item which requires the OCR function for all of the job program 62, the job history, and the address book 63. The setting manager 16 stores the data handed over from the operation controller 20 in the basic setting storage 36B.

After registration, it is assumed that the setting has been changed to prohibit the OCR function. Then, it is assumed that any of the job program 62, the job history, and the address book 63 is called in a state in which the OCR function is set to be prohibited. The setting manager 16 calls the registration data and hands it over to the operation controller 20. The operation controller 20 to which the registration data has been handed over first replaces those items of the handed-over registration data that have been registered with a setting requiring the OCR function by settings that do not require character recognition processing, and then applies the registration data as the job settings. FIG. 11 is a flowchart illustrating an example of the processing of calling the registration data in the present embodiment.
Although the calling operation and the configuration of the called registration data of the job program, the job history, and the address book are different from one another, there is commonality in the flow of the processing. Therefore, it is to be understood that the flowchart of FIG. 11 applies to any of the aforementioned kinds of registration data. When the operation controller 20 receives an operation related to calling of registration data, the controller 12 serving as the operation controller 20 requests the setting manager 16 to call the target registration data. In response to the request, the controller 12 serving as the setting manager 16 calls the registration data and hands over the called registration data to the operation controller 20 (step S51). Then, when the registration data is handed over, the operation controller 20 determines whether the OCR function is set to be prohibited (step S53). If the OCR function is set to be prohibited, the operation controller 20 determines whether or not any of the items of the handed-over registration data is registered with a setting that requires the OCR function (step S55). If there is any item which has been registered with a setting that requires the OCR function (Yes in step S55), the operation controller 20 does not replace the current setting of the item in question by the registered setting that requires the OCR function, but maintains the current setting (step S57). This is because, as described in Embodiment 1, the current setting is a setting that does not require the OCR function. Since the functions of image orientation detection and file name extraction also require the OCR function, the current setting may be maintained for these items as well. However, such items are both set to be off, and thus, even if the registration data is called, those functions will remain off. To begin with, in a state in which the OCR function is prohibited, as illustrated in FIG. 8A, those items are grayed out and are outside the target of setting. Thus, those items can be considered as not being a target of calling in the first place.

In the example of the job program 62 illustrated in FIG. 10A, the format among the respective items of the registration data, i.e., the setting of the file format of data to be transmitted to an external device, is registered as DOCX, which requires the OCR function. The operation controller 20 does not replace the item in question by "DOCX" of the registration data, but maintains the current setting. In the example of the address book 63 illustrated in FIG. 10B, the format to be employed in E-mail transmission among the respective items of the registration data, i.e., the setting of the file format of data to be transmitted to an external device, is registered as DOCX, which requires the OCR function. When the job to be set is one related to E-mail transmission among the scan transmission jobs, the operation controller 20 does not replace the item in question by "DOCX" of the registration data, but maintains the current setting. In addition, the setting of the format to be employed in FTP transmission among the respective items of the registration data is registered as searchable PDF, which requires the OCR function. When the job to be set is one related to FTP transmission, the operation controller 20 does not replace the item in question by "searchable PDF" of the registration data, but maintains the current setting.
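Before turning to the remaining steps S59 and S61, which are described just below, the calling flow of FIG. 11 can be sketched as follows. The dict-based data layout and the helper name are hypothetical assumptions; only the keep-or-apply behavior is taken from the embodiment.

# Minimal sketch of the registration-data calling flow of FIG. 11
# (steps S51 to S61). The data layout and helper names are hypothetical;
# only the behavior is taken from the embodiment.

OCR_REQUIRED_VALUES = {"PDF/A-1a", "searchable PDF", "DOCX", "XLSX", "PPTX"}

def apply_registration_data(registration: dict, current: dict,
                            ocr_prohibited: bool) -> dict:
    """Apply called registration data (job program, job history, or
    address book entry) to the job settings being edited."""
    applied = dict(current)
    for item, registered_value in registration.items():
        # Steps S55/S57: an OCR-dependent registered value is not applied;
        # the current (non-OCR) setting of that item is maintained.
        if ocr_prohibited and registered_value in OCR_REQUIRED_VALUES:
            continue
        # Step S59 (and S61 when OCR is allowed): apply the registered value.
        applied[item] = registered_value
    return applied

# Example: calling the job program of FIG. 10A with OCR prohibited keeps
# the current "PDF" format instead of the registered "DOCX".
setting = apply_registration_data({"format": "DOCX", "color_mode": "auto"},
                                  {"format": "PDF", "color_mode": "mono"},
                                  ocr_prohibited=True)
assert setting == {"format": "PDF", "color_mode": "auto"}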
For the other items, which do not require the OCR function, among the respective items of the registration data, the setting of the corresponding item in the setting menu is changed according to the registration data (step S59). Meanwhile, if the OCR function is set to be allowed (No in step S53), the operation controller 20 changes the settings of the items in the setting menu according to the registration data for both the items that require the OCR function and the items that do not require the OCR function (steps S61 and S59). Then, a user operation for the setting menu is received. The above is the processing executed by the controller 12 regarding the calling of the registration data.

Embodiment 3

Some image processing apparatuses are provided with a function of storing data pertaining to an image of a document that has been read in a storage device, together with data pertaining to the job settings, and calling (downloading) the stored data by a user operation and transmitting the stored data to an external device or performing printing. In the present specification, such a function is referred to as a filing function, and the stored data is referred to as filing data. In the present embodiment, it is assumed that a multifunction peripheral 10 is provided with the filing function. It is assumed that a controller 12 receives the operation via an operator 30 or from an external device (for example, the external device 42 or 44 illustrated in FIG. 2) that is connected via a communication controller 26. When the multifunction peripheral 10 executes a copy job or a scan transmission job, the multifunction peripheral 10 can store data pertaining to the job in a storage device 36. The data pertaining to the job includes data pertaining to the image of the document that has been read and data pertaining to the job settings. The storage device 36 is provided with a filing data storage 36F which stores the data pertaining to the jobs as filing data. The controller 12 downloads the data stored in the filing data storage 36F, together with the corresponding job setting data, and performs printing or transmits the data to an external device. While printing and transmission of the filing data can be performed with the same settings as those at the time of storage in the filing data storage 36F, the settings can be changed before the printing or the transmission.

The filing data is stored in the filing data storage 36F with the image data as it is, in a data format specific to the multifunction peripheral 10 that can be read by a scanner portion 32 and processed by an image processor 22. However, such specific image data may be converted, by means of the image processor 22, into a format (e.g., PDF, DOCX, or the like) that can be viewed by an information processing device such as a PC or a smartphone of the external devices 42 and 44, and stored in the filing data storage 36F together with the original specific image data. By doing so, it becomes possible to download the filing data to the external device 42 by performing remote control from the PC of the external device 42, for example, and view the filing data on the external device 42. In the present embodiment, the filing data is data obtained by adding data in a format that can be viewed on a PC, together with printer data, to the image data in the specific format. In addition, a thumbnail image with a reduced resolution of the image data is added for display on the display 30D or on a screen of a user interface of the external device.
Furthermore, the job setting data of the job for which the image data in the specific format has been generated is added as job setting information. FIG. 12 is a flowchart illustrating an example of the processing executed by the controller 12 with respect to a download of the filing data in the present embodiment. Processing to be performed when downloading the filing data stored in the filing data storage 36F to the PC of the external device 42 will be described as an example. The processing of downloading the filing data to the external device 42 is executed as a single job.

When a request to download filing data is received from the external device 42, the controller 12 serving as the job controller 18 starts the job of downloading the filing data. In response to an instruction from the job controller 18, the image processor 22 reads the filing data stored in the filing data storage 36F into a memory area for use in data transmission (step S71). Then, the allowance/prohibition setting for OCR processing stored in the basic setting storage 36B is referred to (step S73). If the OCR function is set to be prohibited, data in a format whose generation uses the OCR function cannot be transmitted outside, so the following processing is performed. The controller 12 serving as the job controller 18 determines whether or not data in a format that can be viewed by the user on the external device 42 has already been generated (step S75). If data in a viewable format has not been generated yet (No in step S75), the job controller 18 shifts the processing to step S79, which will be described later. Meanwhile, if data in a format that can be viewed on the external device 42 has already been generated (Yes in step S75), the job controller 18 determines whether or not the generated viewable-format data is data that required the OCR function for its generation (step S77). If the generated viewable-format data required the OCR function for its generation (Yes in step S77), the image processor 22 is made to generate data anew in a format that does not require the OCR function (step S79). Then, if no thumbnail image has been generated, the image processor 22 is made to generate a thumbnail image. After that, the viewable-format data generated in step S79 described above, the thumbnail image, and the job setting information are transmitted to the external device 42 (step S81), and the job is ended. Meanwhile, in the determination of step S77 described above, if the generated viewable-format data is not data that required the OCR function for its generation (No in step S77), the job controller 18 transmits the already-generated data, the thumbnail image, and the job setting information to the external device 42 (step S81), and ends the job.

In the determination of step S73 described above, if the OCR function is allowed, the job controller 18 determines whether or not data in a format that can be viewed by the user on the external device 42 has already been generated (step S83). If data in a viewable format has already been generated (Yes in step S83), the job controller 18 transmits the already-generated data, the thumbnail image, and the job setting information to the external device 42 (step S81), and ends the job.
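Putting the branches of FIG. 12 together, including the remaining branch (No in step S83) described just below, a minimal sketch might look as follows. FilingData and the generate_* helpers are hypothetical stand-ins for the image processor 22; only the branch structure follows the flowchart.

# Minimal sketch of the filing-data download job of FIG. 12 (steps S71 to
# S85). Data layout and helpers are hypothetical; only the branch
# structure follows the embodiment.
from dataclasses import dataclass, field
from typing import Optional

def generate_viewable(image: bytes, settings: dict) -> bytes:
    return b"%PDF-(generated per job settings)"   # stub for image processor 22

def generate_viewable_without_ocr(image: bytes) -> bytes:
    return b"%PDF-(generated without OCR)"        # stub for image processor 22

def generate_thumbnail(image: bytes) -> bytes:
    return b"thumbnail"                           # stub for image processor 22

@dataclass
class FilingData:
    specific_image: bytes                  # device-specific image data
    viewable: Optional[bytes] = None       # e.g., PDF or DOCX for viewing
    viewable_needs_ocr: bool = False       # True if generated via the OCR function
    thumbnail: Optional[bytes] = None
    job_settings: dict = field(default_factory=dict)

def download_filing_data(data: FilingData, ocr_prohibited: bool):
    # Step S73: refer to the OCR allowance/prohibition setting.
    if ocr_prohibited:
        # Steps S75/S77/S79: (re)generate viewable data without the OCR function.
        if data.viewable is None or data.viewable_needs_ocr:
            data.viewable = generate_viewable_without_ocr(data.specific_image)
            data.viewable_needs_ocr = False
    else:
        # Steps S83/S85: generate viewable data per the job setting information.
        if data.viewable is None:
            data.viewable = generate_viewable(data.specific_image,
                                              data.job_settings)
    if data.thumbnail is None:
        data.thumbnail = generate_thumbnail(data.specific_image)
    # Step S81: transmit viewable data, thumbnail, and job setting information.
    return data.viewable, data.thumbnail, data.job_settings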
Meanwhile, if data in a format that can be viewed on the external device 42 has not been generated yet (No in step S83), the job controller 18 causes the image processor 22 to generate data in a format that can be viewed by the user on the external device 42 in accordance with the job setting information (step S85). Then, if no thumbnail image has been generated, the image processor 22 is made to generate a thumbnail image. After that, the viewable-format data generated in step S85 described above, the thumbnail image, and the job setting information are transmitted to the external device 42 (step S81), and the job is ended. The above is the processing executed by the controller 12 regarding the download of the filing data.

Embodiment 4

In the above embodiments, it has been described that the setting manager 16 takes as its target of management not the settings related to individual jobs of the multifunction peripheral 10 but the basic settings to be continuously applied to the processing of the multifunction peripheral 10. Thus, the basic settings can be considered as settings targeted at any user who uses the multifunction peripheral 10. However, some image processing apparatuses authenticate a user at the time of use and manage the authenticated user. In the present embodiment, it is assumed that a multifunction peripheral 10 is provided with a user authenticator 28 which authenticates users and manages the authenticated users. In the present embodiment, a setting manager 16 stores the basic settings for each of the authenticated users in a basic setting storage 36B and manages the basic settings individually. Accordingly, even in a case where the OCR function is allowed in the multifunction peripheral 10 as a whole, if the OCR function is prohibited by the setting of an authenticated user, a controller 12 performs control so that the authenticated user who is prohibited from using the OCR function cannot use the OCR function. Meanwhile, in a case where the OCR function is prohibited in the multifunction peripheral 10 as a whole, irrespective of the setting of whether the OCR function is allowed or prohibited for each authenticated user, the controller 12 performs control so that none of the users can use the OCR function.

As described above, (i) an image processing apparatus according to the present disclosure is provided with: a character recognition processor which reads an image of a document and extracts text information included in the document; a setting manager which manages settings including a setting to allow or prohibit a function of character recognition by the character recognition processor; a job controller which controls execution of a job related to reading of the document; and an operation controller which provides, to a user, a setting menu to receive a setting of one or more items related to the execution of the job and receives a setting from the user, and the operation controller is characterized in that, when the function of the character recognition is set to be prohibited, the operation controller prevents the user from setting a function that requires the character recognition.

In the present disclosure, the character recognition processor performs character recognition processing of extracting the text information included in the document, either alone or in cooperation with an external device. For the character recognition processing itself, a well-known technique may be applied.
Further, the setting manager allows or prohibits the function of character recognition on the basis of an instruction by the user. The setting to allow or prohibit the function of character recognition may be integrated with the setting menu provided by the operation controller, and the setting manager may be implemented by using hardware resources in common with the operation controller. Furthermore, the job controller controls a series of processes related to image processing. As a specific mode of the above, for example, the hardware resources are configured from a circuit including a processor and a memory as the main elements. The function may be implemented by execution, by the processor, of a processing program stored in the memory.

The operation controller provides the setting menu to the user and receives a setting from the user. That is, the operation controller carries out processing related to a user interface. As a specific mode of the above, the hardware resources may be configured from a circuit including a processor and a memory as the main elements, and the function may be implemented by execution, by the processor, of a processing program stored in the memory. The operation controller may be implemented by using hardware resources in common with the job controller described above.

Preventing the user from setting a function that requires character recognition means, for example, hiding the function that requires character recognition from the setting menu or indicating that such a function is not to be set. Hiding the function from the setting menu means that an item related to that function is not provided in the setting menu. A specific mode of the above includes, for example, preventing the setting items or options provided by the setting menu from including ones related to the above function. Indicating in the setting menu that the function is not to be set means that, while an item related to such a function is provided in the setting menu, the item is provided in a form that is different from the other functions to be set, so that the user can recognize that the function cannot be set. Examples of a different form include graying out items which are not the target of setting and adding strike-throughs, whereby the user can identify that the item is different from the target of setting. A case where a function that requires character recognition has already been set refers to, for example, a state in which the function that requires the character recognition has been registered when there is a function of registering and retaining a setting.

Further, preferred modes of the present disclosure will be described.

(ii) The operation controller may enable, when a function that requires the character recognition has already been set, the function to be replaced by another function. In this way, even if any function that requires the character recognition has already been set at the time of making the setting to prohibit the function of the character recognition, the already-set function can be replaced by another function.
By such replacement, it is possible to prevent inconsistency from occurring between the setting to prohibit the function of the character recognition and the function to be executed.

(iii) An initial state storage, which stores an initial state of each item of the setting menu, may further be provided, and the operation controller may: provide, to the user, a setting menu to receive a setting of the initial state of each item by the user; store the received initial state in the initial state storage; and prevent, when the function of the character recognition is set to be prohibited, the setting menu related to the setting of the initial state of each item from displaying an option of a setting that requires processing of character recognition. According to this mode, when the function of character recognition is set to be prohibited, options that require processing of character recognition are prevented from being displayed as the initial state of the setting menu, whereby occurrence of inconsistency in the operation can be avoided.

(iv) The operation controller may replace, when an initial state of each item is set with processing of the character recognition being set to be allowed and the function of the character recognition is thereafter set to be prohibited, the initial state which requires the processing of the character recognition by an initial state which does not require the processing of the character recognition, and may provide a setting menu of the replaced initial state and receive a setting related to execution of a job from the user. According to this mode, when an initial state is set with the function of character recognition being allowed and the function of the character recognition is thereafter set to be prohibited, the initial state which requires processing of character recognition is replaced by an initial state which does not require the processing of character recognition in the operation menu, whereby occurrence of inconsistency in the operation can be avoided.

(v) A job setting storage, which stores a job setting related to execution of the job, may further be provided, and the operation controller may: provide, to a user, a setting menu related to registration of the job setting by the user and calling of a registered job setting; store the registered job setting in the job setting storage; and replace, when a job setting is registered with processing of the character recognition being set to be allowed and the function of the character recognition is thereafter set to be prohibited and the job setting is called, a setting that requires the processing of the character recognition by a setting that does not require the processing of the character recognition before calling the setting.
According to this mode, when a job is registered with the function of character recognition being allowed and the function of the character recognition is thereafter set to be prohibited, in the case of calling a registered job setting, the calling can be performed by replacing the setting which requires processing of character recognition by a setting which does not require the processing of character recognition, whereby occurrence of inconsistency in the operation can be avoided.

(vi) A filing data storage, which stores data of a document that has been read as the job is executed, may further be provided, and the job controller may store the data of the document in the filing data storage in accordance with a job setting related to execution of a job, and may read and transmit, in accordance with an instruction by a user, the data of the document stored in the filing data storage to the outside, in which, when the data of the document is stored in the filing data storage with processing of the character recognition being set to be allowed, and the function of the character recognition is thereafter set to be prohibited and an instruction to read and transmit the data of the document to the outside is received, if the data of the document stored in the filing data storage is in a data format that requires the processing of the character recognition, the data of the document may be first converted into a data format that does not require the processing of the character recognition and then transmitted to the outside. According to this mode, when data of a document is stored with the function of character recognition being allowed and the function of the character recognition is thereafter set to be prohibited, in calling and transmitting the registered data of the document to the outside, the transmission can be performed by converting the data in a format that requires processing of character recognition into data in a format that does not require the processing of character recognition. Thus, inconsistency with the setting can be prevented from occurring.

(vii) A user authenticator, which authenticates a user or a user group, may further be provided, and the setting manager may manage settings including a setting to allow or prohibit the function of character recognition by the character recognition processor for each user or user group.
In this way, the setting to allow or prohibit the function of character recognition can be set and managed for each user or each user group.

(viii) One aspect of the present disclosure includes an image processing method executed by a processor of an image processing apparatus, the image processing method including: a step of using a character recognition processor to read an image of a document and extract text information included in the document; a step of managing settings including a setting to allow or prohibit a character recognizing step by the character recognition processor; a step of controlling execution of a job related to reading of the document based on the setting; and a step of providing, to a user, a setting menu to receive a setting of one or more items related to the execution of the job, and receiving a setting from the user, in which, when the function of the character recognizing is set to be prohibited, a function that requires the character recognizing is hidden from the setting menu or it is indicated that the function is not to be set, and, when the function that requires the character recognizing has already been set, the function is enabled to be replaced by another function.

The aspects of the present disclosure also include a combination of any of the plurality of aspects described above. Various modifications may be made to the present disclosure in addition to the above-described embodiments. Such modifications should not be construed as falling outside the scope of the present disclosure. The present disclosure should embrace the claims and their equivalents, and all modifications within the scope of the claims. | 58,289 |
11861254 | DETAILED DESCRIPTION

Embodiment

[Configuration of Industrial Printing System X]

First, referring to FIG. 1, the overall system configuration of the industrial printing system X according to the present embodiment is described. The industrial printing system X according to the present embodiment is a system that executes design and printing in industrial printing (production printing). Here, in the industrial printing system X of the present embodiment, a final product to be output, such as a book or the like, is defined as an "order", and each component of the order is defined as a job. In the industrial printing system X of the present embodiment, each job for outputting the order is assigned to a component apparatus 2 and managed by the workflow. The industrial printing system X according to the present embodiment is capable of archiving the data of a printed job and later reprinting it.

The industrial printing system X of the present embodiment includes a server 1, component apparatuses 2, and an administrator terminal 3, and each apparatus is connected to a network 5. The server 1 is a server for designing variable printing in industrial printing and managing the workflow. The server 1 is a PC (Personal Computer) server, a dedicated machine, a general-purpose machine, or the like, placed on a so-called cloud or on a user's premises. The server 1 designs variable documents by using dedicated design application software (hereinafter simply referred to as the "application"). The server 1 also manages the industrial printing workflow by executing a printing process management application. Specifically, the server 1 transmits and receives various instructions and information to and from the component apparatuses 2 in each process, manages the status of each component apparatus 2, and requests processing. At this time, the server 1 also handles, for each configured job, unplanned processing such as changes and cancellations occurring in the order. Further, the server 1 may be a server that executes a common platform performing user management, tenant management, security management, a notification service for maintenance, prepress management, storage management of each document, printer management, and the like. The application described above may be executed on this server.

Each component apparatus 2 is a component that executes the various jobs of industrial printing and is an apparatus managed by the server 1. The component apparatuses 2 include, for example, a terminal for manuscript submission, a terminal for design proofreading, a prepress apparatus, a printing apparatus, a post-processing apparatus, a delivery management server, and the like. Any one of these apparatuses is simply referred to as a component apparatus 2 in the present embodiment. Among these, in the present embodiment, a digital production printing apparatus, a digital offset printing apparatus, or the like, capable of variable printing, is preferably used as the printing apparatus. Each terminal and server among the component apparatuses 2 can be connected to the server 1 via a web browser, a dedicated application, or the like, executed on a PC, a smartphone, or the like.

The administrator terminal 3 is a terminal of the user who is the administrator of the printing process. By using the administrator terminal 3, the administrator can access the server 1 to design variable documents by using a GUI, check the progress status, and request processing.
In the present embodiment, the administrator terminal 3 can instruct the server 1 to reprint an archive job 340 (FIG. 3) that was stored (archived) at the time of printing, and to set, for example, the expiration date of the content and print stop.

Next, referring to FIG. 2, the control configuration of the server 1 is described. The server 1 includes a control unit 10, a network transmitting and receiving unit 15, a storage unit 19, and the like. Each unit is connected to the control unit 10, and its operation is controlled by the control unit 10.

The control unit 10 includes a GPP (General Purpose Processor), a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a DSP (Digital Signal Processor), a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit; an application-specific processor), or the like. The control unit 10 reads the control program stored in the ROM or HDD of the storage unit 19, expands the control program in the RAM, and executes it, thereby operating as each of the functional blocks described later. Further, the control unit 10 controls the entire apparatus according to instruction information input from the administrator terminal 3 or the console.

The network transmitting and receiving unit 15 is a network connection unit including a LAN board, a wireless transmitting and receiving device, and the like, for connecting to the network 5. The network 5 of the present embodiment is, for example, a LAN (Local Area Network), Wi-Fi, a WAN (Wide Area Network), a mobile telephone network, a voice telephone network, or the like. The network transmitting and receiving unit 15 transmits and receives data through a data communication line and transmits and receives voice signals through a voice telephone line.

The storage unit 19 is a non-transitory recording medium such as a semiconductor memory, e.g., a ROM (Read Only Memory) or a RAM (Random Access Memory), or an HDD (Hard Disk Drive), or the like. The control program for controlling the operation of the server 1 is stored in the ROM or HDD of the storage unit 19. The control program includes an OS (Operating System), middleware on the OS, services (daemons), various applications, database data, and the like. Among these various applications are the above-described design application, the printing process management application, and the like. In addition, the storage unit 19 also stores account settings for the users and administrators of the industrial printing system X. Further, a storage area for each user may be set in the storage unit 19.

In addition, in the server 1, the control unit 10 may be integrally formed, for example, as a CPU having a built-in GPU, a chip-on-module package, an SOC (System On a Chip), or the like. Also, the control unit 10 may integrate a RAM, a ROM, a flash memory, or the like.

[Functional Configuration of Server 1]

Here, with reference to FIG. 3, the functional configuration of the server 1 of the industrial printing system X according to the present embodiment is described. The control unit 10 of the server 1 includes a variable attribute generation unit 100, an archive job generation unit 110, and a reprint unit 120. The storage unit 19 stores variable document data 300, variable attribute information 330, archive jobs 340, and rasterized data 350. The variable attribute generation unit 100 acquires the variable data 320 of the variable document data 300 and generates the variable attribute information 330.
Specifically, the variable attribute generation unit 100 puts together attribute information as variable attribute information 330 for each record of the variable data 320. In the present embodiment, for example, the variable attribute generation unit 100 sets the expiration date of each record of the variable data 320, an ID for identifying the record, and other conditions in the variable attribute information 330. In addition, in the present embodiment, the variable attribute generation unit 100 can also generate the variable document data 300 itself. In this case, the variable attribute generation unit 100 can allow the user to design the variable document data 300 by using a GUI (Graphical User Interface) design application. In this case, a design template can be used.

The archive job generation unit 110 generates an archive job 340 including the variable attribute information 330 generated by the variable attribute generation unit 100. At this time, the archive job generation unit 110 generates rasterized data 350 in which each record is rasterized, and includes the rasterized data 350 in the archive job 340 so that it can be referenced.

The reprint unit 120 causes the component apparatus 2 to perform print processing of each record of the archive job 340 generated by the archive job generation unit 110. In the present embodiment, the reprint unit 120 reprints each record of the archive job 340 based on the expiration date, the ID, other conditions, and the like. At this time, the reprint unit 120 may designate the customer, the area information, or the membership rank as the ID and reprint. Furthermore, the reprint unit 120 can stop reprinting (stop printing) as error processing when a record whose expiration date has passed is included during reprinting. Alternatively, the reprint unit 120 may not stop reprinting, but may notify the user of records that have expired and are not to be reprinted. The reprint unit 120 can also make these selections through the print stop setting. In the present embodiment, as another condition, the reprint unit 120 may set the output destination to any one or any combination of print output, e-mail output, and electronic document output according to the set condition(s).

The variable document data 300 is a file, a database, or the like, in which a variable document used in variable printing and various data related thereto are collected. The variable document data 300 may be described, for example, in JDF (Job Description Format) and/or JMF (Job Messaging Format). In the present embodiment, the variable document data 300 includes form data 310 and variable data 320. These data may be included in the variable document data 300 as attribute data.

The form data 310 is data including a common form, or the like, for performing variable printing. The common form is data of the part(s) that basically do not change at the time of printing. Specifically, the form data 310 may be data such as PDF (Portable Document Format), PDL (Page Description Language), or PPML (Personalized Print Markup Language) in XML (Extensible Markup Language) format. Among these, the PDF may be PDF/X, which is a subset of the standard PDF defined by the International Organization for Standardization (ISO 15930), a simpler PDF, or the like. Further, the form data 310 may include image data such as jpg, gif, BMP, PNG, or TIFF, other document data, and other data. Additionally, the form data 310 may include layout information that defines the layout on the page, or the like.
The layout information may include format information such as the position (coordinates) and size of the form on a page, and the font size, left alignment, center alignment, or right alignment of the variable data 320, or the like. Furthermore, the form data 310 may also include definitions of the elements of the variable data 320, data describing the items of the elements, data indicating the targets of the attributes, and the like.

The variable data 320 is data for variable output, used to change the print content at the time of printing. The variable data 320 may be embedded in the variable document data 300 in, for example, a table format including a plurality of records, a database format such as XML, or the like. Alternatively, the variable data 320 may be attached separately as a file in a format that is easy to handle as a database. In such a case, the variable data 320 may be a database such as a tab-separated or comma-separated file, a spreadsheet application file, another type of database file, a list file, or the like. In the present embodiment, each record of the variable data 320 may have attributes such as an expiration date, an ID, a condition, and the like.

The variable attribute information 330 is data indicating the attributes set for each record of the variable document. The attributes include information about each record and content information. In the present embodiment, the variable attribute information 330 may be data in a format that is easy to handle as a database, similar to the variable data 320. In FIG. 4, an example of a part of the variable attribute information 330 is shown. In this example, the variable attribute information 330 includes the following elements and attributes:

"recode-number" indicates the record number in the variable data 320.

"primary-key" is an example of an ID, and is a variable data value that becomes a primary key when specifying conditions. In this example, a value that can identify a record, such as a customer ID or the like, is set.

"pages" indicates the page numbers of the record included in the variable data 320. For example, pages such as "pages 1 to 10" are specified by "start-page" to "end-page".

"contents" or "content" indicates the content of the variable data 320.

"content-id" is an ID of the print component (content) (hereinafter referred to as the "content ID"). The content ID may be uniquely set, for example, for each page when designing the variable document data 300. The content ID in this example is generated by combining the ID "JB001" of the variable document data 300 and the page number.

"content-source" is an ID indicating the source data of the content. In this example, a value similar to the content ID is set.

"content-expire" indicates the expiration date. In this example, the year, month, and day are set as an attribute.

"archive-file" indicates the rasterized data 350. In this element, attributes are set once the record has undergone raster image processing (hereinafter abbreviated as "RIP" or "rasterization"). That is, when the rasterized data 350 of the record is generated and stored, the value is set so as to refer to it.

In addition, at least part of the variable attribute information 330 may be described in a format compatible with JDF and/or JMF.

The archive job 340 is print job data for reprinting variable print data including the variable data 320. Specifically, the archive job 340 may be a file (a collection of data) in which various types of data used at the time of printing are structured and organized.
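As a concrete illustration of the elements just described, the following sketch assembles one record of the variable attribute information 330 with Python's standard xml.etree.ElementTree module. The element names follow the description above; the concrete values (record number, customer ID, date) are hypothetical stand-ins for the FIG. 4 example, which is not reproduced here, except for the archive-file path, which anticipates the FIG. 5 example quoted just below.

# Hypothetical serialization of one record of the variable attribute
# information 330, using the element names described above. The values
# are illustrative only.
import xml.etree.ElementTree as ET

record = ET.Element("record")
ET.SubElement(record, "recode-number").text = "1"
ET.SubElement(record, "primary-key").text = "CI-001"   # e.g., a customer ID

pages = ET.SubElement(record, "pages")
ET.SubElement(pages, "start-page").text = "1"
ET.SubElement(pages, "end-page").text = "10"

contents = ET.SubElement(record, "contents")
content = ET.SubElement(contents, "content",
                        {"content-id": "JB001-0001",      # document ID + page
                         "content-source": "JB001-0001",
                         "content-expire": "2024-12-31"}) # expiration date
# Set once the record has been rasterized and the data stored:
ET.SubElement(content, "archive-file").text = "./arch/JB001/001.tif"

print(ET.tostring(record, encoding="unicode"))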
Additionally, for example, the archive job 340 may also be written in JDF and/or JMF. Here, in the present embodiment, the archive job 340 is saved as an archive for reprinting. At this time, the archive job 340 includes data corresponding to each record included in the variable data 320 of the variable document data 300. Specifically, the archive job 340 structures and includes the rasterized data 350 for each record. In FIG. 5, an example of the structure of the archive job 340 is shown. This example shows that the variable attribute information 330 and the rasterized data 350 corresponding to each record are set. Specifically, in the "rasterized data 350" section, for example, the file path of the storage location set in the "archive-file" element of the variable attribute information 330 is specified in the following format:

"<archive-file>./arch/JB001/001.tif</archive-file>"

The archive job 340 may include settings for reprinting. These settings include a setting to stop printing when there is an expired record. Further, the archive job 340 may include data created by prepress processing, corrections from the workflow, processing results of offset printing, and the like. In addition, the archive job 340 may also include information that has been modified in response to prepress processing or post-processing.

The rasterized data 350 is a file of electronic document data obtained by performing RIP on the print components. In the present embodiment, the rasterized data 350 may be electronic document data such as a PDF (Portable Document Format) in which each record of the variable document data 300 is printed, bitmap data (raster image data), other types of data, or the like. The image data in this rasterized data 350 may be reversibly or irreversibly compressed. In the present embodiment, an example is described in which the rasterized data 350 is image data such as TIFF, obtained by dividing a large PDF generated as print data into units of records.

In addition, the storage unit 19 may include workflow data. The workflow data is data for setting a workflow for creating an order as a final product by combining job templates. In the present embodiment, the workflow data includes provided data (hereinafter referred to as a "template") for specifying conditions for each record of the variable data 320 when performing variable printing by using the variable document data 300. The template includes settings regarding what kind of archive job 340 is generated according to the conditions set for each record of the variable data 320.

Here, the control unit 10 of the server 1 is caused to function as the variable attribute generation unit 100, the archive job generation unit 110, and the reprint unit 120 by executing the control program stored in the storage unit 19. Also, each part of the server 1 described above serves as a hardware resource for executing the variable printing method of the present disclosure. In addition, a part or any combination of the above functional configurations may be configured as hardware or circuitry by using an IC, programmable logic, an FPGA (Field-Programmable Gate Array), or the like.

[Variable Archive Process by Industrial Printing System X]

Next, with reference to FIGS. 6 and 7, the variable archive process by the industrial printing system X according to the embodiment of the present disclosure is described.
In the variable archive process according to the present embodiment, when designing the variable document data 300, variable attribute information 330 in which the expiration date of each record of the variable data 320 is set is generated. Then, at the time of printing, an archive job 340 for reprinting the variable data 320, including the generated variable attribute information 330, is generated. Then, at the time of reprinting, each record of the generated archive job 340 is reprinted based on the expiration date. In the variable archive process of the present embodiment, the control unit 10 of the server 1 mainly executes the control program stored in the storage unit 19 in cooperation with each unit and by using the hardware resources. In the following, with reference to the flowchart of FIG. 6, details of the variable archive process are described step by step.

(Step S101)

First, the variable attribute generation unit 100 performs the variable attribute information generation process. The variable attribute generation unit 100 creates a variable document in which condition(s) are set for each record. Here, the variable attribute generation unit 100 generates the variable document data 300 by using, for example, a template, or the like, according to the user's instructions via the GUI of the design application. At this time, the variable attribute generation unit 100 can create the form data 310 and the variable data 320. The variable attribute generation unit 100 also generates the variable attribute information 330 when creating the variable data 320. For example, the variable attribute generation unit 100 can take the expiration date, ID, condition(s), or the like, specified by the user for each record of the variable data 320 and set them as element(s) of the variable attribute information 330.

Here, the variable attribute generation unit 100 can designate the expiration date for each record, for example, in the case where the content is an advertisement or the like. This is because, if the expiration date has passed, the record may no longer need to be reprinted. In addition, the variable attribute generation unit 100 can set an ID for each record according to the user's designation of the customer, area information, membership rank, and the like. Specifically, for example, a postal code ID can be set as the area information. This enables processing such as reprinting only the records with the same postal code. As for the membership rank, for example, it may be possible to designate "new customer", "existing customer", "ordinary member", "premium member", "VIP member", or the like, as an attribute according to the type of each record. This enables reprinting according to the attribute.

In addition, the variable attribute generation unit 100 can also specify conditions for data conversion. For example, it is possible to designate conversion to PDF/X for print output and conversion to a low-resolution simple PDF for e-mail output. In addition, the variable attribute generation unit 100 can also set, as a specification of conditions, records to be processed by the printing apparatuses and post-processing apparatuses of different component apparatuses 2. In addition, the variable attribute generation unit 100 can set, as a condition for each record, a condition designation that relates to a plurality of records, such as a nested condition designation or the like. In addition, the variable attribute generation unit 100 may specify conditions corresponding to various macros, "For" statements, and "While" statements, as in high-level languages.
Variables, constants, random numbers, or the like, can also be used to specify the conditions. For example, the variable attribute generation unit100can set conditions such as the first 100 people to come, a lottery “win”, and the like. Furthermore, the variable attribute generation unit100can also set conditions for record operations such as copying or deleting records, and conditions for changing or adding specific items in records, or the like. Furthermore, the variable attribute generation unit100may use the settings as a “template” for specifying other conditions. This template can also be shared, so that changes to the settings, or the like, can be centrally managed. In this respect, a condition setting can be reused in a common manner, much like a global instance of a “class” in an object-oriented language. In addition, the variable attribute generation unit100can also generate the template itself by using a GUI. Furthermore, the variable attribute generation unit100is capable of direct specification by using JDF and/or JMF, programmatic description by using a so-called “macro” language, and the like, in addition to condition setting by using a GUI. (Step S102) Then, the archive job generation unit110performs the archive job generation process. The archive job generation unit110generates the archive job340for reprinting the variable data320. The archive job generation unit110causes the archive job340to include the variable attribute information330. Then, the archive job generation unit110generates rasterized data350in which each record is rasterized. The archive job generation unit110includes the rasterized data350in the archive job340so that it can be referenced. Specifically, for example, the archive job generation unit110divides a PDF generated for printing into records, and extracts image data for each record. The archive job generation unit110stores these image data as the rasterized data350in the storage unit19for archiving. In addition, the archive job generation unit110sets the file path of the stored image data, or the like, in the variable attribute information330of the archive job340. (Step S103) Then, the reprint unit120performs the reprint process. After generating the archive job340, the reprint unit120reprints each record of the archive job340based on the expiration date as necessary. The reprint unit120can also reprint based on the IDs designating the customer, the area information, and the membership rank, and on the condition designation. Furthermore, the reprint unit120can also perform a setting to stop printing in accordance with the user's instruction, and it performs reprinting based on this setting. The details of the reprinting process are described later. With the above, the variable archive process according to the embodiment of the present disclosure is completed. [Details of Reprint Process] Then, with reference toFIG.7, details of the above-described reprinting process are described step by step. (Step S111) Firstly, the reprint unit120performs the ID record selection process. When the printing process management application is executed, the reprint unit120starts reprinting according to the user's instruction. The reprint unit120selects the target archive job340to be reprinted, designates the record(s) to be reprinted, and executes reprinting. At this time, by specifying the ID set for a specific primary key, a record can be identified and reprinted.
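A minimal sketch of this ID-based record selection, assuming a hypothetical record layout, might look as follows.

def select_by_id(records, key, wanted):
    """Return only the records whose primary-key value is in `wanted`."""
    return [r for r in records if r.get(key) in wanted]

records = [
    {"customer_id": "CI-001", "rank": "VIP member"},
    {"customer_id": "CI-002", "rank": "ordinary member"},
    {"customer_id": "CI-005", "rank": "premium member"},
]
# Reprint only the searched records.
to_reprint = select_by_id(records, "customer_id", {"CI-001", "CI-005"})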
For example, when the customer ID is set as the primary key, a customer ID such as “CI-001”, “CI-005”, “CI-009”, or the like, can be specified, and only the searched records can be reprinted. Alternatively, it is possible to identify a record by specifying a condition set on a specific primary key and reprint the record. For example, if the area information and the member rank are included as the primary keys, the record is identified by designating the ID of the specific area and the member rank. This membership rank may be “new customer”, “existing customer”, “ordinary member”, “premium member”, “VIP member”, or the like. Alternatively, it is also possible to specify that only records matching another conditional specification are to be reprinted. In addition, the reprint unit120can set printing stop according to the user's instruction. That is, if there is an expired record, the reprint unit120can stop printing or notify the user of the record that is not to be reprinted, based on the print stop setting, as described later. Further, the reprint unit120can specify, in the variable attribute information330, record(s) to be processed by different printing apparatuses or post-processing apparatuses in the component apparatuses2and cause the record(s) to be reprinted. Alternatively, the reprint unit120can set a condition in the variable attribute information330so as to extract only record(s) that can be processed by the capability of the component apparatus2. (Step S112) Here, the reprint unit120starts processing each record to be reprinted and determines whether the record to be processed is within the expiration date. That is, the reprint unit120refers to the variable attribute information330of the archive job340, checks the expiration date of the record to be processed, and if the content of this record is within the expiration date and the rasterized data350exists, it determines Yes. The reprint unit120determines No if the expiration date has passed. In the case of Yes, the reprint unit120advances the process to step S113. In the case of No, the reprint unit120advances the process to step S114. (Step S113) If the record is within the expiration date, the reprint unit120performs the record printing process. In the present embodiment, the reprint unit120processes each record of the archive job340based on the variable attribute information330. Here, the reprint unit120generates a job ticket for reprinting from the rasterized data350of the specified record and transmits it to the printing apparatus of the component apparatus2. Alternatively, the reprint unit120can perform post-processing, send an e-mail, or perform simple printing. Further, the reprint unit120can change the output according to the ID designation of the customer, the area information, and the member rank, and other conditions. Specifically, the reprint unit120may set the output destination to any one of print output, e-mail output, and electronic document output, or an arbitrary combination, according to the ID. Furthermore, the reprint unit120can cause processing by the post-processing apparatus, the shipping management server, or the like, in the component apparatus2. Then, the reprint unit120advances the process to step S117. (Step S114) If there is a record whose expiration date has passed, the reprint unit120determines whether or not stop printing has been selected. If stop printing has been selected by the user setting, the reprint unit120determines Yes.
Otherwise, that is, in a case in which stop printing has not been selected, the reprint unit120determines No. In the case of Yes, the reprint unit120advances the process to step S115. In the case of No, the reprint unit120advances the process to step S116. (Step S115) If stop printing is selected, the reprint unit120performs the error process. Here, the reprint unit120stops reprinting and displays an error. In this case, for example, the reprint unit120can display the information of the record including the content whose expiration date has passed as the error information on the administrator terminal3, or the like. After that, the reprint unit120completes the reprinting process. (Step S116) If stop printing is not selected, the reprint unit120performs the notification process. The reprint unit120prints only the record(s) within the expiration date and notifies the user of the record(s) that are not to be reprinted. Here, the reprint unit120performs reprinting except for the record(s) that include expired content. At this time, after the reprinting, the reprint unit120displays, on the administrator terminal3, or the like, the information of the printed record(s) and the information of the record(s) that were not printed because they include content whose expiration date has passed, and it notifies the user. (Step S117) Here, the reprint unit120determines whether or not all records have been processed. If the reprint unit120has finished determining whether or not to reprint all the records selected by the user, it determines that the processing is completed, and determines Yes. The reprint unit120determines No if the processing of all the records has not yet been completed. In the case of Yes, the reprint unit120completes the reprinting process. In the case of No, the reprint unit120returns the process to step S112to continue processing the next record. As configured in this way, the following effects can be obtained. In digital printing for production printing, variable printing is the most characteristic digital printing job. For this reason, variable printing has become one of the main purposes for which printing companies introduce digital production printing apparatuses. In addition, there have been many cases where the same printed material is reprinted by a printing company for regular orders, reprints, and the like. However, in typical variable printing, when reprinting, the same variable data as the previous time is reprinted regardless of whether it is still valid, so an archived print job containing content that has already become invalid at the time of reprinting might be printed. On the other hand, the industrial printing system X according to the embodiment of the present disclosure is an industrial printing system that processes variable data320for production printing, including a variable attribute generation unit100that generates variable attribute information330in which an expiration date is set for each record of the variable data320; an archive job generation unit110that generates an archive job340for reprinting the variable data320that includes the variable attribute information330generated by the variable attribute generation unit100; and a reprint unit120that reprints each record of the archive job340generated by the archive job generation unit110based on the expiration date. With this configuration, an archive job340is generated in consideration of the expiration date of the contents of the variable data320, and variable printing is managed.
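The reprint flow of steps S112to S117described above can be summarized in the following non-limiting Python sketch; print_record() and notify_user() are hypothetical stand-ins for transmitting a job ticket to the printing apparatus and for displaying a notice on the administrator terminal3, and stop_on_expired corresponds to the user's stop-printing setting.

from datetime import date

def print_record(path):
    print("printing", path)          # stub: would transmit a job ticket

def notify_user(message):
    print(message)                   # stub: would display on the terminal

def reprint(records, selected_ids, stop_on_expired):
    skipped = []
    for rec in records:
        if rec["id"] not in selected_ids:             # S111: ID selection
            continue
        expired = date.fromisoformat(rec["expires"]) < date.today()
        if not expired:                               # S112: expiry check
            print_record(rec["archive_file"])         # S113: record printing
        elif stop_on_expired:                         # S114 -> S115: error
            raise RuntimeError(f"record {rec['id']} has expired")
        else:
            skipped.append(rec["id"])                 # S114 -> S116
    if skipped:                                       # S116: notification
        notify_user(f"not reprinted (expired): {skipped}")
    # S117: the loop ends once every selected record has been processed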
As a result, the print job of variable printing can be used as the archive job340, and each record can be reprinted in consideration of the expiration date. Here, by checking the expiration date of the content, the printing of already invalid records can be prevented when reprinting. Therefore, printing errors can be reduced, and printing costs can be reduced. Further, in the industrial printing system X according to the embodiment of the present disclosure, the archive job generation unit110generates rasterized data350obtained by rasterizing each of the records and includes the rasterized data350in the archive job340to be referenced. By configuring in this manner, not only the original document but also the rasterized data can be stored and managed by the archive job340. As a result, when reprinting, the saved rasterized data350can be reliably referred to, and reliable reprinting can be performed with the same quality as the previous printing. Further, in reprinting typical variable data320, requests such as reprinting only specific records instead of all the data could not be accommodated. On the other hand, in the industrial printing system X according to the embodiment of the present disclosure, an ID for identifying the record is also set in the variable attribute information330, and the reprint unit120reprints based on the ID as well. With this configuration, a record of the variable data320can be specified by its ID, or the like. Therefore, only the specific records intended by the user can easily be reprinted. In the industrial printing system X according to the embodiment of the present disclosure, the ID designates the customer, area information, and membership rank, and the reprint unit120reprints based on the designation of the customer, the area information, and the membership rank. By configuring in this way, the IDs for the customer, the area information, and the membership rank can be specified, and reprinting based on the specification of these IDs can be performed. As a result, in addition to the expiration date, reprinting can be performed by designating the necessary record with the specific ID. In the industrial printing system X according to the embodiment of the present disclosure, when a record whose expiration date has passed is included during reprinting, the reprint unit120selects whether to stop reprinting as error processing or to notify the user of the record that is not to be reprinted. With this configuration, if there is a record whose content has passed its expiration date, whether to stop with an error or to skip the record and notify the user of only the record not to be reprinted can be selected. Therefore, reprinting can be performed in accordance with the user's intention. Other Embodiments In addition, in the above-described embodiment, an example of only reprinting the archive job340has been described. However, the archive job340may also be configured for use in post-processing steps. In addition, the archive job340may change prepress, printing, post-processing, output destination, or the like, according to the conditions set in the record. It is also possible to specify conditions such that when there are few records, only the digital printer is used for printing, and when there are many records, the offset printer is used. Also, other conditions such as the number of records with the same ID or the file type of the records may be set.
Also, if there are a plurality of component apparatuses2as output destinations, the apparatus may be selected according to a priority set according to conditions. For example, in the “record conditions”, if “VIP member” is designated, a printing apparatus with high resolution and a large ink count is specified; otherwise, if “regular member” is designated, a digital printing apparatus with low printing costs and a normal finish is specified, or the like. By configuring in this way, various conditions can be set and the variable printing actually required in the industrial printing system can be performed. Also, in the above embodiment, the variable attribute information330is changed only when the rasterized data350is generated. However, the variable attribute information330may be changed according to the situation after the previous printing, the processing result, and the like. In this case, only the item may be changed, or the variable attribute information330itself may be changed. For example, if the results of a questionnaire show that many “VIP members” also want e-mails to be sent, a change may be automatically made to send e-mails at the same time as outputting printed matter. Furthermore, similarly, the form data310and the variable data320of the variable document data300may also be automatically changeable according to the processing results after output. By configuring in this way, variable printing in an industrial printing system that is more suited to the actual situation can be performed. Also, in the above-described embodiment, the example in which the archive job340is retained as it is even after being reprinted has been described. However, the archive job340may be deleted after being reprinted. Also, the rasterized data350may be deleted after the expiration date or even before the expiration date. Furthermore, it is also possible to delete the rasterized data350only for records whose expiration date has passed. By configuring in this way, security can be taken into consideration. Also, in the above-described embodiment, an example in which the variable attribute information330is stored in the archive job340has been described. However, the variable attribute information330may be set in the variable data320. Alternatively, the variable attribute information330may be data such as a database different from the archive job340. In these cases, the original reference data may be included in the archive job340or the variable data320. Furthermore, the variable attribute information330may be configured to include the rasterized data350. In this case, data such as PDF corresponding to each record may be used. Also, the amount of data may possibly be reduced by not generating the rasterized data350in the first place. By configuring in this way, various configurations can be accommodated. Also, in the above-described embodiment, an example of setting the ID, or the like, for each record during normal printing has been described. However, it is also possible to set the ID directly for each record when reprinting. In this case, the printing process management application or the design application may allow the user to make settings by using a GUI. For example, the variable attribute generation unit100can acquire the user's instruction and set the ID. As a result, the variable attribute information330and the record itself of the variable data320of the variable document data300can also be changed. In the above embodiments, an example of automatically generating the archive job340was described.
However, it may also be possible for the user to directly create the archive job340according to the conditions set by the variable attribute generation unit100. Further, when printing multiple times, it may be possible to set the archive job340to be generated each time or only at specified time(s). Also, in the above-described embodiment, an example was described in which the archive job340is process attribute data such as JDF and/or JMF. However, the archive job340may also use data in formats such as macro languages and programming languages. Also, in the above embodiment, an example of creating the archive job340described in JDF and/or JMF and performing each process of the variable document has been described. However, JDF and/or JMF need not be created. In this case, a job ticket that directly controls each apparatus may be generated according to the conditions set in each variable attribute information330. Thereby, control similar to that of the archive job340may be performed. By configuring in this way, various configurations can be applied. In the above-described embodiment, examples of variable printing on paper printed matter, sending of e-mail, and electronic document output have been described as production printing, but other production printing can also be applied. For example, variable book printing, on-demand printing, and other printing are also applicable. Alternatively, for example, it can also be used for divided printing of large-sized posters, sheet printing for the exterior and interior of aircraft and automobiles, manufacture of electronic components such as flat panel displays and electronic substrates, and printing of cultured cells. In this case, as the component apparatus2, an industrial inkjet printer, an industrial robot, various reaction apparatuses, a culture apparatus, or the like, can be used. By configuring in this way, various uses can be accommodated. Further, in the above-described embodiment, an example in which the server1performs various processes has been described. However, it may be configured by using a dedicated terminal for creating the variable data320, by using another server for managing the workflow, by performing prepress processing via the administrator terminal3, by using an e-mail transmission server, or the like. Further, another apparatus may be configured to create and control the variable document data300and the archive job340. Further, the configuration and operation of the above-described embodiment are examples, and needless to say, they can be modified and executed as appropriate without departing from the aim of the present disclosure. | 40,880 |
11861255 | DETAILED DESCRIPTION Reference will now be made in detail to representative embodiments illustrated in the accompanying drawings. It should be understood that the following description is not intended to limit the embodiments to one preferred embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as can be included within the spirit and scope of the described embodiments as defined by the appended claims. The embodiments herein are generally directed to head-mounted displays (HMDs), and more particularly, to HMDs that have an outward-facing output system (including display(s), speaker(s), or the like) in addition to wearer-facing displays to increase the options for interacting with others while wearing the HMD. In particular, HMDs that completely cover a wearer's eyes (or even the entire head), such as those with displays or projectors enclosed in an opaque housing, may provide an immersive experience for the wearer. Further, HMDs may present environments and images to the wearer that are highly engaging and immersive, so much so that the wearer may be largely oblivious to what is happening in the real-world environment. For example, not only might an HMD block a wearer's view of the real-world environment, but a virtual-reality environment presented by the HMD may consume a wearer's attention and focus, making the wearer less likely to hear people or be able to respond or react to the real-world environment. Further, such HMDs limit the ability of people in the real-world environment to recognize or interact with the wearer, as the wearer's eyes are usually completely covered and the wearer's attention may be entirely directed to a virtual environment. For example, even if an HMD allows a wearer to see the external environment (e.g., via an external camera transmitting images to a wearer-facing display), a person who is in front of the wearer cannot determine if the wearer is able to see or perceive the real world environment, or if they are immersed in a virtual environment. And even if a person and the wearer are interacting by speaking, the wearer's eye movements and facial expressions remain largely hidden, thus removing many non-verbal cues and signals that are important for conveying information and emotion in human-to-human communications. Accordingly, described herein are HMDs that incorporate outward-facing or outwardly-directed output systems, such as external displays (high and/or low resolution displays), speakers, or the like, that can display or otherwise present information to observers in the real-world environment. This outwardly displayed or presented information may provide a communication path that helps penetrate the otherwise immersive world of a virtual-reality environment and helps break down the physical barrier of the HMD. For example, outward-facing displays may be capable of displaying symbolic graphical information to outside observers, and speakers may allow a digital assistant to interact with another person until a wearer is able to direct his or her attention to the other person. In this way, communication between the wearer and the outside observer is restored or improved, despite the immersive and dissociative effect of the HMD and the virtual environment.
As one particular example, in cases where an HMD includes an outward-facing graphical display, aspects or images of a wearer's face may be displayed to the real-world environment, thereby improving interaction, recognition, and communication between the wearer and individuals in the real-world environment. To facilitate these improvements, an HMD may include a camera that captures images (e.g., still images or video images) of the portion of a user's face that is covered by a HMD, such as the user's eyes, eyebrows, nose, forehead, or the like. The HMD may display those captured images on the outward-facing display, allowing individuals in the real world to see the wearer's eyes and facial expressions, thereby facilitating a more natural interaction with the wearer. Moreover, the outward-facing display may also allow others to more quickly and easily recognize the person wearing the HMD, as more of their face will be perceptible. As another example, where an HMD includes a low resolution display (e.g., an LED array), the display can be used to display symbolic graphical outputs that correspond to or suggest a mood or emotion of the wearer, or that convey a mode or status of operation of the HMD. As yet another example, an HMD may include an outwardly-directed speaker that allows interaction between a digital assistant integrated with the HMD and an outside observer. These types of output systems and techniques, when integrated with an HMD (either alone or in any combination), allow more varied and robust interactions between HMD wearers and other people. The HMD may also have outward-facing cameras or other sensors allowing the wearer of the HMD to see video or images of the real world, including people, via the internal displays of the HMD. (Such video or images may be displayed to the wearer without any modification, or they may be fully or partially integrated into the wearer's virtual environment.) Because each person can see the face and eyes of the other, the dual-display HMD effectively provides a two-way pathway for visual communication between the wearer and a person in the real world environment. Moreover, both people can experience more natural communication without requiring the wearer of the HMD to completely disengage with the virtual experience. In addition to merely displaying a wearer's eyes or face on the outward-facing display, the HMD may enhance the visual output that is displayed on the outward-facing display in various ways. In particular, images of the user's eyes may be modified based on aspects of the real world environment, including the intensity, direction, or color of the ambient light. Accordingly, images of the wearer's eyes—which are behind an HMD and are not exposed to ambient light—can more closely match the appearance of the rest of the wearer's face. For example, a light sensor on the HMD may determine a color temperature of the real world environment and modify the color temperature of the visual output (e.g., still images or video images) on the outward-facing display to match. The HMD may provide other enhancements or modifications of the visual output displayed via the outward-facing display. For example, instead of merely displaying a live video feed of the wearer's face or eyes, the HMD may display a cartoon or other digitally rendered face or eyes that track and/or mimic the user's eye movements, expressions, and the like. 
As another example, the display of the wearer's face or eyes may be modified based on aspects of the virtual environment being experienced by the wearer. Thus, if it is raining in the virtual environment, rain may be visible on or in front of the user's eyes or face, or if the wearer is playing a game in which they are a cat, the user's eyes or face (as displayed) may be modified to appear as a cat. As yet another example, the visual output may be modified based on inputs directly from the wearer. Thus, if the user expresses an emotion such as surprise (as detected by visual cues, biometric data, user-command, or any other suitable technique), the displayed eyes may be enhanced, modified, exaggerated or the like to show a cartoonish, embellished indication of surprise. Similarly, if the wearer is angry, the eyes shown in the display may be modified, exaggerated, or the like to appear red or flaming. Thus, the immersive effect of virtual environments combined with the visually blocking effect of an HMD can impede interaction between HMD wearers and other people. The outwardly-directed output systems and techniques described herein, including outward-facing displays, help break down these barriers and enable new paths of communication and interaction between wearers and other people. FIG.1depicts a wearer (also referred to as a “user”)100wearing a head-mounted display (HMD)102, which may be a virtual reality system that produces and/or presents at least partially virtualized environments to the wearer100. As described herein, the HMD102may include outward-facing output systems that help break down the barriers to communication, recognition, and interaction that are created by HMDs and other devices that immerse a wearer in a virtual environment. The HMD102is configured to be worn over the wearer's eyes and some portion of the wearer's face. The HMD102shown inFIG.1covers only a portion of the wearer's face, though other designs, shapes, and sizes are also contemplated. For example, some HMDs may cover a wearer's entire head, while others may cover the eyes and ears, and so on. The HMD102may be attached to the wearer100via a strap104, or any other suitable mechanism. The HMD102is configured to display to the wearer100, via one or more internal or user-facing displays or visual output devices (shown in more detail inFIGS.11A-11Band discussed below), an at least partially virtual world. For example, the HMD102may be configured to present to the wearer100a fully virtual environment (e.g., a digitally rendered video game), a mixed reality or augmented reality environment (e.g., real world elements or objects integrated in an otherwise virtual environment), or any other type of at least partially virtual environment that is mediated by the HMD102. The HMD102may generate and/or present the at least partially virtual environment alone or in conjunction with other computers, components, and/or devices. The HMD102also includes an external or outward-facing display106. The outward-facing display106may define a display surface that faces away from the wearer, and may be configured to display visual output (e.g., still images, video images) to persons other than the wearer100. As noted above and described herein, such visual output may include real-time or pre-captured video or still images of at least a part of the wearer's100face (e.g., the wearer's eyes and eyebrows). The outward-facing display106is discussed in greater detail with respect toFIG.17B. 
The visual output presented on the outward-facing display106may be unmodified (e.g., a direct video feed from an internal camera of the HMD102, and/or other sensor data from another internal sensor), or it may be interpreted, enhanced, filtered, or otherwise modified in various ways, as described herein. The outward-facing display106facilitates interactions that are more natural and also provides improved facial recognition and non-verbal communication between the wearer100and other people. The HMD102may also include other outward-facing output systems in place of or in addition to an outward-facing display106. For example, the HMD102may include a speaker that allows the HMD102to present audio output to an external observer. The audio output may be produced automatically by a digital assistant associated with the HMD102, or it may be presented at the request of the wearer. In the former case, the digital assistant may, for example, recognize that another person has approached the wearer and automatically tell the other person to please wait while the wearer suspends the virtual environment. In the latter case, the wearer may instruct the HMD102to present particular audio content to another person (or the HMD102may do so automatically). As described herein, the HMD102may include various sensors. Some of the sensors may be configured to detect, sense, or otherwise capture information from the wearer100. Such sensors may include wearer-facing cameras, eye-tracking sensors, biometric sensors (e.g., heart rate, blood oxygen, respiration rate, perspiration rate), motion sensors, presence sensors (e.g., to detect whether the HMD102is currently being worn), or the like. Information from such sensors may be used to select and/or modify visual output that is displayed via the outward-facing display106, as described herein. For example, information from such sensors may be used to determine how and/or when to modify an image or model of the wearer so that the image or model represents or corresponds to the expression, emotion, or other state of the user. The modified image or model may then be displayed on the outward-facing display106. Other sensors, such as sensors or sensor arrays108, may be configured to detect, sense, or otherwise capture information from the real world environment, apart from the user. Such sensors may include outward-facing cameras, photo sensors, object detection sensors (e.g., radar sensors, light detection and ranging (LIDAR) sensors, acoustic sensors), ultrasonic sensors, light sensors, eye-tracking sensors, motion sensors, or the like, as described herein with respect toFIG.17B. The HMD102shown and described with respect toFIG.1may be used in various ways to improve the quality and types of interactions that are possible between a wearer and external observers. In particular, and as described herein, outwardly-directed output systems such as outward-facing displays can form a tunnel through both the physical and the virtual barriers that HMDs erect between wearers and other individuals and the real-world environment more generally.FIGS.2A-2B, for example, illustrate how an HMD102with an outward-facing display may achieve these results.FIGS.2A-2Billustrate the HMD102in two modes of operation: a first mode in which the outward-facing display106is inactive (FIG.2A); and a second mode in which the outward-facing display106is active and displaying visual output202of (or corresponding to) a portion of the wearer's face. 
The interaction illustrated inFIGS.2A-2Bdemonstrates how the HMD102may provide more natural interaction and communication between the wearer100and another person200. In particular, inFIG.2A, the wearer's attention may be fully directed to a virtual world provided by the HMD102, and thus the outward-facing display106is inactive. When another person200approaches the wearer100, however, the outward-facing display106may become active and may display a visual output202, such as still images or a video image of the wearer's face or other suitable visual output202. This may provide numerous benefits. For example, the other person200may be more likely to physically recognize the wearer100, thus increasing the feeling of comfort and connection with the wearer100. Further, the other person200can see nonverbal cues, such as gaze direction, eyelid and eyebrow movements, or other physical characteristics of the wearer100, which may improve the quality, speed, comfort, and familiarity of an interaction between the wearer100and the other person200. WhileFIG.2Bshows the visual output202as an image of the wearer's face, other visual outputs may be provided instead of or in addition to images of the wearer's face, such as textual outputs (e.g., “do not disturb,” “in virtual mode—please wait,” etc.), other graphical outputs (e.g., images of what is being displayed to the wearer100), a graphical user interface, weather information, patterns or symbols, or the like. Examples of visual outputs other than an image (e.g., video or still) of the wearer's face that may be displayed on an outward-facing display are described herein. FIGS.2A-2Balso more generally illustrate how conventional head-mounted displays or virtual reality systems may alienate others and decrease the ability to interact and the quality of interactions with the wearer. For example, when an outward-facing display106is inactive, or in cases where an HMD lacks an outward-facing display, the other person200may have little or no indication of the wearer's state of attention (as illustrated by the question and exclamation mark inFIG.2A). For example, the other person200may not be able to tell whether the wearer is aware of the other person's presence, what the wearer is viewing (the real world or the virtual world), whether the wearer recognizes the other person, etc. In short, a head-mounted display without an outward-facing display (or an inactive outward-facing display) is a barrier to interaction and communication. By displaying information (e.g., images) on the outward-facing display106, important visual cues may be seen by the other person, effectively removing or diminishing the barrier effect of the head-mounted display. For example, showing the wearer's eyes to the other person200, as shown inFIG.2B, provides confirmation to the other person200that the wearer100is paying attention to them and/or the real world, rather than to the virtual world. This may reduce the other person's confusion and/or discomfort with the interaction (as indicated inFIG.2Bby a happy face), and may provide a more natural way to communicate with the wearer100. The mode of the HMD102, and more particularly whether or not the outward-facing display106is active, may be determined in any of a variety of ways. In some cases, the HMD102may use externally directed sensors (e.g., proximity sensors, LIDAR, presence sensors, cameras, microphones, etc.)
to determine whether another person is near or within a threshold distance from the wearer100(e.g., within about 30 feet, 20 feet, 10 feet, 5 feet, or any other suitable distance) or to otherwise determine whether another person is attempting to interact with the wearer. In accordance with a determination that a person is within the threshold distance (or in accordance with a determination that a person is attempting to interact with the wearer), the HMD102may activate the outward-facing display106to present a visual output, such as an image of or suggestive of the wearer100. In some cases, the HMD102may also include an outward-facing sensor (e.g., a camera) that can detect the gaze direction of another person. In such cases, the HMD102may only activate the outward-facing display106and/or present a visual output, as described herein, in response to detecting that the other person is looking at or towards the wearer. In some cases, even after a person is detected nearby (e.g., within the threshold distance), the HMD102may activate the outward-facing display106only in response to detecting that the wearer100has directed his or her attention to the real world environment, to avoid giving a false impression of attention to the approaching person. The HMD102may detect that the wearer100has directed his or her attention to the real world in various ways. For example, the HMD102may display an affordance to the wearer100(e.g., via an internal display of the HMD102) offering the option of activating the outward-facing display106, and the outward-facing display106may only display visual output if the wearer100actively commands the HMD102to do so. As another example, in response to determining that a person is within a threshold distance of the wearer100, the HMD102may display to the wearer100visual output representative of the real world environment, such as a window showing a live video feed of the real world environment and the other person200. The window may be displayed out of the wearer's direct view so as to not disrupt the wearer's virtual experience. The HMD102may determine, using eye-tracking or other techniques, when and whether the wearer100looks at the window, and activate the outward-facing display106in response to determining that the wearer100is looking directly at the window. The HMD102may also delay activating the outward-facing display106after the wearer100looks at the window so that the wearer100can look away if he or she determines not to engage with the nearby person200. In this way, the outward-facing display106can indicate to other people whether the wearer's attention is actually directed to the real world environment. In some cases, in addition to or instead of determining whether another person is within a threshold distance of the user, the HMD102may determine whether a nearby person is known to the wearer. If the nearby person is not known to the wearer, the HMD102may not activate the outward-facing display106, and may not notify the wearer100of the presence of the person. If the nearby person is known to the wearer, the HMD102may present visual output via the outward-facing display106, and may notify the wearer100of the other person's presence. The HMD102may determine whether the person is known to the wearer100in any suitable manner. For example, the HMD102may compare an image of the person's face and/or body against images of individuals or contacts who are known to the wearer100.
The HMD102may use facial recognition, image comparison, or any other suitable technique to determine whether the captured image matches an image of a known individual. (Images of known individuals may be stored in or associated with contacts in a contact list.) The HMD102may also automatically determine whether the person is known to the wearer by communicating with or otherwise detecting an electronic device (e.g., a phone, smartwatch, HMD, near field communication (NFC) chip, wireless-enabled device, or the like) associated with the person. The HMD102may compare a digital signature, address, or other identifiable information from the person's electronic device with a list of known information to determine if the person is known to the wearer100. Instead of or in addition to automatic person recognition, the wearer100may manually select whether a person is known or recognized (e.g., after viewing the person through the HMD102). The HMD102may also be configured to display different visual outputs on the outward-facing display106depending on whether or not a nearby person is recognized (either automatically or as selected by the wearer100). For example,FIGS.3A-3Cshow the wearer and the HMD102in various modes depending on whether another person300is nearby and recognized.FIG.3Ashows the wearer100when no other person is within a threshold distance. Accordingly, the outward-facing display106may be inactive (as shown), or it may display a decorative image or video (e.g., not corresponding to the wearer's face or eyes). InFIG.3B, another person300is within a threshold distance of the wearer100. However, the other person300has not been recognized by the HMD102or the wearer100, or the wearer100has manually chosen to indicate that the person300is not known or recognized. Optionally, the wearer100may have also indicated to the HMD102that he or she wishes to interact with the person300. In response to the person300being in proximity but not being recognized, the HMD102displays a generic visual output302via the outward-facing display106. The generic visual output302may be any suitable output that conveys that the wearer100is viewing the real world environment and can interact with the person300, but is not an image of the wearer's face, or is an image that obscures part or all of the wearer's face, such as the eyes, eyebrows, and/or other identifiable physical features of the wearer. As shown inFIG.3B, the generic visual output302is a pair of sunglasses that do not show the wearer's eyes, though other outputs are also possible, such as digitally rendered eyes that do not have the same appearance as the wearer's eyes (but which may be manipulated to reflect the wearer's real-time expression and gaze direction). In this mode, the communicative benefits of the outward-facing display106may be realized while both maintaining a degree of anonymity and also suggesting to the other person300that he or she is not recognized by the wearer. InFIG.3C, however, the other person300is recognized (automatically by the HMD102and/or manually by the wearer100). As such, the HMD102displays a visual output304that corresponds to the wearer's actual physical features. For example, the visual output304may be a live video feed of the wearer's face (or a portion thereof) captured by a camera within the HMD102. As another example, the visual output304may include pre-captured images of the wearer, which may or may not be manipulated to reflect the wearer's real-time expression and gaze direction.
More particularly, an image (e.g., a two-dimensional image and/or a three-dimensional rendering or representation) of the wearer100may be captured at a first time, such as when the wearer first dons the HMD102. The image may be captured by a camera, sensor, or other device attached to the HMD102, or via a separate device (e.g., a dedicated imaging system, a mobile phone, a tablet computer, or the like). When the wearer100is wearing the HMD102, a wearer-facing sensor, such as a biometric sensor, camera, eye-tracking sensor, motion sensor, or the like, may detect some aspect of the wearer100that is indicative of an emotion, expression, or other feature. In response to detecting the aspect of the wearer100, the HMD102may cause the pre-captured two- or three-dimensional image to be modified to correspond to, reflect, or otherwise be suggestive of the detected aspect. For example, if an eye-tracking sensor detects the direction of the wearer's gaze, the HMD102may modify the pre-captured image to reflect the wearer's detected gaze direction. As another example, if a face-mapping biometric sensor detects motion indicative of an eye blinking (or otherwise changing shape) or other facial motion, the HMD102may modify the pre-captured image to reflect or simulate the blinking or other motion. As yet another example, if a motion sensor detects motion of a wearer's face, the HMD102may capture an image (still or video) of the wearer's face in response to detecting the motion (or other triggering event), and may generate an animation of the wearer's face transitioning from the pre-captured image to the newly-captured image (possibly using the newly-captured image as the final frame of the animation). These modifications may include animating or otherwise changing the pre-captured image to show a smooth transition between states of the wearer's features. Modifying captured images or models in this manner may reduce processing burden as compared to showing a live video feed of the wearer, and may be able to use sensing techniques that are less intrusive to the wearer than optical imaging may be. For example, optical imaging may use a light source to illuminate the wearer's face within the HMD102in order to capture high-quality images for a real-time video feed, which could be detrimental to the wearer's experience. Additionally, capturing real-time video data for an extended period of time may be power-intensive. By using less intrusive sensors (e.g., infrared light sources and sensors, infrared light sensors, etc.) and modifying a captured image or video segment rather than capturing and/or transmitting real-time data, a high-quality, well-lit image may be displayed on the outward-facing displays without distracting the wearer and while requiring less overall processing resources. Accordingly, it should be understood that references to “video” or the like herein encompass modification of an image or set of images, whether static or a short-duration animation or video capture segment, as discussed above. Likewise, references to modifying, changing, or otherwise altering or adjusting an image or images, transitioning between two images, and the like also encompass the foregoing. As described above, the outward-facing display106may be configured to display facial features of the wearer, thereby providing a more natural interaction between the wearer and other people.FIGS.4A-6Cshow various ways that physical features of a wearer may be displayed on the outward-facing display106. 
For example, a camera (and/or other sensors) in the HMD102may capture real-time images of the portion of the wearer's face that is behind the HMD102, which may then be displayed on the outward-facing display106with or without modification or manipulation. As another example, images (e.g., video or still images) of the wearer may be captured and stored for later display on the outward-facing display106. As noted above, these pre-captured images may be modified based on attributes of the wearer, where the attributes are detected by sensors, cameras, or the like. These visual outputs may allow the wearer to communicate via facial expressions, decrease an observer's feelings of alienation and separation, and generally improve the quality of interaction between the wearer and the other person. FIGS.4A-4Cshow how a portion of a wearer's face may be captured by a camera and displayed as a real-time still or video image on the outward-facing display106.FIGS.4A-4Ceach show the wearer100on the left with a particular expression: a neutral expression inFIG.4A, a surprised expression inFIG.4B, and an angry expression inFIG.4C. The right side of each figure shows what is shown on the outward-facing display106. As shown, the displayed images are unmodified images of the wearer's face. In some cases, the images may be modified in ways that do not change the fundamental content of the image, such as changes to lighting, color, contrast, hue, or the like. The portions of the wearer's face displayed on the outward-facing display106(shown on the right hand side of the figures) correspond to the portions of the wearer's face that are covered by the HMD102, including for example the wearer's eyes, eyebrows, and part of the wearer's nose and forehead. This allows the outward-facing display106to simulate a “transparent” HMD so that an observer essentially sees what is behind the HMD102and to convey highly expressive features that provide important non-verbal cues and information to observers. FIGS.5A-5Cshow how the HMD102may display a modified or embellished version of the wearer's face in which the content of the image is altered but the user's expressions are mapped or translated to a different image so that the modified or embellished version of the wearer's face is coordinated with the expressions of the wearer's face and/or eyes. In particular, the HMD102may use a camera (and/or other sensors) to capture an image of a portion of the wearer's face. The HMD102may then modify the image, in real-time, to have a different appearance, while maintaining the same or similar expressions of the wearer. For example,FIGS.5A-5Cconvert images of the wearer's expression, shown on the left side of the figure, to a partial animal face, while maintaining similar expressions, face and eye movements, and the like. WhileFIGS.5A-5Cdisplay the relevant portion of the wearer's face as that of a cat, this is merely one example. In other cases, the HMD102may display portions of the wearer's face as a different animal, a cartoon, a line drawing, an emoji, an alien or other fictional being, or the like. The images displayed on the outward-facing display106may generally correspond to the portions of the wearer's face that are covered by the HMD102, including for example the wearer's eyes, eyebrows, and part of the wearer's nose and forehead.
Physical features displayed on the outward-facing display, such as eyes, eyebrows, noses, and the like, may be positioned to generally correspond to the position of the wearer's physical features, thus providing a substantially seamless transition from the displayed images to the wearer's actual face. A captured image of the wearer's face may be mapped or translated to another being or object in any suitable way, thereby coordinating the outwardly displayed graphics with the wearer's actual eyes, face, expressions, and/or features. For example, in some cases, photographic manipulation is used to directly modify a captured image, such as by modifying textures, colors, shapes, angles, or overlaying other images such as eyes, glasses, eyebrows, fur, or the like. In some cases, facial and eye movements are analyzed and converted into motion vectors that are then used to modify a purely digital model of an object. For example, an eye-tracking system may capture and/or calculate vectors representing the speed and direction of motion of the wearer's eyes, and the HMD102may apply those vectors to the eyes of a manipulable digital model of a cat. Accordingly, the cat's eyes will appear to move at the same speed and in the same direction as the wearer's eyes. Similar techniques may be used to capture motion of other parts of the wearer's face (e.g., eyebrows, nose, forehead) and apply those motions to a digital model. The same or similar techniques may be used for mapping or translating detected attributes of a wearer to pre-captured images and/or models of the wearer. For example, detected attributes of the wearer (e.g., eye and/or facial movements) may be used to modify a pre-captured two- or three-dimensional image, model, or video of a wearer's face for display on an outward-facing display. FIGS.6A-6Cshow a hybrid display technique, in which some of a wearer's expressions or emotions cause the HMD102to display direct images of the wearer's face, while others cause the HMD102to display embellished or otherwise modified images representing the wearer's face. For example, as shown inFIG.6A, a resting or neutral expression may be displayed on the outward-facing display106as a real-time image of the wearer's face. As shown inFIG.6B, when the wearer100has a surprised expression, the HMD102may display cartoon eyes having a similar or analogous expression. Similarly,FIG.6Cshows the wearer100having an angry expression, resulting in the HMD102displaying cartoon eyes having a similarly angry expression. The cartoon images inFIGS.6B-6Cmay be still images, or they may be moving images that track, in real-time, the wearer's expressions (e.g., they may be manipulable digital models, as described above). Where they are still images, they may be displayed for a predetermined duration (e.g., 1 second, 2 seconds, 3 seconds), or they may be displayed as long as the wearer has the expression (or as long as the wearer100commands them to be displayed). The wearer's expressions may initiate the display of certain visual outputs such as the cartoon eyes described above with respect toFIGS.6B-6C. These expressions may be detected in any suitable way. For example, the expressions may be detected by analyzing video images of the user's face and/or eyes and comparing the images against known expressions and corresponding emotions. For example, an image analysis algorithm may associate raised eyebrows with an expression or emotion of surprise, or narrowed eyelids with an expression or emotion of anger.
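A non-limiting sketch of the two mechanisms just described, namely associating simple facial measurements with an expression label and applying measured eye-motion vectors to a manipulable digital model, is given below; the feature names, thresholds, and model class are illustrative assumptions rather than a prescribed implementation.

from dataclasses import dataclass

def classify_expression(brow_raise, lid_opening):
    """Inputs are normalized to [0, 1]; the thresholds are assumptions."""
    if brow_raise > 0.7:
        return "surprise"      # raised eyebrows suggest surprise
    if lid_opening < 0.3:
        return "anger"         # narrowed eyelids suggest anger
    return "neutral"

@dataclass
class ModelEye:
    x: float = 0.0             # pupil offset of the rendered (e.g., cat) eye
    y: float = 0.0

    def apply_motion(self, vx, vy, dt):
        # Move the model's pupil with the same speed and direction as
        # the wearer's eye, per the eye-tracking motion vectors.
        self.x += vx * dt
        self.y += vy * dt

eye = ModelEye()
eye.apply_motion(vx=0.3, vy=-0.1, dt=1 / 60)                 # one 60 Hz frame
print(classify_expression(brow_raise=0.9, lid_opening=0.6))  # -> surprise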
The HMD102may then select a corresponding cartoon (or other type of image, photograph, artwork, animal, or the like) to display on the outward-facing display106. As another example, the expressions may be separately controlled by the wearer. For example, the wearer100may signal to the HMD102, via any suitable input, what emotion the wearer100would like the HMD102to display on the outward-facing display. Thus, if the wearer100indicates that the wearer100is angry, the HMD102may display the eyes shown inFIG.6C. The HMD102may also modify or manipulate images of the wearer's face in other ways to produce a natural, lifelike appearance of the wearer's face on the outward-facing display. In particular, because the HMD102covers a portion of the wearer's face, that portion of the wearer's face will not be illuminated by the light in the real world environment. Accordingly, showing an unmodified, unaltered video of the wearer's face may not appear natural. For example, to even capture an image of the wearer's face behind the dark mask, the HMD102may use infrared light to illuminate the wearer's face without distracting the wearer or otherwise interfering with the wearer's virtual experience. Such lighting, however, may produce images dominated by unnatural looking hues or greyscale images that will not match or blend well with the portion of the wearer's face that is not covered by the HMD102. Accordingly, the HMD102may modify or manipulate the captured images to better match the rest of the wearer's face and provide a better experience for people interacting with the wearer. In some cases, the HMD102may apply one or more predetermined image manipulations or modifications to the captured images prior to displaying them on the outward-facing display106. Such image manipulations may include changing a hue, contrast, brightness, or saturation of the image (or any other suitable parameter). In some cases, where captured images are monochrome or otherwise do not contain a full color range, they may be colorized. Other filters, edits, or other manipulations are also possible. Such manipulations or modifications may be applied to images of the wearer's face regardless of the surrounding real world environment. Further, as described in further detail with respect toFIGS.14A-14D, images of the wearer's face may be displayed on the outward-facing display such that the displayed image appears continuous with the wearer's face. More particularly, because the outward-facing display106may be a nonzero distance (e.g., several inches) in front of the wearer's face, displayed facial features may appear to be “in front of” the wearer's actual face, rather than continuous with the wearer's face. Accordingly, the displayed images may be modified to appear “recessed” relative to the outward-facing display, providing the illusion that the outward-facing display106is transparent and that external observers are actually seeing the wearer's face through the outward-facing display106. FIGS.7A-7Cshow another example of a hybrid display technique. In particular, similar toFIGS.6A-6C, some of a wearer's expressions or emotions cause the HMD102to display direct images of the wearer's face, while others cause the HMD102to display embellished or otherwise modified images representing the wearer's face. WhereasFIGS.6B-6Cshow the wearer's face being replaced with cartoon eyes,FIGS.7B-7Cshow the wearer's face having exaggerated versions of the wearer's real expression. 
For example, as shown inFIG.7A, a resting or neutral expression may be displayed on the outward-facing display106as a real-time image of the wearer's face. As shown inFIG.7B, when the wearer100has a surprised expression, which may be characterized by widening of the eyes and raising of the eyebrows, the HMD102may display the wearer's face but with eyes that are even wider and eyebrows that are even higher than those of the wearer's actual expression. Similarly,FIG.7Cshows the wearer100having an angry expression, characterized by narrowing of the eyes and lowering of the eyebrows. Detection of this expression or emotion (using any suitable detection technique, such as image analysis) may result in the HMD102displaying eyes having an exaggerated angry expression, including, for example, narrower eyes, more sharply angled eyebrows, and a flame or shimmering effect on the user's pupils. The images inFIGS.7B-7Cmay be still images, or they may be moving images that track, in real-time, the wearer's expressions (e.g., they may be manipulable digital models or real-time modifications of captured images of the wearer, as described above). Where they are still images, they may be displayed for a predetermined duration (e.g., 1 second, 2 seconds, 3 seconds), or they may be displayed as long as the wearer has the expression (or as long as the wearer100commands them to be displayed). The images inFIGS.7B-7Cmay be produced by modifying captured images of the wearer. For example, the HMD102may identify regions of a captured image corresponding to particular facial features, such as eyebrows, pupils, iris, sclera, eyelids, forehead, and the like. Those regions (as well as surrounding or nearby areas of the image) may be modified to produce the desired image for display on the outward-facing display106. For example, the angle of the eyebrows may be changed (e.g., producing or exaggerating a "V" shape) to exaggerate a detected expression of anger. As another example, the eyes and/or pupils may be enlarged to exaggerate an expression of surprise or excitement. As further examples, the borders of a pupil may be caused to shimmer or appear flame-like to exaggerate an expression of anger, or they may be changed to a heart-shape to exaggerate an expression of love. Other modifications are also possible. In some cases, what is displayed on the outward-facing display106may be more symbolic (e.g., less lifelike or realistic) than the images described with respect toFIGS.4A-7C. For example,FIGS.8A-8Cshow the HMD102displaying shapes that are representative or suggestive of eyes and that convey information, but that are not actually images of human or animal eyes. Displaying shapes as shown inFIGS.8A-8Cmay provide several benefits, such as lower screen resolution requirements, allowing greater anonymity for the wearer while also increasing the ability to interact and the quality of interactions between the wearer and other people. With reference toFIG.8A, the HMD102may display shapes that represent the wearer's eyes and indicate, for example, where the wearer100is actually looking. The visual output800shows the wearer's gaze in one direction, while the visual output802shows the wearer's gaze in a different direction. Despite the highly obfuscated or symbolic appearance of the eyes, showing gaze direction may convey important information to observers and increase the feeling of interaction and connection between the wearer100and other people.
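One way to drive such symbolic eyes is to offset a pair of pupils by the wearer's gaze vector. The sketch below assumes a normalized gaze vector is available from the HMD's eye-tracking sensors; the coordinate values and helper name are hypothetical.

    # Hypothetical mapping from a normalized gaze vector (gx, gy in -1..1)
    # to pixel positions for two symbolic pupils on the outward display.
    def pupil_positions(gx: float, gy: float,
                        eye_centers=((60, 40), (140, 40)),
                        max_offset: int = 12):
        # Shift each pupil from its eye center in the gaze direction.
        return [(int(cx + gx * max_offset), int(cy + gy * max_offset))
                for cx, cy in eye_centers]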
As described herein, the wearer's gaze direction may be determined by sensors in the HMD102. FIG.8Bshows an example visual output804that may be used to indicate when the wearer100is directly focused on an observer or other person. As shown, the wearer's eyes may be represented by circles. This particular visual output804may also be used to indicate to observers that the wearer100is able to see the real world environment, rather than just a virtual world. The visual output on the outward-facing display106may change from one state to another based on any suitable information or status. For example, a gaze detection sensor or camera may detect the wearer's gaze and show the visual output800,802when the wearer100is actively looking around the real-world environment, and show the visual output804when the wearer is focused on a single person or object in the real-world environment. FIG.8Cshows an example visual output806that may be used to convey an emotion via shapes that represent eyes but are not images of the wearer's eyes. As shown, the conveyed emotion may be anger, though other shapes representing other emotions are also contemplated, such as hearts to indicate love, question marks to indicate confusion, oversized circles to indicate surprise, circles with teardrops to indicate sadness, and the like. Further, the emotion to be conveyed by the representative or symbolic eyes may be determined in any suitable way. For example, the wearer100may actively select what emotion is to be conveyed (e.g., by selecting from a list). As another example, the HMD102may use cameras, heart rate monitors, temperature sensors, skin-flush sensors, biometric sensors, or any other suitable technique, sensor, or system to determine the wearer's emotion for selecting a corresponding visual output. In some cases, the outward-facing display106displays visual output that is not suggestive of eyes or other facial features, yet still conveys information to observers and increases the quality and the content of interactions between a wearer and an observer. Such visual outputs may include, for example, symbols or images indicative or suggestive of emotions or moods, informational displays (e.g., current weather, a description of the content the wearer is viewing, the wearer's calendar), or patterns that indicate operational states or a status of the wearer or the wearer's virtual environment.FIGS.9A-9Dshow a wearer100with an HMD102displaying symbols, images, or other information that is not indicative of the wearer's eyes or face, yet still conveys information and increases the quality of interactions between the wearer and an observer. FIG.9A, for example, shows the HMD102showing a visual output900that includes a pattern of hearts. The visual output900may be displayed in response to any suitable input or detected state of the wearer. For example, the visual output900may be displayed when the wearer100indicates (or the HMD102detects) that the wearer100is feeling happy or experiencing emotions of love or affection. The HMD102may detect such emotions in any suitable way, including facial image analysis, biometric sensors, or the like. As another example, the visual output900may be displayed when the wearer100indicates (or the HMD102detects) that a nearby observer is recognized (e.g., by comparison to a list of known individuals) and has been previously tagged as a family member or person for whom the wearer feels affection.
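A simple lookup table is one way to implement this kind of emotion-to-output selection. In the sketch below, the emotion labels and output identifiers are illustrative placeholders; an actual implementation could derive the labels from the sensing techniques described above.

    # Hypothetical table mapping a detected or wearer-selected emotion to a
    # symbolic visual output for the outward-facing display.
    EMOTION_OUTPUTS = {
        "affection": "hearts_pattern",    # cf. visual output 900
        "anger": "flames_video",          # cf. visual output 902
        "confusion": "question_marks",
        "sadness": "tear_drops",
    }

    def select_visual_output(emotion: str,
                             default: str = "neutral_pattern") -> str:
        return EMOTION_OUTPUTS.get(emotion, default)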
The displayed image of hearts is merely exemplary, and other images may be used to communicate affection (or any other emotion). For example, instead of hearts, the HMD102may display smiley faces, flowers, or the like. FIG.9Bshows the HMD102showing a visual output902that includes an image (e.g., a video or still image) of flames. Similar to the discussion ofFIG.9A, the visual output902may be displayed in response to any suitable input or detected state of the wearer, such as when the wearer100indicates (or the HMD102detects via facial image analysis, biometric sensors, etc.) that the wearer100is feeling angry. In other cases, the visual output902may show other images in response to detected or selected emotions of anger, such as lightning bolts, frowning faces, exclamation marks, "X" symbols, or the like. FIGS.9A and9Bshow example visual outputs that may be presented to reflect or indicate the wearer's emotions or mental state (or to indicate an emotion that has been selected by the user, regardless of whether the user is actually experiencing a particular emotion or mood). Other emotions, moods, and mental states may also be detected by the HMD102and/or selected by the wearer100, and corresponding visual outputs may be presented for those emotions, moods, or mental states. For example, question marks may be presented for confusion, tear drops may be presented for sadness, or the like. FIG.9Cshows an example visual output904that is an abstract image, such as a pattern. A pattern or other abstract image may be used to convey that a user does not wish to be disturbed, or merely as a default image for the HMD102. In some cases, different patterns may be used to indicate different states or modes of the HMD102or conditions of the wearer. For example, a checkerboard pattern may indicate that the wearer does not want to be disturbed or is fully engaged in a virtual environment, while a pattern of circles or swirls may indicate that the wearer100is viewing the real-world environment or is open to interactions with outside observers. In some cases, patterns or other abstract images may become part of a de facto communication protocol or communication convention for outward-facing displays for HMDs. For example, as HMDs are increasingly used, people will become familiar with certain patterns and abstract images having certain meanings. Thus, even such abstracted images may increase the quality of interaction between wearers of HMDs and the outside world. FIG.9Dshows an example of an informational visual output906. As shown, the visual output906shows weather information, though other information may be displayed in addition to or instead of weather information. Informational visual output906may be displayed when the HMD102is in a particular state of operation (e.g., displaying a completely virtual environment to the wearer), or based on a particular environment or status of the wearer100. For example, as described with respect toFIGS.16A-16C, an HMD102may be configured to present different groups of visual outputs based on different environments or modes of the HMD102(e.g., "work," "home," or "public" modes, etc.). An informational visual output906, for example, may be presented on the outward-facing display when the wearer is in a "work" environment and is fully focused on a virtual environment. FIG.9Eshows an example visual output908that mirrors what is shown on the user-facing display(s) of the HMD102.
For example, if the user-facing display(s) of the HMD102are showing a virtual environment of a tropical beach scene, that same scene may be presented on the outward-facing display of the HMD102. Displaying on the outward-facing display what the wearer is seeing may also increase the quality of interactions between a wearer and an observer, as the observer will gain additional insight into what the wearer is seeing. This additional contextual information can help the observer determine when and whether it is appropriate to interact with the wearer, what type of mood the wearer may be in, what type of work or leisure activity the wearer is engaged in, or the like. In some cases, the user-facing displays may be configured to present stereoscopic images to produce a three-dimensional view for the wearer. In such cases, the HMD102may modify or alter the user-facing images so that they are viewable via a two-dimensional display. For example, stereoscopic images may be combined to produce a single image, or only one of the images used to form the stereoscopic image may be displayed. FIG.9Fshows an example visual output910that displays text. As shown, the displayed text includes the words "do not disturb," though any other message or text may be displayed. For example, the text may indicate that the wearer is fully immersed in a virtual environment (e.g., "please wait—I am in a virtual environment"), or that the wearer is currently viewing at least part of the real-world environment (e.g., "external view mode is active"). Any other suitable text relating to the wearer's state (e.g., mood, emotion), a mode or state of operation of the HMD102, the type of environment being presented to the wearer (e.g., full virtual reality, augmented reality, mixed reality), or the like, may be presented on the outward-facing display106. FIGS.10A-10Dshow examples of symbolic visual outputs that convey information about the wearer100and/or the wearer's state of interaction with virtual and real-world environments. The visual outputs shown inFIGS.10A-10Dmay be part of a communication convention for outward-facing displays to convey a set of concepts that may help improve interactions between the wearer100and external observers. In particular, the visual outputs shown in these figures may be intended to convey that the wearer100is engaged in the virtual world (and thus is not aware of real-world surroundings), that the wearer100does not wish to be disturbed or to interact with external observers, that the user is focused on or is able to see the real-world environment, and/or that the wearer100is currently recording video of the real-world environment. For example,FIG.10Ashows the HMD102displaying a visual output1000of a triangle, which may resemble a "playback" symbol. This visual output may indicate that the wearer100is fully engaged in a virtual environment and/or that the real-world environment is not being presented to the wearer.FIG.10Bshows the HMD102displaying a visual output1002with two lines or shapes that are suggestive of closed eyes.
The visual output1002may convey that the HMD102is in a "do not disturb" state or the wearer100otherwise does not wish to be interrupted or engage with the real-world environment.FIG.10Cshows the HMD102displaying circles that are suggestive of open eyes (similar to the visual output804, described above), indicating that the wearer100is viewing the real-world environment and/or is open to engage with individuals in the real-world environment.FIG.10Dshows the HMD102displaying a visual output1006of circles with inset icons or symbols representing video cameras, indicating to outside observers that the HMD102is recording images (e.g., video or still images) of the real-world environment. Other symbols or colors may instead be used to convey that the HMD102is recording, such as red circles or red circles with a contrasting peripheral border. As described herein, an outward-facing display may have a resolution capable of displaying actual images, similar to a display of a smartphone or a computer monitor. In some cases, an outward-facing display may be a low-resolution display, such as an array of light emitting diodes (LEDs). In such cases, the HMD102may use the array to produce shapes and patterns of lights to convey information to increase the quality of interactions between the wearer100and outside observers. For example,FIG.11Ashows the HMD102with light arrays1100that each include nine light sources arranged in a pattern. (Numbers and patterns of light sources other than those shown inFIG.11Amay also be used.) The light sources in the light arrays1100may be LEDs, incandescent light sources, or any other suitable light source. FIG.11Bshows the light arrays1100displaying a visual output1102suggestive of open eyes. This pattern may be displayed when the wearer100is viewing or is able to view the real-world environment and/or is willing to engage with others.FIG.11Cshows the light arrays1100displaying a visual output1104suggestive of closed eyes, which may be displayed when the HMD102is in a "do not disturb" state or the wearer100otherwise does not wish to be interrupted or engage with the real-world environment.FIG.11Dshows the light arrays1100displaying a visual output1106having the shape of two "X" symbols, which may indicate that the wearer100is fully engaged in a virtual environment and/or that the real-world environment is not being presented to the wearer. Other patterns indicative of other states of the HMD102or the wearer100may also be displayed using the light arrays1100. For example, the light arrays1100may display patterns that suggest the emotion or mood of the wearer, such as patterns resembling or suggesting the shape of a smile or frown to indicate happiness or sadness. As another example, the light arrays1100may be operated as a scrolling marquee to display text output. The patterns shown on the light arrays1100may be or become part of a communication convention, where large groups of people become familiar with a set of common patterns and their meanings. In this way, the patterns may become well accepted and understood ways to convey information between wearers of HMDs and external observers. FIGS.12A-12Fshow additional examples of how a level of engagement of a wearer with the real-world environment may be conveyed using different types of visual outputs on an outward-facing display106.
For example, the particular visual output that is displayed on an outward-facing display may be indicative or suggestive of what the wearer is viewing, the level of virtualization and/or immersion of the wearer's environment, or the like. With reference toFIGS.12A and12B, when a wearer is fully engaged in a virtual environment1202, such as a video game (as shown inFIG.12A) or a movie or video, the outward-facing display may show a visual output1204including a logo or title (as shown inFIG.12B) that conveys information about what the wearer is engaged with, as well as conveying the fact that the wearer is fully engaged with the virtual environment (and thus may not be aware of the real-world environment). WhileFIG.12Bshows a logo representative of the video game, other symbolic outputs representative of the content being displayed to the wearer may be displayed, including text (such as a movie title), stylized text, an image of a character in a movie, a movie poster or other promotional images, etc. The HMD102may display a different visual output when the HMD102is operating in a pass-through or "full reality" mode, such as when the HMD102is showing images of the real-world environment to the wearer100. For example,FIG.12Cshows an example internal image1206corresponding to a live or real-time image of the real-world environment around the wearer100. When the wearer100is viewing the image of the real-world environment as shown inFIG.12C, the HMD102may display a visual output1208of the wearer's eyes (e.g., images of the wearer's actual eyes, as described above), as shown inFIG.12D. Thus, the direct view of the wearer's eyes indicates to outside observers that the wearer is also directly viewing the real-world environment. The real-time, un-virtualized, two-way display of real images on both the inward- and outward-facing displays may enhance the quality of interactions by minimizing the effect of the HMD102and allowing a more natural interaction and communication style. Moreover, the fact that the image on the outward-facing display is not virtualized may suggest to observers that the wearer's internal view is also not a virtual- or mixed-reality environment. The HMD102may display yet another visual output when the HMD102is operating in a mixed reality or augmented reality mode, as illustrated inFIGS.12E-12F. For example,FIG.12Eshows an example mixed reality visual output1212(displayed to the wearer100) in which a real-world element1210(e.g., a person) is incorporated into a virtual environment. In order to convey to observers that the wearer100can see them or some other aspect of the real-world environment, the HMD102may display, on the outward-facing display, the wearer's eyes embellished with an additional symbol or indicator that the wearer's view is somehow modified or mediated. For example, as shown inFIG.12F, a visual output1214may include real-time images of part of the wearer's face with circles around the eyes. Increasing the amount of information that observers have about what the wearer is viewing may help to ease awkwardness and generally increase the comfort level of outside observers, as observers may feel more comfortable knowing when and whether the wearer is being exposed to content in addition to the real-world environment.
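The mode-dependent behavior described above can be summarized as a simple dispatch from the HMD's presentation mode to outward-facing content. The mode names and content identifiers in this sketch are assumptions for illustration, not terms defined by the embodiments.

    # Hypothetical selection of outward-facing content from the HMD's
    # current presentation mode, in the spirit of FIGS. 12A-12F.
    def outward_content(mode: str) -> str:
        if mode == "virtual":       # fully immersed: show a content logo
            return "content_logo"
        if mode == "pass_through":  # viewing the real world: show real eyes
            return "live_eye_image"
        if mode == "mixed":         # mixed/augmented: eyes plus an indicator
            return "live_eye_image_with_indicator"
        return "blank"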
While many of the examples herein use graphical displays (e.g., either high or low resolution displays capable of producing symbolic graphical or visual output) to display images or shapes to convey information about the state of a wearer's engagement with the real-world environment, information may also be conveyed using other visual outputs. For example, one or more light sources associated with (e.g., attached to) an HMD may shine red when the HMD102is in a virtual reality mode, yellow when the wearer is in a mixed or augmented reality mode, and green when the wearer is in a pass-through mode. As another example, a light source may shine red when the wearer100is angry or otherwise in a bad mood, and green when the wearer100is happy or otherwise in a good mood. Other techniques for visually conveying the state of the wearer's real-world engagement, mood, emotional state, or any other suitable information, are also possible. As described herein, where an HMD102captures images of the wearer100to display on an outward-facing display (as described with reference toFIGS.4A-4C and7A-7C, for example), the captured images may also or instead be modified, at least in part, based on aspects of the surrounding real world environment. For example,FIGS.13A-13Bshow how the HMD102may modify captured images based at least partially on the lighting conditions of the real world environment.FIG.13Ashows the wearer100wearing the HMD102in diffuse lighting conditions, such as may occur on a cloudy day or in an environment with substantially omnidirectional lighting (e.g., where substantially no shadows are cast on the wearer's face). The HMD102may use sensors1300(e.g., photosensors, cameras) to determine an intensity, color, or primary direction (and/or other properties) of the ambient light, and may modify captured images in view of those properties to produce an image for display that substantially matches the uncovered portion of the wearer's face.FIG.13Aillustrates how the image on the outward-facing display106may substantially match the lighting, shading, color, and overall look of the uncovered portion of the wearer's face. FIG.13Bshows the wearer100wearing the HMD102in an environment with a highly directional light source.FIG.13Brepresents this environment as a sunny day, but such lighting conditions may be produced in many ways, such as by indoor lights. Such lighting conditions may cast shadows on parts of the wearer's face. Accordingly, in order to produce a more natural image on the outward-facing display106, the HMD102may use sensors1300(e.g., photosensors, cameras, or the like) to determine an intensity, color, and/or primary direction (as well as other properties) of the ambient light, and may modify captured images to add shadows, highlights, and other lighting (or other) modifications to the displayed image. Accordingly, as shown inFIG.13B, the image of the wearer's face displayed on the outward-facing display106substantially matches and blends with the uncovered portion of the wearer's face, producing a natural appearance. The HMD102may manipulate a displayed image based on the external environment in other ways as well. For example, as described with respect toFIGS.14A-14D, the HMD102may modify how an image is displayed on the outward-facing display106based on where an observer1400is relative to the wearer100.
In particular, the HMD102may detect the location or position of the observer1400relative to the wearer100(and/or relative to the direction that the HMD102is facing), and may alter the image so that the displayed portion of the wearer's face appears to follow the contours of the wearer's face (e.g., instead of appearing a few inches in front of the wearer's face due to the positioning of the outward-facing display106). The HMD102may determine the position of the observer1400relative to the wearer100(and/or relative to the direction or orientation of the HMD102) in any suitable way and using any suitable sensors, such as LIDAR, radar, ultrasonic sensors, cameras, accelerometers, or any other suitable sensor. FIG.14Bshows how an image may be displayed on the outward-facing display106when the wearer100is directly in front of the observer1400(e.g., where the viewing angle of the observer relative to the HMD102is about 0 degrees), as shown inFIG.14A. In particular, as shown inFIG.14B, an image1402is presented on the outward-facing display106so that the image1402blends with an exposed portion of the wearer's face. Under these viewing conditions (e.g., a 0 degree or head-on viewing angle), the HMD102may not need to manipulate a captured image to account for the viewing angle. In particular, the image of the wearer's face that is captured by the HMD102may be from a straight-on viewing angle, and thus angle adjustments may not be necessary. In some cases, the HMD102may change the size of the captured image prior to displaying the image1402on the outward-facing display106so that the displayed facial features are aligned with the wearer's actual facial features and correctly positioned (e.g., so that the displayed eyes do not appear larger or smaller than, or out of place relative to, the wearer's actual eyes). FIG.14Dshows how an image may be displayed on the outward-facing display106when the observer1400is viewing the outward-facing display106at an angle1406. As noted above, the HMD102may determine the viewing angle of the observer1400relative to the outward-facing display106using LIDAR, radar, ultrasonic sensors, cameras, accelerometers, and/or any other suitable sensing systems. Once the viewing angle is determined, the HMD102may process a captured image of the wearer's face based on the viewing angle to produce a modified image1408for display on the outward-facing display106. The modified image1408may be skewed, stretched, rotated, or otherwise manipulated to produce the illusion that the outward-facing display106is transparent and that the observer1400is actually viewing the wearer's face through the HMD102. If the image1408is not manipulated in this way, the parallax effect of displaying the wearer's eyes on the front of the HMD102may produce a distracting, unnatural appearance. The HMD102may periodically update the viewing angle of the observer1400relative to the outward-facing display106and update the modified image1408in accordance with the updated positional information, thus providing real-time positional tracking and image updating, further increasing the natural experience and appearance for the observer1400.
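The angle-dependent correction might be approximated with a simple image warp, as in the sketch below. The shear-based geometry and gain value are illustrative assumptions; an actual implementation would derive the warp from the measured observer position and the display's placement relative to the face.

    # Hypothetical viewing-angle compensation: shear a captured face image
    # so that, from the observer's measured angle, the displayed features
    # appear recessed behind the display rather than floating in front.
    import math

    import cv2
    import numpy as np

    def compensate_for_viewing_angle(face_img: np.ndarray,
                                     viewing_angle_deg: float,
                                     shear_gain: float = 0.5) -> np.ndarray:
        h, w = face_img.shape[:2]
        shear = math.tan(math.radians(viewing_angle_deg)) * shear_gain
        # 2x3 affine matrix: horizontal shear about the vertical center.
        m = np.float32([[1.0, shear, -shear * h / 2.0],
                        [0.0, 1.0, 0.0]])
        return cv2.warpAffine(face_img, m, (w, h))

At a 0-degree (head-on) viewing angle the shear term vanishes and the image passes through unchanged, consistent with the head-on case described above.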
The HMD102may also be configured to integrate aspects of the external, real world environment into the virtual environment being presented to the wearer.FIGS.15A-15Dshow examples of various ways in which images of the real world environment, and in particular a person in the real world environment, may be integrated into the virtual environment presented to the wearer100by the HMD102. For example,FIG.15Ashows a person1500with whom a wearer of the HMD102is interacting, andFIGS.15B-15Dshow examples of a virtual environment1504being presented to the wearer. At the outset, the HMD102may capture an image of the person1500using a camera or other sensor or imaging device. As shown inFIG.15B, the HMD102may present the captured image as a live video feed1502in the virtual environment1504. The live video feed1502may be displayed in a discrete window1506that overlays the virtual environment1504and that may visually distinguish the video feed1502from the displayed virtual environment1504. The live video feed1502may be movable and/or resizable within the virtual environment1504by the wearer. For example, the HMD102may allow the wearer to move the video feed1502to a less distracting location or size, such as in a corner of the virtual environment1504, thus allowing the wearer to customize the level to which he or she is interacting with each of the virtual environment and the real world environment. In some cases, the HMD102may modify the image captured of the person1500. For example, the HMD102may add features or aspects of the person1500that are not present in the captured image. For example, as shown inFIG.15C, the HMD102may capture an image of only part of the person's1500body (e.g., the person's head and torso) and integrate that part with a lower half1512of the person's body to produce an image1508corresponding to the person1500. The integration of the captured image with the additional content may be performed in real-time or near real-time, so that the displayed image1508appears as a live, real-time video feed. Other aspects of the captured image may remain unchanged, such as the background behind the person1500, and the background may remain distinct from the virtual environment1504to maintain a visual separation between the virtual environment1504and the image1508of the real world environment. The additional body portions (or other content that is integrated with the captured image) may be integrated with the captured image of the person1500in any suitable way. For example, the HMD102may store or receive pre-captured or pre-rendered images (either still or video images) to which captured images or portions thereof may be added, superimposed, or otherwise blended to produce the image1508. The image1508may be displayed in a discrete window1510, as described above. In some cases, the HMD102may more fully integrate the image of the person1500into the virtual environment1504. For example, instead of displaying a live video feed (FIG.15B) or a modified live video feed (FIG.15C), the HMD102may partially or fully virtualize the person1500and integrate the virtualized image into the virtual environment1504. For example, the HMD102may extract information about the person1500from the captured image, such as the person's facial features, movements, gaze direction, eye motions, expressions, or the like. The HMD102may then use this information to control a manipulable model of a person, animal, or other virtual object that can be integrated in the virtual environment.
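A sketch of this control flow appears below: per-frame attributes extracted from captured images are copied onto the parameters of a manipulable model. The attribute names and the avatar interface are hypothetical.

    # Hypothetical per-frame update of a manipulable avatar model from
    # attributes extracted from captured images of the person.
    from dataclasses import dataclass

    @dataclass
    class AvatarModel:
        gaze: tuple = (0.0, 0.0)   # normalized gaze direction
        mouth_open: float = 0.0    # 0 = closed, 1 = fully open
        brow_raise: float = 0.0    # 0 = resting, 1 = fully raised

    def drive_avatar(avatar: AvatarModel, observed: dict) -> AvatarModel:
        # Copy whichever attributes were observed this frame; keep the
        # previous values for anything the extractor did not report.
        avatar.gaze = observed.get("gaze", avatar.gaze)
        avatar.mouth_open = observed.get("mouth_open", avatar.mouth_open)
        avatar.brow_raise = observed.get("brow_raise", avatar.brow_raise)
        return avatar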
This technique may allow the person1500to appear as various different people or creatures. In one particular example, if a wearer is playing a video game in which the characters are fantastical creatures, such as elves and wizards, a person1500in the real world can be virtualized as an elf. In this way, the wearer can interact with the virtualized person within the context of the video game while maintaining immersion in the virtual world, and people in the real world environment are afforded a more natural, unmediated communication experience with the wearer. While the HMD can display many different types of information and images on an outward-facing display, not all information or images will necessarily be suitable or appropriate for all situations in which an HMD may be worn. For example, when a wearer is at home and is "internally focused" (e.g., engaged with a virtual environment), it may be acceptable for the outward-facing display to mirror what the wearer is seeing in the HMD. At work, where confidentiality of working documents or other materials may be important, it may not be suitable to mirror the inward-facing displays as a default setting. Accordingly, the HMD102may have different modes in which different types of visual outputs are displayed for different statuses of the wearer or HMD. For example,FIGS.16A-16Cshow three example modes (home, work, and public) and what may be displayed on an outward-facing display in each mode for three example statuses (internal focus, external engagement, and do not disturb). FIG.16Ashows the types of content that may be presented on an outward-facing display when a user is in a "home" mode. The particular mode of the HMD may be selected or determined in any suitable way. For example, a wearer may select a mode manually, or the HMD may determine a mode automatically, for example, using a location of the HMD, a date/time, a calendar entry associated with the wearer, a set of preset conditions established by the wearer, available communications networks, or the like. When the HMD is in a "home" mode and the wearer is internally focused, the outward-facing display may show a visual output1600that mirrors the wearer's virtual or internal environment (as represented in output1600by a gameplay scene). When the wearer is in the "home" mode and externally focused, the display may show a visual output1602including images of the wearer's actual eyes. When the wearer is in the "home" mode and is in a "do not disturb" condition, the display may show a visual output1604of a pattern, such as a checkerboard or screen pattern. These or other suitable visual outputs for the "home" mode may reflect the more open and laid-back atmosphere that wearers may encounter when at their home. As shown inFIG.16B, when the HMD is in a "work" mode and the wearer is internally focused, the outward-facing display may show a visual output1606that includes information (e.g., the weather, the wearer's upcoming appointments or open appointment slots, etc.), but does not mirror the wearer's virtual or internal environment. This may help maintain confidentiality while also providing useful information to coworkers or other observers. When the wearer is in the "work" mode and externally focused, the display may show a visual output1608including images of the wearer's actual eyes, which may be useful in work environments when personal recognition is helpful and virtualized images may be inappropriate.
When the wearer is in the "work" mode and is in a "do not disturb" condition, the display may show a visual output1610that includes direct words such as "do not disturb." These or other suitable visual outputs for the "work" mode may reflect the more professional atmosphere that wearers may encounter when at their work. FIG.16Cshows the types of content that may be presented on an outward-facing display when a user is in a "public" mode, such as when a wearer is in a public place or otherwise may wish to share less information about themselves or their virtual environment. For example, when the HMD is in a "public" mode and the wearer is internally focused, the outward-facing display may show a visual output1612that is blank (e.g., no discernable visual output or a deactivated display). When the wearer is in the "public" mode and externally focused, the display may show a visual output1614including digital representations of eyes. This type of visual output may indicate to others that the wearer can perceive them and/or the environment, while also hiding more personal details of the wearer such as the wearer's actual appearance, gaze direction, facial expressions, and the like. When the wearer is in the "public" mode and is in a "do not disturb" condition, the display may show a visual output1616that includes images or symbols that quickly indicate that the user does not wish to be disturbed, such as a "prohibition sign." The modes, statuses, and corresponding types of visual outputs shown and described with respect toFIGS.16A-16Care merely examples. More or fewer modes may be used, and each mode may have more or fewer statuses associated therewith. Further, statuses may be related to a wearer's mood or emotional state rather than level of engagement with the real-world environment, and other types of information or visual outputs may be used for any given status. FIGS.17A-17Bshow details of the physical structure and components of the HMD102. The components shown in these figures are merely examples of components and configurations that may be used, and are meant to illustrate the functions of the various components rather than any particular shape, location, size, integration, or other physical characteristic of the components. FIG.17Ashows a back or internal view of the HMD102. The HMD102includes a housing1700that covers at least a wearer's eyes, and possibly other portions of the wearer's face. A strap104may be coupled to the housing1700to attach the HMD102to a wearer's head. The HMD102includes user-facing displays1702,1704that present visual information to the wearer. The user-facing displays1702,1704may be flat or curved display panels (e.g., LED or OLED screens) on which an at least partially virtual environment may be displayed. While shown as two separate components, the user-facing displays1702,1704may be a single component on which two different (though related) images may be presented. The different images may be configured to simulate a three-dimensional environment for the wearer. The HMD102may also include lenses or other optical components in front of the user-facing displays1702,1704(e.g., between the displays and the wearer's eyes) to provide optical characteristics that result in the simulation of a three-dimensional environment.
While the user-facing displays1702,1704are shown as flat screens, other types of displays may be used instead of or in addition to the flat screens, such as retinal projectors, optical collimators (similar to a heads-up-display), or the like. Further, the user-facing displays1702,1704may be integrated with the HMD102in any suitable way. For example, the user-facing displays1702,1704may be one or more integral components that are built into a housing of the HMD102. As another example, the user-facing displays1702,1704may be one or more removable components that can be attached to and/or removed from the HMD102by a consumer. In the latter example, the user-facing displays1702,1704may be a display accessory (e.g., an accessory that is intended only as an add-on component to the HMD102), or they may be an electronic device that provides its own separate functionality (e.g., a smartphone or tablet or other suitable device that can be attached and/or connected to the HMD102). Where the user-facing displays1702,1704are a single component and/or include only a single, contiguous display, the display may be operated to produce two distinct visual outputs. For example, a single display may produce a first visual output to be presented to one eye, and a second visual output to be presented to another eye. The HMD102may also include wearer-facing sensor arrays1706. The wearer-facing sensor arrays1706may include cameras, eye-tracking sensors, biometric sensors (e.g., heart rate, blood oxygen, respiration rate, perspiration rate), optical sensors, motion sensors, presence sensors (e.g., to detect whether the HMD102is currently being worn), or the like. For example, the wearer-facing sensor arrays1706may include a wearer-facing camera to capture images of the wearer to be displayed (either modified or unmodified) on an outward-facing display106(FIG.1) of the HMD102, an eye tracking sensor to detect eye movements and gaze direction, and a motion tracking sensor to detect facial movements. More or fewer sensors may be included in the sensor arrays1706. FIG.17Bshows a front or exterior view of the HMD102. As described in detail herein, the HMD102includes an outward-facing display106positioned within the housing1700. The outward-facing display106may be any suitable type of display, including a liquid-crystal display (LCD), organic light emitting diode (OLED) display, LED display, or the like. The outward-facing display106may be configured to display the images or videos described herein, thus providing a more interactive and natural communication experience between a wearer and individuals in the real-world environment. The outward-facing display106may be a display other than a binary indicator, such as a light that indicates whether the device is on or off. The outward-facing display106may be integrated with the HMD102in any suitable way. For example, the outward-facing display106may be an integral component that is built into a housing of the HMD102. As another example, the outward-facing display106may be a removable component that can be attached to and/or removed from the HMD102by a consumer. In the latter example, the outward-facing display106may be a display accessory (e.g., an accessory that is intended only as an add-on component to the HMD102), or it may be an electronic device that provides its own separate functionality (e.g., a smartphone or tablet or other suitable device that can be attached and/or connected to the HMD102).
The outward-facing display106may be a high-resolution display that is capable of rendering realistic images, or a low-resolution display that is not capable of rendering realistic images (e.g., photographs), but is capable of rendering non-realistic symbolic graphical outputs (e.g., patterns and shapes). The outward-facing display106may also be covered by a protective cover, such as a glass, plastic, polycarbonate, sapphire, or other transparent material. Also, while the outward-facing display106is often shown and described herein as a single display component, it may include multiple different displays, such as two or more displays in a side-by-side arrangement, or two or more displays in an over-under arrangement. For example, an over-under arrangement of outward-facing displays may include one display extending horizontally from a first side to a second side of the HMD102(which may roughly correspond to the location of a wearer's eyebrows), and a second display below the first display and extending horizontally from the first to the second side of the HMD102(which may roughly correspond to the location of the wearer's eyes). The first and second displays may display different subject matter. For example, the second display may present images of the wearer's eyes, while the first display may present textual information, such as a description of the wearer's virtual environment, a mode of operation of the HMD102, a preference of the wearer (e.g., do not disturb, open to interaction, etc.), or other information such as stock values, weather information, calendar information, etc. The HMD102may also include outward-facing sensor arrays108. The outward-facing sensor arrays108may include sensors that capture information about the real-world environment that may be used to improve the experience of using the HMD102for both the wearer and observers. For example, the outward-facing sensor arrays108may include an outward-facing camera that captures images of people that are then displayed (with or without modification) to a wearer, thereby allowing the wearer to perceive the real-world environment without removing the HMD102. The outward-facing sensor arrays108may also include object detection sensors, such as LIDAR, radar, or ultrasonic sensors, that can be used to determine the viewing angle of an observer with respect to the outward-facing display106. As described, the HMD102may use this information to produce images that seamlessly blend with the wearer's physical features, thus providing a more natural appearance of the wearer to an observer. Other sensors included in the outward-facing sensor arrays108may include, for example, photo sensors (e.g., for sensing color and directionality of ambient light), motion-tracking or motion-capture sensors (e.g., for sensing the movements of observers for applying to manipulable models or avatars in a virtual environment), and the like. FIGS.18A-18Bshow another example HMD1800that may be used with any of the systems and techniques described herein. In particular, the HMD1800may be substantially the same as the HMD102and may include the same and/or similar components of the HMD102, but may be configured to cover a greater portion of the wearer's100face. The HMD1800may include an outward-facing display1806, similar to the outward-facing display106, which is configured to display images corresponding to and/or based on captured or stored images of the wearer.
Because the HMD1800covers a greater portion of the wearer's face than the HMD102, the outward-facing display1806may be larger than the outward-facing display106and may therefore display more features of the wearer's face, including for example the wearer's nose and mouth. FIG.19shows an example process1900of operating an HMD, such as the HMD102. The process1900may be used to display, on an outward-facing display, images of a wearer's face, or images corresponding to, selected based on, or derived from images of a wearer's face. For example, at operation1902, the HMD may capture a first image of a portion of the wearer of the HMD (or any other information or data about a wearer that can be used to generate or select an image that may correspond to or represent the wearer, including biometric data). For example, as described herein, the HMD may use a wearer-facing camera or other imaging device to capture an image of a portion of the wearer's face that is covered by the HMD (e.g., the wearer's eyes and surrounding areas). At operation1904, the HMD may display, on an outward-facing display of the head-mounted display, a second image that is based on the first image. The second image may be the first image (e.g., the first image may be output directly on the outward-facing display with no modification or adjustment). In other cases, the second image may be a modified version of the first image. For example, a parameter of the first image, including hue, contrast, brightness, or saturation, may be modified to produce the second image. As another example, the first image may be stretched, skewed, filtered, rotated, or otherwise manipulated to produce the second image. As yet another example, the first image may be analyzed to determine motion information (e.g., motion vectors of facial features or eye movements), which may then be applied to a manipulable digital model to produce the second image. As yet another example, the first image may be analyzed to determine an emotional state of the wearer, and the second image may be selected, based on the first image, from a group of candidate images. As described herein, images may be still images, video images, animated images, patterns, shapes, light array patterns, or the like. Moreover, images that are selected based on an emotional state of the wearer may not be suggestive of the wearer's face. For example, animated flames may be presented as the second image if it is determined that the wearer is angry. As noted herein, when the HMD is in a first mode of operation, the HMD may display images that are derived from or represent the wearer's face. For example, when the HMD is in a "home" mode with an "external engagement" status (see, e.g.,FIG.16A), the HMD may display the wearer's eyes. In other modes of operation, the HMD may display an image that corresponds to a scene being displayed on an inward-facing display of the HMD. For example, when the HMD is in a "home" mode with an "internal focus" status (see, e.g.,FIG.16A), the HMD may display what the wearer is viewing (as modified for a non-stereoscopic or other three-dimensional display technology). FIG.20depicts example components of a head-mounted display in accordance with the embodiments described herein, such as the HMD102and/or the HMD1800. As shown inFIG.20, an HMD2000includes a processing unit2002operatively connected to computer memory2004and/or computer-readable media2006.
The processing unit2002may be operatively connected to the memory2004and computer-readable media2006components via an electronic bus or bridge. The processing unit2002may include one or more computer processors or microcontrollers that are configured to perform operations in response to computer-readable instructions. The processing unit2002may include the central processing unit (CPU) of the device. Additionally or alternatively, the processing unit2002may include other processors within the device including application-specific integrated circuits (ASICs) and other microcontroller devices. The memory2004may include a variety of types of non-transitory computer-readable storage media, including, for example, random access memory (RAM), read-only memory (ROM), erasable programmable memory (e.g., EPROM and EEPROM), or flash memory. The memory2004is configured to store computer-readable instructions, sensor values, and other persistent software elements. Computer-readable media2006also includes a variety of types of non-transitory computer-readable storage media including, for example, a hard-drive storage device, a solid-state storage device, a portable magnetic storage device, or other similar device. The computer-readable media2006may also be configured to store computer-readable instructions, sensor values, and other persistent software elements. In this example, the processing unit2002is operable to read computer-readable instructions stored on the memory2004and/or computer-readable media2006. The computer-readable instructions may adapt the processing unit2002to perform the operations or functions described above with respect toFIGS.1-19. For example, the processing unit2002, the memory2004, and/or the computer-readable media2006may be configured to cooperate with the wearer-facing sensors2024, outward-facing sensors2026, internal display(s)2012(e.g., wearer-facing display(s)), and external display(s)2008(e.g., outward-facing display(s)) to capture images of a wearer of the HMD and display modified or unmodified versions of those images on the external display(s)2008, as well as to capture images or information about a real world observer and display modified or unmodified versions of those images on the internal display(s)2012. The computer-readable instructions may be provided as a computer-program product, software application, or the like. As shown inFIG.20, the HMD2000also includes one or more internal displays2012(e.g., the user-facing displays1702,1704) and one or more external displays2008(e.g., the outward-facing display106). The internal display(s)2012and the external display(s)2008may include liquid-crystal display(s) (LCD), organic light emitting diode (OLED) display(s), LED display(s), or the like. If a display is an LCD, it may also include a backlight component that can be controlled to provide variable levels of display brightness. If a display is an OLED or LED type display, the brightness of the display may be controlled by modifying the electrical signals that are provided to display elements. The internal display(s)2012may be projector-type displays, such as retinal projectors that project images or other information that can be visually perceived by a wearer. The HMD2000may also include a battery2009that is configured to provide electrical power to the components of the HMD2000. The battery2009may include one or more power storage cells that are linked together to provide an internal supply of electrical power.
The battery2009may be operatively coupled to power management circuitry that is configured to provide appropriate voltage and power levels for individual components or groups of components within the HMD2000. The battery2009, via power management circuitry, may be configured to receive power from an external source, such as an AC power outlet. The battery2009may store received power so that the HMD2000may operate without connection to an external power source for an extended period of time, which may range from several hours to several days. In some embodiments, the HMD2000includes one or more input devices2010. An input device2010is a device that is configured to receive user input. The one or more input devices2010may include, for example, a push button, a touch-activated button, a keyboard, a key pad, a motion capture system, an accelerometer, or the like (including any combination of these or other components). In some embodiments, the input device2010may provide a dedicated or primary function, including, for example, a power button, volume buttons, home buttons, scroll wheels, and camera buttons. The HMD2000may also include wearer-facing sensors2024that may be used to sense, capture, and/or detect information about a wearer of the head-mounted display (including video and/or still images). Example wearer-facing sensors2024include, without limitation, cameras, eye-tracking sensors, biometric sensors (e.g., heart rate, blood oxygen, respiration rate, perspiration rate), motion sensors, presence sensors (e.g., to detect whether the HMD102is currently being worn), or the like. The HMD2000may also include outward-facing sensors2026that may be used to sense, capture, and/or detect information about an environment surrounding the head-mounted display, including video and/or still images of the real-world environment and people in the real-world environment. Example outward-facing sensors2026include, without limitation, cameras, photo sensors, object detection sensors (e.g., radar sensors, light detection and ranging (LIDAR) sensors), ultrasonic sensors, light sensors, eye-tracking sensors, motion sensors, or the like. The HMD2000may also include other sensors that may be used to detect an environmental condition, orientation, position, or some other aspect of the HMD2000and which may not necessarily be categorized as a wearer-facing or outward-facing sensor. Example sensors that may also be included in the HMD2000include, without limitation, one or more accelerometers, gyrometers, inclinometers, goniometers, or magnetometers. Such sensors may also be broadly defined to include wireless positioning devices including, without limitation, global positioning system (GPS) circuitry, Wi-Fi circuitry, cellular communication circuitry, and the like. The HMD2000may also include a communication port2028that is configured to transmit and/or receive signals or electrical communication from an external or separate device. The communication port2028may be configured to couple to an external device via a cable, adaptor, or other type of electrical connector. In some embodiments, the communication port2028may be used to couple the HMD2000to an accessory, including a dock or case, a stylus or other input device, a remote control, a motion-tracking or motion-capture accessory, smart-clothing, pointing device, keyboard, or other device configured to send and/or receive electrical signals. The concepts contained herein are described with reference to head mounted displays of various particular configurations. 
However, these concepts apply equally or by analogy to head mounted displays or wearable electronic devices or systems of other configurations as well, including without limitation visors, glasses, goggles, contact lenses, implantable devices, helmets (e.g., motorcycle helmets), or the like. Further, HMDs or other devices with outward-facing displays, as described herein, may provide other features and benefits that increase the options for using the HMD in an interactive way and that leverage some of the concepts and techniques described herein. For example, an outward-facing display may be used to display, to an external observer, a virtual environment with which both the observer and the wearer can interact. More particularly, if an external observer wishes to interact with a wearer of an HMD while the wearer is immersed in a virtual environment, the HMD may integrate the observer into the virtual environment. This may include, for example, generating an avatar representative of the observer in the virtual environment, and providing a graphical output showing the avatar (or the avatar's virtual viewpoint) on the outward-facing display. The observer may then be able to manipulate the avatar, for example, by physically interacting with the HMD (e.g., touching the outward-facing display), using a controller or a smartphone that is in communication with the HMD, or the like. Because the HMD can show the virtual world with the avatar (or from the avatar's perspective) on the outward-facing display, the external observer can actually interact with the virtual environment (and interact with the wearer in the virtual environment) in ways that were previously not offered. From the wearer's perspective, the external observer can be integrated into a virtual environment as described herein with respect toFIGS.15A-15D, for example. Accordingly, the wearer need not leave the virtual environment or remove the HMD in order to interact with an external observer, while the external observer can have a more fulfilling and natural interaction with the wearer, and even participate in the wearer's virtual experience. Further, any of the techniques described herein relating to integrating real-time expressions and/or images of the wearer and an observer into a virtualized environment may also be used to enhance the foregoing mutual-virtual experience. For example, cameras or other sensors may detect a wearer's facial expressions, gaze direction, or other features or expressions, which may be mapped or applied to an avatar of the wearer (which may be displayed on the outward-facing display to the observer). As another example, cameras or other sensors may detect the observer's facial expressions, gaze direction, or other features or expressions, which may be mapped or applied to an avatar of the observer (which may be displayed on the inward-facing display to the wearer). Accordingly, the techniques described herein for enhancing interaction between a wearer and an observer are applicable to this and other examples of shared virtual experiences. The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. 
They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art that many modifications and variations are possible in view of the above teachings. For example, while the methods or processes disclosed herein have been described and shown with reference to particular operations performed in a particular order, these operations may be combined, sub-divided, or re-ordered to form equivalent methods or processes without departing from the teachings of the present disclosure. Moreover, structures, features, components, materials, steps, processes, or the like, that are described herein with respect to one embodiment may be omitted from that embodiment or incorporated into other embodiments. | 96,841 |
11861256 | DETAILED DESCRIPTION The terms used in the disclosure and the claims are general terms selected in consideration of the functions of the various embodiments. However, such terms may vary depending on an intention of those skilled in the art, a legal or technical interpretation, an emergence of a new technology, and the like. Also, there may be some terms arbitrarily selected by an applicant. Such terms may be construed according to meanings defined in the present specification and, unless specifically defined, may be construed based on general contents of the present specification and a typical technical concept in the art. It is to be understood that when one component is referred to as being “on” or “in contact with” another component, it may be in direct contact with or be connected to the other component, or be in contact with or be connected to the other component with another component interposed therebetween. To the contrary, if one component is described as being “directly on” or “in direct contact with” another component, it is to be understood that there is no other component interposed therebetween. Other expressions that describe the relationship between the components, for example, “between” and “directly between”, may be interpreted in the same way. As used herein, the terms “1st” or “first” and “second” or “2nd” may refer to corresponding components regardless of importance or order, and are used to distinguish one component from another without limiting the components. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. For example, the expression, “at least one of a, b, and c,” should be understood as including only a, only b, only c, both a and b, both a and c, both b and c, or all of a, b, and c. Singular forms are intended to include plural forms unless the context clearly indicates otherwise. It is to be understood that the terms “include”, “have” or the like, specify the presence of features, numerals, steps, operations, components, parts or a combination thereof mentioned in the specification, but do not preclude the addition of one or more other features, numerals, steps, operations, components, parts or a combination thereof. In describing embodiments, detailed description of relevant known functions or components may be omitted if it would obscure the description of the subject matter. Embodiments will be described in detail with reference to the accompanying drawings and the contents described in the drawings, but the disclosure is not restricted or limited by the embodiments. Hereinafter, certain embodiments will now be explained in detail with reference to the accompanying drawings. One or more embodiments provide a display apparatus capable of selectively receiving one of a plurality of image signals and a method of controlling the same. FIGS.1A to1Care views illustrating a display apparatus according to an embodiment. The display apparatus100according to an embodiment of the disclosure may include one or more display modules. Referring toFIG.1A, a display apparatus100according to an embodiment may include four display modules131to134. The display modules131to134may be physically coupled to constitute one display. For example, each of the display modules131to134may be implemented as an LED display module including an inorganic light emitting diode (LED). 
Specifically, referring toFIG.1B, each of the display modules131to134may be implemented as an LED display module including a plurality of LEDs11that implement sub-pixels. For example, a red LED, a green LED, and a blue LED may form one pixel. In the pixel, the red LED may act as a red subpixel, the green LED may act as a green subpixel, and the blue LED may act as a blue subpixel. The plurality of pixels may be arranged in a matrix form (e.g., M×N, where M and N are natural numbers). For example, the matrix may have a square arrangement (e.g., M=N, such as a 16×16 arrangement, a 24×24 arrangement, etc.) or a non-square arrangement (e.g., M≠N). According to an embodiment, each LED of the LED display module may be implemented as a micro LED. The micro LED is an ultra-small light emitting device having a size of about 5 to 100 micrometers. For example, each of the micro LEDs may emit light of a corresponding color, so that no color filter is required. For example, a pixel of an LED display module may include a red micro LED that emits red light, a green micro LED that emits green light, and a blue micro LED that emits blue light. According to an embodiment, the LED display module does not include a bezel. For example, the LED display module may be implemented using micro LEDs and not include a bezel. Because the LED display module implemented using micro LEDs does not include a bezel, seamless images may be displayed using a plurality of display apparatuses physically arranged adjacent to each other. However, embodiments are not limited to an LED display module, and the display module may be implemented as a flat panel display panel such as a liquid crystal panel (LCD), an organic LED (OLED), an active-matrix OLED (AMOLED) panel, a plasma display panel (PDP), etc. Hereinafter, for convenience of description, the display module is described as an LED display module. Referring back toFIG.1A, the display apparatus100according to an embodiment may be implemented in a form in which a plurality of display modules131to134are combined in a 2×2 arrangement. The 2×2 arrangement of LED display modules is only an example, and arrangements and numbers of LED display modules may be variously changed. The display apparatus100may include a base plate on which each of the display modules131to134may be mounted. The base plate may be implemented in a form in which each display module can be mounted on a front surface of the base plate. The display apparatus100according to an embodiment may include a plurality of coupling portions150-1and150-2that can be coupled to other display apparatuses. The positions and numbers of the coupling portions ofFIG.1Aare only an example, and the location and number of coupling portions may be variously changed. Accordingly, the display apparatus100according to an embodiment may be combined with other display apparatuses to form a modular display apparatus. The modular display apparatus may be referred to as a wall display or a video wall. For example, referring toFIG.1C, a plurality of display apparatuses100-1to100-16may be combined in a 4×4 arrangement according to an embodiment, and be implemented as a modular display apparatus1000such as a video wall. Embodiments are not limited to the modular display apparatus having the 4×4 arrangement, and arrangement forms and numbers of the modular display apparatus may be variously changed. 
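The cabinet-level tiling just described lends itself to a small worked example. The sketch below is illustrative only: the row-major, 1-based assignment of cabinet IDs and the uniform per-cabinet resolution are assumptions of the example, not details prescribed by the disclosure. It computes which sub-region of a source image a given cabinet of the 4×4 video wall of FIG. 1C would display, anticipating the region-identification behavior described next.

```python
# Minimal sketch: which sub-region of a source image a given cabinet in a
# modular video wall displays. The row-major, 1-based ID convention and the
# uniform per-cabinet resolution are assumptions made for this example.

def cabinet_region(cabinet_id, wall_cols, cab_w, cab_h):
    """Return (left, top, right, bottom) pixel bounds for a cabinet."""
    index = cabinet_id - 1                 # 1-based ID -> 0-based index
    row, col = divmod(index, wall_cols)    # position of the cabinet in the wall
    left, top = col * cab_w, row * cab_h
    return (left, top, left + cab_w, top + cab_h)

# Example: a 4x4 wall of 480x270 cabinets showing a 1920x1080 image.
for cid in (1, 6, 16):
    print(cid, cabinet_region(cid, wall_cols=4, cab_w=480, cab_h=270))
# 1  (0, 0, 480, 270)         -> top-left cabinet
# 6  (480, 270, 960, 540)     -> second row, second column
# 16 (1440, 810, 1920, 1080)  -> bottom-right cabinet
```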
The modular display apparatus1000may display an image through a display module included in each of the plurality of display apparatuses. The image may be an image received from an external device (e.g., a set-top box, a computer, a server, etc.), or an image pre-stored in the modular display apparatus. Specifically, when information on an image is received, each display apparatus100constituting the modular display apparatus1000may identify, based on its identification information, a region corresponding to the display apparatus100among an entire region of the image, and display an image of the corresponding region. Accordingly, as illustrated inFIG.1D, the modular display apparatus1000according to an embodiment may display an image through a plurality of display apparatuses. For example, the display apparatus100is one of a plurality of display apparatuses constituting the modular display apparatus1000and may be referred to as a cabinet. FIG.2is a view illustrating a signal transmission system according to an embodiment. Referring toFIG.2, a signal transmission system according to an embodiment may include a content transmission device10, a splitter device20, a first external device30-1, a second external device30-2, and a modular display apparatus1000. The content transmission device10(e.g., a set-top box, a computer, a server, etc.) may transmit a signal to the splitter device20. The signal may be an image signal including information on an image, or a control signal for controlling the modular display apparatus1000. The signal received by the splitter device20from the content transmission device10may be referred to as a first signal. The splitter device20may duplicate the first signal to generate a second signal. The second signal may be a signal for execution of a redundancy function. For example, when the first signal received from the content transmission device10is an image signal including information on the image, the splitter device20may generate a second signal including information on the same image as the image included in the image signal. Then, the splitter device20may transmit the signal received from the content transmission device10(the first signal) to the first external device30-1, and transmit a signal generated by replicating the first signal (the second signal) to the second external device30-2. The first signal may be referred to as a main signal and the second signal may be referred to as a backup signal. For this operation, the splitter device20may include a first interface that can be connected to the first external device30-1and a second interface that can be connected to the second external device30-2. The first external device30-1may transmit the first signal received from the splitter device20to the modular display apparatus1000. Specifically, the first external device30-1may be connected to a display apparatus of the modular display apparatus1000by wired or wireless communication, and transmit the first signal received from the splitter device20through the wired or wireless communication method to the modular display apparatus1000. The display apparatus (e.g., a first display apparatus) that received the first signal may transmit the first signal to another display apparatus adjacent to the first display apparatus. The second external device30-2may transmit the second signal received from the splitter device20to the modular display apparatus1000. 
Specifically, the second external device30-2may be connected to a display apparatus of the modular display apparatus1000by wired or wireless communication, and transmit a second signal received from the splitter device20to the modular display apparatus1000through the wired or wireless communication method. The display apparatus (e.g., a second display apparatus) that received the second signal may transmit the second signal to another display apparatus adjacent to the second display apparatus. Thereafter, when a first signal (main signal) or a second signal (backup signal) is transmitted to each display apparatus constituting the modular display apparatus1000, the modular display apparatus1000may perform various functions based on the first signal or the second signal. For example, when the first and second signals include information on an image, the modular display apparatus1000may display an image based on the first signal or the second signal. In the description above, a signal is replicated by the splitter device20, and the modular display apparatus1000is connected to the first and second external devices30-1and30-2, but this is only an example. According to an embodiment in which the modular display apparatus1000is connected to one external device, the external device may receive a first signal from the content transmission device10, replicate the first signal to generate a second signal, transmit the first signal to a first display apparatus of the modular display apparatus1000, and transmit the second signal to a second display apparatus of the modular display apparatus1000. In this case, the splitter device20may be omitted from the signal transmission system described above. Each display apparatus constituting the modular display apparatus1000according to an embodiment may selectively receive the first signal or the second signal. Hereinafter, this will be described in greater detail below. FIG.3is a block diagram illustrating a display apparatus according to an embodiment. The display apparatus100may be one of a plurality of display apparatuses constituting the modular display apparatus. Referring toFIG.3, the display apparatus100according to an embodiment may include a first interface110, a second interface120, a display130, and a processor140. The first interface110may receive a first signal from the first external device30-1or receive the first signal from the other display apparatus (e.g., the first display apparatus) being connected and adjacent to the display apparatus100. Specifically, when the first interface110is in communications connection with the first external device30-1, the first interface110may receive the first signal from the first external device30-1, and when the first interface110is in communications connection with the first display apparatus, the first interface110may receive the first signal from the first display apparatus. The first signal may refer to an image signal including information on the image, or may be a control signal for controlling the display apparatus100. The second interface120may receive a second signal from the second external device30-2or receive the second signal from the other display apparatus (e.g., the second display apparatus) being connected and adjacent to the display apparatus100. 
Specifically, when the second interface120is in communications connection with the second external device30-2, the second interface120may receive the second signal from the second external device30-2, and when the second interface120is in communications connection with the second display apparatus, the second interface120may receive the second signal from the second display apparatus. The second signal may refer to an image signal including information on the image, or may be a control signal for controlling the display apparatus100. The display130may display various images. The image may include a still image and a video, and the display130may display various images such as broadcast contents and multimedia contents. In addition, the display130may display various user interfaces (UIs) and icons. The display130may display an image of an area corresponding to identification information of the display apparatus100among the images included in the image signal. For example, if the identification information of the display apparatus is ID 1, the display130may display an image of an area corresponding to ID 1 among the images included in the image signal. In this regard, the display apparatus100may store identification information of the display apparatus100. The display130may be realized as various kinds of displays, such as liquid crystal display (LCD), organic light-emitting diode (OLED), micro LED, liquid crystal on silicon (LCoS), digital light processing (DLP), or the like. Further, the display130may include a driver circuit, which may be realized as an a-Si TFT, a low temperature poly silicon (LTPS) TFT, or an organic TFT (OTFT), and a backlight unit. The display130may be a touch screen including a touch sensor. The touch sensor may generate a touch signal indicating a touch location. The processor140controls overall operations of the display apparatus100. The processor140may include at least one of a central processing unit (CPU), an application processor (AP), and a communication processor (CP). Alternatively, the processor140may be implemented as a field programmable gate array (FPGA) designed or programmed to implement various functions described below. The processor140may, for example, control a number of hardware or software elements connected to the processor140by driving an operating system or application program, and perform various data processing and calculations. Further, the processor140may load a command or data received from at least one of the other components into a volatile memory for processing, and store diverse data in a non-volatile memory. Hereinafter, for convenience of description, the operation of the processor140will be described with reference toFIGS.4and5. FIG.4is a view illustrating a structure of a display apparatus according to an embodiment. The display apparatus100according to an embodiment may include a plurality of interfaces. Specifically, referring toFIG.4, the display apparatus100may include a first interface110and a second interface120. A first signal1may be received through the first interface110from the first external device30-1or the first display apparatus being adjacent and connected to the display apparatus100, and the received first signal may be transmitted to the processor140through the first interface110. A second signal2may be received through the second interface120from the second external device30-2or the second display apparatus being adjacent and connected to the display apparatus100. 
The display apparatus100may also include a third interface111and a fourth interface121. The second signal2, received from the second external device30-2or the second display apparatus, may be transmitted through the third interface111to the first display apparatus. The first signal1, received from the first external device30-1or the first display apparatus, may be transmitted to the second display apparatus through the fourth interface121. According to an embodiment, the first and third interfaces110and111may be included in a first port, and the second and fourth interfaces120and121may be included in a second port. Further, the first port may be connected to the second port of the first display apparatus through a cable, and the second port may be connected to the first port of the second display apparatus through a cable. The first signal may include information on an image, and the second signal may include information on the same image as the image included in the first signal. For example, the second signal may refer to a signal for executing a redundancy function. In addition, the first signal may refer to a signal transmitted to a plurality of display apparatuses constituting the modular display apparatus1000through a first path (e.g., a forward path). The first path may refer to a path where a signal sequentially passes from the first display apparatus which receives the signal from the first external device30-1, the plurality of display apparatuses constituting the modular display apparatus1000to the nth display apparatus which receives the signal from the second external device30-2. For example, as illustrated inFIG.2, when the first to ninth display apparatuses are sequentially connected, the first signal may be sequentially transmitted from the first display apparatus to the ninth display apparatus along the first path. The second signal may refer to a signal transmitted to a plurality of display apparatuses constituting the modular display apparatus1000through a second path (e.g., a backward path). The second path may refer to a path where a signal sequentially passes from the nth display apparatus which receives a signal from the second external device30-2and the plurality of display apparatuses constituting the modular display apparatus1000to the first display apparatus which receives the signal from the first external device30-1. For example, as illustrated inFIG.2, when the first to ninth display apparatuses are sequentially connected, the second signal may be sequentially transmitted from the ninth display apparatus to the first display apparatus along the second path. As described below, the first signal according to an embodiment may be transmitted to a first plurality of display apparatuses among the plurality of display apparatuses constituting the modular display apparatus1000through the first path, and the second signal according to an embodiment may be transmitted to a second plurality of display apparatuses excluding the first plurality of display apparatuses among the plurality of display apparatuses constituting the modular display apparatus1000through the second path. The processor140may execute a first operation (i.e., a first logic operation) for identifying (or detecting) the first signal1received through the first interface110and a second operation (i.e., a second logic operation) for identifying (or detecting) the second signal2received through the second interface120. 
According to an embodiment, the first signal1may refer to a signal transmitted by the first display apparatus being connected and adjacent to the display apparatus100, and the second signal2may be a signal transmitted by the second display apparatus being connected and adjacent to the display apparatus100. For example, the processor140may repeatedly (or alternately) execute the first and second operations. In other words, the processor140may not execute the second operation while executing the first operation, and may not execute the first operation while executing the second operation. For example, the processor140may repeatedly execute the first operation and the second operation based on a predetermined period. In other words, the processor140may execute the first operation during the first period based on the predetermined period, and when the first signal1is not identified in the first interface110during the first period, the processor may execute the second operation during the second period based on the predetermined period. For example, when the predetermined period is 1 second, the processor140may execute the first operation for 1 second (from 0 to 1 seconds), and when the first signal1is not identified from 0 to 1 seconds, the processor140may execute the second operation for 1 second (from 1 to 2 seconds). The period may be determined by a unit of time and also be determined by a unit of frame. In addition, the processor140may execute the first and second operations together according to an embodiment. When the first signal1is identified in the first interface110while the first operation is being executed, the processor140may not execute the second operation thereafter, and receive a signal through the first interface110. The processor140may deactivate (or disable) functions of the second interface120and may not receive (or may ignore) the second signal2transmitted by the second display apparatus. In other words, when the first signal1is identified in the first interface110, the processor140may not receive the second signal2from the second interface120thereafter. For example, when a clock signal is identified in the first signal1transmitted to the first interface110while the first operation is being executed, the processor140may not execute the second operation afterwards, and keep receiving a signal through the first interface110. For this operation, the first operation may include a code for identifying the clock signal. When the first signal1is not identified in the first interface110while the first operation is being executed, the processor140may not execute the first operation and execute the second operation. For example, the processor140may cease execution of the first operation and initiate the second operation when the first signal1is not identified in the first interface110while the first operation is executed. When the second signal2is identified in the second interface120while the second operation is being executed, the processor140may not execute the first operation and continuously receive signals through the second interface120. For example, the processor140may cease the repeated alternate execution of the first operation and the second operation, and continuously receive signals through the second interface120if the second signal2is identified in the second interface120while the second operation is being executed. 
For example, when a clock signal is identified from the second signal2transmitted to the second interface120while the second operation is being executed, the processor140may not execute the first operation afterwards, and continuously receive signals through the second interface120. For this operation, the second operation may include a code for identifying the clock signal. The processor140may deactivate (or disable) a function of the first interface110and may not receive the first signal1transmitted by the first display apparatus. As such, the processor140may selectively receive one of the first signal1transmitted from the first external device30-1or the first display apparatus and the second signal2transmitted from the second external device30-2or the second display apparatus. Deactivating one of the first and second interfaces based on whether the clock signal is identified is only an embodiment. For example, the processor140may deactivate one of the first and second interfaces based on whether a sync signal and data enable signal are identified. For example, when the sync signal and data enable signal are identified from the first signal1transmitted to the first interface110while the first operation is being executed, the processor140may not execute the second operation afterwards and continuously receive signals through the first interface110. In addition, when the sync signal and data enable signal are identified from the second signal2transmitted to the second interface120while the second operation is being executed, the processor140may not execute the first operation afterwards and continuously receive signals through the second interface120. For this operation, the second operation may include a code for identifying the sync signal and the data enable signal. The processor140may deactivate one of the first and second interfaces based on whether the clock signal is identified, or whether the sync signal and the data enable signal are identified. For example, when the clock signal or the sync signal and the data enable signal are identified from the first signal1transmitted to the first interface110, the processor140may deactivate the function of the second interface120, and when the clock signal is not identified from the first signal1, and the sync signal and the data enable signal are not identified from the first signal1, the processor140may execute the second operation based on the predetermined period. The processor140may deactivate one of the first and second interfaces based on a lock or unlock of a phase locked loop (PLL). The PLL is configured to be locked or unlocked according to whether the clock signal is received, and may be included in the processor140or may be implemented in a separate configuration from the processor140. For this operation, the processor140may identify whether the PLL is locked or unlocked while executing the first operation. When the PLL is locked as the first signal1(e.g., a clock signal) is transmitted to the first interface110, the processor140may identify that the first signal1is received at the first interface110. In this case, the processor140may deactivate the function of the second interface120without executing the second operation. When the first signal1is not transmitted to the first interface110during the predetermined period, the processor140may execute the second operation during the predetermined period to identify whether the PLL is locked or unlocked. 
In addition, when the PLL is locked as the second signal2(e.g., a clock signal) is transmitted to the second interface120, the processor140may identify that the second signal2is received at the second interface120. In this case, the processor140may not execute the first operation and may deactivate a function of the first interface110. Thereafter, the processor140may transmit signals to the outside through one of the third interface111and the fourth interface121. For example, when the first signal1is identified in the first interface110while executing the first operation, the processor140may transmit the first signal1received through the first interface to the second display apparatus through the fourth interface121, and when the second signal2is identified in the second interface120while executing the second operation, the processor140may transmit the second signal2received through the second interface to the first display apparatus through the third interface111. In this method of transmitting and receiving signals, the plurality of display apparatuses constituting the modular display apparatus1000may receive one of the first signal and the second signal. For example, as shown inFIG.5, the modular display apparatus1000includes a display apparatus100, a display apparatus A100-A, and a display apparatus B100-B, and the display apparatus A100-A may receive the first signal1from the first external device30-1or the other display apparatus through the first interface110, and the display apparatus B100-B may receive the second signal from the second external device30-2or the other display apparatus through the second interface120. The display apparatus100may repeatedly execute the first operation and the second operation based on a predetermined period. When the first signal1is identified in the first interface110while executing the first operation, the display apparatus100may not execute the second operation afterwards, but continuously receive a signal through the first interface110and transmit the first signal1to the display apparatus B100-B through the fourth interface121. For example, the display apparatus100may cease the repeated alternate execution of the first operation and the second operation, and continuously receive signals through the first interface110if the first signal1is identified in the first interface110while the first operation is being executed. In addition, the display apparatus100may not receive the second signal2transmitted by the display apparatus B100-B by deactivating the function of the second interface120. For example, in a state in which the display apparatus100has already received the first signal1including the same information as the second signal2, the display apparatus100may display an image based on the first signal1and deactivate the function of the second interface120. By deactivating a function of one of the first interface110and the second interface120, the display apparatus may reduce processor overload and reduce a crosstalk problem and an EMI occurrence problem that may be caused by the first signal1and the second signal2. 
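The input-selection behavior described above — alternately executing the first and second operations for one period each, latching onto whichever interface first yields an identifiable signal (e.g., a detected clock or a locked PLL), and deactivating the other interface — can be modeled in a short sketch. This is a simplified software illustration under assumed names (the Interface class and its probe() method are inventions of the example); the actual processor140may realize equivalent logic in hardware such as an FPGA.

```python
import itertools

class Interface:
    """Toy model of a receiving interface; signal_arrives_at is the time
    (in periods) at which a valid signal (e.g., a clock) becomes present."""
    def __init__(self, name, signal_arrives_at):
        self.name = name
        self.signal_arrives_at = signal_arrives_at
        self.active = True

    def probe(self, start, period):
        # The signal is "identified" if present within [start, start + period).
        return self.signal_arrives_at < start + period

    def deactivate(self):
        self.active = False

def select_input(first_if, second_if, period=1.0):
    """Alternately probe the two interfaces, one period each, until one locks on."""
    t = 0.0
    for iface in itertools.cycle([first_if, second_if]):
        if iface.probe(t, period):
            other = second_if if iface is first_if else first_if
            other.deactivate()   # the redundant signal is subsequently ignored
            return iface         # keep receiving on this interface
        t += period

# Example: the backup (second) signal reaches this cabinet before the main one.
main, backup = Interface("first interface", 2.5), Interface("second interface", 1.2)
chosen = select_input(main, backup)
print(chosen.name, "| first interface active:", main.active)
# -> second interface | first interface active: False
```

Once select_input() returns, the sketch mirrors the described behavior: the chosen interface keeps receiving, and the other interface's signal is ignored.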
When a signal is not identified in the first interface110during the predetermined period, the display apparatus100may execute the second operation, and when the second signal2is identified in the second interface120while executing the second operation, the display apparatus100may not execute the first operation and continuously receive the signal through the second interface120and transmit the second signal2to the display apparatus100-A through the third interface111. For example, the display apparatus100may cease the repeated alternate execution of the first operation and the second operation, and continuously receive signals through the second interface120if the second signal2is identified in the second interface120while the second operation is being executed. In addition, the display apparatus100may not receive the first signal1transmitted by the display apparatus100-A by deactivating the function of the first interface110. Based on the aforementioned description, the plurality of display apparatuses constituting the modular display apparatus1000may receive the first signal or the second signal as illustrated inFIG.6. FIG.6is a view illustrating a case in which the modular display apparatus1000is composed of first to sixth display apparatuses100-1to100-6. Each of the first to sixth display apparatuses100-1to100-6may repeatedly execute the first and second operations until a signal is received through the first interface110or the second interface120based on a predetermined period. While executing the first operation, the first display apparatus100-1may receive the first signal1from the first external device30-1through the first interface110. The first signal1is an image signal including information on an image, and may refer to a main signal according to an embodiment. The first display apparatus100-1may transmit the first signal1to the second display apparatus100-2through the fourth interface121. When the first signal1is received from the first display apparatus100-1through the first interface110while executing the first operation, the second display apparatus100-2may transmit the first signal1to the third display apparatus100-3through the fourth interface121. When the first signal1is received through the first interface110from the second display apparatus100-2while executing the first operation, the third display apparatus100-3may transmit the first signal1to the fourth display apparatus100-4through the fourth interface121. The sixth display apparatus100-6may receive the second signal2from the second external device30-2through the second interface120while executing the second operation. The second signal2is an image signal including information on the same image as the image included in the first signal1, and may refer to a backup signal according to an embodiment. The sixth display apparatus100-6may transmit the second signal2to the fifth display apparatus100-5through the third interface111. While executing the second operation, the fifth display apparatus100-5may receive the second signal2from the sixth display apparatus100-6through the second interface120. The fifth display apparatus100-5may transmit the second signal2to the fourth display apparatus100-4through the third interface111. While executing the second operation, the fourth display apparatus100-4may receive the second signal2from the fifth display apparatus100-5through the second interface120. 
The fourth display apparatus100-4may transmit the second signal2to the third display apparatus100-3through the third interface111. Because the third display apparatus100-3has already received the first signal1including the same information as the second signal from the second display apparatus100-2, the third display apparatus100-3may not receive the second signal2from the fourth display apparatus100-4. When the second signal2is received from the fourth display apparatus100-4before the third display apparatus100-3receives the first signal1from the second display apparatus100-2, the third display apparatus100-3may receive the second signal through the second interface120. In this way, when the first signal1is transmitted to the first to third display apparatuses100-1to100-3, and the second signal2is transmitted to the fourth to sixth display apparatuses100-4to100-6, the first to third display apparatuses100-1to100-3may display an image based on the first signal1, and the fourth to sixth display apparatuses100-4to100-6may display an image based on the second signal2. FIG.7illustrates a contrasting example, in which a plurality of display apparatuses constituting a related modular display apparatus receive both a first signal from the first external device30-1and a second signal from the second external device30-2. Because both the first signal and the second signal are received by each of the plurality of display apparatuses, the contrasting modular display apparatus generates excess heat in an interface, a processor is overloaded due to processing the plurality of signals, and crosstalk and EMI may be generated between the plurality of image signals. By contrast, as illustrated inFIG.6, each display apparatus100of the modular display apparatus1000may solve the aforementioned problem by selectively receiving one of the first signal and the second signal. In addition, when a signal is not transmitted to the third display apparatus100-3from the second display apparatus100-2due to a signal transmission error between the second display apparatus100-2and the third display apparatus100-3, the third display apparatus100-3may receive a signal from the fourth display apparatus100-4. In other words, the disclosure may execute a redundancy function while receiving one of the first signal and the second signal. FIG.8is a flowchart illustrating a method of controlling a display apparatus according to an embodiment. The display apparatus100may repeatedly execute the first operation for identifying a signal received through the first interface and the second operation for identifying a signal received through the second interface (S810). For example, the display apparatus100may repeatedly execute the first operation and the second operation based on a predetermined period. In other words, the display apparatus100may execute the first operation during a first section based on the predetermined period, and may execute the second operation during a second section based on the predetermined period when a signal is not identified in the first interface during the first section. 
In addition, when the signal received through the first interface is identified while executing the first operation, the display apparatus100may not execute the second operation after identifying the signal and display an image on the display based on the signal received through the first interface, and when a signal received through the second interface is identified while executing the second operation, the display apparatus100may not execute the first operation after identifying the signal and display an image on the display based on the signal received through the second interface (S820). The identification of the signal may refer to an identification of a clock signal. For example, the display apparatus100may identify whether the clock signal is included in the signal received through the first interface while executing the first operation, and when it is identified that the clock signal is included in the signal, the display apparatus may display an image on the display based on the signal received through the first interface without executing the second operation. If it is identified that the clock signal is not included in the signal, the display apparatus100may execute the second operation. Alternatively, the display apparatus100may identify whether the signal includes a sync signal and a data enable signal, and when it is identified that the signal includes the sync signal and the data enable signal, the display apparatus100may not execute the second operation and display an image on the display based on the signal received through the first interface, and may execute the second operation when it is identified that the signal does not include at least one of the sync signal and the data enable signal. When the signal received through the first interface is identified while the first operation is being executed, the display apparatus100may transmit the signal received through the first interface to the second display apparatus through the fourth interface, and deactivate the function of the second interface. When the signal received through the second interface is identified while the second operation is being executed, the display apparatus100may transmit the signal received through the second interface to the first display apparatus through the third interface, and deactivate the function of the first interface. Accordingly, the signal received through the first interface may be transmitted to the first plurality of display apparatuses among the plurality of display apparatuses constituting the modular display apparatus through a first path, and the signal received through the second interface may be transmitted to the second plurality of display apparatuses except for the first plurality of display apparatuses among the plurality of display apparatuses constituting the modular display apparatus through a second path. According to various embodiments as described above, the disclosure can execute a redundancy function, reduce crosstalk occurring between a plurality of image signals, and reduce heat generated in an interface while receiving the plurality of image signals. Further, the disclosure can reduce power consumption by receiving and processing one image signal among a plurality of image signals, and reduce electromagnetic interference (EMI) that may occur between the plurality of image signals. The methods according to the above-described embodiments may be realized as software or applications that may be installed in the existing electronic apparatus. 
Further, the methods according to the above-described embodiments may be realized by upgrading the software or hardware of the existing electronic apparatus. The above-described embodiments may be executed through an embedded server in the electronic apparatus or through an external server outside the electronic apparatus. A non-transitory computer readable medium in which a program is stored that, when executed, causes a device to sequentially execute a controlling method according to the disclosure may be provided. The non-transitory computer readable recording medium refers to a medium that stores data and that can be read by devices. In detail, the above-described various applications or programs may be stored in the non-transitory computer readable medium, for example, a compact disc (CD), a digital versatile disc (DVD), a hard disc, a Blu-ray disc, a universal serial bus (USB), a memory card, a read only memory (ROM), and the like, and may be provided. Although embodiments have been disclosed, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the disclosure as disclosed in the accompanying claims. Accordingly, such modifications, additions and substitutions should also be understood to fall within the scope of the disclosure. | 42,248 |
11861257 | DETAILED DESCRIPTION Exemplary embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be implemented in a variety of forms and should not be construed as being limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be thorough and complete, and will fully convey the idea of the exemplary embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In addition, the drawings are merely schematic representations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and the repeated description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, or implemented in one or more hardware modules or integrated circuits, or implemented in different networks and/or processor devices and/or microcontroller devices. FIG.1shows a schematic diagram of a system architecture of an exemplary application environment to which an interaction method and apparatus between a display device and a terminal device according to an embodiment of the present disclosure can be applied. As shown inFIG.1, the system architecture100may include one or more of terminal devices101,102, and103, a network104and a server105. The network104is used to provide a medium for communication links between the terminal devices101,102,103and the server105. The network104may include various connection types such as wired links, wireless communication links, fiber optic cables, or the like. The terminal devices101,102, and103may be electronic devices with data interaction functions, including, but not limited to, mice, desktop computers, portable computers, smart phones, tablet computers, and the like. It should be understood that the number of terminal devices, networks, and servers inFIG.1is merely illustrative. According to implementation needs, there can be any number of terminal devices, networks, and servers. For example, the server105may be a server cluster composed of multiple servers or the like. An interaction method between a display device and a terminal device provided by embodiments of the present disclosure can be executed by a processor provided in the display device. Correspondingly, in some embodiments, a user can use the terminal device101,102, or103to obtain an access address of the display device, and according to the access address, an access request is generated and sent to the display device by the terminal device, so as to establish a communication connection between the terminal device and the display device. In response to a user's control operation on a cursor that is in a one-to-one correspondence with the terminal device and is displayed on a display screen of the display device, a control instruction is generated and sent to the processor in the display device, and the processor completes the interaction with the terminal device101,102, or103by means of the interaction method between the display device and the terminal device provided by the embodiments of the present disclosure. 
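The access flow summarized above — a terminal obtains the display device's access address, generates an access request from it, and the display device establishes the connection — might be organized along the lines of the following sketch. The message fields, the JSON encoding, and the address format are illustrative assumptions; the disclosure does not prescribe a wire format.

```python
import json

class DisplayDevice:
    """Toy receiver side: accepts access requests and tracks connected terminals."""
    def __init__(self, access_address):
        self.access_address = access_address   # e.g., published as an encoded image
        self.connections = {}

    def handle_access_request(self, request_json):
        request = json.loads(request_json)
        if request["address"] != self.access_address:
            return json.dumps({"status": "rejected"})
        self.connections[request["terminal_id"]] = {"connected": True}
        return json.dumps({"status": "accepted", "terminal_id": request["terminal_id"]})

def make_access_request(decoded_address, terminal_id):
    # Decoding the encoded image (e.g., scanning a two-dimensional code)
    # would yield the address used here.
    return json.dumps({"address": decoded_address, "terminal_id": terminal_id})

display = DisplayDevice("wss://192.168.0.10:8765/join")
req = make_access_request("wss://192.168.0.10:8765/join", terminal_id="phone-01")
print(display.handle_access_request(req))   # {"status": "accepted", ...}
```

In practice the connection could equally be established through a local area network or a cloud server, as described below; the sketch only illustrates the request/acknowledge shape of the access flow.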
The embodiments of the present disclosure provide an electronic device for implementing the interaction method between the display device and the terminal device. The electronic device includes at least a processor, a memory, and a display screen, the memory is configured to store instructions executable by the processor, and the processor is configured to execute the interaction method between the display device and the terminal device by executing the executable instructions. The electronic device may be a kind of display device. The following takes a display device200inFIG.2as an example to illustrate a configuration of the display device. It will be understood by those skilled in the art that the configuration inFIG.2can also be applied to stationary type devices, except components specifically for mobile purposes. In other embodiments, the display device200may include more or fewer components than shown, or combine some components, or split some components, or have different component arrangements. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. An interface connection relationship between components is only schematically shown, and does not constitute a structural limitation of the display device200. In further embodiments, the display device200may also adopt an interface connection manner different from that shown inFIG.2, or a combination of multiple interface connection manners. As shown inFIG.2, the display device200may specifically include: a processor210, an internal memory221, an external memory interface222, a universal serial bus (USB) interface230, a charging management module240, a power management module241, a battery242, an antenna1, an antenna2, a mobile communication module250, a wireless communication module260, an audio module270, a speaker271, a receiver272, a microphone273, an earphone interface274, a sensor module280, a display screen290, a camera module291, an indicator292, a motor293, a key294, a subscriber identification module (SIM) card interface295, and the like, and the sensor module280may include a depth sensor2801, a pressure sensor2802, a gyroscope sensor2803, and the like. The processor210may include one or more processing units, for example, the processor210may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor and/or a neural-network processing unit (NPU), etc. Different processing units may be separate devices, or may be integrated in one or more processors. The NPU is a neural-network (NN) computing processor. With reference to a biological neural network structure, such as a transmission mode between neurons in the human brain, the NPU can quickly process input information and can continuously learn by itself. Intelligent cognition applications of the display device200, such as image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU. The memory is provided in the processor210. The memory can store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, and the execution of these instructions is controlled by the processor210. 
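The six instruction groups just listed suggest a simple dispatch arrangement. The following is a hypothetical sketch in which only the six group names come from the description; every handler body is an invented stub, not the patent's implementation.

```python
# Hypothetical dispatch over the six modular instruction groups named above.
# Only the group names come from the description; the stubs are illustrative.

def _stub(name):
    return lambda ctx: ctx.setdefault("log", []).append(name)

HANDLERS = {
    "detection":              _stub("detection"),
    "connection":             _stub("connection"),
    "information_management": _stub("information_management"),
    "analysis":               _stub("analysis"),
    "data_transmission":      _stub("data_transmission"),
    "notification":           _stub("notification"),
}

def execute(group, ctx):
    try:
        HANDLERS[group](ctx)
    except KeyError:
        raise ValueError(f"unknown instruction group: {group}") from None

ctx = {}
for group in ("detection", "connection", "notification"):
    execute(group, ctx)
print(ctx["log"])   # ['detection', 'connection', 'notification']
```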
The charging management module240is configured to receive a charging input from a charger. The power management module241is configured to couple the battery242and the charging management module240with the processor210. The power management module241receives the input from the battery242and/or the charging management module240, and supplies power to the processor210, the internal memory221, the display screen290, the camera module291, the wireless communication module260, and the like. A wireless communication function of the display device200may be implemented by the antenna1, the antenna2, the mobile communication module250, the wireless communication module260, the modem processor, the baseband processor, and the like. The antenna1and the antenna2are used to transmit and receive electromagnetic wave signals; the mobile communication module250can provide a wireless communication scheme including 2G/3G/4G/5G applied to the display device200; the modem processor can include a modulator and a demodulator; the wireless communication module260can provide a wireless communication scheme including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks, Bluetooth (BT)) applied to the display device200. In some embodiments, the antenna1of the display device200is coupled to the mobile communication module250, the antenna2is coupled to the wireless communication module260, so that the display device200can communicate with the network and other devices through wireless communication technologies. The display device200implements a display function through the GPU, the display screen290, the application processor, and the like. The GPU is a microprocessor for image processing, and is coupled to the display screen290and the application processor. The GPU is configured to perform mathematical and geometric calculations for graphics rendering. The processor210may include one or more GPUs that execute program instructions to generate or alter display information. The display device200may implement a shooting function through the ISP, the camera module291, the video codec, the GPU, the display screen290, the application processor, and the like. The ISP is configured to process data fed back by the camera module291; the camera module291is configured to capture still images or videos; the digital signal processor is configured to process digital signals, and can process other digital signals in addition to digital image signals; and the video codec is configured to compress or decompress a digital video. The display device200may also support one or more video codecs. The external memory interface222can be configured to be coupled to an external memory card, such as a Micro SD card, to expand a storage capacity of the display device200. The external memory card communicates with the processor210through the external memory interface222to realize a data storage function. In some embodiments, files such as music files and video files are saved in the external memory card. The internal memory221may be configured to store computer executable program codes, which include instructions. The internal memory221may include a program storage area and a data storage area. The program storage area can store an operating system, an application program required for at least one function (such as a sound play function, an image play function, etc.), and the like. The data storage area may store data (such as audio data, phone book, etc.) 
created during the use of the display device200and the like. In addition, the internal memory221may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like. The processor210executes various functional applications and data processing of the display device200by running instructions stored in the internal memory221and/or instructions stored in the memory provided in the processor. The display device200may implement an audio function, such as music play and recording, through the audio module270, the speaker271, the receiver272, the microphone273, the earphone interface274, the application processor, and the like. The depth sensor2801is configured to acquire depth information of a scene. In some embodiments, the depth sensor may be disposed in the camera module291. The pressure sensor2802is configured to sense a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the display screen290may be provided with the pressure sensor2802. There are many kinds of pressure sensors2802, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The gyroscope sensor2803may be configured to determine a motion attitude of the display device200. In some embodiments, angular velocities of the display device200about three axes (i.e., x, y, and z axes) can be determined by the gyroscope sensor2803. The gyroscope sensor2803can be used for image stabilization, navigation, and somatosensory game scenes. In addition, sensors with other functions can also be provided in the sensor module280according to actual needs, such as an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and a bone conduction sensor, etc. Other devices that provide auxiliary functions may also be included in the display device200. For example, the key294includes a power key, a volume key, etc., through which the user can input key signals related to user settings and function control of the display device200. In addition, the indicator292, the motor293, the SIM card interface295, and the like may also be included. In the related art, prices of LED spliced devices and large-size LCD devices are gradually decreasing, and such devices are gradually replacing traditional projectors in conference room scenes. However, traditional pointing devices such as laser pointers produce little visible effect on the LED spliced devices and the LCD devices. The black matrixes of LED spliced devices may absorb a large amount of the laser brightness, and because the brightness of a display area is much greater than that of the laser pointer, the laser spot is not obvious; in addition, polarization of the LED results in very little reflected laser light, so the laser spot is very weak. In a multi-person conference scene, the use of a device such as a laser pointer or a mouse can only realize single device-to-single device control. When multiple persons want to perform pointing, multiple air mice need to be connected, or the laser pointer must be passed between these persons, which degrades the user experience. The interaction method between the display device and the terminal device according to an embodiment of the present disclosure will be specifically described below. 
FIG. 3 shows a flowchart of an interaction method between a display device and a terminal device according to an embodiment of the present disclosure. The method can be executed by an electronic device, such as the above-mentioned display device 200, although it is not limited thereto, and the interaction method includes the following steps: in step S310, access requests generated and sent by multiple terminal devices according to an access address are received, and communication connections between the terminal devices and the display device are established; in step S320, multiple cursors in a one-to-one correspondence with individual terminal devices are generated, and the cursors are displayed on a display screen of the display device; and in step S330, a cursor control instruction sent by the terminal device is received, and a display position of the cursor on the display screen is controlled according to the control instruction, as sketched below. Compared with the prior art, the display device in the present disclosure can be accessed by multiple terminal devices at the same time and generates a cursor corresponding to each terminal device, and the user can control the display position of the cursor on the display screen of the display device through the terminal device. The terminal device is used to control the cursor position to complete the interaction, the indication is clear, and there may be multiple cursors at the same time, which is convenient for interaction and improves the user experience. In the step S310, the access requests generated and sent by the multiple terminal devices according to the access address are received, and the communication connections between the terminal devices and the display device are established. In some embodiments of the present disclosure, the display device can be provided with a multi-device access function. In response to a user's enabling operation on the multi-device access function, the access address of the display device is generated. The enabling operation may be that the user enables the multi-device access function through a voice recognition function of the display device, or through a triggering operation on an enabling identifier set on the display device, or by means of the mouse or the keyboard, which is not specifically limited by the embodiments of the present disclosure. In some embodiments of the present disclosure, the access address may be displayed in the form of an encoded image, such as a two-dimensional code, a barcode, etc., and the form of the access address is not specifically limited by the embodiments of the present disclosure. In some embodiments of the present disclosure, the display device receives the access request generated and sent by the terminal device according to the access address, and establishes the communication connection between the terminal device and the display device. The communication connection between the terminal device and the display device can be established through a local area network, or it can be established through a cloud server. Specifically, when the access address is the encoded image, the terminal device can decode the encoded image and generate the access request.
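For illustration only, the following is a minimal sketch of the display-device side of steps S310 to S330 described above; the class name, the data shapes, and the initial cursor position are assumptions made for this example, not elements of the disclosed method.

```python
import itertools

class DisplayDeviceSession:
    """Tracks connected terminal devices and their one-to-one cursors."""

    _ids = itertools.count(1)

    def __init__(self):
        self.cursors = {}  # terminal_device_id -> cursor state

    def handle_access_request(self, terminal_device_id):
        # Step S310: accept the access request and establish a connection.
        # Step S320: generate a cursor in one-to-one correspondence with
        # the terminal device, shown initially in a designated area.
        cursor_id = next(self._ids)
        self.cursors[terminal_device_id] = {
            "id": cursor_id,        # e.g., number shown at the cursor corner
            "position": (0, 0),     # designated area, e.g., upper left corner
        }
        return cursor_id

    def handle_control_instruction(self, terminal_device_id, dx_px, dy_px):
        # Step S330: move the corresponding cursor by the number of
        # movement pixels carried in the control instruction.
        x, y = self.cursors[terminal_device_id]["position"]
        self.cursors[terminal_device_id]["position"] = (x + dx_px, y + dy_px)
```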
For example, when the encoded image is the two-dimensional code, the terminal device can scan the two-dimensional code to generate the access request, and then the communication connection between the terminal device and the display device is established. The terminal device can scan the two-dimensional code by using an existing, commonly used third-party platform, such as WeChat, QQ, etc., or by using an independently developed application (APP), which is not specifically limited by the embodiments of the present disclosure. Specifically, where the APP for establishing the communication connection between the terminal device and the display device is independently developed, the encoded image is decoded and the access request is generated by means of this APP. The terminal device sends the access request to the display device, so that the terminal device can establish the communication connection with the display device. In the step S320, the multiple cursors in the one-to-one correspondence with the individual terminal devices are generated, and the cursors are displayed on the display screen of the display device. In some embodiments of the present disclosure, after the communication connection between the terminal device and the display device is established, the processor of the display device generates the multiple cursors in the one-to-one correspondence with the individual terminal devices, and displays the cursors on the display screen of the display device. In some embodiments of the present disclosure, when the multiple cursors in the one-to-one correspondence with the individual terminal devices are generated, a cursor display identifier may be set on the display device. After the user triggers the cursor display identifier, a cursor corresponding to a terminal device is generated on the display device. Each terminal device may correspond to one cursor display identifier; for example, the cursor display identifier corresponding to the terminal device may be set below a terminal identifier of the terminal device. Alternatively, multiple terminal devices may correspond to one cursor display identifier, or all terminal devices may correspond to a single cursor display identifier. A display position of the cursor display identifier can be customized according to user requirements, which is not specifically limited by the embodiments of the present disclosure. In some embodiments of the present disclosure, a cursor display request sent by the terminal device may be received by the display device, and the cursor corresponding to the terminal device may be generated on the display device. The cursor display request may be generated by the user operating the terminal device and sent to the display device. For example, a cursor display switch is set on the terminal device, and in response to a user's triggering operation on the cursor display switch, the cursor display request can be generated and sent to the display device. In an embodiment, when the cursor is initially generated, the cursor can be displayed in a designated area of the display screen of the display device, such as a set area in the upper left corner of the display screen, a set area in the lower right corner of the display screen, etc., which is not specifically limited by the embodiments of the present disclosure.
In an embodiment, a shape of the cursor may be an arrow, a circle, etc., or may be another shape such as a finger, a triangle, etc., and may also be customized according to the user requirements. In an embodiment, referring to FIG. 4, the generated cursors 401 of the multiple terminal devices can be displayed differently. For example, for different terminal devices, the colors of the corresponding cursors 401 on the display devices are different; or, for different terminal devices, the shapes of the corresponding cursors 401 on the display devices are different; or, for different terminal devices, the corresponding cursors 401 on the display devices include different ID numbers. For example, the cursor 401 includes an ID number corresponding to the terminal device: the cursor 401 is numbered in the form of numbers, and the resulting number is added to the lower right corner of the cursor 401 to distinguish the cursors 401 belonging to different terminal devices, which is not specifically limited by the embodiments of the present disclosure. In some embodiments of the present disclosure, authority levels may be set for different terminal devices, and then an operation authority that the user can exercise on the display device through the terminal device may be determined according to the authority level. In an embodiment, the authority levels can be classified according to the personnel corresponding to the terminal devices, and the personnel can be specifically classified into a moderator, an advanced participant, and an ordinary participant. The authority levels can be classified as follows: the moderator has all authorities, the advanced participant has a control authority, and the ordinary participant only has a cursor authority. In some embodiments of the present disclosure, the classification of the authority levels can be implemented according to rules set by the processor of the display device. For example, a user of the terminal device that first accesses the display device is regarded as the moderator, users of the second to Nth terminal devices that access the display device are regarded as the advanced participants, and users of terminal devices that access the display device after the Nth terminal device are regarded as the ordinary participants. In other embodiments, the classification of the authority levels can be set by the user, that is, the user sets the classes of the personnel corresponding to individual terminal devices, and gives corresponding authority levels to different persons. Reference may be made to the table below for the specific classification of the authority levels and the corresponding authorities.

Authority Class         Cursor    Page Turn    Annotation    Esc    Setting
Moderator                 ✓           ✓            ✓          ✓        ✓
Advanced Participant      ✓           ✓            ✓
Ordinary Participant      ✓

That is, the moderator, who has all the authorities, can use the terminal device to control the cursor 401 corresponding to the terminal device to perform operations such as page turn, annotation, exiting, and authority setting on the display device. The advanced participant, who has the control authority, can perform operations such as page turn and annotation on the display device through the cursor 401 corresponding to the terminal device of the advanced participant. The ordinary participant, who has only the cursor authority, can control the display position of the corresponding cursor 401 on the display screen of the display device through the terminal device.
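For illustration only, the authority table above can be represented as a simple mapping from authority class to permitted operations; the names used here are assumptions for this example, not the disclosure's actual data model.

```python
AUTHORITIES = {
    # Mirrors the authority table above; keys are authority classes.
    "moderator": {"cursor", "page_turn", "annotation", "esc", "setting"},
    "advanced_participant": {"cursor", "page_turn", "annotation"},
    "ordinary_participant": {"cursor"},
}

def is_authorized(authority_class: str, operation: str) -> bool:
    """Return True if a terminal device with the given authority class
    may perform the requested operation on the display device."""
    return operation in AUTHORITIES.get(authority_class, set())

assert is_authorized("moderator", "setting")
assert not is_authorized("ordinary_participant", "page_turn")
```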
In some embodiments of the present disclosure, the page turn operation may be to switch the content currently displayed on the display screen, the annotation operation may be to add partial content to the currently displayed content, the exiting operation may be to turn off the display of the currently displayed content by the display device, and the authority setting operation may be to change the authority level of each terminal device. In some embodiments of the present disclosure, referring to FIG. 4, after the communication connection is established, a terminal identifier 402 in a one-to-one correspondence with a terminal device can be generated and displayed at a preset position of the display screen. The preset position can be the upper right corner of the display screen, may also be right above the display screen, and may also be customized according to the user requirements, which is not specifically limited by the embodiments of the present disclosure. In some embodiments of the present disclosure, the terminal identifier 402 may include attribute information of the terminal device. For example, the terminal identifier 402 includes avatar information set by the user corresponding to the terminal device so as to distinguish the different terminal devices represented by the respective terminal identifiers 402. In some embodiments of the present disclosure, voice interaction can also be performed between the display device and the terminal device. The display device can receive and play voice information sent by the terminal device. When playing the voice information sent by the terminal device, the display device can display the terminal identifier 402 corresponding to the current terminal device in a differentiated manner. For example, a speaker logo is set in the lower right corner of the terminal identifier 402 corresponding to the terminal device, or a frame is set on the periphery of the terminal identifier 402, or the display of the terminal identifier 402 corresponding to the terminal device can be customized according to the user requirements, which is not specifically limited by the embodiments of the present disclosure. In some embodiments of the present disclosure, the display device can receive and play the voice information sent by the terminal device, and when playing the voice information sent by the terminal device, the display device can generate a play mark on the cursor corresponding to the current terminal device, so that other users can be aware of the source of the voice information, thereby improving the user experience. In the step S330, the cursor control instruction sent by the terminal device is received, and the display position of the cursor on the display screen is controlled according to the control instruction. In some embodiments of the present disclosure, the user can send the control instruction for the cursor 401 to the processor of the display device through the terminal device, and the processor can control the display position of the cursor 401 on the display screen according to the control instruction. In some embodiments of the present disclosure, the terminal device may be a device with an interactive control function, such as a mouse, a notebook computer, a mobile phone, etc., and the terminal device may include a gyroscope.
The control instruction includes data information of the terminal device collected by the gyroscope; the terminal device sends the data information to the display device, and the display device controls the display position of the cursor 401 on the display screen according to the data information. Specifically, the gyroscope can obtain the accelerations of the terminal device along the three axes of the x, y, and z directions in a coordinate system, and a sampling frequency is set. When the terminal device moves, all three accelerations may change. First, the value of the acceleration has an initial value, that is, the gravitational acceleration g. An acceleration a′ decomposed on a plane perpendicular to the direction of the gravitational acceleration is used as the acceleration of the cursor movement. Due to the error generated by the gyroscope, data filtering can be performed on the acceleration, and then time integration of the acceleration a′ can be used to obtain a movement distance; finally, the movement distance can be normalized to be converted into a number of movement pixels. Then, the number of movement pixels, that is, the movement data, is transmitted to the processor of the display device, and the processor of the display device controls the display position of the cursor according to the movement data, as sketched below. In other embodiments of the present disclosure, the terminal device has a display function, and the content displayed on the terminal device is synchronized with the content displayed on the display device. After the communication connection between the terminal device and the display device is established, the user can move the cursor corresponding to the terminal device by means of the terminal device, thereby moving the display position of that cursor on the display device. In some embodiments of the present disclosure, the present disclosure also provides an interaction method between a display device 510 and a terminal device 520. Referring to FIG. 5, multiple display devices 510 may be included, and the multiple display devices 510 display synchronously; that is, the contents displayed on the multiple display devices 510 are exactly the same, and the contents are synchronized, and each display device 510 can complete the interaction with the terminal device 520. For example, one display device 510 and multiple terminal devices 520 are included at a location A, one display device 510 and one terminal device 520 are included at a location B, and the displayed contents of the two display devices 510 are identical and synchronized. In this case, after the communication connections between the multiple terminal devices 520 at the location A and the display device 510 at the location A are established, the multiple terminal devices 520 at the location A can control the displayed content of the display device 510 at the location A. Since the displayed contents of the two display devices 510 are completely synchronized, the displayed content of the display device 510 at the location B can also be controlled. In other embodiments of the present disclosure, the displayed contents of the multiple display devices may be partially synchronized.
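For illustration only, the following is a minimal sketch of the gravity-removal and double-integration scheme described above for converting acceleration samples into movement pixels. The sampling frequency, the filter constant, and the pixels-per-meter scale factor are assumed values, and the exponential low-pass filter stands in for whatever data filtering a real implementation would use.

```python
import numpy as np

def accel_to_pixels(samples, fs=100.0, alpha=0.9, pixels_per_meter=4000.0):
    """Convert raw three-axis acceleration samples (m/s^2) into a number
    of movement pixels: filter the data, remove the gravity component,
    keep the component a' in the plane perpendicular to gravity, then
    double-integrate over time and normalize to pixels."""
    samples = np.asarray(samples, dtype=float)   # shape (N, 3)
    dt = 1.0 / fs
    gravity = samples[0].copy()                  # initial value is g
    velocity = np.zeros(3)
    displacement = np.zeros(3)
    for a in samples[1:]:
        # The exponential low-pass filter doubles as the data-filtering
        # step and as a running estimate of the gravity vector.
        gravity = alpha * gravity + (1.0 - alpha) * a
        linear = a - gravity
        g_hat = gravity / np.linalg.norm(gravity)
        a_prime = linear - np.dot(linear, g_hat) * g_hat  # in-plane part
        velocity += a_prime * dt                 # first time integration
        displacement += velocity * dt            # second time integration
    # Normalize the movement distance into movement pixels.
    return np.rint(displacement * pixels_per_meter).astype(int)
```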
For example, the multiple display devices 510 include display windows corresponding to the same application, the displayed contents in the display windows are completely synchronized, and the contents outside the display windows may be displayed asynchronously or synchronously, which is not specifically limited by the embodiments of the present disclosure. In some embodiments of the present disclosure, the cursors corresponding to the same terminal device 520 displayed on the multiple different display devices can be regarded as the same cursor, and when the multiple display devices 510 have all executed the interaction method between the display device and the terminal device, the contents displayed on the multiple different display devices can be completely consistent, and the number of cursors displayed on each display device is the total number of terminal devices coupled to the display devices 510. For example, if the number of terminal devices 520 accessing the display device 510 at the location A is three and the number of terminal devices 520 accessing the display device 510 at the location B is five, the display device 510 at the location A and the display device 510 at the location B display the same number of cursors, and both display eight cursors. If the number of terminal devices 520 accessing the display device 510 at the location A is two and the number of terminal devices 520 accessing the display device 510 at the location B is seven, the display device 510 at the location A and the display device 510 at the location B display the same number of cursors, and both display nine cursors. The number of terminal devices that can access each display device 510 can be customized according to the user requirements, which is not specifically limited by the embodiments of the present disclosure. In some embodiments of the present disclosure, the interaction between the display device 510 and the terminal device 520 located in the same area has been described in detail above, and therefore will not be repeated here. In some embodiments of the present disclosure, as shown in FIG. 5, the multiple display devices 510 may synchronize the displayed contents through a cloud server 530. Specifically, the multiple display devices 510 are coupled to the same cloud server 530, and the terminal device can also control the display position of the corresponding cursor on the display device 510 through the cloud server 530. In some embodiments of the present disclosure, referring to FIG. 6, the solution of the present disclosure is described in detail through a specific embodiment. Specifically, the display device 510 enables the multi-device access function, and then the processor 511 of the display device 510 generates the access address and displays it in the form of the two-dimensional code. The terminal device 520 scans the two-dimensional code and sends the access request to establish the communication connection with the display device 510, and at this time, the cursor in the one-to-one correspondence with the terminal device 520 can be displayed on the display device 510. Then, the processor 511 classifies the authority levels, the gyroscope of the terminal device 520 detects the data information of the terminal device 520, and the data information is sent to the processor 511. The processor 511 obtains the movement data and determines the position of the cursor after movement according to the data information, and finally, the display device 510 displays the cursor at the designated position.
There may be multiple terminal devices 520, such as three, four, or more, etc., which is not specifically limited by the embodiments of the present disclosure. In addition, the present disclosure also provides an interaction method between a display device and a terminal device, as shown in FIG. 7, which can be executed by the terminal device and specifically may include the following steps:
in step S710, an access address of the display device is obtained, and an access request is generated according to the access address and sent to the display device to establish a communication connection between the terminal device and the display device; and
in step S720, in response to a user's control operation on a cursor in a one-to-one correspondence with the terminal device and displayed on a display screen of the display device, a control instruction is generated and sent to the display device, so that a display position of the cursor on the display screen can be controlled by the display device according to the control instruction.
In the embodiments of the present disclosure, the steps performed by the terminal device have already been described in detail when the steps performed by the display device were described, and thus are not repeated here; a terminal-side sketch is provided below. In addition, the present disclosure also provides an interaction method between a display device and a terminal device. Referring to FIG. 8, the method may include the following steps:
in step S810, in response to a user's enabling operation on the multi-device access function, the display device generates an access address of the display device;
in step S820, the terminal device obtains the access address of the display device, generates an access request according to the access address, and sends it to the display device to establish a communication connection between the terminal device and the display device;
in step S830, the display device generates multiple cursors in a one-to-one correspondence with individual terminal devices, and displays the cursors on a display screen of the display device;
in step S840, in response to a user's control operation on a cursor in a one-to-one correspondence with the terminal device and displayed on the display screen of the display device, the terminal device generates a control instruction and sends it to the display device; and
in step S850, the display device receives the cursor control instruction sent by the terminal device, and controls a display position of the cursor on the display screen according to the control instruction.
The specific content of each of the above steps has been described in detail above and, therefore, will not be repeated here. In summary, in the embodiments of the present disclosure, the display device can be accessed by multiple terminals at the same time and generates the cursor corresponding to each terminal device, and the user can control the display position of the cursor on the display screen of the display device through the terminal device. The terminal device is used to control the cursor position to complete the interaction, the indication is clear, and there are multiple cursors at the same time, which is convenient for interaction and improves the user experience. In addition, it should be noted that the above-mentioned drawings are only schematic illustrations of the processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting.
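For illustration only, the terminal-device side of steps S710 and S720 might be sketched as follows; the transport object and the wire format are assumptions for this example, not the actual protocol of the disclosure.

```python
import json

class TerminalDeviceClient:
    """Terminal-device side of steps S710 and S720. The transport object
    is assumed to expose send(); its wire format is illustrative only."""

    def __init__(self, transport):
        self.transport = transport

    def join(self, access_address: str, device_id: str):
        # Step S710: generate an access request from the decoded access
        # address (e.g., the payload of a scanned two-dimensional code).
        self.transport.send(json.dumps({
            "type": "access_request",
            "access_address": access_address,
            "terminal_device_id": device_id,
        }))

    def send_cursor_move(self, device_id: str, dx_px: int, dy_px: int):
        # Step S720: in response to the user's control operation, send a
        # control instruction so the display device can move the cursor.
        self.transport.send(json.dumps({
            "type": "control_instruction",
            "terminal_device_id": device_id,
            "movement_pixels": [dx_px, dy_px],
        }))
```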
It is understood that the processes shown in the above drawings do not indicate or limit the chronological order of these processes. In addition, it is also understood that these processes may be performed synchronously or asynchronously in multiple modules. Those skilled in the art will understand that various aspects of the present disclosure may be implemented as a system, method, or program product. Therefore, various aspects of the present disclosure may be embodied in the following forms: a complete hardware implementation, a complete software implementation (including firmware, microcode, etc.), or a combination of hardware and software, which may be collectively referred to herein as a ‘circuit’, ‘module’, or ‘system’. The embodiments of the present disclosure also provide a computer readable storage medium, and a program product capable of implementing the above-mentioned method of the present specification is stored in the computer readable storage medium. In some possible embodiments, aspects of the present disclosure may also be implemented in the form of a program product, which includes program code. When the program product runs on a terminal device, the program code is used to make the terminal device perform the steps according to the various exemplary embodiments of the present disclosure described in the above-mentioned “exemplary method” section of this specification. It should be noted that the computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination thereof. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer readable storage medium include: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, the computer readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device. In the present disclosure, the computer readable signal medium may include a data signal in baseband or propagated as part of a carrier wave, which carries computer readable program codes. Such a propagated data signal may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing. The computer readable signal medium may also be any computer readable medium other than a computer readable storage medium, and the computer readable medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained in the computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the foregoing.
In addition, the program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, which include object-oriented programming languages, such as Java, C++, etc., and further include conventional procedural programming languages, such as ‘C’ or similar programming languages. The program code may be executed entirely or partly on the user computing device, may be executed as an independent software package, may be executed partly on the user computing device and partly on a remote computing device, or may be executed entirely on the remote computing device or server. In cases involving remote computing devices, the remote computing devices may be coupled to the user computing device via any kind of network, such as a local area network (LAN) or a wide area network (WAN), or they may be coupled to external computing devices, for example, via the Internet by use of an Internet service provider. Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the present disclosure disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure, which are in accordance with the general principles of the present disclosure and include common general knowledge or conventional technical means in the art that are not disclosed in the present disclosure. The specification and embodiments are illustrative, and the real scope and spirit of the present disclosure are defined by the appended claims. It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the drawings, and various modifications and changes can be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims. | 43,044 |
11861258 | DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS Various embodiments of the present disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the disclosure are shown. Indeed, the disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. The term “or” is used herein in both the alternative and conjunctive sense, unless otherwise indicated. The terms “illustrative” and “exemplary” are used to indicate examples, with no indication of quality level. Like numbers refer to like elements throughout. Overview Various embodiments of the disclosure generally relate to sharing interface annotations among participating devices within a group-based communication system. Members of a group-based communication system routinely use group-based audio and/or video connections as well as in-person meetings to collaborate. During such meetings, it is often useful to share screens among participants such that each participant can view relevant information, e.g., presentations, papers, graphics, source code, or the like, for discussion. As meetings proceed and discussions develop, participants may desire to point to, highlight, or otherwise emphasize portions of the presentation, paper, graphic, etc. However, participants other than the host sharing his or her screen are conventionally unable to do so. Current attempts to work around, rather than technologically address, this problem, such as passing host sharing responsibility among participants, have proven to be inefficient, error prone, and frustrating. Provided herein are methods and systems for effectively and efficiently sharing interface annotations among participating devices in a group-based communication system. While a select interface (the “shared interface”) is being shared among participating devices such that each participating device can view the intended display, participating devices can add one or more annotations (otherwise referred to herein as “interface annotations”) to the shared interface. The one or more interface annotations are then shared among the other participating devices such that the participants in the discussion can view and quickly understand any point the annotator is attempting to convey. Various annotations are possible and can be modified to remain on the shared interface for a selected period of time. For instance, the interface annotations may remain on the shared interface for a period of seconds or minutes and then be removed without user intervention. Beneficially, this may allow the presentation of the shared document to continue without the interface annotation carrying over to unrelated discussions or becoming distracting for subsequent discussion. The interface annotations can have an ephemeral nature in some embodiments, while in others, the interface annotations may be built upon with other interface annotations, thereby allowing for collaboration of ideas by various participants. Further, in some embodiments, the interface annotations may be made permanent and/or recorded for future replay of the discussion. For instance, in some embodiments, such as when video call transcription is enabled, the interface annotation may be reproduced upon playback of a transcribed video.
The interface annotations may be reproduced at the point in time the original interface annotations were shared and remain on the shared interface for the period of time for which the original interface annotations were displayed. In some embodiments, the interface annotations may be tagged or otherwise designated as being created by certain participating devices. For instance, an interface annotation may be a distinct color to indicate that a certain participating device created the respective interface annotation. In some embodiments, the interface annotation may have a tag that identifies the participating device which created the interface annotation (e.g., the tag may include ASCII text, a pointer, a memory address, an emoji, an avatar, etc. that identifies the user who, via the participating device, created the interface annotation). In some embodiments, one or more users may have the ability to disable the ability of certain participating devices to create interface annotations. The ability of individual participating devices to create interface annotations may be disabled and may subsequently be re-enabled. In some embodiments, custom emojis may be stamped on the interface. In some embodiments, an interface annotation may be smoothed to result in a cleaner, more aesthetically pleasing interface annotation. In some embodiments, the interface annotations are created on a designated layer of the shared interface. The designated layer can then be transmitted along with the rest of the shared interface to participating devices or can be selectively withheld from being transmitted with the rest of the shared interface. For instance, in some embodiments, sharing a shared interface may result in a double rendering of the interface annotation. The resulting image may appear blurred, offset, or otherwise insufficient. By selectively withholding the designated layer that the interface annotation is associated with, the inventors have found that double rendering of the interface annotation can be avoided, as sketched below. In addition, having the interface annotation associated with a designated layer of the shared interface can allow for identification of the interface annotation and association of the interface annotation with a select participating device. In such embodiments, other participating devices can quickly recognize which participating device created the interface annotation. Such information can help further develop and advance the discussion. The group-based communication interface provides systems and methods for allowing participants in a discussion to interact in the discussion visually, thereby allowing the participants to more clearly and efficiently explain their point of view or point of discussion. The group-based communication interface allows an interface to be shared among participating devices and allows the participating devices to annotate the interface simultaneously, such that sharing does not need to be disconnected and reconnected to transfer control of annotations among devices, and the viewed documents do not need to be separately transmitted among devices. Further, visually illustrating a point a user is trying to convey may be more efficient than trying to verbally explain the point; thus, the group-based communication interface may allow for shorter discussions.
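For illustration only, the designated-layer approach described above can be sketched as follows: the local display composites all layers, including the annotation layer, while the outgoing shared frame omits the designated layer, so that the interface annotation, which is transmitted separately as interface annotation instructions, is not rendered twice. The layer names and the sparse pixel-buffer model are assumptions for this example.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Pixel = Tuple[int, int, int]  # simple RGB triple

@dataclass
class Layer:
    name: str
    pixels: Dict[Tuple[int, int], Pixel]  # sparse (x, y) -> color buffer

def composite(layers: List[Layer], exclude: frozenset = frozenset()) -> Dict[Tuple[int, int], Pixel]:
    """Flatten layers bottom-to-top, skipping any excluded layer names."""
    frame: Dict[Tuple[int, int], Pixel] = {}
    for layer in layers:
        if layer.name in exclude:
            continue
        frame.update(layer.pixels)  # later layers draw over earlier ones
    return frame

ui_layers = [
    Layer("document", {(0, 0): (255, 255, 255)}),
    Layer("annotations", {(0, 0): (255, 0, 0)}),  # designated annotation layer
]

# The local display renders everything, including the annotation layer...
local_frame = composite(ui_layers)
# ...but the outgoing shared interface withholds the designated layer,
# avoiding the double rendering noted above.
shared_frame = composite(ui_layers, exclude=frozenset({"annotations"}))
```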
The group-based communication interface thereby reduces the system resources consumed, improving the life of the devices, and provides an efficient and effective method to allow the creation and display of annotations and documents among devices. The group-based communication interface thus provides an interface for group-based communications rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks. Definitions As used herein, the terms “data,” “content,” “digital content,” “digital content object,” “information,” and similar terms may be used interchangeably to refer to data capable of being transmitted, received, and/or stored in accordance with embodiments of the present disclosure. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present disclosure. Further, where a computing device is described herein to receive data from another computing device, it will be appreciated that the data may be received directly from another computing device or may be received indirectly via one or more intermediary computing devices, such as, for example, one or more servers, relays, routers, network access points, base stations, hosts, and/or the like, sometimes referred to herein as a “network.” Similarly, where a computing device is described herein to send data to another computing device, it will be appreciated that the data may be sent directly to another computing device or may be sent indirectly via one or more intermediary computing devices, such as, for example, one or more servers, relays, routers, network access points, base stations, hosts, and/or the like. The term “client device” refers to computer hardware and/or software that is configured to access a service made available by a server. The server is often (but not always) on another computer system, in which case the client device accesses the service by way of a network. Client devices may include, without limitation, smart phones, tablet computers, laptop computers, wearables, personal computers, enterprise computers, and the like. “Group-based” is used herein to refer to a system, channel, message, or virtual environment that has security sufficient such that it is accessible only to a defined group of users. The group may be defined by common access credentials such as those of an organization or commercial enterprise. Access may further be facilitated by a validated request to join or an invitation to join transmitted by one group member user to another non-member user. Group identifiers (defined below) are used to associate data, information, messages, etc., with specific groups. The term “group-based communication system” refers to a communications software platform and associated hardware that is configured to support and maintain a plurality of group-based communication interfaces and all associated functionality. Group-based communication system users are organized into organization groups (e.g., employees of different companies may be separate organization groups) and each group interacts with the system via a respective group-based communication interface. For example, the group-based communication system might support, among others, a Slack Corporation group-based communication interface and an ACME Corporation group-based communication interface. Example group-based communication systems comprise supporting servers, client devices, and external applications.
The supporting servers comprise, among other components, a messaging communication server, a validation server, a user account database, an analytical engine, and the like. The term “group-based communication interface” refers to a virtual communications environment configured to facilitate user interaction with a group-based communications system. Each group-based communication interface is accessible and viewable to a select group of users, such as a group of employees of a business or organization (e.g., the Slack Corp. interface would be accessible and viewable to the Slack employees; however, the ACME Corporation group-based communication interface would not be accessible and viewable to Slack employees). The group-based communication interface includes a plurality of group-based communication channels (e.g., a marketing channel, sales channel, accounting channel, etc.), which are defined below. The term “group-based communication channel” refers to a virtual communications environment or feed that is configured to display messaging communications posted by channel members (e.g., validated users accessing the environment using client devices) that are viewable only to the members of the group. The format of the group-based communication channel may appear different to different members of the group-based communication channel; however, the content of the group-based communication channel (i.e., messaging communications), in some embodiments, will be displayed to each member of the group-based communication channel. For instance, a common set of group-based messaging communications may be displayed to each member of the respective group-based communication channel such that the content of the group-based communication channel (i.e., messaging communications) does not vary per member of the group-based communication channel. The term “user” should be understood to refer to an individual, group of individuals, business, organization, and the like; the users referred to herein are accessing a group-based communication or messaging system using client devices. The terms “user profile,” “user account,” and “user account details” refer to information associated with a user, including, for example, a user identifier, one or more group-based communication channel identifiers associated with group-based communication channels that the user has been granted access to, one or more group identifiers for groups with which the user is associated, an indication as to whether the user is an owner of any group-based communication channels, an indication as to whether the user has any group-based communication channel restrictions, a plurality of messages, a plurality of emojis, a plurality of conversations, a plurality of conversation topics, an avatar, an email address, a real name (e.g., John Doe), a username (e.g., j doe), a password, a time zone, a status, and the like. The user account details can include a subset designation of user credentials, such as, for example, login information for the user including the user's username and password. The terms “group-based communication channel identifier” or “channel identifier” refer to one or more items of data by which a group-based communication channel may be identified. For example, a group-based communication channel identifier may comprise ASCII text, a pointer, a memory address, and the like.
The terms “group identifier” or “team identifier” refer to one or more items of data by which a group within a group-based communication system may be identified. For example, a group identifier may comprise ASCII text, a pointer, a memory address, and the like. As used herein, the terms “messaging communication” and “message” refer to any electronically generated digital content object provided by a user using a client device and that is configured for display within a group-based communication channel. Message communications may include any text, image, video, audio or combination thereof provided by a user (using a client device). For instance, the user may provide a messaging communication that includes text as well as an image and a video within the messaging communication as message contents. In such a case, the text, image, and video would comprise the messaging communication or digital content object. Each message sent or posted to a group-based communication channel of the group-based communication system includes metadata comprising the following: a sending user identifier, a message identifier, message contents, a group identifier, and a group-based communication channel identifier. Each of the foregoing identifiers may comprise ASCII text, a pointer, a memory address, and the like. A “sending user identifier” is associated with a collection of messages and/or interface annotations that are sent by a particular user (via a client device associated with the particular user). These messages and/or interface annotations may be analyzed to determine context regarding the user (e.g., the user's expertise or interest in a topic may be determined based on the frequency of mention of the topic or key-words associated with the topic within such messages and/or interface annotations). Group-based communication system users are organized into organization groups (e.g., employees of each company may be a separate organization group) and each organization group may have one or more group-based communication channels (explained below) to which users may be assigned or which the users may join (e.g., group-based communication channels may represent departments, geographic locations such as offices, product lines, user interests, topics, issues, and/or the like). A group identifier may be used to facilitate access control for a message (e.g., access to the message, such as having the message return as part of search results in response to a search query, may be restricted to those users having the group identifier associated with their user profile). The group identifier may be used to determine context for the message (e.g., a description of the group, such as the name of an organization and/or a brief description of the organization, may be associated with the group identifier). Group-based communication system users may join group-based communication channels. Some group-based communication channels may be globally accessible to those users having a particular organizational group identifier associated with their user profile (i.e., users who are members of the organization). Access to some group-based communication channels may be restricted to members of specified groups, whereby the group-based communication channels are accessible to those users having a particular group identifier associated with their user profile. 
The group-based communication channel identifier may be used to facilitate access control for a message (e.g., access to the message, such as having the message return as part of search results in response to a search query, may be restricted to those users having the group-based communication channel identifier associated with their user profile, or who have the ability to join the group-based communication channel). The group-based communication channel identifier may be used to determine context for the message (e.g., a description of the group-based communication channel, such as a description of a project discussed in the group-based communication channel, may be associated with the group-based communication channel identifier). The term “private group-based communication channel” refers to a group-based communication channel with restricted access such that it is not generally accessible and/or searchable by other members of the group-based communication system. For example, only those users or administrators who have knowledge of and permission to access (e.g., a group-based communication channel identifier for the private group-based communication channel is associated with their user profile after the user has been validated/authenticated) the private group-based communication channel may view content of the private group-based communication channel. The term “interface annotation” refers to a visual marking applied virtually to a graphic user interface, such as a group-based communication interface. The visual marking may be used by participating users of a group-based communication interface to highlight, emphasize, modify, illustrate, or otherwise bring to the attention of other participating users some portion of the graphic user interface. In the group-based communication interface of the present disclosure, the interface annotations are applied to a designated layer of the group-based communication interface and can be shared among participating devices on a shared interface. A “shared interface” refers to a portion of a group-based communication interface that is shared among participating devices of a group-based communication system. The shared interface is a portion of the group-based communication interface configured for display on the interface sharing device that can then be shared among participating devices. The contents of the shared interface may vary over time as the interface sharing device modifies or changes the portion of the group-based communication interface configured for display on the interface sharing device. The interface may vary among participating devices as to certain qualities or aspects of the interface; however, the information conveyed by the interface will generally be rendered on a display of each participating device. That is, the protocol for sending the shared interface to each participating device may vary among participating devices, as may the format, size, etc. of the resulting interface; however, the content will generally be consistent among participating devices. In some embodiments, interface annotations may be designated for certain participating devices among a plurality of participating devices. In such embodiments, except for the select interface annotations affected, the content will generally be consistent among participating devices.
The term “participating device” or “participant device” refers to a client device configured and authenticated for communication with the group-based communication system and interaction with a group-based communication interface. When an interface is shared among devices participating in a discussion/call/meeting, the client device from which the interface is shared may be referred to as the “interface sharing device” and each of the client devices participating in the discussion/call/meeting and receiving the shared interface may be referred to as the participating device or participant device. A “participating device identifier” refers to one or more items of data by which a participating device within a group-based communication system may be identified. For example, a participating device identifier may comprise ASCII text, a pointer, a memory address, and the like. The term “annotating device” refers to the participating device that creates the interface annotation. That is, the annotating device is defined with respect to the particular interface annotation. In general, each of the participating devices can be an annotating device. The term “display input data” refers to the items of data associated with the interface annotation that can be used to create interface annotation instructions for rendering of the interface annotation on one or more interfaces of the group-based communication system. The display input data may include coordinates for markings or portions of the associated interface annotation and/or participating device identifiers. “Interface annotation instructions” refers to items of data that can be transmitted to other client devices and/or to a back-end server, and/or stored on a client device and/or back-end server, and that provide guidelines or directions and/or relevant identifiers for rendering the associated interface annotation. In some embodiments, the interface annotation instructions may comprise coordinates for virtual markings or portions of the associated interface annotation and participating device identifiers. As used herein, “intended set of display input data” refers to display input data that is generated based on the received display input data. Interface annotation instructions are then based on the intended set of display input data in place of or in addition to the received display input data. As used herein, a “higher quality, clarity, or aesthetic appearance” (e.g., cleaner or smoother lines, shapes, or images) refers to the improvement that may result from the intended set of display input data. The intended set of display input data may result in smoother lines, cleaner shapes, etc. as compared to the interface annotations that would otherwise result from the display input data alone. The circuitry described herein may include instructions that relate a set of display input data, or display input data falling within defined parameters, with an intended set of display input data, such that when the set of display input data is received, the intended set of display input data may be generated and used to generate the interface annotation instructions. In some embodiments, an intended set of display input data may be generated when an algorithm is performed based on the received display input data and determines that an intended set of display input data is needed. For instance, an algorithm may determine that the display input data would result in a misshapen circle or jagged edge, as sketched below.
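For illustration only, the misshapen-circle case mentioned above, and resolved in the passage that follows, might be handled with a simple centroid-based circle fit; the tolerance value and the fitting method are assumptions for this example rather than the algorithm of the disclosure.

```python
import math

def idealize_circle(points, tolerance=0.15, n_out=64):
    """Fit a circle to raw annotation coordinates and, if the stroke is
    nearly circular, return evenly spaced points on a symmetrical circle;
    otherwise return None so the raw display input data is used as-is."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    radius = sum(dists) / len(dists)
    if radius == 0:
        return None
    # A small relative spread of point-to-center distances suggests a circle.
    spread = max(abs(d - radius) for d in dists) / radius
    if spread > tolerance:
        return None  # too misshapen to confidently call it a circle
    return [(cx + radius * math.cos(2 * math.pi * i / n_out),
             cy + radius * math.sin(2 * math.pi * i / n_out))
            for i in range(n_out)]
```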
The circuitry disclosed herein would then generate an intended set of display input data that would result in interface annotations for a symmetrical circle or smooth edge. In some embodiments, machine learning models may be used to generate the interface annotations. The term “machine learning model” refers to a computer application that employs an algorithm to build a model from sample inputs. The machine learning model can make predictions or decisions based on input data. For instance, programmatically expected interface annotations may be generated based on display input data. For example, display input data may be received, and using machine learning models, predicted interface annotations may be generated. “Predicted interface annotations” refers to programmatically generated interface annotations with an expected likelihood that the associated display input data will result, or is intended to result, in the predicted interface annotations. For instance, as display input data that includes coordinates for the start of a circle is received, machine learning models may determine that a circle is intended and complete the display input data such that a circle interface annotation is generated without receiving the complete set of display input data for a circle. The term “time of receipt” refers to timestamps defined by a computer, server, or communications network. A timestamp is a sequence of characters or encoded information identifying when a certain event (e.g., an interface annotation) occurred, usually giving the date and time of day, sometimes accurate to a small fraction of a second. For example, display input data may comprise a timestamp that tells when an associated interface annotation was created or last modified. Example System Architecture Methods, apparatuses, and computer program products of the present disclosure may be embodied by any of a variety of devices. For example, the method, apparatus, and computer program product of an example embodiment may be embodied by a networked device (e.g., an enterprise platform), such as a server or other network entity, configured to communicate with one or more devices, such as one or more client devices. Additionally or alternatively, the computing device may include fixed computing devices, such as a personal computer or a computer workstation. Still further, example embodiments may be embodied by any of a variety of mobile devices, such as a portable digital assistant (PDA), mobile telephone, smartphone, laptop computer, tablet computer, wearable, or any combination of the aforementioned devices. FIG. 1 illustrates an example computing system 100 within which embodiments of the present disclosure may operate. Users may access a group-based communication system 105 via a communications network 104 using client devices 101A-101N. The group-based communication system 105 may comprise a group-based communication server 106 in communication with at least one group-based communication repository 107. Client devices 101A-101N may interact peer-to-peer or may interact through the group-based communication server 106 and the group-based communication repository 107. Communications network 104 may include any wired or wireless communication network including, for example, a wired or wireless local area network (LAN), personal area network (PAN), metropolitan area network (MAN), wide area network (WAN), or the like, as well as any hardware, software, and/or firmware required to implement it (e.g., network routers).
For example, communications network 104 may include a cellular telephone network, an 802.11, 802.16, 802.20, and/or WiMAX network. Further, the communications network 104 may include a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols. For instance, the networking protocol may be customized to suit the needs of the group-based communication system. In some embodiments, the protocol is a custom protocol of JSON objects sent via a WebSocket channel (an illustrative sketch is provided below). In some embodiments, the protocol is JSON over RPC, JSON over REST/HTTP, and the like. The group-based communication server 106 may be embodied as a computer or computers as known in the art. The group-based communication server 106 may provide for receiving of electronic data from various sources, including but not necessarily limited to the client devices 101A-101N. For example, the group-based communication server 106 may be operable to receive and post or transmit shared interfaces, interface annotations, interface annotation instructions, display input data, etc. provided by the client devices 101A-101N. The group-based communication repository 107 may be embodied as a data storage device such as a Network Attached Storage (NAS) device or devices, or as a separate database server or servers. The group-based communication repository 107 includes information accessed and stored by the group-based communication server 106 to facilitate the operations of the group-based communication system 105. For example, the group-based communication repository 107 may include, without limitation, a plurality of shared interfaces, interface annotations, interface annotation instructions, display input data, etc. organized within the group-based communication repository 107. The client devices 101A-101N may be any computing device as defined above. Electronic data received by the group-based communication server 106 from the client devices 101A-101N may be provided in various forms and via various methods. For example, the client devices 101A-101N may include desktop computers, laptop computers, smartphones, netbooks, tablet computers, wearables, and the like. In embodiments where a client device 101A-101N is a mobile device, such as a smart phone or tablet, the client device 101A-101N may execute an ‘app’ to interact with the group-based communication system 105. Such apps are typically designed to execute on mobile devices, such as tablets or smartphones. For example, an app may be provided that executes on mobile device operating systems such as iOS®, Android®, or Windows®. These platforms typically provide frameworks that allow apps to communicate with one another and with particular hardware and software components of mobile devices. For example, the mobile operating systems named above each provide frameworks for interacting with location services circuitry, wired and wireless network interfaces, user contacts, and other applications. Communication with hardware and software modules executing outside of the app is typically provided via application programming interfaces (APIs) provided by the mobile device operating system. Additionally or alternatively, the client device 101A-101N may interact with the group-based communication system 105 via a web browser.
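For illustration only, the JSON-over-WebSocket variant mentioned above might carry an interface annotation as follows; the endpoint, the field names, and the use of the third-party websockets package are assumptions for this example, not the actual custom protocol of the disclosure.

```python
import asyncio
import json
import websockets  # third-party library: pip install websockets

async def send_annotation(uri, annotation):
    # Connect to a (hypothetical) group-based communication endpoint and
    # send one interface annotation as a JSON object over a WebSocket.
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps(annotation))

annotation = {
    "type": "interface_annotation",
    "participating_device_id": "device-101A",  # illustrative identifiers
    "channel_id": "C123",
    "coordinates": [[10, 12], [11, 14], [13, 15]],
    "ttl_seconds": 30,  # ephemeral display period before removal
}

asyncio.run(send_annotation("wss://example.invalid/rtm", annotation))
```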
As yet another example, the client device101A-101N may include various hardware or firmware designed to interface with the group-based communication system105. In some embodiments of an exemplary group-based communication system105, interface annotation instructions may be sent from a client device101A-101N to a group-based communication system105. In various implementations, the interface annotation instructions may be sent to the group-based communication system105over communications network104directly by a client device101A-101N, the interface annotation instructions may be sent to the group-based communication system105via an intermediary such as an intermediate server or another client device101A-101N, and/or the like. For example, the client device101A-101N may be a desktop, a laptop, a tablet, a smartphone, and/or the like that is executing a client application (e.g., a group-based communication app). In one implementation, the interface annotation instructions may include data such as a participating device identifier, a sending user identifier, a group identifier, a group-based communication channel identifier, annotation coordinates (e.g., text, highlights, underlines, emojis, images, links, or other markings), attachments (e.g., files), annotation hierarchy data (e.g., the interface annotation may be linked to another interface annotation), third party metadata, and/or the like. In one embodiment, the client device101A-101N may provide interface annotation instructions, substantially in the form of a (Secure) Hypertext Transfer Protocol (“HTTP(S)”) POST message including eXtensible Markup Language (“XML”) formatted data. The group-based communication system105comprises at least one group-based communication server106that may create storage interface annotation instructions based upon the display input data to facilitate indexing and storage in a group-based communication repository107. In one implementation, the storage interface annotation instructions may include data such as a participating device identifier, a group identifier, a group-based communication channel identifier, a sending user identifier, topics, responses, annotation coordinates, attachments, annotation hierarchy data, third party metadata, conversation primitive data, and/or the like. For example, the group-based communication server106may provide storage interface annotation instructions, substantially in the form of a HTTP(S) POST message including XML-formatted data. In embodiments, a group identifier as defined above may be associated with the interface annotation. In embodiments, a group-based communication channel identifier as defined above may be associated with the interface annotation. In embodiments, a participating device identifier as defined above may be associated with the interface annotation. In one implementation, the interface annotation instructions may be parsed (e.g., using PHP commands) to determine a participating device identifier of the device from which the interface annotation originated. In embodiments, topics may be associated with the interface annotation. In one implementation, the interface annotation instructions may be parsed (e.g., using PHP commands) to determine topics associated with the interface annotation. In another example, the interface annotation instructions may be analyzed (e.g., by itself, with other interface annotation instructions) or parsed using a machine learning technique, such as topic modeling, to determine topics associated with the interface annotation. 
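The XML-formatted HTTP(S) POST bodies described above can be sketched as follows; the element names are illustrative rather than a defined schema, and although the specification contemplates parsing with PHP commands, Python is used here for consistency with the other sketches:

import xml.etree.ElementTree as ET

def build_instructions_xml():
    # Assemble hypothetical interface annotation instructions as XML.
    root = ET.Element("interface_annotation_instructions")
    ET.SubElement(root, "participating_device_id").text = "D042"
    ET.SubElement(root, "group_id").text = "G7"
    ET.SubElement(root, "channel_id").text = "C0123456"
    ET.SubElement(root, "annotation_coordinates").text = "10,20;11,22;13,25"
    return ET.tostring(root, encoding="unicode")

def parse_originating_device(xml_body):
    # Determine the participating device identifier of the originating device.
    return ET.fromstring(xml_body).findtext("participating_device_id")

body = build_instructions_xml()
assert parse_originating_device(body) == "D042"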
In embodiments, data indicating responses may be associated with the interface annotation. For example, responses to the interface annotation by other users may include reactions (e.g., selection of an emoji associated with the interface annotation, selection of a “like” button associated with the interface annotation), clicking on a hyperlink embedded in the interface annotation, replying to the interface annotation (e.g., adding an interface annotation to the shared interface in response to the interface annotation), downloading a file associated with the interface annotation, sharing the interface annotation from one group-based communication channel to another group-based communication channel, pinning the interface annotation, starring the interface annotation, and/or the like. In one implementation, data regarding responses to the interface annotation by other users may be included with the interface annotation, and the interface annotation may be parsed (e.g., using PHP commands) to determine the responses. In another implementation, data regarding responses to the interface annotation may be retrieved from a database. For example, data regarding responses to the interface annotation may be retrieved via a MySQL database command. For example, data regarding responses to the interface annotation may be used to determine context for the interface annotation (e.g., a social score for the interface annotation from the perspective of some user); a minimal sketch of one such score computation follows this passage. In another example, data regarding responses to the interface annotation may be analyzed to determine context regarding the user (e.g., the user's expertise in a topic may be determined based on the responses to the user's interface annotation regarding the topic). In embodiments, attachments may be included with the interface annotation. If there are attachments, files may be associated with the interface annotation instructions. In one implementation, the interface annotation instructions may be parsed (e.g., using PHP commands) to determine file names of the attachments. For example, file contents may be analyzed to determine context for the interface annotation instructions (e.g., a patent policy document may indicate that the interface annotation instructions are associated with the topic “patents”). In embodiments, third party metadata may be associated with the interface annotation. For example, third party metadata may provide additional context regarding the interface annotation or the user that is specific to a company, group, group-based communication channel, and/or the like. In one implementation, the interface annotation instructions may be parsed (e.g., using PHP commands) to determine third party metadata. For example, third party metadata may indicate whether the user who sent the interface annotation instructions is an authorized representative of the group-based communication interface (e.g., an authorized representative may be authorized by the company to respond to questions in the group-based communication system). In embodiments, a conversation primitive may be associated with the interface annotation instructions. In one implementation, a conversation primitive is an element used to analyze, index, store, and/or the like an interface annotation. For example, the interface annotation (and/or interface annotation instructions) may be analyzed by itself, and may form its own conversation primitive. 
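The following minimal, runnable sketch derives such a social score from stored response data; the table schema and the per-response weights are assumptions, and Python's built-in sqlite3 stands in for the MySQL database mentioned in the example:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (annotation_id TEXT, kind TEXT)")
conn.executemany("INSERT INTO responses VALUES (?, ?)",
                 [("A1", "reaction"), ("A1", "reply"), ("A1", "pin")])

# Hypothetical weights; a deployment could tune these per group or company.
WEIGHTS = {"reaction": 1, "reply": 3, "share": 2, "pin": 4, "star": 2}

def social_score(annotation_id):
    rows = conn.execute(
        "SELECT kind, COUNT(*) FROM responses WHERE annotation_id = ? GROUP BY kind",
        (annotation_id,),
    ).fetchall()
    return sum(WEIGHTS.get(kind, 0) * count for kind, count in rows)

print(social_score("A1"))  # 1 + 3 + 4 = 8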
In another example, the interface annotation instructions may be analyzed along with other interface annotation instructions, and the interface annotations that make up the discussion may form a conversation primitive. In one implementation, the conversation primitive may be determined as the interface annotation instructions, a specified number (e.g., two) of preceding interface annotation instructions and a specified number (e.g., two) of following interface annotation instructions. In another implementation, the conversation primitive may be determined based on analysis of topics discussed in the discussion and other interface annotations (e.g., in the discussion) and/or proximity (e.g., interface annotation send order proximity, interface annotation send time proximity) of these interface annotations. In embodiments, various metadata, determined as described above, and/or the contents of the interface annotation instructions may be used to index the interface annotation (e.g., using the conversation primitive) to facilitate various facets of searching (i.e., search queries that return results from group-based communication repository107). In one implementation, a storage interface annotation may be sent from group-based communication server106to facilitate indexing in group-based communication repository107. In another implementation, metadata associated with the interface annotation may be determined and the interface annotation may be indexed in group-based communication repository107. In one embodiment, the interface annotation may be indexed such that a company's or a group's interface annotations are indexed separately (e.g., in a separate index associated with the group and/or company that is not shared with other groups and/or companies). In one implementation, interface annotations may be indexed at a separate distributed repository (e.g., to facilitate data isolation for security purposes). If there are attachments associated with the interface annotation, file contents of the associated files may be used to index such files in group-based communication repository107to facilitate searching. In one embodiment, the files may be indexed such that a company's or a group's files are indexed at a separate distributed repository. Example Apparatus for Implementing Embodiments of the Present Disclosure The client devices101A-101N and/or group-based communication server106may be embodied by one or more computing systems and include one or more components shown in circuitry200shown inFIG.2. The circuitry200may include a processor202, a memory201, input/output circuitry203, and communications circuitry205. The circuitry200may, in some embodiments, also include group-based communication repository107and group-based communication circuitry204, and in some embodiments, the circuitry200may include shared interface rendering module206and shared interface repository207. The circuitry200may be configured to execute the operations described above with respect toFIG.1and below with respect toFIGS.9-11. Although these components107and201-207are described with respect to functional limitations, it should be understood that the particular implementations necessarily include the use of particular hardware. It should also be understood that certain of these components107and201-207may include similar or common hardware. 
For example, two sets of circuitry may both leverage use of the same processor, network interface, storage medium, or the like to perform their associated functions, such that duplicate hardware is not required for each set of circuitry. The use of the term “circuitry” as used herein with respect to components of the apparatus should therefore be understood to include particular hardware configured to perform the functions associated with the particular circuitry as described herein. The term “circuitry” should be understood broadly to include hardware and, in some embodiments, software for configuring the hardware. For example, in some embodiments, “circuitry” may include processing circuitry, storage media, network interfaces, input/output devices, and the like. In some embodiments, other elements of the circuitry200may provide or supplement the functionality of particular circuitry. For example, the processor202may provide processing functionality, the memory201may provide storage functionality, the communications circuitry205may provide network interface functionality, and the like. In some embodiments, the processor202(and/or co-processor or any other processing circuitry assisting or otherwise associated with the processor) may be in communication with the memory201via a bus for passing information among components of the apparatus. The memory201may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory may be an electronic storage device (e.g., a computer readable storage medium). The memory201may be configured to store information, data, content, applications, instructions, or the like, for enabling the apparatus to carry out various functions in accordance with example embodiments of the present disclosure. The processor202may be embodied in a number of different ways and may, for example, include one or more processing devices configured to perform independently. Additionally or alternatively, the processor may include one or more processors configured in tandem via a bus to enable independent execution of instructions, pipelining, and/or multithreading. The use of the term “processing circuitry” may be understood to include a single core processor, a multi-core processor, multiple processors internal to the apparatus, and/or remote or “cloud” processors. In an example embodiment, the processor202may be configured to execute instructions stored in the memory201or otherwise accessible to the processor. Alternatively, or additionally, the processor may be configured to execute hard-coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processor may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present disclosure while configured accordingly. Alternatively, as another example, when the processor is embodied as an executor of software instructions, the instructions may specifically configure the processor to perform the algorithms and/or operations described herein when the instructions are executed. In some embodiments, the circuitry200may include input/output circuitry203that may, in turn, be in communication with processor202to provide output to the user and, in some embodiments, to receive an indication of a user input. 
The input/output circuitry203may comprise a user interface and may include a display and may comprise a web user interface, a mobile application, a client device, a kiosk, or the like. In some embodiments, the input/output circuitry203may also include a keyboard, a mouse, a joystick, a touch screen, touch areas, soft keys, a microphone, a speaker, or other input/output mechanisms. The processor and/or user interface circuitry comprising the processor may be configured to control one or more functions of one or more user interface elements through computer program instructions (e.g., software and/or firmware) stored on a memory accessible to the processor (e.g., memory201, and/or the like). The communications circuitry205may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data from/to a network and/or any other device, circuitry, or module in communication with the circuitry200. In this regard, the communications circuitry205may include, for example, a network interface for enabling communications with a wired or wireless communication network. For example, the communications circuitry205may include one or more network interface cards, antennae, buses, switches, routers, modems, and supporting hardware and/or software, or any other device suitable for enabling communications via a network. Additionally or alternatively, the communication interface may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). The group-based communication circuitry204includes hardware configured to support a group-based communication system. The group-based communication circuitry204may utilize processing circuitry, such as the processor202, to perform these actions. The group-based communication circuitry204may send and/or receive data from group-based communication repository107. In some implementations, the sent and/or received data may be of enterprise-based digital content objects organized among a plurality of group-based communication channels. It should also be appreciated that, in some embodiments, the group-based communication circuitry204may include a separate processor, specially configured field programmable gate array (FPGA), or application-specific integrated circuit (ASIC). As described above and as will be appreciated based on this disclosure, embodiments of the present disclosure may be configured as methods, mobile devices, backend network devices, and the like. Accordingly, embodiments may comprise various means, including means consisting entirely of hardware or of any combination of software and hardware. Furthermore, embodiments may take the form of a computer program product on at least one non-transitory computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. Any suitable computer-readable storage medium may be utilized including non-transitory hard disks, CD-ROMs, flash memory, optical storage devices, or magnetic storage devices. One or more of processor202, group-based communication circuitry204, and shared interface rendering module206may control and direct the creation and sharing of interface annotations, such as the operations discussed with regards toFIGS.9-11. 
For instance, in some embodiments, the shared interface rendering module206may receive interface annotations associated with display input data, process the data, and generate interface annotation instructions for subsequent transmission to participating devices. The shared interface rendering module206may associate intended display input data with the display input data and then, based on both, generate interface annotation instructions for subsequent rendering of the respective interface annotation. In some embodiments, the shared interface rendering module206may receive the display input data, associate the display input data with a designated layer associated with the annotating device (the participating device which created the interface annotation), and generate interface annotation instructions associated with the designated layer for rendering on participating devices. The shared interface rendering module206may control and direct any of the operations discussed with regards toFIGS.9-11. The shared interface rendering module206may be divided over a number of devices where certain operations occur on one or more of the devices. The shared interface rendering module206may store interface annotations, display input data, intended display input data, interface annotation instructions, and the like to support the operations of the shared interface rendering module206and the remaining circuitry200via shared interface repository207. In some embodiments, the shared interface rendering module206may interact with group-based communication repository107, shared interface repository207, and/or memory201to retrieve and/or store interface annotations, display input data, intended display input data, interface annotation instructions, and the like to support the operations of the shared interface rendering module206and the remaining circuitry200. Example Methods and Systems for Sharing Interface Annotations within the Group-Based Communication Network FIG.3illustrates an exemplary system for sharing interface annotations according to one embodiment of the present disclosure. In particular,FIG.3illustrates a system300comprising a plurality of participating devices301a-301dinteracting with an interface sharing device302. As shown inFIG.3, in some embodiments, a participating device, such as301d, can create an interface annotation, such as304a, that is then shared among the other participating devices via the interface sharing device, such as302. In the embodiment illustrated inFIG.3, participating device301dcreated an interface annotation304a(thereby becoming the annotating device with respect to the interface annotation304a) that is then transmitted to interface sharing device302. In some embodiments, display input data associated with the interface annotation304ais transmitted to the interface sharing device302. The interface sharing device302includes the shared interface which comprises a series of layers303a-303d. The received interface annotation304avia the display input data is associated with designated layer303dassociated with respective annotating device301d. The interface sharing device302renders the interface annotation304bon the display of the interface sharing device302as part of the shared interface and transmits interface annotation instructions to each of the participating devices for rendering of the interface annotation304con the respective displays of the participating devices. 
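The layer mechanics just described forFIG.3can be sketched as follows; the class and field names are hypothetical and stand in for the shared interface rendering module's internal bookkeeping:

class SharedInterface:
    def __init__(self, participating_device_ids):
        # One designated layer per participating device, as in layers303a-303d.
        self.layers = {dev_id: [] for dev_id in participating_device_ids}

    def receive_display_input(self, dev_id, display_input_data):
        # Attach the display input data to the annotating device's layer.
        self.layers[dev_id].append(display_input_data)
        return self.generate_instructions(dev_id, display_input_data)

    def generate_instructions(self, dev_id, display_input_data):
        # The layer identifier itself marks the originating device downstream.
        return {"layer": dev_id, "render": display_input_data}

iface = SharedInterface(["D1", "D2", "D3", "D4"])
print(iface.receive_display_input("D4", {"stroke": [(0, 0), (4, 4)]}))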
Each of the participating devices301a-301dcan create interface annotations and share the annotations via the interface sharing device302to the other participating devices301a-301d. FIG.4illustrates an exemplary shared interface according to one embodiment of the present disclosure. In the embodiment illustrated inFIG.4, the shared interface400is shown on a participating device. At least one participating device has added an interface annotation402to the shared interface400. As shown inFIG.4, the interface annotation402is a highlighted underline pointing to a certain heading for discussion. The interface annotation may include a variety of shapes, colors, sizes, and can be free-hand drawing, images pasted/clipped to the shared interface (e.g., emojis, such as custom emojis), and the like. As shown inFIG.4, with the use of the interface annotation402, each participating device in receipt of the associated interface annotation instructions is able to view the interface annotation402and can thus efficiently and effectively follow the discussion at hand. In some embodiments, only certain participating devices may receive the interface annotation instructions thereby only those certain participating devices may view the interface annotation. The interface annotation may be configured as an ephemeral image that disappears from view after a certain amount of time. For example, the interface annotation instructions may include instructions to render the interface annotation for a certain amount of time, e.g., 3 seconds, 5 seconds, 1 minute, etc. and then remove the interface annotation from the shared interface. In other embodiments, the interface annotation instructions may include instructions to render the interface annotation for a certain amount of time along with another interface annotation. That is, two or more interface annotations may be linked together to be rendered together for the same amount of time or adjoining periods of time. A sketch of such timed rendering instructions appears at the end of this passage. FIG.5illustrates an exemplary shared interface according to one embodiment of the present disclosure. In the embodiment illustrated inFIG.5, the shared interface500is shown on a participating device. At least one participating device has added an interface annotation502to the shared interface500. As shown inFIG.5, the interface annotation502is a circle. The circle may be drawn by hand (e.g., via a mouse or touch screen) or may be drawn by selecting a circle image to insert. Various shapes and configurations are possible for annotating the shared interface. In some embodiments, the interface annotation may be colored or tagged in a manner to indicate the participating device from which the interface annotation originated. For instance, the interface annotation may be red for one participating device, yellow for another participating device, green for another participating device, etc. In some embodiments, a tag may be generated and displayed along with the interface annotations that includes a participating device identifier. In some embodiments, the tag may appear when a user scrolls over the interface annotation and may disappear after a period of time the same or different than the interface annotation. As multiple participating devices may render the interface annotation, by tagging or identifying the participating device from which the annotation originated, each participating device is able to readily see who (via the participating device) added the interface annotation. 
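The timed, ephemeral rendering described earlier in this passage might be expressed as follows; the field names and time-to-live semantics are assumptions for the sketch, not a defined instruction format:

import time

def make_render_instruction(annotation_id, ttl_seconds=5.0, linked_to=None):
    return {
        "annotation_id": annotation_id,
        "show_at": time.time(),
        "ttl": ttl_seconds,      # e.g., 3 seconds, 5 seconds, or 60 seconds
        "linked_to": linked_to,  # render together with another annotation
    }

def is_visible(instruction, now=None):
    now = time.time() if now is None else now
    return now < instruction["show_at"] + instruction["ttl"]

first = make_render_instruction("A1", ttl_seconds=5.0)
second = make_render_instruction("A2", ttl_seconds=5.0, linked_to="A1")
print(is_visible(first))  # True immediately after creation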
In some embodiments, the interface annotation instructions may include a participating device identifier that identifies the participating device from which the interface annotation originated. Participating devices in receipt of the interface annotation instructions are then able to identify the participating device from which the interface annotation originated by an appropriate color or similar tag (e.g., an emoji). In some embodiments, each participating device may be associated with a respective layer of the shared interface such that the layer of the shared interface itself becomes indicative of the participating device from which the interface annotation originated. FIG.6illustrates an exemplary shared interface according to one embodiment of the present disclosure. In the embodiment illustrated inFIG.6, the shared interface600is shown on a participating device. The participating device has added an interface annotation602to the shared interface600on the shared document601(i.e., a schematic of a guitar). As shown inFIG.6, the interface annotation602is a circle manually drawn (e.g., via a mouse or touch screen) and thus has rather jagged edges. Various possibilities for annotations are available by manually drawing the interface annotations. In the embodiment illustrated inFIG.6, a user via the participating device uses the interface annotation602to draw attention to a certain area of the shared document601. Each participating device in receipt of the associated interface annotation instructions is able to view and readily recognize to which portion of the shared document601the user is drawing attention. FIG.7illustrates an exemplary shared interface according to one embodiment of the present disclosure. In the embodiment illustrated inFIG.7, the shared interface700is shown on a different participating device than the participating device shown inFIG.6. In the embodiment illustrated inFIG.7, the display input data associated with the interface annotation602has been associated with intended display input data. Both the original display input data and the intended display input data are used to generate the interface annotation instructions that are then sent to other participating devices, such as the participating device illustrated inFIG.7. The intended display input data may also be used to correct or adjust the interface annotation on the originating participating device. The intended display input data is programmed to smooth lines or shapes such that the drawn images, shapes, words, etc. are made clearer or more aesthetically pleasing. In some embodiments, such “smoothing” may not be used or may be turned off. However, such “smoothing” may be used in some embodiments where it is desired to create a more professional image. As shown inFIG.7, the resulting interface annotation703has a smoother finish and more professional appearance due to the correlation to intended display input data and the use of such to formulate the interface annotation instructions. In addition, in the embodiment illustrated inFIG.7, another participating device has added interface annotation704and another participating device has added interface annotation705. As shown inFIG.7, each of the interface annotations are distinguishable such that the respective participating device which originated the interface annotation can be easily recognized and identified. In addition, as shown inFIG.7, each of the interface annotations appear together on the shared interface700. 
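One hedged sketch of the per-device identification just described: hashing the participating device identifier into a fixed palette gives every receiving device the same stable color assignment without coordination. The palette and hashing choice are illustrative assumptions:

import hashlib

PALETTE = ["red", "yellow", "green", "blue", "orange", "purple"]

def device_color(participating_device_id):
    # A stable hash keeps the mapping identical on every participating device.
    digest = hashlib.sha256(participating_device_id.encode()).digest()
    return PALETTE[digest[0] % len(PALETTE)]

# The same identifier always maps to the same color on every device.
print(device_color("D042") == device_color("D042"))  # True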
In some embodiments, time limits may be established such that if additional interface annotations are added to a shared interface within those time limits, the interface annotations will be linked and appear together on the shared interface. The system may be programmed such that the addition of each subsequent interface annotation resets the time limit to allow for subsequent interface annotations to be added and thereby linked to the previous interface annotations. For instance, in the embodiment illustrated inFIG.7, additional users (via respective participating devices) added interface annotations704and705to show additional details they wish to add to the schematic of the guitar. In the embodiment illustrated inFIG.7, emoji706and emoji707have been added to the shared screen. The emoji706is a gavel and may have been added to show that a decision has been made for the design of the guitar (shown in the shared document701). The emoji707may have been added to show which user added the emoji706or to merely show approval by the user identified in the emoji707. Such emojis may be added to the shared interface and thereby shown to any participating device in receipt of the respective interface annotation instructions to advance the relevant discussion. The emojis may be added through any of the aforementioned annotation tools. Custom emojis may be added to the shared interface. Various alternative configurations are possible without deviating from the intent of the present disclosure. FIG.8illustrates a menu that may appear on a group-based communication interface according to one embodiment of the present disclosure. The menu800may be a pop-up menu, a menu selected from a list, or a menu that otherwise appears in the interface with which the user can interact. As shown inFIG.8, in the illustrated embodiment, the menu provides the user the opportunity to allow one or more users (via participating devices) to add interface annotations (referred to as “draw” inFIG.8) on the shared interface. The ability to add interface annotations may be enabled (803) or disabled (802). For example, in some embodiments, circuitry200may receive a signal indicating that one or more participating devices may have an enable flag associated with the device (enabling interface annotations) or a disable flag associated with the device (disabling interface annotations). The menu also may be specific to a user such that the ability to add interface annotations may be turned on/off specifically with regards to that user. For example, inFIG.8, the user speaking is distinguished (801). The ability of that user to add interface annotations may be turned on or off. Various alternative configurations are possible without deviating from the intent of the present disclosure. FIG.9is a flowchart illustrating operations that are executed by an exemplary group-based communication system for sharing interface annotations according to one embodiment of the present disclosure. In the embodiment illustrated inFIG.9, the flowchart illustrates method900which includes causing the shared interface to be rendered on displays of participating devices901, receiving display input data from participating devices902, generating interface annotation instructions based on the display input data903, and outputting interface annotation instructions to the participating devices904. 
In some embodiments, causing the shared interface to be rendered on displays of participating devices comprises transmitting a shared interface to participating devices where the shared interface can be displayed on the device. The shared interface may comprise various layers of information for processing and displaying as the shared interface. In some embodiments, the transmission of the shared interface may vary based on the transmitting device and/or on the receiving device. Generally, the information for display on the shared interface will be consistent among devices to which the shared interface is shared or at least the resulting display is generally consistent among devices to which the shared interface is shared (e.g., in embodiments where double rendering is avoided, the resulting display is generally consistent even though the originating participating device may not receive all of the layers of the shared interface). In some embodiments, a user via a participating device may add an interface annotation to the shared interface via an annotation device (e.g., mouse, keyboard, cursor, touch screen, etc.). The interface annotation generates display input data associated with the interface annotation. The display input data may be transmitted from the participating device that originated the interface annotation and received by the interface sharing device. The display input data may be associated with a designated layer of the shared interface and used to generate interface annotation instructions. In some embodiments, by associating the interface annotation and display input data with a designated layer of the shared interface, the interface annotation can be distinguished as an interface annotation and/or by participating device. For instance, each participating device may have a respective layer of the shared interface to which an interface annotation is attached. Transmission of the display input data then identifies the participating device since the display input data is associated with the layer and the layer is associated with the participating device. In some embodiments, by associating the interface annotation and display input data with a designated layer of the shared interface, transmission of the resulting interface annotation instructions can be controlled. That is, depending on the layer of the shared interface on which the interface annotation is associated, the interface annotation instructions can be generated to avoid the participating device from which the interface annotation originated. In some instances, an interface annotation may originate on a participating device, be sent to an interface sharing device, and then sent out to participating devices. In such cases, the participating device which originated the interface annotation may receive a double rendering of the interface annotation. In some cases, the double rendering may be indecipherable such that a user cannot distinguish the original interface annotation from the second rendering of the interface annotation. However, in other cases, such double rendering creates a blurry, offset, or otherwise inadequate rendering of the interface annotation. To avoid such double rendering, the interface annotation instructions may be directed to certain participating devices while avoiding other participating devices, such as the participating device from which the interface annotation originated. 
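In its simplest form, the double-render avoidance just described reduces to fanning interface annotation instructions out to every participating device except the originator; a minimal sketch with hypothetical identifiers:

def fan_out(instructions, participating_device_ids, originating_device_id):
    # Exclude the originator, which has already rendered its own annotation.
    recipients = [d for d in participating_device_ids if d != originating_device_id]
    return {device_id: instructions for device_id in recipients}

deliveries = fan_out({"render": "circle"}, ["D1", "D2", "D3"], originating_device_id="D2")
print(sorted(deliveries))  # ['D1', 'D3']; D2 is skipped to avoid double rendering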
In some embodiments, the interface annotation instructions may be transmitted to certain participating devices, as directed by the interface annotation and the display input data, rather than other participating devices regardless of the origin of the interface annotation. In some embodiments, multiple interface annotations may be created, sequentially or simultaneously, each generating its own set of display input data. Respective interface annotation instructions may be generated and then outputted to the appropriate participating devices in the group-based communication system. In some embodiments, subsequent interface annotations may be added to the shared interface within a predetermined period of time such that the adjacent interface annotations are linked and appear together on the shared interface. Further, in some embodiments, a time of receipt may be recorded for each interface annotation such that the interface annotation may be stored and later replayed along with the recorded discussion. For instance, when the recorded discussion is replayed, the interface annotation may be displayed at the same time during the discussion in which the interface annotation was originally shared and be displayed for the same period of time in which the interface annotation was originally shared. Further, in some embodiments, the method900may include associating the display input data with intended display input data and generating interface annotation instructions based on the intended set of display input data and the original display input data. FIG.10is a flowchart illustrating operations that are executed by an exemplary group-based communication system for sharing interface annotations according to one embodiment of the present disclosure. The method1000may be incorporated into the method900in some embodiments. In the embodiment illustrated inFIG.10, the method1000includes associating the display input data with intended display input data1001and generating interface annotation instructions1002. The intended display input data may result in an interface annotation rendered with higher quality, clarity, or otherwise improved aesthetic appearance than the interface annotation without the intended display input data. The intended display input data may be stored in the circuitry200and associated with the original display input data by way of various algorithms. FIG.11is a flowchart illustrating operations that are executed by an exemplary group-based communication system for sharing interface annotations according to one embodiment of the present disclosure. In the embodiment illustrated inFIG.11, the flowchart illustrates method1100which includes receiving a shared interface1101, creating an interface annotation, thereby generating display input data1102, and causing the display input data to be transmitted to the interface sharing device1103. The method1100may be performed by one or more participating devices in the group-based communication system. In some embodiments, as an interface annotation is created, machine-learning tools may be used to predict and/or finish the interface annotation (e.g., finish drawing the circle, square, etc.). In some embodiments, as an interface annotation is created, machine-learning tools may be used to prevent the completion of an interface annotation (e.g., where the interface annotation is likely to be found inappropriate for the discussion). 
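As a hedged sketch of finishing a partially drawn circle: an algebraic least-squares circle fit, one simple stand-in for the machine-learning models contemplated above, can predict the full circle from a partial stroke. The function name and output shape are illustrative:

import numpy as np

def predict_circle_annotation(points):
    # points: (x, y) coordinates from a partial stroke of display input data.
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve a*x + b*y + c = -(x^2 + y^2) in the least-squares sense (Kasa fit).
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = float(np.sqrt(cx**2 + cy**2 - c))
    return {"type": "circle", "center": (float(cx), float(cy)), "radius": radius}

# A quarter arc suffices to predict the full circle annotation.
arc = [(10 + 5 * np.cos(t), 20 + 5 * np.sin(t)) for t in np.linspace(0, np.pi / 2, 12)]
print(predict_circle_annotation(arc))  # center near (10, 20), radius near 5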
In some embodiments, the interface sharing device may include an annotation device to add/create interface annotations. That is, in some embodiments, similar to how participating devices may create interface annotations that are then shared on the shared interface, the interface sharing device may create interface annotations on the shared interface that are then shared with the participating devices on the shared interface. In some embodiments, a participating device may cancel an interface annotation. That is, prior to interface annotation instructions being sent to participating devices, an originating participating device may cancel the interface annotation. Additional Implementation Details Although an example processing system has been described inFIG.2, implementations of the subject matter and the functional operations described herein can be implemented in other types of digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter and the operations described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described herein can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, information/data processing apparatus. Alternatively, or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information/data for transmission to suitable receiver apparatus for execution by an information/data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices). The operations described herein can be implemented as operations performed by an information/data processing apparatus on information/data stored on one or more computer-readable storage devices or received from other sources. The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). 
The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or information/data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input information/data and generating output. Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and information/data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive information/data from or transfer information/data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Devices suitable for storing computer program instructions and information/data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, embodiments of the subject matter described herein can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information/data to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. 
Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. Embodiments of the subject matter described herein can be implemented in a computing system that includes a back-end component, e.g., as an information/data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the subject matter described herein, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital information/data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks). The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits information/data (e.g., an HTML page) to a client device (e.g., for purposes of displaying information/data to and receiving user input from a user interacting with the client device). Information/data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any disclosures or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular disclosures. Certain features that are described herein in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. 
Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. Many modifications and other embodiments of the disclosures set forth herein will come to mind to one skilled in the art to which these disclosures pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the disclosures are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. | 80,479 |
11861259 | DETAILED DESCRIPTION OF THE INVENTION The present invention provides a digital picture frame that interacts with either a picture capture or storage device, referred to herein as an electronic device, running a digital picture frame application and/or social media to automatically display digital photographs from the electronic device and/or a social media account of one or more persons viewing the picture frame. As used herein, the terms “picture”, “photograph”, or “photo” refer to or include digital still photographs or digital video, including a combination of few-second animations either or both before and after a still frame, unless otherwise noted. The photos can be obtained by social media users and stored in the users' social media accounts. When a social media user views the digital picture frame, the digital picture frame automatically recognizes that user and automatically loads photos relevant to that viewer from her or his electronic device and/or social media account and/or from at least one electronic device and/or social media account of her or his social media community member(s). For example, relevant photos from a community member's social media account may include the viewer or be from a shared activity that the viewer attended with the community member. Automated social media activity learning methods, as described herein, are used to automatically determine relevancy of photos to one or more community members. Community members may choose to make photos shared or private to avoid personal photos being shared. FIG.1illustrates a digital picture frame20according to one embodiment of this invention. The digital picture frame20includes a digital display22mounted within an edge frame24. The digital display22can incorporate any suitable screen, such as, without limitation, an LED screen or a touch screen, as are known and commercially available. A camera25is connected to and integrated in the edge frame24. The picture frame20includes a network connection module that connects over a wired and/or wireless network to a social media server computer26and can automatically load photos stored within one or more social media accounts accessed through the server26. The picture frame20further includes an automated display module in combination with the network connection module and adapted to automatically change photos displayed on the digital display22to photos from an electronic device and/or a social media account of the viewer and/or one or more community members, such as upon automatic detection of the viewer with the camera. The digital picture frame20desirably includes a microphone27in combination with the automated display module and/or network connection module, such as for receiving spoken instructions from a viewer to change the displayed photos. The digital picture frame20further includes a leveling device28connected to or otherwise integrated with the frame24. The leveling device28automatically detects when the frame is not level. Proper positioning of the digital picture frame is important to achieving user satisfaction. Automatic detection of the frame's positioning, in comparison to the angle of the displayed photograph, can be achieved using any of the image alignment or realignment techniques known in the art. In embodiments of this invention, the leveling device28can transmit or display corrective measures for physical correction. 
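A minimal sketch of turning a detected tilt into such a corrective measure; the tilt source (image alignment or a dedicated sensor) is abstracted behind a parameter, and the message wording is an assumption rather than the disclosed behavior:

def corrective_measure(tilt_degrees, tolerance=0.5):
    # tilt_degrees: signed tilt reported by the leveling device
    # (positive meaning the right side hangs low).
    if abs(tilt_degrees) <= tolerance:
        return "Frame is level."
    side = "right" if tilt_degrees > 0 else "left"
    return f"Raise the {side} side of the frame by about {abs(tilt_degrees):.1f} degrees."

print(corrective_measure(3.2))   # Raise the right side of the frame by about 3.2 degrees.
print(corrective_measure(-0.2))  # Frame is level.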
In another embodiment of the invention, the digital picture frame20is configured with a leveling device that includes one or more motors23, such as a pair attached to a corresponding wall bracket via gearing. The motors23can be incorporated in a wall hanging mechanism as shown inFIG.1, or otherwise implemented with respect to a frame stand. Upon detection of an unlevel orientation as determined by either the leveling device28or any other mechanism, the dual motors23activate and automatically mechanically operate as needed to level the digital picture frame20, such as by a vertical adjustment on the wall mounting hardware via the adjustable hanging mechanism or bracket. The digital picture frame20includes an automated display adjustment device in detection combination with the camera25. The automated display adjustment device automatically detects frame environment and/or viewer position using the camera25and automatically adjusts settings of the digital display22. Augmented with the camera25, the digital picture frame20supports a host of optional, additional capabilities. Using conventional image processing techniques known in the art, enhancement of the picture quality can be automatically made. For example, due to an automatically detected viewing angle or environmental brightness, picture lighting intensity and contrast can be altered. That is, depending on conditions, the brightness and contrast can be automatically modified to provide better viewing. Additionally, depending on the distance and angle of the viewer, the size of the image can be enlarged, possibly focusing on the center or content rich area of the picture. Similarly, a close viewer may wish to have the photograph or video in full scope, or maybe, even side-by-side with one or more other, possibly related pictures or videos. In one embodiment, as shown inFIG.1, the digital picture frame20includes the capability to ‘daisy chain’ one or more additional frames20′. That is, all participating frames can be connected so as to portray a single story, either chronologically or thematically, allow for two viewers, and/or provide a composite view of a single image. The frame20can include a network connection10, such as an audio-video outlet and/or network outlet, to connect to a corresponding inlet12of the second frame20′. The network connection between frames can be by wired connection (e.g., HDMI cord or Ethernet cable) or wireless communication (e.g., Bluetooth or WiFi). The frame20can optionally include a power output, such as outlet14to receive a power cord16of the second frame20′, so as to require only a single power source. An embodiment of a digital picture frame20further includes multiple power options. In addition to conventional outlet and battery options, the frame can include at least one photoelectric cell. Inductive, or wireless, charging can also be incorporated. The digital picture frame20includes a facial recognition module with any suitable facial recognition software, in combination with the camera25and automated display module. Face recognition software can be deployed to automatically detect the identity of the viewer(s). It is also within the scope of this invention to identify the viewer via the location or proximity of their personal electronic devices such as but not limited to their mobile phones or tablets. By identifying the viewer(s), the automated display module can then specially target, namely select, prioritize, and/or order the photos or videos displayed for the identified viewer. 
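The targeting just described, selecting, prioritizing, and suppressing photos for an identified viewer, might look like the following sketch; the metadata keys, profile fields, and scoring rule are assumptions for illustration:

def select_photos(photos, viewer_profile):
    selected = []
    for photo in photos:
        people = set(photo.get("people", []))
        if people & set(viewer_profile.get("restricted_people", [])):
            continue  # e.g., suppress photos containing an ex-spouse
        if viewer_profile.get("age", 99) < 18 and photo.get("mature_content"):
            continue  # age-based content limiting
        score = len(people & set(viewer_profile.get("favorites", [])))
        selected.append((score, photo))
    # Favored photos (e.g., grandchildren for a grandparent) come first.
    return [photo for _, photo in sorted(selected, key=lambda pair: -pair[0])]

photos = [{"id": 1, "people": ["grandchild"]},
          {"id": 2, "people": ["ex_spouse"]},
          {"id": 3, "people": ["friend"], "mature_content": False}]
profile = {"age": 70, "favorites": ["grandchild"], "restricted_people": ["ex_spouse"]}
print([p["id"] for p in select_photos(photos, profile)])  # [1, 3]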
Identifying the viewer enables a variety of novel applications. In embodiments of this invention, a viewer analysis module is combined with a facial recognition module to determine, for example, the mood or health of the viewer upon detection of the viewer with the facial recognition module. The automated display module can augment a photo display on the digital display as a function of the determined mood or health of the viewer, such as by showing photos to increase the viewer's mood. Longitudinal analysis of the viewer is possible. Such analysis supports, for example, health commentary. By analyzing the viewer's face, comments such as, but not limited to, “you seem to have gained weight”, “you look tired”, or “you seem pale; do you feel well?” can be announced to the viewer. Similarly, “mirror, mirror on the wall” remarks such as “you look good today” can be stated, potentially elevating the viewer's mood. Identified mood enhancing photographs, such as, for example, ones that include far away loved ones, can likewise be shown to further elevate the viewer's mood. Identification of mood enhancing photographs can be automatically accomplished, for example, via the recognition of a “smiling face” or spoken words of the viewer when the picture was previously shown, and tagging of the photo. Viewer identification also enables photo selection personalization. A viewer personalization module can be operatively combined with the facial recognition module, whereby the viewer personalization module automatically identifies preferred photos or restricted photos as a function of viewer information upon detection of the viewer with the facial recognition module. Photos favored by the viewer (e.g., grandchildren for the grandparent) can be selected, and conversely photos containing, for example, ex-spouses or in-laws can be suppressed. Likewise, identifying viewer characteristics, such as age, enables the limiting of photos displayed. For example, children might not be shown photos containing potentially inappropriate content, such as containing nudity, or particular activities or individuals, such as relatives that have recently passed away. Thus, it is within the scope of this invention to limit or specifically select photographs to be displayed based on relationships of the viewer with individuals in the photographs, such as ex-spouses; characteristics of individuals shown in the photographs, such as babies; characteristics of the viewer, such as age; activities illustrated in the photographs, such as smoking; locations displayed in photographs both physically described (e.g., Chicago) or relatively described (e.g., viewer's place of residence); or based on any characteristics that can be determined via image analysis or included photograph metadata. The filtration or selection of photographs based on such criteria is determined by user-configurable settings. Additional specialty settings are included. Such settings place the frame in nightlight mode where only a dim glow is illustrated; photo record mode to simply record all in the line of vision; video chat mode with other devices; and “stranger in the house” mode where only limited or stock photos are shown to prevent the potential identification of residents. The digital picture frame20can likewise be used as a viewer assistant. Multiple ailments or conditions, such as but not limited to Alzheimer's, strokes, and dementia, affect memory. 
Thus, the digital picture frame20can be used to remind the viewer of the identity of family members and friends, as well as of occasions, locations, activities, etc., of relevance, as the photos displayed are labeled both by content and/or by metadata. As previously discussed, the network connection module connects over a network connection to the social media server computer26. It can likewise connect either directly or indirectly to the mobile device29′ of any community member or interested party having an appropriately configured device, such as with digital picture frame application specific software. The connection need not be within a geographic proximity, and it is likewise within the scope of this invention that a geographically remote connection is supported via networks such as but not limited to the Internet. The social media server computer26can be any one or more computers that implement any suitable social media platform, such as the mobile positional social media (MPSM) platform and automated learning steps described herein. The server computer26can obtain or have access to photos and learn user or activity information associated with each photo from, for example, mobile devices29of a plurality of MPSM users or community members, such as according to methods described herein. In embodiments of this invention, the automated display module augments a photo display on the digital display22as a function of photo metadata from the social media account stored on the server computer26. The photos are automatically downloaded from the server26to the digital frame20upon, for example, automated detection of a corresponding viewer(s). In embodiments of this invention, the photos can be changed according to the learned bias of a viewer, expressed and stored as part of a user profile at the server computer26and/or the digital picture frame20. The picture frame20, such as via the automated display module, includes or obtains the viewer profile of the viewer upon detection of the viewer via the camera. By establishing a viewing profile for each known or expected viewer, e.g., if a viewer is or has been a community member, individual or group user viewing preferences can be maintained. A profile can be established using any one or more of the many profiling techniques known in the art, including but not limited to asking the viewer, learning based on previous user selections, and/or correlations with locations, activities, and/or community member interactions previously automatically learned via the MPSM methods herein. In a preferred embodiment, viewer profiles are automatically learned according to methods described herein, and uploaded via an electronic device and/or the social media account of a detected viewer and/or her or his community members. The automated display module desirably displays a slideshow of photos for a detected viewer that is uploaded from: an electronic device of the viewer, and/or a social media account of the viewer or one or more social media community members of the viewer. In the digital picture frame embodiments of this invention, pictures or videos displayed can be augmented with the automatically learned and identified metadata corresponding to the pictures or videos.
That is, the information learned, captured, and/or displayed identifies the location, activity, and community member involvement on a per photo basis, indicating with whom, where, when, and what was being done when the picture or video was taken, as well as any other associated metadata, such as the camera or video equipment used to capture the photo or video. A slideshow of photos for a viewer can be assembled and shown in a display order automatically determined as a function of photo profiling traits such as time (e.g., chronological order), photo location, photo content (e.g., people or activity shown in the photo), and/or community member presence at the time/location that a photo was taken. The invention includes a method of displaying photos on a digital picture frame, such as shown inFIG.1. The digital picture frame is hung or set on a surface by a user, for explanation purposes named Susan. The picture frame automatically determines with the camera when Susan is viewing, or is the primary viewer of, the picture frame. The picture frame automatically determines Susan's profile from her detection. In embodiments of this invention, Susan's viewer profile is loaded from Susan's electronic device and/or social media account, and may be supplemented by additional information gathered via the picture frame itself, such as face recognition, account access information, and/or manual entry (e.g., picture frame display preferences). The viewer profile is desirably created and continually updated automatically according to the automated learning methods described herein. The viewer profile can include information such as, without limitation, family information, community member information, and/or user favorites (e.g., favorite locations, activities, colors, flowers, animals, etc.), learned through the social media account. When the camera and coordinated software determine Susan is viewing the picture frame, e.g., either directly or merely in the vicinity, the picture frame automatically displays photos shown on the digital display as a function of the viewer profile information. The photos displayed are automatically augmented or picked as a function of comparing photo metadata and the viewer profile. In embodiments of this invention, the picture frame automatically displays for Susan a slideshow of photos (still and/or videos) relevant to Susan as a function of photo context. The photo context can be selected from the time the photo was taken (e.g., all photos from July 2015), photo location (e.g., all photos taken in Chicago), and/or photo content (e.g., all photos of vacations or involving her sister and/or her friend Mary). The photos can be imported primarily from her electronic device and/or social media, and optionally supplemented from photos on a recordable medium of the frame (e.g., hard drive or inserted flash drive or memory card) or photos obtained from a third party web site. It is within the scope of this invention to prompt the user, possibly via any communication mechanism including but not limited to text and voice commands, to determine the exact wishes of the user. If the digital picture frame is additionally equipped with a microphone27and corresponding speech processing software, voice commands can likewise be processed using any of the many known speech processing systems. Susan can select a context, such as by spoken instructions, to be displayed.
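By way of non-limiting illustration, the following Python sketch shows one way such a context request could be matched against per-photo metadata, assuming each photo records when and where it was taken and who was present; the field names are hypothetical.

    # Match photos against a requested viewing context (time, location,
    # people present). Metadata fields are illustrative assumptions.

    from datetime import date

    def match_context(photo, when=None, where=None, present=None):
        """True if a photo satisfies the requested viewing context."""
        if when and (photo["taken"].year, photo["taken"].month) != when:
            return False
        if where and photo["location"] != where:
            return False
        if present and not set(present) <= set(photo["present"]):
            return False
        return True

    photos = [
        {"taken": date(2015, 7, 4), "location": "Chicago", "present": ["Susan"]},
        {"taken": date(2015, 7, 20), "location": "Paris", "present": ["Susan", "sister"]},
    ]

    # "All photos from July 2015 involving her sister":
    slideshow = [p for p in photos if match_context(p, when=(2015, 7), present=["sister"])]
    print(slideshow)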
As an example, Susan may request all photos related to "college", related to "vacations", including "family", or including or relating to a particular person. The digital picture frame will download relevant photos for display from one or more electronic devices and/or social media accounts, of Susan's and/or a community member of Susan's, that have context matching the request. If Susan asks for pictures of her and her sister, the pictures may include her sister or have been taken in the presence of both her and her sister, such as a picture of the Eiffel Tower taken during a trip with her sister to France. The Eiffel Tower photo is identified by context metadata that identifies that both Susan and her sister were present when the photo was taken. The context information is desirably automatically learned and stored with the picture in the social media account of Susan, using the learning methods described herein. Alternatively, the Eiffel Tower photo can be part of the sister's social media account, and because her sister is a community member and has given sharing rights, the social media server computer automatically shares the photo to Susan's digital picture frame. Desirably, any photo relevant to Susan can be automatically shared from any of Susan's community members and automatically shown on the picture frame display. In one embodiment of this invention, the digital picture frame accesses social media or her electronic device without detailed instructions from Susan and loads and sequences photos displayed on the digital display as a function of profiling traits selected from chronological order, photo location, photo activity, and/or community member. Having automatically learned the location, activity, and community member involvement on a per photo or video basis using the methods described herein, a "story telling" capability is supported. That is, in storytelling, chronological stories, optionally simultaneously displayed on a split screen, are grouped by: purely time, namely in sequential chronological ordering; location, namely a traversal of sites on a location based trip; activity, namely in chronological ordering of a given or similar set of activities; community member involvement, namely a pictorial interaction with community members, potentially segmented by particular community member or members; or any other profiling trait(s) of a recognized user that can be used to cluster or segment photos for automatic story telling. In one embodiment of this invention, Susan's friend Mary joins Susan in viewing the picture frame, either directly or within a detectable vicinity of the picture frame. The picture frame automatically detects the additional presence of Mary, automatically detects her identity, automatically obtains or loads Mary's profile, and automatically changes the photos displayed on the digital display to those relevant to both Susan and Mary. The photos could be obtained from Susan's social media account(s) or personal picture capture or storage electronic device, but preferably the photos are obtained from both Susan's and Mary's social media accounts and/or local personal picture capture or storage electronic devices, e.g., mobile phones29and29′. Desirably, the photos from each of the two viewers are shared activity photos, i.e., photos from activities shared by the two friends.
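A minimal Python sketch of selecting shared activity photos for two detected viewers follows, assuming the server maintains per-photo context metadata listing the community members present; the field names are illustrative assumptions.

    # Photos from activities the two detected viewers experienced together,
    # as learned from per-photo context metadata.

    def shared_activity_photos(photos, viewer_a, viewer_b):
        """Return photos whose learned context lists both viewers present."""
        return [
            p for p in photos
            if viewer_a in p["present"] and viewer_b in p["present"]
        ]

    photos = [
        {"id": 1, "activity": "wine tasting", "present": ["Susan", "Mary"]},
        {"id": 2, "activity": "commuting", "present": ["Susan"]},
        {"id": 3, "activity": "hiking", "present": ["Mary", "Susan", "Bob"]},
    ]
    print([p["id"] for p in shared_activity_photos(photos, "Susan", "Mary")])  # [1, 3]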
The frame20and/or server computer26automatically determines shared activity photos of the two persons as a function of the learned context information automatically associated with the shared activity photos by the server computer. As more people are automatically detected, photos from more accounts or devices can be automatically added, and the photos automatically organized by context such as photo location, a photo activity, and the present community members. Options can be added on the social media accounts to share or not share photos with community members or their devices according to this invention. In another embodiment, Mary and Susan each simultaneously, or within some specified duration of time, look at their respective digital picture frames20and20′, such as linked as shown inFIG.1or network connected when each frame is in a different location (e.g., their respective residences). Remote frames can communicate via a network, such as through server26, and indicate to both Mary and Susan, each at their respective frames20and20′, their respective presence, potentially indicating when Mary and Susan are both viewing or have both viewed a photograph from a preselected set of photographs within the specified duration of time. The duration of time can be defined in terms of a specific time or as within a time interval commencing from when either Mary or Susan viewed the given photograph from within the preselected set. The frames20and20′ can automatically, or upon instruction, coordinate so that the two are viewing the same photos. Likewise, the photos can be augmented to include shared experiences, and be uploaded from both of Mary's and Susan's linked devices29and29′ and/or respective social media accounts. In embodiments of this invention, photographs from digital photo collections are provided to the frame in multiple ways, including but not limited to self-capture, transmission from remote photograph capturing devices, from other frames, and through server transmission. The frame either receives photographs taken by other devices or itself takes a photograph, if so configured. The digital photo collection(s) from which photos can be obtained and sent to the frame can be associated with one or more additional viewers and/or digital picture frames, and/or any electronic device of a community member of the viewer. Independent of the means by which a photograph is obtained, at times the main artifact within the photograph, such as, for example, the face of a person in the picture, is not properly aligned. As such, when displayed on the frame, either the main artifact is misaligned or, potentially, is removed altogether. To mitigate potential misalignment, the frame and method of this invention can include or provide an automated smart cropping feature for automatically cropping and aligning the photos to fit the digital display of the frame. For proper viewing, the frame or the server driving the disclosed frame can automatically crop and/or realign images. Without requiring user interaction, the server executes cropping software that realigns an image for proper display on the frame to ensure that the main artifact within the image is automatically scaled and realigned to a given aspect ratio (4:3 or 3:4, for example). The artifact may not be a single thing or entity, and, for example, could be a group such as two faces in the foreground close to each other or a tight group of people in a family shot.
An exemplary non-limiting approach identifies the face areas and attempts to align a corresponding area at either the center or the top 33% of the image to be portrayed, depending on the size. Large faces can be centered vertically, whereas smaller faces can be aligned in the top 33% of the display to show more of the person's body. Thus, when presented, the image is aesthetically properly displayed. Embodiments of this invention preferably also include image filtering, either at the frame or server level. Filtering can include removing images of faulty artifacts, such as, for example, low quality (blurry, etc.) images. Artifacts of unlikely interest can also be filtered. For example, photographs of pets containing faces are typically of interest; however, animal photographs without heads are typically not of interest, and can be removed. The curator of the photographs, such as the owner or viewer of the frame, can additionally impose other image or content quality constraints. Content constraints can span people, places, artifacts, and actions; namely, anything that can be recognized automatically by any image, photograph, or document processing techniques known in the art. Embodiments of this invention enable the owner or user of a frame to specify additional constraints on the photographs displayed. As in the case of the photograph curator, the frame owner too may designate content restrictions; for example, pornographic photographs can be prohibited from display. Such constraining rules are defined and incorporated into the filtering approach. Note that it is also within the scope of this invention that the constraining rules include but are not limited to lighting characteristics, scene selections, number of individuals displayed, distance or horizon specifications, or any other photograph characterization features. The invention further includes receiver (display) side filtering, where the filtering of content to display is defined not only by the curator/owner of the photographs but also by the owner/user of the displaying frame. In the case of multiple owners/users of a displaying frame, the identity of the current viewer can be used to select the filtering limitations imposed. It is also within the scope of this invention that a default set of filtering limitations is imposed should the viewer not be identified, multiple viewers be detected, or any other pre-specified set of conditions occur. Not only can constraints be imposed on the quality of the photograph to be displayed, but also constraints that filter based on the sender's (photograph capturer's) desires; namely, sender imposed restrictions on the photographs to be displayed on the receiving frame. The receiver, namely the frame's owner/user, also has the ability to limit the photographs to display based on their image quality or on their content. The frame's owner/user provides the sender access to the frame. This access, without any receiver side filtering aside from image quality constraints, gives the sender control over the content to be sent and correspondingly displayed on the receiving frame. By allowing the receiver to limit those photographs that are to be displayed, a greater sense of comfort and security is provided to the frame's owner; hence, the frame's owner is more likely to allow additional senders to access the frame, and thus the community associated with the given frame is likely to grow.
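By way of non-limiting illustration, the following Python sketch implements the exemplary alignment rule described above, assuming a face bounding box has already been located by any face detection technique known in the art; the geometry and size threshold are illustrative.

    # Smart crop to a 4:3 window: large faces centered vertically, smaller
    # faces aligned in the top third of the display.

    def smart_crop(img_w, img_h, face):
        """face = (x, y, w, h); returns a (left, top, width, height) crop."""
        x, y, w, h = face
        crop_w = img_w
        crop_h = int(crop_w * 3 / 4)        # target 4:3 aspect ratio
        if crop_h > img_h:                  # image too short; crop width instead
            crop_h = img_h
            crop_w = int(crop_h * 4 / 3)

        face_cy = y + h // 2
        if h > 0.5 * crop_h:                # large face: center vertically
            top = face_cy - crop_h // 2
        else:                               # small face: place at top third
            top = face_cy - crop_h // 3
        top = max(0, min(top, img_h - crop_h))

        left = max(0, min(x + w // 2 - crop_w // 2, img_w - crop_w))
        return (left, top, crop_w, crop_h)

    # Example: a 3000x4000 portrait with a small face near the top.
    print(smart_crop(3000, 4000, face=(1400, 600, 300, 300)))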
Receiver-side filtering rules can include, but are not limited to: general content restrictions, such as but not limited to restricting the display of nude photographs or photographs where too many people are included; individual specific restrictions, such as but not limited to restricting any photographs that include a previous spouse; or restrictions relying on metadata constraints, such as, but not limited to, restricting photographs taken prior to a certain date or taken at a certain location. FIG.2illustrates an embodiment of the invention including receiver-side photo filtering. InFIG.2, Susan is at home with her digital picture frame20. Susan's daughter Sally and Sally's husband Bob are in a remote location, where Bob takes a picture of Sally on Bob's mobile device29. Bob is a community member with Susan (and both with Sally), and Susan has set content constraints to allow pictures of Sally taken by any community member to be displayed on her frame20. In embodiments of this invention, when Bob takes the picture of Sally, the picture of Sally is automatically identified on device29and routed over a network, such as via social media server computer26, to Susan's frame20for display. To route the picture to Susan, the content of the digital photo is automatically identified, and any target electronic device, such as Susan's frame20, that would receive the photo also needs to be identified and matched to the digital photo. The identified content is automatically compared to the content constraints of Susan's frame20to determine if the digital photo should be routed to Susan's frame20. Likewise, other community member devices29′ can be similarly processed each time a photo is taken thereon to determine which, if any, photo matches or meets Susan's content constraints and should be routed to and displayed on Susan's frame20. Thus Susan can automatically receive, for example, photos of each family member automatically upon the photos being taken. Susan can control the content and quality of the photos, to exclude, as examples, photos taken by community members of or including non-community members, poorly 'angled' selfies, photos taken with or through particular apps (e.g., Snapchat®), fuzzy, grainy or poorly lit photos, or duplicate photos (e.g., photos taken within a predetermined timeframe of each other). Photos not meeting Susan's content constraints are desirably not routed to Susan's frame20. In embodiments of this invention, content identification and/or a routing decision is/are determined by a content restriction filter. The content restriction filter can be implemented via software coding stored and executed on, referring toFIG.2, the device29, server26, and/or frame20. In embodiments of this invention, the filter components can be implemented in modules on more than one device. For example, a photo quality filter module can be implemented on the photo-taking device29, to limit photos sent to the server26. The server can then perform content identification on photos received, and route each photo to the target frame for which the photo meets the content constraints. Photo processing, such as cropping, aligning, scaling, enhancement, and repetition filtering, can be performed on the frame, server, and/or mobile device, as needed.
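A minimal Python sketch of such a content restriction filter follows, assuming per-frame constraint lists and per-photo content tags as hypothetical inputs; a deployed filter could run on the device29, server26, and/or frame20as described above.

    # Route a newly taken photo to the frames whose content constraints it
    # satisfies. Constraint structure and tag names are assumptions.

    def route_photo(photo_tags, frames):
        """Return the owners of frames the photo should be routed to."""
        targets = []
        for frame in frames:
            tags = set(photo_tags)
            if tags & set(frame["denied"]):      # e.g., nudity, previous spouse
                continue
            if frame["required"] and not (tags & set(frame["required"])):
                continue                         # nothing this frame wants
            targets.append(frame["owner"])
        return targets

    frames = [
        {"owner": "Susan", "required": ["Sally"], "denied": ["blurry"]},
        {"owner": "Mary", "required": ["Paris"], "denied": []},
    ]
    # Bob's photo of Sally in front of the Eiffel Tower:
    print(route_photo(["Sally", "Paris", "Eiffel Tower"], frames))  # ['Susan', 'Mary']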
The digital photo frame20can additionally detect via the camera Susan's physical response, for example a body movement or facial expression, to the routed/displayed photo, and an interest level grade based on the physical response can be associated with the photo. The interest level grade can be used to determine whether the digital photo is replayed, or a frequency at which it is replayed. The interest level grade can additionally be used to automatically update the content constraints, such as if Susan indicates a low interest in one or more photos of Sally with her tongue out, or with Sally at a particular location (e.g., a bar). In embodiments of this invention, the frame can display a message or message indicator requesting the physical response, for example so the frame can learn what photos to display. FIG.2further illustrates a second digital photo frame20′ of Susan's friend Mary, remote from the frame20. In an exemplary embodiment, Mary is a community member with Susan, Sally, and Bob, but Mary has not added/allowed photos of Sally or Bob in the content constraints of frame20′. However, Mary loves Paris, and wishes to see all photos related to Paris from her community members. If Bob's picture of Sally taken with mobile device29occurs in front of the Eiffel Tower, then the content restriction filter module(s) will allow the photo to route to Susan's frame20because Sally is present, and will also route the digital photo to Mary's frame20′ for including content related to Paris. In this way, community members can learn of activities of other community members that are of particular interest, without receiving every photo from the other community members. A non-limiting embodiment of this invention consists of at least one picture capture or storage electronic device29communicating directly via a network or wired or other wireless connection with the digital picture frame20. To support such interaction, the picture capture or storage device29connects via the network to the digital picture frame20and downloads the digital picture frame interaction software to the device29. Once installed, the application software scans all the available pictures on the device. The remaining process description is illustrated using a non-limiting example based on face selection. It is likewise, however, within the scope of this invention to select based on objects other than faces, including but not limited to scenes, materials, and activities. A second device29′ (or more) can likewise link to the frame upon being in connection proximity. Once selected, all photos containing faces are clustered by the application, either at the server or frame level. Individual photographs can then be automatically filtered based on features. Some of these features are quality related, e.g., red-eyes or blurred; some features are content related, e.g., excluding certain activities, people, or locations. Those photographs that remain selected can be tagged with additional metadata. Metadata includes all photograph generated information, such as but not limited to location, time, or weather, as well as derived data, such as but not limited to cluster identity. The invention can use clustering to form grouped clusters according to photo content. The clustering is desirably performed as a function of a common detected content in the photos, such as things, places, activities, or combinations thereof.
The clustering of photos can be used for determining a photo slideshow on the digital display of the photos from a digital photo collection as a function of the clustering. The grouped cluster can be integrated into a slideshow as preferred images, or used to provide requested slideshows, such as "show grandma." Embodiments of this invention include identity clustering, such as by longitudinal facial clustering. Photographs are clustered not only in accordance with an individual identity at a given time, namely at a given age, but the disclosed longitudinal facial clustering clusters individuals throughout their lifetime. Consider a collection of pictures that span a prolonged period of time. In such a case, an individual naturally ages, and creating a collage of that particular individual necessitates accounting for their aging process. The disclosed approach uses photograph metadata, including but not limited to the date the photograph was taken, to age the individuals to a common age using any of the individual aging techniques known in the art. Once at a common age, clustering is performed to group pictures of the same individual. FIGS.2and3illustrate a clustering of photos of Queen Elizabeth. In one cluster ("Young Cluster") are photographs of the young Queen. Her age progression is represented in various sub-clusters forming a photo chain to a second cluster ("Old Cluster"). Note, however, that these photographs are all of Queen Elizabeth, and as such, they are all eventually clustered together (as represented by the linking edges) to form a larger cluster. The sub-clusters can be linked together by features that do not change as much over time, such as eye distance, head shape, nose shape, etc. Clusters according to this invention can be constantly updated and adjusted, such as using new photos and/or other new metadata. An alternative or additional clustering technique uses statistical models of metadata. Using metadata, such as, but not limited to, location identification or date and time, the likelihood of a given person being at a given location at a given time is computed. Should photographs support such a given likelihood, they too are potentially clustered. Similarly, metadata can be used to prevent the clustering of an unlikely set of photographs. For a non-limiting example, image-processing techniques are somewhat inaccurate at recognizing and disambiguating children. This can be particularly true for infants and young children. Photo metadata that indicate when and where a photograph was taken can be used to reduce or eliminate such ambiguity. For example, photographs of children taken at a close time interval but at vastly separated geographical locations are likely to be of different individuals. Similarly, children of a similar age but appearing in photographs taken far apart in time are again likely to be different individuals. Clustering in additional embodiments of this invention employs a logical two-phase approach. It is within the scope of this invention that the "two phases" are integrated into a single phase, or are partitioned over a greater number of phases. Initially, a cluster is formed using facial recognition using any one or more of the many clustering techniques known in the art. This phase clusters photographs of individuals that closely resemble one another across all photographs.
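By way of non-limiting illustration, the following Python sketch shows one such metadata plausibility test, flagging photograph pairs that are unlikely to show the same individual because the implied travel speed between the two capture locations is implausible; the 900 km/h airline-speed threshold is an illustrative assumption.

    # Metadata-based disambiguation using time stamps and lat-long pairs.

    import math
    from datetime import datetime

    def km_between(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat-long pairs (haversine)."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 6371.0 * 2 * math.asin(math.sqrt(a))

    def same_person_plausible(photo_a, photo_b, max_kmh=900.0):
        """False if the subject could not have traveled between the photos."""
        hours = abs((photo_a["time"] - photo_b["time"]).total_seconds()) / 3600.0
        dist = km_between(photo_a["lat"], photo_a["lon"],
                          photo_b["lat"], photo_b["lon"])
        return dist <= max_kmh * max(hours, 0.01)

    a = {"time": datetime(2022, 7, 1, 12), "lat": 41.88, "lon": -87.63}  # Chicago
    b = {"time": datetime(2022, 7, 1, 13), "lat": 48.86, "lon": 2.35}    # Paris
    print(same_person_plausible(a, b))  # False: ~6,650 km apart within one hour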
However, should an individual modify their appearance either intentionally, say but not limited to using a costume, or unintentionally, say but not limited to a lopsided profile, an erroneous separate cluster may result. Independent of cause, the merging of clusters of the same individual is needed. The second phase uses known image processing techniques to extract features of photographs within the given cluster. These features are used to represent the cluster, say via a representative centroid. By averaging the features across the photographs within the cluster, the effects of minor variances in some of the features are diminished. The centroids of all clusters are compared. If centroids of differing clusters are sufficiently close, the photographs within the close clusters are further examined. This further examination includes but is not limited to the use of respective metadata. If still deemed similar, the clusters are merged. In another embodiment, the disclosed photograph clustering approach combines clustering with classification. Current clusters form the basis for inducing classification models. These models, in turn, are used to assign currently unassigned photographs to existing clusters if the similarity measure score employed meets or exceeds the discriminative threshold. It is within the scope of this invention to employ any of the many image similarity measures known in the art. In another embodiment, the linking of similar photographs across frames or digital photo collections is performed. Frame and/or mobile device photograph collections are maintained separately; however, the submission of pictures to a frame can constitute a community membership. Within such a community, photographs of the individuals portrayed might be common. In some embodiments of this invention, clusters of photographs across frames/collections are compared, and the server suggests to the owners of the frames, as well as to those submitting pictures to the frames, to send photographs of common individuals to frames that already possess photographs of those individuals, but not necessarily the identified photographs. Thus, photograph sharing is fostered. It is also within the scope of this invention to impose a threshold on the minimum number of photographs in which a particular individual must appear in a remote frame or other digital photo collection prior to suggesting that additional pictures of that individual be routed to that remote frame. Another embodiment clusters photographs based on common detected objects. Exemplary objects include artifacts (things, places, activities) and living beings (typically people and pets). Locations and activities can be detected via metadata, and individuals present can be detected via facial recognition. Photo data can be preprocessed on the photograph-capturing device. The clustering and cropping approaches described herein assume the availability of the data on a server. However, given current computing capabilities on photograph capturing devices, such as but not limited to GPUs, significant preprocessing can immediately occur on the actual device. In doing so, the bandwidth required to transfer the photographs to the server is vastly reduced.
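A minimal Python sketch of the second, centroid-based merging phase described above follows, assuming per-photo feature vectors are available; the distance threshold is an illustrative assumption, and the further metadata examination named above is omitted here.

    # Represent each facial cluster by the centroid of its photographs'
    # feature vectors; merge clusters whose centroids are sufficiently close.

    import numpy as np

    def merge_close_clusters(clusters, threshold=0.35):
        """clusters: list of (n_photos, n_features) arrays; merged list out."""
        centroids = [np.mean(c, axis=0) for c in clusters]
        merged, used = [], set()
        for i in range(len(clusters)):
            if i in used:
                continue
            group = [clusters[i]]
            for j in range(i + 1, len(clusters)):
                if j in used:
                    continue
                # Averaging diminishes minor per-photo feature variances.
                # In the full approach, close clusters would be further
                # examined via metadata before merging.
                if np.linalg.norm(centroids[i] - centroids[j]) < threshold:
                    group.append(clusters[j])
                    used.add(j)
            merged.append(np.vstack(group))
        return merged

    a = np.random.rand(5, 128) * 0.1       # one individual
    b = a + 0.01                           # same individual, slight variance
    c = np.random.rand(4, 128)             # different individual
    print(len(merge_close_clusters([a, b, c])))  # 2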
In embodiments of this invention, preprocessing includes, but is not limited to: image quality detection and enhancement (if photographs remain of poor quality, they are excluded); content filtering (for example, pornographic or chopped feature images are excluded); and repetition filtering (where the same picture is taken multiple times, only the highest quality version is sent). Additionally, picture fingerprints (compressed or very low resolution versions) can be generated on the capturing device, and only the fingerprints are initially sent to the server. If the server deems these pictures desirable, based on any pre-established criteria, the server informs the device, and the original pictures are transferred. Using such an approach, only desired pictures are sent to the server, again reducing the bandwidth involved in the transfer. Cross capturing-device filtering can also be incorporated. Using picture fingerprints, devices can interact directly among themselves to determine which photographs to transfer to the server. For example, a set of community members may wish to strictly share images of common interest, and in embodiments of this invention the devices can share fingerprints and automatically filter photographs of a similar nature. Thus, only one of the photographs, potentially evaluated to be of the highest quality among the similar pictures, is transferred to the server, again reducing the transfer bandwidth. Additionally, a voting procedure can be imposed where only those pictures with a sufficiently high vote are transferred. Likewise within the scope of this invention are other criteria by which to select the photograph to transfer. Non-limiting additional or alternative criteria include but are not limited to the power availability or bandwidth capacity of the capturing device. As an example, if a device has low bandwidth capacity or is nearly out of power, another device might be selected to send its version of a similar photograph. Photographs are posted to frames for the pleasure of the receiver, namely a viewer of the frame. In embodiments of this invention, the frame provides the frame viewer with the capability to provide feedback to the sender of the photograph. Via a physical response, such as a hand gesture, the viewer can send, if they so wish, an indication of the degree of excitement or dissatisfaction with a given photograph. Once the gesture is captured and interpreted, the frame sends the server a message indicating the viewer's interest level in the photograph, which in turn is stored and applied at the server level and/or indicated to the sender of the photograph. Such a feedback loop motivates the sending device or collection to automatically send, or not send, similar additional photographs. Likewise, using facial recognition, the frame can identify the viewer and, if desired, tag the feedback with the viewer's identity. Also, the frame can infer feedback expressed via facial expressions using facial recognition. Similarly, other body movement gestures, such as but not limited to the nodding or shaking of the head representing positive or negative feedback, respectively, can be captured and treated as feedback. The invention further includes ranking the photos to be circulated on the frame, such as for the purpose of automatically determining an order of presentation, a number of display repetitions, and/or a time period of display for each of the photos as a function of the ranking. For any given frame, the photographs sent to the frame are circulated.
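By way of non-limiting illustration, the following Python sketch generates the picture fingerprints described above using an average-hash, a conventional low-resolution summary technique; it assumes the Pillow imaging library is available, and the Hamming-distance threshold is illustrative.

    # 64-bit average-hash fingerprints: compressed, very low resolution
    # summaries exchanged before any full-size transfer.

    from PIL import Image

    def fingerprint(path: str) -> int:
        """8x8 grayscale thumbnail, thresholded at its mean, packed to 64 bits."""
        pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p >= mean else 0)
        return bits

    def similar(fp_a: int, fp_b: int, max_bits: int = 6) -> bool:
        """Near-duplicate check via Hamming distance between fingerprints."""
        return bin(fp_a ^ fp_b).count("1") <= max_bits

    # Devices exchange only these 64-bit values; the server (or a peer
    # device) requests the full photograph only when a fingerprint is deemed
    # desirable and is not a near-duplicate of one already transferred.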
The ordering of presentation of photographs for a given frame can be based on a ranking. In one ranking, similar photographs are clustered based on, for example, but not limited to, people, places, things, or activities. Photographs within these clusters are then shown in succession or within a predefined, or viewer-set, separation of each other. In another ranking application, feedback provided by the frame viewer(s) is used to determine the ordering, with bias given towards the more favorable photographs. In yet another ranking application, photographs are presented in a random order. In yet another ranking application, popular photographs, based on feedback, are repeated, possibly even frequently. In yet another ranking application, all photographs are shown prior to any repetition. In yet another ranking application, the time stamp of the photograph is alternatively or additionally used for ranking. That is, a chronologically ascending or descending order of photographs can be presented. In yet another ranking application, photographs are displayed according to similar seasons or dates within successive or separated years. In yet another ranking application, a popular photograph is displayed longer prior to being replaced by its successor. In yet another ranking application, the duration of display is the same for all photographs. In yet another ranking application, photographs are clustered according to time slices, the number of photographs within each time slice is computed, and photographs within popular time slices are shown more frequently. It is within the scope of this invention to vary the time slices significantly, whereby brief time slices with high photograph counts tend to indicate great interest in the shown event, location, or individuals. Lengthy time slices, without loss of generality, can account for trips, seasons, or any sustained activity or event. In yet another ranking application, the viewer explicitly specifies the ordering and duration of display. One skilled in the art recognizes that any ranking rule known in the art can be imposed and that these presented rankings serve only as non-limiting examples. More so, multiple combinations of these and other ranking schemes are likewise supported by this invention. The digital frame photo systems of this invention can include and/or process great numbers of user photos. From such a massive photo repository, data can be extracted and/or trends can be detected and/or analyzed. The invention further includes methods and systems for trend (or similar information) detection in the photo collections. To facilitate understanding, exemplary trends that can be detected and analyzed from the processed photos include, without limitation, fashion/clothing, color schemes, locations, activities, etc., that are currently popular or growing in popularity. This information can be further analyzed by location (which fashions, where), timing (e.g., seasonal fashion or trips), the age of the individuals (if present in the photos), occasion setting classification (e.g., social gathering, formal or informal), etc. This information can be useful to various industries for design, production, shipping, forecasting, entertainment, etc. The invention includes a method and system that extracts one or more content features from photos of the digital photo collection, and/or photo image features from (same or different) photos of the digital photo collection. For example, deep learning systems can be used to extract image content features.
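Referring back to the ranking applications above, the following Python sketch combines several of them, namely feedback bias, recency, and popularity-scaled display duration, into a single illustrative score; the weights and field names are assumptions, and any of the other ranking rules could be substituted.

    # Order photos for circulation and assign display durations.

    def rank_for_display(photos, recency_weight=0.3, feedback_weight=0.7):
        """Return (score, seconds, photo) tuples, best first."""
        newest = max(p["timestamp"] for p in photos)
        oldest = min(p["timestamp"] for p in photos)
        span = max(newest - oldest, 1)
        scored = []
        for p in photos:
            recency = (p["timestamp"] - oldest) / span      # 0..1
            feedback = p.get("feedback", 0.5)               # 0..1 from viewer
            score = recency_weight * recency + feedback_weight * feedback
            seconds = 5 + 10 * feedback    # popular photos displayed longer
            scored.append((score, seconds, p))
        scored.sort(key=lambda t: t[0], reverse=True)
        return scored

    photos = [
        {"id": 1, "timestamp": 1700000000, "feedback": 0.9},
        {"id": 2, "timestamp": 1710000000, "feedback": 0.2},
    ]
    for score, secs, p in rank_for_display(photos):
        print(p["id"], round(score, 2), secs)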
The extracted features can be represented by weighted n-dimensional feature vectors, which in turn can be clustered using any suitable clustering techniques, including treating the feature vectors as documents, using both serial and parallel system technology, for example, as specified in U.S. Pat. No. 5,864,855, herein incorporated by reference. Additionally, in embodiments of this invention, deep learners can be trained to simply cluster all photos based on readily available pre-trained conventional models or using previously served training data. Similarly, photographs can be indexed, classified, and grouped using other technologies, such as those specified in U.S. Pat. No. 7,787,711, herein incorporated by reference. The photos analyzed can be some or all of the photos processed by the overall system, such as by a photo-sharing system described herein. The photos and/or the extracted information can be stored in a database for determining correlations within/across the photos, or at least groups of the photos. In embodiments of this invention, the photos of interest, potentially all or those selected for some criterion or criteria, without loss of generality, due to the activities, locations, people, season, outfits worn, etc. shown, are tagged with metadata corresponding to the detected information extracted from the photos. For example, without loss of generality, corresponding metadata can be maintained as a vector encoding the items of interest found in the photo. Additionally, entries in each or some of the vector elements can represent specific characteristics, such as the type and color or texture of a clothing element. As an example,FIG.5shows a photo of a man and a young girl at the beach. The photo can be tagged, as labeled, based upon the colors, shapes, light intensity, and/or color histogram of the photo. The photo includes a man wearing a pink short-sleeve shirt and blue shorts, and the photo can be tagged with "man," "pink" or "pink shirt," "blue" or "blue shorts," "shorts" and/or "short-sleeve shirt." Other possible tags that can be obtained from the photo or metadata of the photo (such as location and/or date stamp information) include "beach," "warm weather," "tropical location," and/or "vacation" (due to palm trees or being taken during a winter month). The detected water can additionally or alternatively be determined to be, and tagged with, "ocean" in view of the beach and palm tree. Likewise, the palm tree can lead to a "tropics" tag, and the sun can lead to a "warm" or "hot" tag, particularly in view of the other photo features like the clothing worn. The photo inFIG.5can likewise be tagged with "2 people", "man and child", "adult and youth", "father and daughter", or any other such tagging. Facial recognition and age-determination can be used to further identify information on the persons in the photo. If a number of analyzed photos across one or more photo collections, including photos from numerous different users, show pink clothing, or at least an uptick in pink clothing in a location or timeframe, then a trend towards pink clothing can be determined, such as for the location, timeframe, or the corresponding weather. As a further example, if spring break photos for 2022 show increased pink shirts and swimwear, this information can be useful for 2022 summer clothing trend forecasting, even by particular genders or age groups. It is within the scope of this invention to limit the tagging options.
It is also within the scope to conflate multiple tag options into fewer and/or a fixed vocabulary. In embodiments of this invention, the photo extraction and/or analysis is limited to look for predetermined desired information or features. For example, the photo extraction or analysis can be limited to a particular clothing item/color trend (e.g., pink) and/or location trend (e.g., tropics or spring break) that may be desired by a particular entity. In embodiments of this invention, the information detected in photos can be clustered. The metadata clustering process for trend determination can be similar to that discussed above for frame display; however, it is generally for a different purpose and uses different criteria/photo elements, and can be used in addition to the clustering described herein for frame display purposes. As an example, tagged photos can be organized into a plurality of sub-clusters, each for a corresponding common detected extracted content feature and/or extracted photo image feature, such as by activity, location, temperature/weather/seasons, time, attributes of a person of the photos, light intensity of the photo, or a color within the photos, etc., and various combinations of these categories. Correlations between photos can be mined, such as by comparing sub-clusters of photos across more than one digital photo collection over a network. Example correlations include determining popular locations, clothing items, colors, and/or activities within the tagged photos, such as for a predetermined time period and/or a predetermined age group. Data correlations can be performed by any suitable approach known in the art. As an example, the method of this invention can operate in two modes: mining and/or analytical processing. When mining, such as but not limited to using association rule mining, the system generates a set of "association rules," namely a set of implications with a confidence and support for these rules. In an alternative, the system can analyze, namely investigate or probe, the tagged metadata to better understand patterns, such as using Online Analytical Processing (OLAP). Association rule mining is one of many suitable data correlation approaches that are within the scope of this invention, in which a data mining process derives rules that may govern associations and causal objects between sets of items. In a given transaction with multiple items, association rule mining tries to find the rules that govern how or why such items are often bought together. The output is a set of association rules that are used to represent patterns of attributes that are frequently associated together (i.e., frequent patterns). A data warehouse is a system used to report and analyze data that support decision making. Typically, data from multiple sources are extracted, transformed, and loaded into the warehouse. Then, analytics are performed using, for example, an Online Analytical Processing (OLAP) server, which is based on the multidimensional data model. OLAP is a category of software that allows the system to analyze information from multiple datasets at the same time. OLAP technology enables analysts to extract and view business data from different points of view. There are various OLAP operations to group, aggregate, and join data, such as roll up, drill down, slice and dice, and pivot (rotate).
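By way of non-limiting illustration, the following Python sketch computes the support and confidence of a single candidate association rule over tagged photos; a full miner (e.g., Apriori) would enumerate candidate rules, which is omitted here.

    # Support and confidence of the rule lhs -> rhs over photo tag sets.

    def rule_stats(transactions, lhs, rhs):
        """transactions: iterable of tag sets; returns (support, confidence)."""
        lhs, rhs = set(lhs), set(rhs)
        n = len(transactions)
        n_lhs = sum(1 for t in transactions if lhs <= t)
        n_both = sum(1 for t in transactions if (lhs | rhs) <= t)
        support = n_both / n
        confidence = n_both / n_lhs if n_lhs else 0.0
        return support, confidence

    photos = [
        {"beach", "pink shirt", "warm"},
        {"beach", "blue shorts", "warm"},
        {"beach", "pink shirt"},
        {"ski", "jacket"},
    ]
    # Does "beach" imply "warm" in this collection?
    print(rule_stats(photos, ["beach"], ["warm"]))  # (0.5, 0.666...)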
Roll up is used to aggregate on a data cube; drill down is used to reverse the operation of roll up; and pivot is used to rotate the data axes in view to provide an alternative presentation of data. FIG.6illustrates a representative OLAP cube for a multi-dimensional dataset, with each sub-cube having metadata tags corresponding to the axes. The term "cube" here refers to a multi-dimensional dataset, which is also sometimes called a hypercube if the number of dimensions is greater than three. Data as a cube with hierarchical dimensions helps with analysis and is easier to visualize. This dataset includes three categories, namely Location, Time, and Items. The illustrated locations include Locations A-D, which could be countries, cities, schools, restaurants, sports fields, etc. The illustrated Times are shown as quarters Q1-Q4, which could represent calendar days, seasons, etc. Other times could be hours or daytimes (morning, midday, evening). Items W-Z can be any photo content and/or photo feature from the metadata tags, such as an activity content (e.g., soccer, golf, beach, party, dancing, etc.), one or more attributes of a person or animal of the photos (e.g., age, gender, race, breed, etc.), a light intensity of the photos (e.g., bright, dim, night, day, etc.), and/or a color (e.g., of clothing, jewelry, vehicles, etc.) within the photos. In embodiments of this invention, the correlation mining includes performing slicing operations and/or dicing operations on the multi-dimensional dataset. Slicing and dicing refers to a way of segmenting, viewing, and comprehending data in a database or data warehouse. The term slicing and dicing is generally used in OLAP databases that present data in the multidimensional cube format. To slice and dice is to break a body of information down into smaller parts or to examine it from different viewpoints for better understanding. The slice, illustrated inFIG.7, is the act of picking a rectangular subset of a cube by choosing a single value for one of its dimensions, creating a new cube with one or more fewer dimensions. InFIG.7, the Location and Item of a particular quarter are "sliced" out of the data cube ofFIG.6. The dice, illustrated inFIG.8, is the act of producing a subcube by allowing the system to pick specific values of multiple dimensions.FIG.8shows a dicing operation whereby the new subcube includes Items (e.g., photo content) of a limited number for two Locations and two Times. A main difference between slice and dice in data warehousing is that the slice is an operation that selects one specific dimension from a given data cube and provides a new subcube, while the dice is an operation that selects two or more dimensions from a given data cube and provides a new subcube. Embodiments of this invention include, and the digital picture frame is implemented with, a method, system, and/or apparatus, such as embodied in an MPSM or other software application, that automatically determines and shares a location, an activity, and/or photos of a user. The application learns user activity over time, with the learning based upon user locations and/or context.
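A minimal Python sketch of the slice and dice operations over a small cube keyed by (Location, Time, Item), mirroring FIGS.6-8, follows; the cell values (photo counts) are illustrative.

    # Slice and dice over a cube stored as a dict keyed by coordinates.

    cube = {
        ("Location A", "Q1", "Item W"): 12, ("Location A", "Q2", "Item W"): 7,
        ("Location B", "Q1", "Item X"): 4,  ("Location B", "Q2", "Item Y"): 9,
    }

    def slice_cube(cube, dim, value):
        """Fix a single value of one dimension, yielding a lower-dim subcube."""
        return {k[:dim] + k[dim + 1:]: v for k, v in cube.items() if k[dim] == value}

    def dice_cube(cube, allowed):
        """Keep cells whose coordinates all fall within the allowed sets."""
        return {k: v for k, v in cube.items()
                if all(k[d] in allowed[d] for d in range(len(k)))}

    print(slice_cube(cube, dim=1, value="Q1"))   # all Q1 cells, Time removed
    print(dice_cube(cube, [{"Location A", "Location B"}, {"Q1"}, {"Item W", "Item X"}]))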
The application can learn through automatically determining activities at locations based upon known context information and past context information for the location. The application can tag photos for determining context relevancy for showing on the digital picture frame, as discussed above. The invention further includes energy saving location methods for the mobile device that can be used to more efficiently allow the location and social media aspects of the invention to be implemented on a mobile device. The method and application can be used for any suitable function, such as a safety and/or reminder service, and is particularly useful in social media applications and for generating photos for display on the digital picture frame. The invention will be described below with implementation in an MPSM system, and particularly with an MPSM application that learns user activity over time, with the learning based upon user locations and/or context. The MPSM method and system of this invention is mobile and positional in nature. Such systems, like many other systems originally developed on one type of computing platform but migrated to another, are not limited to operating in mobile environments. That is, while MPSM implementations are targeted to primarily execute on mobile devices, such as but not limited to smart-phones, tablets, and/or laptops, they often support implementation in non-mobile environments such as but not limited to desktops and workstations, servers, and large scale compute farms and cloud computing servers. The invention will be described below with a mobile device, such as a smart phone having cellular service, a GPS system, and access to the Internet via WiFi. The MPSM method and system of this invention is desirably executed or implemented on and/or through a mobile device computing platform. Such computing platforms generally include a processor, a recordable medium, an input/output (I/O) device, and a network interface capable of connecting either directly or indirectly to the Internet. The mobile device executes over a networked environment, a non-limiting example of which is shown inFIG.9. The mobile device is connected, either directly or indirectly, using any of the many techniques and technologies known in the art, over a network, to a back-end system or systems, themselves computing devices. The mobile device can connect with a remote server, shown inFIG.9as server38, to store and/or access user or community information. MPSM systems are used to support users remaining socially aware of their community. That is, their primary usage typically is to actively monitor the location and activity of family members, friends, colleagues, and generally others within one's community. Communities can be partitioned into sub-communities where the union of the sub-communities forms the user's community. The sub-communities may or may not overlap. The partitioning of communities into sub-communities is beneficial in supporting specialized applications. For example, while a user might have general interest in the location and activity of all of their community members, they might be particularly interested in the location and activity of those who might be suddenly in need of assistance. The creation of a community can include the issuing of invitations. An invitation is a request by a user A of another user B to allow the inviting user, user A, to track the activities of the invited user, user B, and vice versa. If the invited user accepts, the inviting and invited users form a community.
A community is relevant to only that user which formed it. That is, different users have different communities. A community is a grouping of invited (referred to as remote) users by the inviting (referred to as local) user. A local user can partition or merge a community, thus forming a sub-community or a parent community, respectively. For example, consider five users: Bob, Sam, Sally, Alice, and Susan. Bob can invite Sam, Sally, and Alice, thus forming his user community. Bob can likewise partition his community into a sub-community consisting of only Sam and Sally. Sally can invite Susan. Thus, Sally's community would include Bob (via his invitation) as well as Susan. If no additional invites occurred, Sam's and Alice's respective communities would only include Bob (each via Bob's invitation), while Susan's community would only include Sally (via Sally's invitation). Providing users with the opportunity to expand their communities in a convenient manner is advantageous. Such expansion can seamlessly be accommodated by including users listed in a user's contact lists, either as a whole or selectively, into their community. Contact lists include, but are not limited to, users listed in a user's local address book, e-mail contact list, Twitter follow list, LinkedIn connections list, and/or Facebook friends list. By incorporating users listed in a user's contact list, the user's community is expanded without effort. Note, however, that selected inclusion can be supported, thus enabling community growth without unnecessarily over-expanding the community. That is, entries from the contact list can be included in their entirety and the user can selectively remove those entries which s/he wishes to be excluded from the community. Similarly, entries from the contact list can be selectively added. Users are identified by their account identifier. To use MPSM, a user account is created. User accounts generally require a user login, which is a unique user identifier, and a password or equivalent. After having created an account, a user can log in. Initially, the local user does not have a community. In embodiments of this invention, over time, the method and application track the activities and location of the local user. Should the local user establish a community as described above, the community members will likewise be tracked. Local users receive notifications of the location and activities of their community members. Once logged in, the local user can select to activate or deactivate self and community tracking and notification. If not overwritten, default settings are used. Whenever logged in and tracking is enabled, a user's location and activity are tracked. That is, a user periodically records their location and/or activity. Locations are tagged by name. Names can follow, but are not limited to, the following schemes: physical (e.g., 123 Oak St.), absolute (e.g., Acme Coffee), relative (e.g., my work office), or proximity (e.g., two miles from home). Activities are typically events. These events might be common to the entire community, such as: "drinking coffee," "eating lunch," "sampling wine," "working from home," "commuting," etc., or more specific to a local user, such as "restoring car" or "driving to lake home." Multiple activities can occur simultaneously. Users can change their activities at any time. Unless preloaded or derived from an external source, such as but not limited to a location database, initially all locations and activities are unknown.
Local users must record all such location-activity combinations, i.e., a local user must name or tag the location and the associated activity. A list of activities common to the local user's community can be provided. This community activity list can be ranked either arbitrarily (randomly), according to most recently used, most frequently used, relevance to location, alphabetically, etc. Eventually, an activity list specific to the local user is learned. This local user activity list can be displayed to the local user either individually, along with the community list, or merged with the community list. Again, any of these lists can be ranked as previously mentioned. FIG.9illustrates a representative area30to demonstrate a method of and application for sharing locations and/or activities of a user participating in a social networking service. The area30is shown as a cellular communication network including a plurality of cells32each disposed around a cellular communication antenna or base station36. Within the area are a plurality of destinations, each shown as including a WiFi Internet connection. The local user has one or more electronic devices, such as a mobile device that communicates with a remote server38via the cellular network and/or the WiFi connections. As will be appreciated, the methods and applications of this invention can operate within any suitable size and configuration of the communication area, depending on what the user encounters. Destination40is the home of the user. The user commutes to office42for work on most business days. On the way, the user typically stops at the coffee shop41. For lunch on most days, the user visits restaurant43, but on Wednesdays the user typically meets a second user for lunch at restaurant44. At each destination40-44, the user enters user information about the destination. The application and computer system that receives the user information automatically associates the user information with the destination, and stores the user information in a locations database, such as on the device and/or at server38. The destination desirably is determined automatically and tagged with the user information, such as a location name of the destination and/or the user activity being performed at the destination. For example, destination40can be tagged as "home" and likely has numerous activities associated with it. The destination41, and any photos taken, can be tagged with its establishment name "Acme Coffee" or simply "coffee shop" and associated with the user activity of "buying coffee" or "latte time." The manually entered user information can then be automatically shared with the user's community in a social networking service. Similar user information is received for the other destinations42-44. The user information desirably includes any other information about the location or activity, whether manually entered or automatically determined, such as the time of the visit or activity. Some destinations, such as home or work, will likely have multiple user activities over a period of time, such as "coffee break," "meeting time," and/or "quitting time." The computer system receives user information and associates the user information with the corresponding destination, and any photos taken, for multiple visits to each of the destinations40-44. The computer system begins learning the locations and user activities.
In embodiments of this invention, the user can be automatically prompted for confirmation of the user information upon arriving at a destination to confirm the location and/or user activity. For example, the user can be provided with an automatically generated list of previously entered user activities for the destination upon arrival, thereby promoting efficient collection of information. The items on the list can be listed in an order based upon a particular ranking, such as the number of times entered previously, or based upon a context, such as what activity is likely being performed at a particular time of a particular day. Over time, the computer system learns the user information and begins automatically associating and identifying at least some user activities for corresponding locations and any photos taken. As will be appreciated, the automatic identifying of activities at locations will likely occur at different rates for different activities and locations, with some locations having fewer activities and/or more frequent visits than others.

In preferred embodiments of this invention, the system automatically shares the user information in a social networking service upon automatically detecting further user arrivals at the destination. Photos taken are likewise automatically tagged with the user information. The automatic sharing of user locations and/or activities desirably occurs upon the user's arrival at the location, or at a particular time at the location. As such, the invention includes an automatic detection of the user's arrival at a destination. The automatic sharing and photo tagging desirably operate without user action and prior to receiving any additional user information for the destination.

As an example, the user may typically purchase lunch at destination43, but on Wednesdays typically goes to lunch with a friend or spouse at destination44. The lunch routines of the user, and particularly the Wednesday lunch routine, can be learned by the system and automatically shared to the user's community upon the system automatically determining arrival, without manual input from the user. If the user is having lunch with a community member, the system can automatically determine that both users are at the same location together to automatically recognize and confirm the lunch activity, and proceed to automatically share the information for both users to their respective communities. If the user deviates from a routine, the system can recognize this, by the mobile device detecting a location different from the typical routine destination, and refrain from sharing the typical destination.

In embodiments of this invention, learning is accomplished by any machine learning, data mining, and/or statistical techniques known in the art. Supervised, semi-supervised, and/or unsupervised approaches can be used, including, but not limited to, Naïve Bayes, Neural Networks, Support Vector Machine, and/or Association Mining based techniques.

The MPSM method and system of this invention desirably record all posted locations and activities. Throughout use, the disclosed invention learns the corresponding locations and the set of associated activities. Moreover, via comments made by the local user and by the local user's communities, the importance of the activities can be learned, such as for the prompting discussed above. Importance can be either local user or community biased. Additionally, importance can be biased by context.
For example, community members as a whole might prefer “eating steak,” “eating pizza,” and “eating sushi,” in that order. On the other hand, a local user might only eat sushi. Thus, local user bias will yield “eating sushi” only, while community bias will suggest “eating steak,” “eating pizza,” and “eating sushi,” in that order.

In embodiments of the MPSM method and system of this invention, locations are named according to a naming convention. Regardless of the naming convention used, a location is a physical geographical position. Moreover, physical geographic locations have associated properties that can vary with, or depend on, context, namely time and date (hour, day of week, calendar date, etc.), the users involved and their relationships to each other, etc. This context can affect the associated location name or activity.

A common scheme that can be used to at least assist in identifying a physical geographical location is the use of geocoding. Geocoding is the representation of a physical location via the pairing of latitudinal and longitudinal coordinates, commonly referred to as a lat-long pair. Global Positioning Systems (GPS) can also determine a physical position coordinate via the triangulation of satellite transmissions. Typically, GPS devices derive lat-long pairs which are made available to a variety of applications, often via map displays. GPS economics, accuracy, and simplicity of use have resulted in their wide appeal and commercial success. Their continuous use in mobile devices is problematic, however, as they are energy intensive and rapidly drain the battery. Thus, alternative means or approaches to detect locations are desired.

Embodiments of the MPSM method and system of this invention, as discussed above inFIG.9, use or rely upon cell coordinates. When mobile devices communicate with a cell tower, they send their cell coordinates. These coordinates are recorded by the cell provider and are typically not publicly known. The cell phone or, in this case, the mobile device supporting the positional social media system, however, is aware of its coordinates. Thus, the device can store the cell coordinate position and automatically associate that cell coordinate with the location name provided by the local user. Over time, a location database of cell coordinate and named location pairs is created. The local portion of the database favors the local user. The union of all the local portions of the location database desirably constitutes the name space of the entire MPSM system of this invention. It is understood that any of the many database management systems or storage schemes known in the art can serve as the platform for this location database. Thus, location names can be provided without the need to rely on a global positioning system, reducing battery consumption. Location data can additionally or alternatively be purchased or otherwise provided by a third party.

An additional and/or alternative approach for automatic location determination relies on WiFi triangulations. Mobile devices can grow and maintain a database of known open WiFi networks; for clarity, this database is referred to herein as an Open-WiFi-Net database. Such mobile devices can use the information stored, or derived from the information stored, in the Open-WiFi-Net database to further refine the accuracy of a location without the use of GPS.
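The following minimal Python sketch illustrates, without limitation, one possible shape for such a cell-coordinate location database; the coordinate tuple shown is a hypothetical (MCC, MNC, LAC, CID) identifier, and the bidirectional lookups correspond to the querying discussed below:

```python
# Minimal sketch of a cell-coordinate/named-location database.
from collections import defaultdict

class LocationDatabase:
    def __init__(self):
        self._names_by_cell = defaultdict(set)   # cell coordinate -> names
        self._cells_by_name = defaultdict(set)   # name -> cell coordinates

    def tag(self, cell_coordinate, location_name):
        """Pair an observed cell coordinate with a user-provided name."""
        self._names_by_cell[cell_coordinate].add(location_name)
        self._cells_by_name[location_name].add(cell_coordinate)

    def names_for(self, cell_coordinate):
        return self._names_by_cell.get(cell_coordinate, set())

    def cells_for(self, location_name):
        return self._cells_by_name.get(location_name, set())

db = LocationDatabase()
db.tag(("310", "410", "12345", "678"), "Acme Coffee")  # hypothetical coordinate
db.tag(("310", "410", "12345", "678"), "coffee shop")  # same cell, second name
print(db.names_for(("310", "410", "12345", "678")))    # both names returned
```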
Via point triangulation, when an Open-WiFi-Net database is available, the mobile operating system uses not only the cell tower but also WiFi triangulations to determine location. It is within the scope of this invention to use either or both cell-phone and WiFi triangulations to enhance location information in addition to any other disclosed approach. The mobile device can use the WiFi signal at a destination, such as destination43, and additionally or alternatively any detectable open WiFi signal from a neighboring location, such as establishment45that is adjacent destination43.

Having created the location database, searching, namely querying, the database uses the cell coordinate or the location name. That is, a location name query takes a location name as input and returns the corresponding cell coordinate. A cell coordinate query takes a cell coordinate as input and returns the corresponding location name. Note that multiple names can be attributed to a given cell coordinate. That is, a local user might name a location using multiple different names; different users can name the same location using different names. Similarly, the same name can be used for different cell coordinate locations. All names corresponding to a given cell coordinate are returned. It is within the scope of this invention to selectively return names based on context, user, or community bias. Similarly, all cell coordinates corresponding to a given name are returned. Again, it is within the scope of this invention to selectively return coordinates based on context, user, or community bias. Ranking of the results returned can, when desired, be biased towards the local user.

A key concern for MPSM systems is collecting location information. Clearly, any location information available within the mobile device should be harnessed. Thus, if GPS readings or any other location information is generated by other device resident applications, these readings are desirably recorded and utilized by the method and application of this invention. However, strict reliance on other applications to obtain positional information is not realistic.

In embodiments of the MPSM method and system of this invention, positional information is obtained via the use of geofences. A geofence is a geographical boundary or “fence” surrounding a positional reading. As these boundaries are radius based, geofences are generally circular. A location transmission occurs whenever a handover from one cell tower to another occurs, and is expected, but not guaranteed, to occur when a geofence boundary is crossed. To track location, periodic location transmissions are required. Since location transmissions must be minimized to conserve device energy, transmissions should only occur given geographical movement. Thus, crossing a geofence should generate such a transmission. Unfortunately, as crossing a geofence does not guarantee a location transmission, increasing the likelihood of a transmission is necessary. In contrast to known approaches that surround a location with a single geofence, to increase the likelihood of a location transmission during movement, embodiments of this invention include surrounding a location geofence with a plurality of geofences.
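As one non-limiting geometric realization of surrounding a location geofence with a plurality of geofences, the following Python sketch places the centers of the additional geofences on a ring around the current location so that each neighboring fence abuts the central fence; the constants are illustrative approximations:

```python
# Minimal sketch: ring of neighbor geofences around a central location geofence.
import math

def surrounding_geofences(lat, lon, radius_m, count=6):
    """Return (lat, lon, radius_m) for `count` geofences ringing the central one."""
    ring = []
    meters_per_deg_lat = 111_320.0  # approximate
    meters_per_deg_lon = meters_per_deg_lat * math.cos(math.radians(lat))
    for i in range(count):
        angle = 2 * math.pi * i / count
        d_north = 2 * radius_m * math.sin(angle)  # centers placed 2r away so each
        d_east = 2 * radius_m * math.cos(angle)   # neighbor abuts the center fence
        ring.append((lat + d_north / meters_per_deg_lat,
                     lon + d_east / meters_per_deg_lon,
                     radius_m))
    return ring

center = (41.88, -87.63, 100.0)        # current location with a 100 m geofence
neighbors = surrounding_geofences(*center)
```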
In one embodiment of this invention, a method and system of tracking a user includes determining a location of the mobile user, automatically establishing a first geofence around the location, and automatically establishing a plurality of additional geofences around the first geofence, with each geofence including a boundary. A location transmission is obtained by the mobile device upon crossing a boundary of the first geofence or any of the plurality of additional geofences. Multiple neighboring geofences are advantageous since they increase the likelihood of a location transmission, as their boundaries are likewise likely to be crossed given movement.

FIG.10representatively illustrates a geofence60surrounding a current location62. The geofence60is surrounded by additional geofences64, all within a given cellular tower transmission cell65. Note that part of a neighboring geofence64′ is not fully within the cell; this limits its benefit, since a cell tower handoff caused by movement into cell65′ will itself generate a location transmission.

Geofences are implemented as software processes. Operating systems for mobile devices, such as but not limited to iOS and Android, limit the number of processes available to an application, and thus, the number of geofences is bounded. However, this limit typically exceeds the number of geofences generated using the approach described above. Therefore, additional processes are available, and hence, additional geofences are possible. To increase the likelihood of a location transmission given movement, in embodiments of the invention, the remaining available processes implement static geofences. A static geofence is not dynamically generated given a new location as previously described. Rather, a static geofence is one that is fixed and represents those locations that are likely to be crossed by a given user. That is, users are habitual and often frequent a limited set of locations, for example but not limited to, their home, office, or favorite wine or sushi bar. By learning the frequent locations of users, both individually and system wide, and setting static geofences at these locations, biased by the individual user, the probability of a location transmission is increased since additional geofences are likely crossed. Moreover, these frequented locations vary by city, county, state, country, etc., as well as by other factors such as but not limited to day and time. Geographical and temporal presence can thus be used to vary the set of static geofences for a given user. For example, the set of static geofences for a given user will vary if the user is in Washington, DC rather than in San Francisco, CA. Similarly, the set of static geofences for a given user will vary depending on the day and time. For example, a user frequents work on weekday mornings but frequents their favorite bagel shop on Sunday mornings and their favorite sushi bar on Thursday evenings.

Location transmissions suffer from a margin of error. Thus, it is difficult to precisely pinpoint and tag a location with a single transmission. Embodiments of this invention include automatic refining of a location of a user destination as a function of user routines, such as established by several user visits to the destination. As time progresses, however, and a user frequents the same location multiple times, multiple location transmissions for the same location are recorded.
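A minimal Python sketch of static geofence selection, under assumed data shapes, is given below: the user's most-frequented locations are selected per geographic region and day/time context, as described above, to fill the remaining process slots:

```python
# Minimal sketch: choose static geofences from a user's most-frequented locations.
from collections import Counter

def static_geofences(visit_log, region, day_period, max_fences):
    """visit_log: iterable of (region, day_period, location) visit records."""
    counts = Counter(loc for r, p, loc in visit_log
                     if r == region and p == day_period)
    return [loc for loc, _ in counts.most_common(max_fences)]

log = [
    ("Washington DC", "weekday_morning", "office"),
    ("Washington DC", "weekday_morning", "office"),
    ("Washington DC", "weekday_morning", "coffee shop"),
    ("Washington DC", "sunday_morning", "bagel shop"),
]
print(static_geofences(log, "Washington DC", "weekday_morning", max_fences=3))
# ['office', 'coffee shop'] -- the Sunday-morning set would differ (bagel shop)
```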
In one embodiment of this invention, as representatively shown inFIG.11, by overlapping the transmitted locations along with their margins of error, a more accurate location can be derived. The overlapping of location transmissions for a given location70between streets72and within geofence74, along with their margins of error, represented as circles76, yields an accurate location placement.

As shown inFIG.11, location accuracy improves as related data are collected. Related data, however, can, at times, be somewhat erroneous (in terms of accuracy). A non-limiting example is an entrance to a shopping mall. Such an entrance is not necessarily at the center of the complex. Regardless of the entrance displacement from the center of the complex, the entrance location can still be used to increase location accuracy of the mall complex since the readings for the entrance are consistent. That is, for a given user, given mobile device, given carrier, etc., such location recordings remain consistent, albeit slightly erroneous. Thus, even dirty, namely potentially inaccurate, data can result in correct location identification. Additionally, having established a location, corresponding lat-long pair coordinates can be reverse engineered, namely mapped back onto, a place name. These derived lat-long pair coordinates become yet an additional information component that is used by a learning system to better refine a mapping to a named place. Machine learning, data mining, and statistical approaches that are supervised, semi-supervised, or unsupervised can be used, as known in the art, to cross-correlate all available location related data. Once determined, the user information including the location and/or the user activities is automatically stored in a database.

Embodiments of the MPSM method and system of this invention include a computer server for providing and implementing the tracking and/or social networking service of this invention. The computer server includes a location module to determine the user location and/or a tagging module configured to correlate manually entered user information to a user destination, and a database module configured to store user information including user locations and user activities at the user locations. For social media and photo sharing, the server further desirably includes a communication module configured to automatically share a user activity or photo in the social networking service upon further user arrivals at a corresponding one of the user or community locations. The server can also include an association module configured to associate the user activity with the corresponding user location and any photo taken.

Since location transmissions are needed during movement, the obvious question arises: when should the transmissions cease? That is, the system must determine when the user has arrived at a location to know when to perform the automatic steps discussed above. As discussed above, GPS systems are an energy drain on a mobile device, particularly as the GPS remains on and linked with the satellites to maintain location detection. Keeping a GPS application operating is a drain on both the processor and the battery of the mobile device. This invention provides a method and executable application that conserves energy by not continually running during use of the mobile device.
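Returning to the overlap-based refinement illustrated inFIG.11, the following minimal Python sketch shows one plausible realization, not necessarily the one employed: each transmission is treated as a (lat, lon, margin-of-error) reading, and an inverse-variance weighted centroid is taken so that tighter readings count more:

```python
# Minimal sketch: refine a location from repeated readings with margins of error.
def refine_location(readings):
    """readings: iterable of (lat, lon, error_m); returns a refined (lat, lon)."""
    total_w = sum(1.0 / (err * err) for _, _, err in readings)
    lat = sum(la / (err * err) for la, _, err in readings) / total_w
    lon = sum(lo / (err * err) for _, lo, err in readings) / total_w
    return lat, lon

visits = [(41.8801, -87.6305, 50.0),   # three visits to the same location,
          (41.8803, -87.6298, 30.0),   # each with its own margin of error
          (41.8799, -87.6301, 80.0)]
print(refine_location(visits))         # weighted toward the tightest reading
```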
Embodiments of the MPSM method of this invention provide an automated method of tracking a mobile user that includes providing a location module configured to receive location transmissions, placing the location module into a sleep mode, awakening the location module upon receipt of a location transmission, and determining a location with the location module. These placing, awakening, and determining steps are repeated, thereby placing the application into a sleep mode when not needed and reducing the drain on the mobile device. The application goes into sleep mode when necessary or when desired, such as when the application is not needed, e.g., during extended movement or upon an arrival at a location. In embodiments of the MPSM method and system of this invention, the application can go into sleep mode whenever a time since the device awakening exceeds a predetermined time allocation, or upon a determined rate of travel exceeding a predetermined threshold, thereby indicating extended travel.

FIG.12illustrates one exemplary, and non-limiting, method according to an embodiment of this invention to automatically detect arrival at a destination. The method is useful for tracking a user's location for any of various reasons, including, for example, for safety, to provide automated reminders, and/or to provide automated suggestions to the user based upon the destination and/or surrounding area. The method ofFIG.12is particularly useful for implementing the method and system discussed above, and can be used to implement other applications and methods to provide energy savings compared to GPS location methods in mobile devices.

FIG.12includes a flow chart100that includes and/or represents three distinct situations, namely, an actual arrival, rapid movement, and sporadic movement without an actual arrival. Initially, the application is in sleep mode. Sleep mode is a state in which no processing, and hence no energy consumption, takes place. Processing occurs once the application is awoken. A location transmission, such as a cell tower transmission or another application obtaining location information, awakens the application in step102. Since the application awakening occurs due to a location transmission, the current location is known. Once awakened, the application typically has a maximum amount of time to complete its processing. This limit, called the time allotment, is set by the device operating system. All processing must complete prior to exceeding the time allotment. Ideally, the application should relinquish the processing flag back to the device operating system before the operating system forcefully removes the application from its active queue. Voluntarily terminating an application, namely returning it to the sleep mode, rather than having it forcefully terminated by the host operating system, is considered good citizenship.

In step104, the application initializes two timers, namely, a timer count representing the duration of time the process has executed since last awakening, and a stationary count representing the duration of time since the last detected device movement. As time progresses and the process executes, the timer count is incremented in step106. In one embodiment of this invention, whenever the application processing time exceeds the operating system time allocation (108—YES branch), the application is voluntarily placed in sleep mode105. Note that the time allocation threshold is not necessary, but is set to support good citizenship.
Assuming that the time limit has not been reached (108—NO branch), the application waits for t time units in step110. After waiting t time units, new current location data are obtained in step112and stored locally on the device in step114. In step116, the current location is compared to the previously known location. If the two locations differ (116—NO branch), the rate of travel is computed in step118. If the rate of travel exceeds a threshold (120—YES branch), the process is desirably and voluntarily placed in sleep mode122. Rapid travel is unlikely to result in an immediate or near term arrival; thus, checking locations while moving rapidly unnecessarily uses device energy. Eventually, the application process is awoken with the device moving at a slower rate. At that time, location checking is needed as an arrival might soon occur. If or when the rate of travel is slow (120—NO branch), movement is noted in step124, and the loop is repeated commencing with the indication that additional processing time has elapsed in step106.

Thus far, the arrival detection process has been voluntarily placed in sleep mode either due to having exceeded the self-imposed processing allotment quota, which is desirably set just slightly below the operating system's time limit that leads to the removal of the application from the active queue (108—YES branch), or due to having travelled too rapidly (120—YES branch). Slow travel has resulted in simply recording the locations traveled, noting that movement exists in step124, and awaiting either arrival or process termination.

Arrival is determined when the same location is detected for a sufficient duration of time. That is, an arrival is potentially determined when the location remains the same (116—YES branch). The stationary detection count is then incremented in step126. If the stationary threshold is not yet exceeded (128—NO branch), the application waits for t time units in step110, and the current location is obtained in step112and stored locally in step114. A sufficient and predetermined duration at the same location eventually surpasses the arrival detection threshold (128—YES branch). Once arrival is determined, arrival is declared in step130, and all data regarding the prior locations visited and stored locally are compressed and sent to the back-end system supporting the application in step132. A new location checkpoint is established in step134, and the process is placed in sleep mode136. From the sleep modes, the process ofFIG.12repeats upon a subsequent location transmission.

Compression of location data is typically performed prior to transmission from the local device to the back-end system, as often the location data may not be needed at the back end. Location data may not be needed in cases, for example but not limited to, during rapid travel. Although exemplified as having data compression occur prior to the sending of the data to the back-end, it is within the scope of this invention to compress location data prior to storing them locally.

All parameters described above forFIG.12, for example t (for the time units), the timer count, etc., are system and device dependent. Experimentation with and fine tuning of these and other parameters is within the scope of this invention. Also within the scope of this invention is the tuning of these and other parameters via the use of machine learning, data mining, and statistical approaches; supervised, semi-supervised, and unsupervised approaches can be used.
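For concreteness, a minimal Python sketch of the arrival-detection loop ofFIG.12follows. The helpers (get_current_location, rate_of_travel, compress_and_send, sleep_mode) are hypothetical stand-ins for device- and back-end-specific calls, and all parameter values are illustrative only since, as noted above, they are system and device dependent:

```python
# Minimal sketch of the FIG.12 arrival-detection loop; parameters illustrative.
import time

T_WAIT = 5.0              # t time units between location checks
TIME_ALLOTMENT = 25       # self-imposed limit, just below the OS limit
RATE_THRESHOLD = 15.0     # m/s; faster travel defers location checking
STATIONARY_THRESHOLD = 3  # identical readings required to declare arrival

def on_awakened(known_location, get_current_location, rate_of_travel,
                compress_and_send, sleep_mode):
    timer_count = 0        # step 104: time executed since last awakening
    stationary_count = 0   # step 104: time since last detected movement
    stored = []            # locations stored locally (step 114)
    previous = known_location
    while True:
        timer_count += 1                        # step 106
        if timer_count > TIME_ALLOTMENT:        # 108 - YES: good citizenship
            return sleep_mode("time allotment exceeded")
        time.sleep(T_WAIT)                      # step 110: wait t time units
        current = get_current_location()        # step 112
        stored.append(current)                  # step 114
        if current != previous:                 # 116 - NO: movement detected
            if rate_of_travel(previous, current, T_WAIT) > RATE_THRESHOLD:
                return sleep_mode("rapid travel")     # 120 - YES branch
            stationary_count = 0                # step 124: note movement
            previous = current
        else:                                   # 116 - YES: possible arrival
            stationary_count += 1               # step 126
            if stationary_count > STATIONARY_THRESHOLD:   # 128 - YES branch
                compress_and_send(stored)       # steps 130-132: declare arrival
                return sleep_mode("arrived")    # steps 134-136: new checkpoint
```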
As discussed above, once the user has arrived at a destination, the location identification, user activities at the location, and/or any proximate third party members of the user's community are determined, if not already known. In this way, the devices continually and automatically determine locations, which can be used to identify any establishments and/or any community members at or within proximity to the location.

User activities are actions or events. Example user activities include but are not limited to “drinking wine,” “flying,” “reading,” “attending conference,” or “commuting.” Users specify a particular user activity either by selecting from a provided list or by entering a different user activity. As discussed above, the provided list is generated by storing all previously entered user activities from all system users but biasing the ranking of the provided activities based on context, the local user, their community, or a combination thereof.

All location and user activity pairs are stored in a database correlating the location with the activity. Any of the many database management systems or storage schemes known in the art can serve as the platform for this location-activity database. Furthermore, it is well understood in the art that the location-activity database can store many additional features. For example, the user identity and the date and time of the pair are recorded. Over time, the database grows and contains a sufficient number of pairs to support mining. The volume of data needed to mine correlations is dependent on the mining algorithm deployed and the level of accuracy needed. As known in the art, there are many machine learning, data mining, and statistical approaches to support mining. By using any of the many available such approaches, either individually or in combination, a local user activity preference per location is learned. Example learning approaches include supervised, semi-supervised, and unsupervised approaches, including but not limited to Naïve Bayes, Neural Networks, Support Vector Machine, and Association Mining based techniques. The use of proprietary mining techniques is likewise within the scope of this invention. Once the local user preference is learned, this preference is used to bias the aforementioned provided user activity list.

There are many approaches to identify locations. Automated location identification is accomplished by periodic checking of the current location. Periodicity of checking depends on, for example, the method used to determine the location, the desired frequency of reporting, recording, and notification, and the resources available to support the checking. Other periodicity determination approaches known in the art can likewise be used. One approach to automate location identification is the periodic determination of lat-long pairs via the use of a GPS device. An online service or a locally resident database can be used to correlate the GPS readings with locations. A preferred embodiment of this invention uses the aforementioned location database. Whenever a transmission to a connected cell tower is made, the cell coordinates of the transmitting device are used as a search query against the location database. If a match is detected, that location is identified. Another preferred embodiment detects locations upon the crossing of geofence boundaries as previously discussed. Note that both dynamically determined geofence boundaries and static geofence boundaries detect a location.
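By way of a non-limiting illustration of biasing the provided activity list once a local user preference per location has been learned, the following Python sketch blends community-wide counts with the local user's own recorded pairs; the bias weight is an illustrative assumption:

```python
# Minimal sketch: bias a community-ranked activity list by learned local preference.
from collections import Counter

def biased_activity_list(location, all_pairs, local_pairs, local_weight=3.0):
    """Pairs are (location, activity) tuples; higher score ranks earlier."""
    scores = Counter()
    for loc, act in all_pairs:
        if loc == location:
            scores[act] += 1.0
    for loc, act in local_pairs:
        if loc == location:
            scores[act] += local_weight      # learned local preference bias
    return [act for act, _ in scores.most_common()]

community = [("Location A", "eating steak")] * 5 + \
            [("Location A", "eating pizza")] * 3 + \
            [("Location A", "eating sushi")] * 2
local = [("Location A", "eating sushi")] * 2
print(biased_activity_list("Location A", community, local))
# ['eating sushi', 'eating steak', 'eating pizza'] -- local bias lifts sushi
```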
Yet another preferred embodiment detects locations by capitalizing on location transmissions generated by any other application operating on the mobile device requesting location information.

In embodiments of the MPSM method and system of this invention, local users, unless disabled by a local user, can be provided with automated notifications for themselves and for their community members. These notifications describe locations, activities, or correlated locations and activities for themselves and their community members. For example, unless disabled by the user, any time a user arrives at a new location, the local user and their communities can be notified of the user's new location. Automated location detection and notification, unless disabled, occurs without requiring a local user prompt. Similarly, activity notification can be automated. Once a user arrives at a location, a set of activities previously occurring at that location is shared with the community or provided to the local user for information or sharing. If the user chooses to confirm at least one of these past activities, both the local user and their respective community members are notified of this at least one activity, and any photo taken is automatically tagged with the context information.

In another embodiment of this invention, automated notification involves shared experiences. A shared experience is one that associates multiple users. These associations can be passive or active. A passive association is strictly informative in nature, while an active association requests an action. Non-limiting examples of passive shared experiences based on locations include: “User A is at User A's office, as is User B” and “User A is at home as is User C.” Note that the first example involves multiple users at the same physical location, namely User A's office, while the second example involves multiple users at the same relative locations, namely their homes, but at different physical locations. Similarly, passive shared experience notifications can be based on user activity. Non-limiting examples of passive shared experiences based on activity include: “User A is eating lunch as is User B” and “User A is participating in her favorite sport as is User B.” Note that the first example involves multiple users participating in the same activity, namely eating lunch, while the second example involves multiple users involved in activities of a similar nature, namely participating in their own favorite sport, which can be different actual activities, namely racquetball and swimming. In both passive shared experiences based on location and on activity, machine learning, data mining, and statistical approaches known in the art that are supervised, semi-supervised, or unsupervised can be used to correlate relative locations and activities to physical locations and activities.

Other shared experiences can prompt for action, and are thus considered active. A non-limiting example of an active shared experience prompting for action includes: “User A posted a picture when at Penn Station; you are now at Penn Station; please post a better picture.” Thus, active shared experiences request the user to actively react. As above, active shared experiences can be location or activity based and can be absolute or relative. Note that it is likewise within the scope of this invention that individual user notifications be active or passive, in a similar manner as described above.
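A minimal Python sketch of generating passive shared-experience notifications follows, assuming each user's current status is a (location label, physical place, activity) record; matching the physical place yields same-location experiences, while matching only the label yields relative ones (e.g., both users at their respective homes):

```python
# Minimal sketch: passive shared-experience notifications from current statuses.
def shared_experiences(statuses):
    """statuses: dict of user -> (label, place, activity); yields notice strings."""
    users = sorted(statuses)
    for i, a in enumerate(users):
        for b in users[i + 1:]:
            label_a, place_a, act_a = statuses[a]
            label_b, place_b, act_b = statuses[b]
            if place_a == place_b:              # same physical location
                yield f"{a} is at {label_a}, as is {b}"
            elif label_a == label_b:            # same relative location
                yield f"{a} is at {label_a}, as is {b} (different physical locations)"
            if act_a == act_b:                  # shared activity
                yield f"{a} is {act_a}, as is {b}"

now = {
    "User A": ("User A's office", "12 Main St", "working"),
    "User B": ("User A's office", "12 Main St", "working"),
    "User C": ("home", "4 Elm Ave", "eating lunch"),
    "User D": ("home", "9 Oak Rd", "eating lunch"),
}
for notice in shared_experiences(now):
    print(notice)
```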
However, the correlation of locations and activities, both for passive and active notifications, is based strictly on the current, past, or projected expected activities of the individual user rather than those of multiple users. Typically, only changed locations and activities are notified. That is, a location or activity is not typically repeatedly identified. However, a local user can request repetitive notifications based on any triggering condition known in the art.

Local users do not always remember to indicate a new location name or confirm which of the possible suggested name or names the system indicated for the given location. As such, it is at times advantageous to prompt the local user for information. However, overly aggressive prompting might annoy the user. In embodiments of this invention, the application non-invasively prompts the user upon detecting an unknown location for the given local user. To avoid annoyance, prompting is repeated only rarely, say twice; the number of repeated prompts can be set as a parameter. Similarly, to provide a sense of comfort, if the back-end system recognizes the location based on the local user's community members' naming schemes, it prompts the local user with guiding messages, for example but not limited to, “Many of your community members call this location The Tasting Room.”

Identification of activities associated with a given location or a given community member can be additionally or alternatively automatically inferred in multiple ways. In embodiments of this invention, the computer system can automatically determine a positional destination of a user, such as by using a mobile device discussed herein, and automatically deduce as user information a location type and/or user activity of the positional destination. The user information can be deduced, at least in part, based upon the destination context. Exemplary context information includes, without limitation, time-dependent information (e.g., what time of day is it?), community information (e.g., who is also there?), and/or third-party information about the positional destination. This method, tied with automatic sharing of the user information in a social networking service, can provide a partially or fully automated process for determining user location and activity, and tagging photos taken with the context information.

In one embodiment of the MPSM method and system of this invention, the automatic deducing of the user information is based upon a known or learned user routine. As discussed above, local users typically follow standard routines. Some routines are daily, weekly, monthly, etc. Other routines are context dependent. Regardless of the nature of the routine, learning via any of the many statistical, machine learning, data mining, or business analytical techniques known in the art enables predictive behavior and automated activity and location suggestion. For example, but not limited to, if a local user always goes out to lunch at noon on every weekday, and an unknown location is detected on a Tuesday at noon, then the application can suggest that this unknown location is likely a restaurant and the activity is likely eating lunch. Similarly, routine identification enables the prevention of transmissions, both positional and informational.
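The following minimal sketch, with an assumed routine table, illustrates such routine-based deduction: a learned routine keyed by (day type, hour) supplies a suggested location type and activity for an unknown location detected at a matching time, and sleep-hour entries can likewise suppress location checking:

```python
# Minimal sketch: routine-based deduction and transmission suppression.
ROUTINE = {  # (day_type, hour) -> (suggested_location_type, suggested_activity)
    ("weekday", 12): ("restaurant", "eating lunch"),
    ("weekday", 8): ("office", "working"),
}
SLEEP_HOURS = {("weekday", h) for h in range(0, 7)}  # illustrative sleep schedule

def deduce_unknown_location(day_type, hour):
    return ROUTINE.get((day_type, hour))

def should_sleep(day_type, hour):
    return (day_type, hour) in SLEEP_HOURS

# An unknown location detected on a Tuesday at noon:
print(deduce_unknown_location("weekday", 12))  # ('restaurant', 'eating lunch')
print(should_sleep("weekday", 3))              # True: suppress transmissions
```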
For example, but not limited to, if a local user always goes to sleep at midnight on Sunday through Thursday and awakens at 7:00 am the following day, then energy can be saved if the application voluntarily places itself in sleep mode during the hours that the local user is known to be sleeping. Additionally, routines can involve a sequence of activities and locations. A non-limiting example of a sequence of activities includes: on weekdays, Eric arrives at his office at 8:00 am, drinks coffee at 10:00, develops software from 11:00 am until 5:00 pm, commutes home at 5:30, and finally, arrives at home at 6:00 pm.

Another location and/or activity deduction approach is by association. The automated deducing can include automatically associating a user with a second user at a positional destination. If the second user's location and/or second user's activity is known, then the system can automatically infer the location type and/or user activity of the first user from the second user's location and/or activity. Consider a previously known event such as: “Community member Sally swimming at the Somerset pool,” assuming that the Somerset pool location was previously identified. As an example of automatically determining a current activity of community user Sam, the system identifies through location determination that Sam is currently at the same location as Sally, and also that Sally is currently at the Somerset pool. From this information, possible automatically postulated associations and activities are: “Sam is at the Somerset pool,” “Sally is swimming,” and “Sam is swimming.” Thus, it is possible to infer an activity for a community member from association with another community member. It is within the scope of this invention to use any logical inference methods known in the art to generate plausible associations. It is also within the scope of this invention to obtain confirmation of the plausible postulated activity by the community member, in this case Sam, by asking either Sam or Sally or by any other means known in the art.

Desirably, the computer system operating the MPSM automatically stores past user information, including the past location type and/or user activity of the positional destinations of all users. User information for future visits to repeat positional destinations can be automatically deduced as a function of the stored past location type and/or user activity of the positional destination. In embodiments of this invention, the system can rely on recorded previous activities of a user, a community member, or any system user at a given location to postulate on a user's activity at a given location. Past context information for past visits to the positional destination by the user and/or community members of the user can be compared to a current context of the user's visit to the positional destination to deduce the user information. In one embodiment, the system can reduce possible location types and/or user activities as a function of the past location type and/or user activity of the positional destination. As an example, at a given Location A, users previously studied, talked, ate, and drank. Thus, if a user's positional destination is detected as Location A, then the plausible activities postulated can be studying, talking, eating, and drinking. Moreover, if the given user's community members previously only talked, ate, and drank, it can be postulated with a higher probability that the given user is talking, eating, or drinking rather than studying.
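A minimal Python sketch of this colocation inference follows; the facts and rule form are illustrative only and not a prescribed inference engine:

```python
# Minimal sketch: postulate a user's location and activity by association.
def postulate_by_association(user, colocated_with, known):
    """known: dict of user -> (location, activity); returns postulate strings."""
    if colocated_with not in known:
        return []
    location, activity = known[colocated_with]
    return [f"{user} is at {location}",
            f"{colocated_with} is {activity}",
            f"{user} is {activity}"]   # plausible; to be confirmed with either user

known_status = {"Sally": ("the Somerset pool", "swimming")}
for postulate in postulate_by_association("Sam", "Sally", known_status):
    print(postulate)
# Sam is at the Somerset pool / Sally is swimming / Sam is swimming
```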
Furthermore, if the given user visited Location A previously, but only talked and drank, then it is even more probable that the user is currently talking and drinking rather than eating or studying. It is within the scope of this invention to postulate some or all of the previously detected activities of a given location. Moreover, it is within the scope of this invention to rank order the activity suggestions according to the relevance of the previously visiting users to the given current user. As previously described, the system can request confirmation of suggested activities through the user's mobile device.

The system can additionally or alternatively reduce possible location types and/or user activities as a function of the past location type and/or user activity of the positional destination as a function of the time of day. The system can rank possible location types and/or user activities of the positional destination based upon known past time periods corresponding to the time of day of the current user visit. For example, again given Location A, if previous visiting users were recorded to study one or more times during the intervals 3:00-4:30 PM and 7:30-9:00 PM, and to drink one or more times during the intervals 4:00-7:00 PM and 8:30 PM-2:00 AM, then a current visiting user at Location A at 3:15 PM is likely studying, at 4:15 PM is likely to be either studying or drinking, and at 1:00 AM is likely to be drinking. Moreover, if the given user's community members only studied between 3:15-4:30 PM, then it can be postulated with a higher probability that the given user is studying rather than drinking at 4:15 PM. Furthermore, if the given user visited Location A previously but only studied, then it is even more probable that the user, when at Location A, is studying. Again, it is within the scope of this invention to postulate some or all of the previously detected activities of a given location, to rank order the activity suggestions according to the relevance of the previously visiting users to the given current user, and, as previously described, to request confirmation of suggested activities through the user's mobile device.

In embodiments of this invention, time context alone can be used to postulate activities. For example, if, on most days, a user is recorded to be drinking coffee between 9:00-10:00 AM, then, without contradictory information, a plausible activity postulate is that at 9:35 AM the user's activity is drinking coffee. Again, as previously disclosed, it is within the scope of this invention to rank order the postulated activity suggestions according to the relevance of the previous users to the given current user and/or to obtain confirmation of suggested activities. Additionally, it is also within the scope of this invention to rank order the time postulates based on frequency of occurrence within the time interval. This rank ordering applies to both location based and location independent time based postulates. For example, if in the interval 4:00-4:30 PM community members studied 25 times but drank 5 times, then, at 4:15 PM, it can be postulated with a higher probability that the given user is studying rather than drinking.

In embodiments of the MPSM method and system of this invention, the system can search and/or use, if available, external, third party information about the positional destination for postulating activities for a given location.
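The interval-based postulation worked through above can be sketched minimally as follows, with an assumed record shape and a simple source-specificity ranking (the user's own history first, then community members, then all users):

```python
# Minimal sketch: time-interval activity postulation ranked by source specificity.
SPECIFICITY = {"user": 3, "community": 2, "all": 1}

def postulate(records, location, hour):
    """records: (location, activity, start_hour, end_hour, source) tuples."""
    hits = [(SPECIFICITY[src], act)
            for loc, act, start, end, src in records
            if loc == location and start <= hour < end]
    ranked, seen = [], set()
    for _, act in sorted(hits, reverse=True):   # best specificity first
        if act not in seen:
            seen.add(act)
            ranked.append(act)
    return ranked

recs = [("Location A", "studying", 15.0, 16.5, "all"),        # 3:00-4:30 PM
        ("Location A", "studying", 19.5, 21.0, "all"),        # 7:30-9:00 PM
        ("Location A", "drinking", 16.0, 19.0, "all"),        # 4:00-7:00 PM
        ("Location A", "studying", 15.25, 16.5, "community")] # 3:15-4:30 PM
print(postulate(recs, "Location A", 16.25))
# ['studying', 'drinking'] -- at 4:15 PM, community history lifts studying
```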
For example, third party vendors might provide, either free of charge or for a fee, activity information for a given site. Consider a marketing website or centralized homepage for a grocery store chain. Such websites are likely to contain addresses of many or all of the associated stores. Since these stores all support shopping, an activity associated with these locations is shopping. Similar information can be derived or purchased from other sources such as, but not limited to, commercial information repositories. Additionally, maps can be parsed. Given the location of a road, an activity of that location is likely to be driving. Various alternative third party information gathering approaches, and their incorporation into activity classification and postulation, can be included in the method and system of this invention.

Suggested activity information, particularly but not limited to information obtained or derived from third party vendors, might be additive or might be contradictory. Thus, combining or reconciling potential activities is needed. The use of voting schemes, biased based on the credibility of the source or on frequency, such as majority voting, or other known techniques, can be incorporated in the method and system of this invention. Since differing suggested plausible activities may be additive or may be contradictory, the use of techniques such as, but not limited to, conflict resolution methods, ontology determination, and clustering to recognize potential conflicts and to expand classification is within the scope of this invention.

Additionally, the classification of plausible activities based on activities occurring in the surrounding vicinity is likewise within the scope of this invention. For example, consider an unknown location adjacent to two known locations, such as, but not limited to, two neighboring stores or two neighboring beaches. For the neighboring stores, known activities might include shopping and strolling, while for the neighboring beaches, known activities might include sunbathing and swimming. Given location proximity, it is within the scope of this invention to suggest a user's activity at the unknown location to be either shopping and strolling or sunbathing and swimming, respectively. Confirmation can always be obtained for suggested activities, and suggested activities can be biased based on user familiarity and frequency of occurrence.

In embodiments of the MPSM method and system of this invention, local users can opt to delay their notifications. That is, once a location is visited or an activity occurs, a local user can opt to have the notification of the location or activity delayed by some period of time. Delaying a notification provides the local user with the ability to notify their community of the location visit or activity occurrence, but still provides the local user time to progress to the next location or activity. As discussed above, users can also choose to automatically share or not share photos taken with the digital picture frames of this invention.

Notifications can be complemented with correlations with other community members. That is, both the local user and their respective community can be automatically notified with a comparison. A comparison, for example but not limited to, can identify other community members having previously conducted a specific activity or having visited a given location previously.
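As one non-limiting realization of the voting schemes contemplated above, the following Python sketch reconciles activity suggestions from multiple sources via a credibility-weighted vote; the source credibilities are illustrative assumptions:

```python
# Minimal sketch: credibility-weighted voting over suggested activities.
from collections import defaultdict

CREDIBILITY = {"third_party_vendor": 0.6, "map_parse": 0.5, "user_history": 1.0}

def reconcile(suggestions):
    """suggestions: iterable of (source, activity); returns ranked activities."""
    votes = defaultdict(float)
    for source, activity in suggestions:
        votes[activity] += CREDIBILITY.get(source, 0.3)  # default for unknowns
    return sorted(votes, key=votes.get, reverse=True)

suggested = [("third_party_vendor", "shopping"),
             ("map_parse", "driving"),
             ("user_history", "shopping")]
print(reconcile(suggested))  # ['shopping', 'driving'] -- shopping wins the vote
```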
Comparisons are made by checking other community member locations and activities against those of the local user. Checking is performed via a search of the location-activity database. If a match exists within a specified or predetermined period of time, a comparison notification is made automatically. The period of time can be arbitrarily set or can follow some logical time quantum such as an hour, day, week, etc.

Locations and activities are known by name. However, in addition to a name, locations and activities can have associated personal labels. Labeling locations and activities can reflect familiarity with the location and activity. User labels for locations can be surrogate names, for example, “favorite city” for Chicago; songs or sound waves, for example the song words “my kind of town, Chicago is” for Chicago; a picture, for example “the Water Tower” for Chicago; a video, for example “a panoramic view of the Chicago skyline” for Chicago; or any combination of these and other multimedia tags supported by the local device. Similarly, user labels can exist for activities. For example, a label can be “favorite vice” for drinking wine; a song or sound wave, for example the song words “a bottle of red” for drinking wine; a picture, for example a wine bottle picture for drinking wine; a video, for example “a panoramic view of a vineyard” for drinking wine; or any combination of these and other multimedia tagging labels supported by the local device.

In embodiments of the MPSM method and system of this invention, local users and community members can comment on their own and each other's locations and activities. Comments can take any of the many multimedia forms provided by the local device. These include, but are not limited to, text, sound, pictures, videos, or any combination thereof. Multiple comments can be made by the local user, their community, or a combination thereof. In addition to stating their opinions (commenting), community members can prompt for clarification. That is, by issuing “what” comments, community members request additional information on the posted locations and activities. Additionally, users can “like” their own and each other's locations and activities. By “liking” a location or activity, community members express their satisfaction with their respective community members' presence in terms of location and activity. Multiple community members, as well as the local user, can “like” a location and activity.

The MPSM method and systems of this invention can track vast amounts of data on both the local user and their respective community members. These data cover, but are not limited to, locations, activities, and also individuals, both those who are system users and those who are not. These data can be stored and summarized. A summary of the local user and community member locations, activities, time durations involved in each of these locations and activities, individuals who they encountered, etc., can be computed and presented to the user. This summarization can range from simple statistical aggregation to advanced correlations derived by any of the many machine learning, data mining, business analytics, and statistical techniques known in the art, used individually or in combination.
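A minimal Python sketch of such statistical summarization, with an assumed log shape, is given below; it computes time spent per activity and the most common individuals encountered:

```python
# Minimal sketch: simple aggregations over an assumed activity/encounter log.
from collections import Counter

log = [  # (user, activity, hours, encountered_individuals) -- illustrative
    ("Bob", "working from home", 6.0, ["Sally"]),
    ("Bob", "walking the dog", 1.0, ["Sam", "Sally"]),
    ("Bob", "commuting", 0.5, []),
    ("Bob", "walking the dog", 1.5, ["Sally"]),
]

def time_per_activity(user):
    totals = Counter()
    for u, activity, hours, _ in log:
        if u == user:
            totals[activity] += hours
    return dict(totals)

def most_common_contacts(user, k=5):
    c = Counter(p for u, _, _, people in log if u == user for p in people)
    return [p for p, _ in c.most_common(k)]

print(time_per_activity("Bob"))     # {'working from home': 6.0, 'walking the dog': 2.5, ...}
print(most_common_contacts("Bob"))  # ['Sally', 'Sam']
```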
Information that can be aggregated or derived can answer exemplary, but not limiting, questions such as: how much time did a local user spend doing things such as working at home, working out, walking the dog, or commuting to work?; how much time did a particular community member spend doing such things? (note that the information derived for the community member is based strictly on the information that that particular community member chose to share); who are the five most common individuals that a particular user interacts with?; what is the likelihood that, after seeing a particular user, the given local user would see a particular different individual?; which activities and locations are most closely associated with each other, and when are they most likely to occur?; and which three users among a given community are most likely to visit a particular location together?

Local users can be provided with summaries of their locations, durations at these locations, and activities at these locations. Furthermore, at the discretion of the local user, these summaries are made available to their community members. The system can also generate and maintain both aggregated and derived information. This information can be used to optimize suggestions to avoid obstacles, for example, but not limited to, preferred routing of a commuting path; to promote targeted advertising, for example, but not limited to, the location of a nearby ice cream store for those users who frequently record “eating ice cream” as an activity; and to support a host of other informational services known in the art.

Digital frames according to embodiments of this invention include a conversational agent module, e.g., a “chatbot,” which is a software application that mimics written and/or spoken human speech for the purposes of simulating a conversation or interaction with one or more viewers of the frame. The conversational agent preferably operates on natural language processing and/or machine learning/data mining of photo content and/or metadata, such as any or all of the determined/extracted information, photo features, photo content, and trends discussed above. The conversational agent processes or parses instructions, questions, or other spoken words (or written text) presented by the frame viewer or other user. The conversational agent responds according to a complex series of algorithms that interpret and identify what the viewer/user said, infer what the viewer/user meant and/or wanted, and determine a series of appropriate responses based on this information. The digital photo frames can include suitable AI chips, such as are known in the art, incorporated into the frame architecture and communicating/interacting with the central processor, to provide the computing capacity of the conversational agent. In one embodiment, the conversation involves multiple languages. In one embodiment, the viewer/user and frame communicate in different languages. In one embodiment, the conversation involves the use of American Sign Language (ASL) captured by the camera incorporated into the frame. In embodiments of the invention, the conversational agent is a conversation bot configured to provide interactive conversations with the frame viewer(s).
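A deliberately tiny, non-limiting stand-in for the conversational agent module follows; actual implementations use natural language processing and learned models as described above, while this sketch shows only the parse, interpret, and respond shape with keyword-based matching over a hypothetical intent table:

```python
# Minimal sketch: parse -> interpret -> respond, with a toy intent table.
INTENTS = {  # hypothetical intent table: keyword -> response template
    "dog": "Would you like to hear some of your best memories of your dog?",
    "story": "What kind of story would you like? You can name the characters.",
    "who": "This photo shows people from your community. Shall I name them?",
}

def respond(viewer_utterance):
    cleaned = viewer_utterance.lower().replace("?", " ").replace(".", " ")
    words = cleaned.split()                      # parse
    for keyword, reply in INTENTS.items():       # interpret (toy matching)
        if keyword in words:
            return reply                         # determine a response
    return "Tell me more about what you would like to see or hear."

print(respond("Can you tell me about my dog?"))
# Would you like to hear some of your best memories of your dog?
```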
As generally shown inFIG.13, embodiments of this invention include an intelligent digital picture frame220including a conversation bot that interacts in back-and-forth verbal interactions230with a viewer240, such as by using the microphone227, a speaker228(e.g., on the rear of frame220), and camera225of, or attached to, the frame220. The verbal interaction can include a spoken statement or question232(either initial or responsive) from the viewer240, and a spoken statement or question234(either initial or responsive) from the frame220. In one embodiment, the communication from the viewer240uses sign language (ASL) via camera225. In one embodiment, the communication from the frame220is subtitled onto the display. The verbal interaction230can be triggered by the viewer240, a remote community member242of the viewer240(seeFIG.14), or automatically by the frame220upon a triggering condition. Exemplary triggering conditions include, without limitation, detecting a present or new viewer, detecting a viewer-preferred picture (e.g., a “liked” photo) on the display, visually detecting a viewer response to a displayed photo, after a random or predetermined viewing period of a viewer, and/or combinations thereof. The verbal interactions can be a conversational interaction, such as including information and/or questions, and/or a story telling function, with both types of interactions preferably related to photo content relevant to the viewer.

The conversational interaction can include changing of the displayed photos to match the conversation topic as the conversation proceeds. For example, if the conversation is about the viewer's dog, the frame may ask for some of the viewer's best memories of the dog. The viewer may reply that the dog loved the park, and chasing the geese at the lake. The frame can continually search for photos matching the reply, and display relevant photos, including information about the photo (e.g., “here is a picture of the dog at the park in August 2020”). A slideshow of photos can thus be created to coordinate with the conversation iterations/topics.

In embodiments of this invention, the conversational interaction can be used to determine who is viewing the frame. This can further be combined with viewer detection with a frame camera225. Desirably, the tone of the viewer's responses and/or the viewer's body language/facial expressions can be used to adjust the conversation to an appropriate tone or sentiment. In desired embodiments, an automated greeting is generated upon detecting a viewer, such as to initiate conversation interaction.

In the embodiment ofFIG.14, the frame conversation can be initiated remotely by a community member242for viewer240. The frame conversation can be actuated through an application on the community member's mobile device229(e.g., smartphone). Such remote actuation can be particularly helpful with viewers that may need assistance, such as very young children or persons with cognitive impairments. The community member242can request an initiation234of general conversation for the viewer240(e.g., “how are you doing today?”) and/or request an informational conversation about an area of interest to the viewer240(e.g., “would you like to talk/hear about . . . ”). The community member242may, for example, suspect that an interaction is needed based upon a tone of a text or phone call received from the viewer240(e.g., if dad sounds sad, person242can actuate photos and conversation234about grandkids or old friends/pets to cheer him up).
Upon actuation and detection of viewer240, the frame220initiates conversation by spoken statement or question234. If/when viewer240replies, the back-and-forth conversation continues. The community member242can select a display photo remotely for the automated conversation (e.g., instruct the frame to display and talk to dad about this selected photo), or simply request a conversation about an automatically selected photo. In one exemplary embodiment, frames can be set up in medical assistance or childcare facilities, and initiated by central nursing or teacher stations.

In embodiments of this invention, automated conversations between the frame and viewer provide supplemental search capabilities to identify photographs for display, such as photographs that contain one or more specified individuals, one or more specified activities, one or more specified locations, and any or all combinations of individuals, activities, and/or locations. Searches can be automatically accomplished based on technologies discussed herein or techniques otherwise known in the art, such as those that rely on metadata and/or image content related to the photograph. As an example, the processor can determine keywords from the conversation, and more particularly the viewer's responses, for searching. When paired with the clustering techniques described above, appropriate photos relevant to the viewer and/or the conversation, or the requests/replies/comments therein, can be efficiently identified and displayed.

In one embodiment of the invention, these conversations provide story telling functions, preferably based upon photo content and descriptions. Large (pretrained) language models, such as are known in the art, are used to automatically generate full stories about or including the displayed photograph. These stories are preferably guided by displayed image content, metadata related to the photograph, voice prompts and setting narratives provided by the interactive viewer, and any or all combinations of image content, metadata, and/or voice prompts and setting narratives. Referring toFIGS.5and6, the viewer240or the community member242can request a story234developed by the frame220about a favorite person, location, activity, or any combination thereof, where the frame displays one or more photos of the requested content during the story. As an example, the viewer once lived in London, and the story request is for a fictional or actual mystery occurring in London, while showing coordinating photos from the viewer's time in London (and perhaps supplemented from general photo repositories online). The story can include community members, or various other people (e.g., movie stars), as characters in the story, such as depending on prompts from the viewer if received.

FIG.15illustrates one example of story telling, where the viewer240(e.g., a young girl) requests232a story234about a princess, perhaps herself as a princess. The story request232can be made upon seeing a photo of herself in her princess outfit on frame220, or the displayed photo can be shown after, as relevant to, the story request232. The frame may ask for additional details, such as the princess's name or a desired adventure (e.g., meeting a prince or fighting a dragon). The frame may also allow the viewer to dictate parts of the story, such as in a choose-your-own-adventure story type (e.g., “should the princess enter the dark cave?”).
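A minimal Python sketch of this conversation-driven photo search follows, assuming photos carry tag sets produced by the tagging and clustering techniques discussed herein; keywords are pulled from the viewer's reply and matched against the tags:

```python
# Minimal sketch: keyword search over a hypothetical tagged photo collection.
STOPWORDS = {"the", "a", "at", "and", "he", "she", "it", "my", "loved", "chasing"}

photos = [  # hypothetical tagged collection: (photo_id, tags)
    ("p1", {"dog", "park", "august"}),
    ("p2", {"dog", "lake", "geese"}),
    ("p3", {"beach", "princess"}),
]

def search_from_reply(reply):
    keywords = {w.strip(",.!?") for w in reply.lower().split()} - STOPWORDS
    scored = [(len(keywords & tags), pid) for pid, tags in photos]
    return [pid for score, pid in sorted(scored, reverse=True) if score > 0]

# Viewer: "He loved the park, and chasing the geese at the lake."
print(search_from_reply("He loved the park, and chasing the geese at the lake"))
# ['p2', 'p1'] -- the lake/geese photo ranks first, then the park photo
```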
The level of continued interaction can preferably be set at the beginning of the story by the viewer, so as to minimize story disruption if desired. The conversation and storytelling features of this invention can also be implemented through the frame's interaction application on a user's mobile device, such as to provide these photo-based stories as entertainment for a child when away from the frame (e.g., a car ride). FIG.16shows the photo ofFIG.5on display220, where a viewer requests a pirate treasure story. As shown inFIGS.17and18, the photo can be annotated with overlay images250and/or sounds252, such as a discovered treasure chest and seagulls/ocean waves inFIG.17. The story continues inFIG.18, where pirates return to look for the treasure that the persons in the photo discovered, etc. Overlay images250and sounds252can be obtained from various online databases or otherwise stored in the frame system. In embodiments, the conversation or story occurs in or incorporates one or more non-native (foreign) language(s). In one embodiment the conversation occurs with a given dialect (British vs. American English) and/or a given accent (Northern Germany vs. Bavarian pronunciation). For example, character dialogue in the London mystery above can be presented in an English accent. The use of different voices for different characters is also contemplated. The conversation can also be text-based, e.g., closed-captioned, as needed. The frame can also be used for foreign language lessons, pairing the photos with non-native equivalent words (e.g., "Papa und ich am Strand" or "אבא ואני על החוף" forFIG.16). In one embodiment, multiple scripts are supported. In one embodiment the captioning provides the native script, the transliteration of the response, or both, such as "אבא ואני על החוף" and/or "Aba ve'ani al hakhof". In embodiments of storytelling, animation is provided. In embodiments of storytelling, image composition for panoramic viewing of a scene is provided. In embodiments of storytelling, mood is intensified by background music selection; the music can be obtained from frame-resident or external sources or self-generated via large language models, potentially guided to generate a particular tone, style, genre, or mood. In embodiments, the conversation or story-telling interaction occurs with virtual bots representing the people in the photograph displayed. In one embodiment the intended or desired sentiment of the conversation is provided by the viewer. In one embodiment the motivation or purpose of the conversation is provided by the viewer. For example, it is possible that the motivation is to serve an educational mission (as is potentially the case when describing an activity) or to provide comfort to the viewer (in the case of recent loss). In embodiments, the conversation is governed by viewer demographics such as but not limited to age or gender. Sentiment analysis is a subfield of computer science that uses NLP and machine learning to measure the sentiment and tone of a text or spoken language. Sentiment analysis can help a chatbot analyze user messages and identify whether the person's attitude towards certain products or services is negative, positive, or neutral. To assist the intelligent conversation or story-telling interactions, the invention includes a method and system that extracts one or more of content features from photos of the digital photo collection(s), and/or photo image features from (same or different) photos of the digital photo collection(s), as described above (a minimal sketch of such feature extraction follows this passage).
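A minimal sketch of the feature extraction just described, assuming stand-in detectors; the feature names and helper functions here are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch: extract content features (people, activity, location) and
# photo image features (light intensity, dominant color) from a photo.
# The three detectors below are stand-ins (assumptions) for real image
# analysis components.

def detect_content(photo) -> dict:
    """Stand-in for a content recognizer."""
    return {}

def mean_brightness(photo) -> float:
    """Stand-in for a light-intensity measurement."""
    return 0.0

def dominant_color(photo) -> str:
    """Stand-in for a color-histogram analysis."""
    return "unknown"

def extract_features(photo) -> dict:
    content = detect_content(photo)
    return {
        "people": content.get("people", []),
        "activity": content.get("activity"),
        "location": content.get("location"),
        "brightness": mean_brightness(photo),
        "dominant_color": dominant_color(photo),
    }
```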
As an example,FIG.16shows a photo of a man and young girl at the beach. The photo can be tagged as described above forFIG.5, labeled based upon the colors, shapes, light intensity, and/or color histogram of the photo. It is within the scope of this invention to again limit the tagging options. It is also within the scope to conflate multiple tag options into fewer options and/or a fixed vocabulary. In embodiments of this invention, the photo extraction and/or analysis is limited to looking for predetermined desired information or features that may be particularly useful for conversation. For example, the photo extraction or analysis can be limited to a particular person and/or location (e.g., tropics or spring break) that may be desired by a particular viewer. In embodiments of this invention, the limits of tagging options can be based on the viewer's identity, as potentially detected via face recognition from the photo captured by camera225. In embodiments of this invention, the limits of tagging options can be based on the viewer's characteristics such as but not limited to age and gender, as potentially detected via age or gender recognition software based on the photo captured by camera225. In embodiments of this invention, the limits of tagging options can be based on parental control software to limit access of a viewer based on identity or age using the photo captured by camera225. In embodiments of this invention the information detected in photos can be clustered and can be used for conversation and story-telling purposes. The metadata clustering process for trend determination can be similar to that discussed above for frame display; however, it is generally for a different purpose, uses different criteria/photo elements, and can be used in addition to the clustering described herein for frame display purposes. As an example, tagged photos can be organized into a plurality of sub-clusters, each for a corresponding common detected extracted content feature and/or extracted photo image feature, such as by activity, location, temperature/weather/seasons, time, attributes of a person of the photos, light intensity of the photo, or a color within the photos, etc., and various combinations of these categories. Correlations between photos can be mined, such as by comparing sub-clusters of photos across more than one digital photo collection over a network. Example correlations include determining common people, locations, and/or activities within the tagged photos, such as for a predetermined time period and/or a predetermined family group. Thus, the invention provides a digital picture frame including a camera, microphone, and speaker connected to the frame, and a network connection module for use as a device for displaying pictures from a user's electronic device and/or social media account or her or his community members' social media accounts. The frame allows for efficient, automated access to photos relevant to the viewer(s) of the frame, as well as conversational interactions with a viewer. The automated frame allows for changing photos for the viewer(s) without multiple manual steps. The invention illustratively disclosed herein suitably may be practiced in the absence of any element, part, step, component, or ingredient which is not specifically disclosed herein.
While in the foregoing detailed description this invention has been described in relation to certain preferred embodiments thereof, and many details have been set forth for purposes of illustration, it will be apparent to those skilled in the art that the invention is susceptible to additional embodiments and that certain of the details described herein can be varied considerably without departing from the basic principles of the invention. | 134,102 |
11861260 | DETAILED DESCRIPTION FIG.1is a diagram illustrating an electronic device100according to an embodiment of the present invention. As shown inFIG.1, the electronic device100comprises a host device110and an audio control circuit120, where the host device110comprises a core circuit112, a USB interface circuit114, a specific interface circuit116and a storage component118; and the audio control circuit120includes a processing circuit122, a USB interface circuit124and a specific interface circuit126. In the present embodiment, the electronic device100may be a personal computer, a laptop or any other electronic device with an audio playback function, and the audio control circuit120may be a built-in audio device on the motherboard, that is, the host device110and the audio control circuit120are made on one motherboard. In one embodiment, the storage component118may be a flash memory, an electrically-erasable programmable read-only memory (EEPROM), a one-time programmable read-only memory (OTPROM), or other types of non-volatile storage components. In addition, in the present embodiment, the storage component118is arranged in the host device110. In another embodiment, the storage component118may be disposed outside the host device110. In the present embodiment, the host device110may be a processing chipset in the electronic device100, which is connected to the USB interface circuit124of the audio control circuit120through the USB interface circuit114. The host device110transmits an audio signal to the processing circuit122through the USB interface circuits114and124for related processing (such as encoding and decoding, etc.), and then generates an output audio signal to a speaker106for playing, where the speaker106may be external to the electronic device100, or may be a built-in loudspeaker device of the electronic device100. In one embodiment, the specific interface circuit116of the host device110and the specific interface circuit126in the audio control circuit120conform to the inter-integrated circuit (I2C) interface specification, the serial peripheral interface (SPI) specification, or the universal asynchronous receiver/transmitter (UART) specification, but the present invention is not limited thereto. In other embodiments, as long as the specific interface circuit116of the host device110and the specific interface circuit126of the audio control circuit120can perform data transmission with each other, any suitable specifications other than the USB interface circuit can be adopted. Regarding the operation of the electronic device100, the electronic device100is first powered on to perform the initialization operation, and before the host device110recognizes a type of the audio control circuit120completely, the core circuit112obtains a plurality of parameters from the storage component118that can be used by the audio control circuit120. For example, the plurality of parameters may include a number of supported configurations, a vendor identification number (ID), a product ID, a sample rate, a volume range, etc. The plurality of parameters is transmitted to the audio control circuit120through the specific interface circuit116, and the processing circuit122of the audio control circuit120receives the plurality of parameters through the specific interface circuit126. Then, during an enumeration between the host device110and the audio control circuit120, the audio control circuit120can use the plurality of parameters to communicate with the host device110to finish related settings (a brief illustrative sketch of this hand-off follows).
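As a hedged sketch of the hand-off, the following fragment models the host pushing the parameters over the specific (e.g., I2C) interface at power-on; the field names and object interfaces are assumptions made for illustration only, not the patent's:

```python
# Sketch of the power-on parameter hand-off from the host device to the
# audio control circuit over the specific interface, before USB
# enumeration. Field names and the storage/link interfaces are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AudioParameters:
    num_configurations: int
    vendor_id: int
    product_id: int
    sample_rate: int
    volume_range: tuple[int, int]

def host_power_on(storage, specific_link) -> None:
    """Host side: read parameters from the storage component and send them
    over the specific interface (not over USB)."""
    params = AudioParameters(**storage.read("audio_params"))
    specific_link.send(params)

def circuit_power_on(specific_link) -> AudioParameters:
    """Audio control circuit side: passively receive the parameters; they
    are reported back to the host later, during USB enumeration."""
    return specific_link.receive()
```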
Specifically, during the enumeration, the host device110transmits USB packets to the audio control circuit120through the USB interface circuit114and allocates an address; then, the audio control circuit120reports to the host device110through the USB interface circuit124to inform that it is a device with an audio function, and sends relevant information to the host device110through the USB interface circuit124, where the relevant information comprises the plurality of parameters that are previously received from the host device110through the specific interface circuit126and may include the number of supported configurations, the vendor ID, the product ID, the sample rate, the volume range, etc.; finally, the host device110recognizes the type of the audio control circuit120and transmits USB audio class (UAC) related packets to the audio control circuit120, and ends the enumeration. It should be noted that, since the operation of the enumeration process is known to those skilled in the art and the focus of the embodiment is that the plurality of parameters used by the audio control circuit120in the enumeration come from the host device110, the above description merely describes the main operations of the enumeration that are related to this embodiment, and other details of the enumeration are omitted here for brevity. After the enumeration is finished, the host device110can transmit the audio data to the audio control circuit120through the USB interface circuit114. The audio data is used by the processing circuit122to generate the output audio signal to the speaker106for playing. In the embodiment ofFIG.1, the host device110actively transmits the plurality of parameters that are used by the audio control circuit120in the enumeration through the specific interface circuit116when it is powered on. Therefore, the audio control circuit120itself does not need to be equipped with storage components to store the above-mentioned plurality of parameters, including the number of supported configurations, the vendor ID, the product ID, the sample rate, the volume range, etc. In this way, the production cost of the audio control circuit120can be reduced. In addition, the host device110is a processing chipset and originally has the storage component118for storing program codes and other parameters. Therefore, storing the plurality of parameters used by the audio control circuit120into the storage component118does not increase much cost. On the other hand, since the host device110actively provides the plurality of parameters that are used by the audio control circuit120when it is powered on, the audio control circuit120does not actively retrieve the plurality of parameters, but passively receives the plurality of parameters. FIG.2is a flowchart illustrating a control method according to an embodiment of the present invention. With reference to the above embodiment, the flow is as follows.
Step200: Flow starts.
Step202: A host device sends a plurality of parameters to an audio control circuit.
Step204: The host device and the audio control circuit start to perform an enumeration.
Step206: The audio control circuit uses the plurality of parameters to perform the enumeration with the host device.
Step208: The host device and the audio control circuit finish the enumeration.
Step210: The host device transmits an audio signal to the audio control circuit through a USB interface circuit, and the audio control circuit uses the audio signal to generate an output audio signal to a speaker for playing.
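Rendered as code, the flow of FIG. 2 might look like the following sketch; the object interfaces and method names are assumptions for illustration, not part of the patent:

```python
# Illustrative rendering of steps 200-210; all names are assumptions.

def control_method(host, audio_circuit, speaker):
    params = host.read_parameters()               # step 202: host sends the
    audio_circuit.receive_parameters(params)      # parameters to the circuit
    host.start_enumeration(audio_circuit)         # step 204
    audio_circuit.enumerate_with(params, host)    # step 206
    host.finish_enumeration(audio_circuit)        # step 208
    audio_data = host.send_audio_over_usb()       # step 210: USB audio
    speaker.play(audio_circuit.process(audio_data))
```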
Briefly summarizing the present invention, in the audio control circuit, the host device and the related control method of the present invention, the audio control circuit receives the plurality of parameters through the specific interface circuit during the boot process. The plurality of parameters are used for the subsequent enumeration between the USB interface circuit and the host device. Therefore, since the plurality of parameters that are used by the audio control circuit in the enumeration are provided by the host device, the audio control circuit does not need to be equipped with storage components to store these parameters, which reduces the cost of the audio control circuit. Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims. | 8,071 |
11861261 | DETAILED DESCRIPTION OF THE EMBODIMENTS Terms “first”, “second” and so on are used to distinguish components, but are not used to limit the order or the differences of the components. In addition, the terms “coupled” or “connected” refer to two or more components that directly or indirectly physically or electrically contact each other. For example, when the first device is coupled to the second device, it means that the first device is directly and electrically connected to the second device, or indirectly and electrically connected to the second device through other devices or connection means. FIG.1is a block diagram showing an electronic device100according to an embodiment. The electronic device100includes an audio module101, an external audio processing module102, a local audio processing module103, a switching module104, a setting module105, a storage unit106, and a system processing unit107. The switching module104is connected to the audio module101, the external audio processing module102and the local audio processing module103. The setting module105is connected to the switching module104. The system processing unit107is connected to the local audio processing module103and the storage unit106. The local audio processing module103processes the local audio signal with audio processing programs, such as encoding, decoding, compression, decompression, analog-to-digital conversion, or digital-to-analog conversion. That is, the local audio processing module103converts aforesaid audio data to audio signals adapted to be played by the audio module101. The local audio processing module103also converts the audio signals from the audio module101to audio data. The local audio signal is stored in the storage unit106of the electronic device100. In embodiments, the storage unit106is a temporary or non-temporary storage medium such as a memory or a hard disk. In an embodiment, the storage unit106and the system processing unit107are integrally formed, that is, the storage unit106is an inner storage of the system processing unit107. The local audio processing module103includes an audio codec. The external audio processing module102processes the external audio signal with audio processing programs such as encoding, decoding, compression, decompression, analog-to-digital conversion, and digital-to-analog conversion. The external audio signal is stored in another device (the user equipment200hereinafter) other than the electronic device100. The user equipment200communicates with the electronic device100via the communication channel10. The electronic device100and the user equipment200have communication interfaces corresponding to the communication protocol of the communication channel10, respectively. The communication interface supports wired communication technology or wireless communication technology. The wired communication technology is Universal Serial Bus (USB), Thunderbolt, High Definition Multimedia Interface (HDMI), or Ethernet in embodiments, which is not limited herein. The wireless communication technology is Wi-Fi, Bluetooth, or radio frequency in embodiments, which is not limited herein. The external audio processing module102includes an audio codec for converting audio data from the communication channel10to audio signals that are adapted to be played by the audio module101, or converting audio signals from the audio module101to the audio data. Then, the audio data is sent through the communication channel10.
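The two processing paths can be pictured with a brief sketch; the class and helper names below are illustrative assumptions, not the patent's terminology:

```python
# Rough model of the two codec roles: the local module converts between the
# storage unit and playable signals, while the external module converts
# between the communication channel and playable signals. The helper
# functions are stand-ins for the codec operations.

def to_playable_signal(audio_data):
    """Stand-in for decoding/digital-to-analog conversion."""
    return audio_data

def to_audio_data(audio_signal):
    """Stand-in for encoding/analog-to-digital conversion."""
    return audio_signal

class LocalAudioProcessing:
    def __init__(self, storage_unit):
        self.storage_unit = storage_unit
    def play(self, key):
        return to_playable_signal(self.storage_unit.read(key))
    def record(self, audio_signal):
        self.storage_unit.write(to_audio_data(audio_signal))

class ExternalAudioProcessing:
    def __init__(self, channel):
        self.channel = channel
    def play(self):
        return to_playable_signal(self.channel.receive())
    def record(self, audio_signal):
        self.channel.send(to_audio_data(audio_signal))
```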
The switching module104is used to switch audio transmission paths of the audio module101. In a first state, the switching module104is connected between the audio module101and the external audio processing module102. In a second state, the switching module104is connected between the audio module101and the local audio processing module103. In the first state, the external audio signal is transmitted between the audio module101and the external audio processing module102. In the second state, the local audio signal is transmitted between the audio module101and the local audio processing module103. In an embodiment, the switching module104includes a switch1041. The switch1041includes a plurality of terminals A, B, and C. The terminal A is connected to the audio module101, the terminal B is connected to the external audio processing module102, and the terminal C is connected to the local audio processing module103. In the first state, the switch1041is conducted between the terminal A and the terminal B, and then the audio module101is connected to the external audio processing module102. At this time, the terminal A and the terminal C are disconnected (not conducted), and thus the audio module101is disconnected from the local audio processing module103. In the second state, the switch1041is conducted between the terminal A and the terminal C, and then the audio module101and the local audio processing module103are connected. At this time, the terminal A and the terminal B are disconnected (not conducted), and then the audio module101and the external audio processing module102are disconnected. In an embodiment, the switch1041is an electronic switch (such as a transistor). The setting module105sets the switching module104in response to an input signal to switch the electronic device100to the first state or the second state. In an embodiment, the setting module105is a button (such as a mechanical button or a capacitive button) to detect the input signal generated via an input operation. In an embodiment, the setting module105also receives the input signal from another device (such as a remote control). The setting module105transmits an enable signal to the switching module104in response to the input signal. The switching module104switches the conduction state in response to the enable signal. For example, the switching module104switches the electronic device100from the first state to the second state, or switches the electronic device100from the second state to the first state. In an embodiment, the audio module101includes a broadcasting unit1011. The broadcasting unit1011is connected to the switching module104(the terminal A of the switch1041). In the first state, the broadcasting unit1011plays the external audio signal. In the second state, the broadcasting unit1011plays the local audio signal. In embodiments, the broadcasting unit1011is a speaker or an analog audio source interface, which is not limited herein. In an embodiment, the audio module101further includes an audio receiver unit1012. The audio receiver unit1012is connected to the switching module104(the terminal A of the switch1041). The audio receiver unit1012is used to record sound to generate audio signals. In the first state, the audio receiver unit1012records the sound as the external audio signal, and the external audio signal is transmitted to the external audio processing module102.
In the second state, the audio receiver unit1012records the sound as the local audio signal, and the local audio signal is transmitted to the local audio processing module103. The audio receiver unit1012is a microphone or an analog audio source interface, which is not limited herein. From the above description, the audio transmitting paths are able to be switched by users via the setting module105, and then the audio module101plays/records the local audio signal or the external audio signal. In an embodiment, the broadcasting or recording function of the electronic device100is better than that of the user equipment200. For example, the user equipment200does not have the broadcasting or the recording function; or the user equipment200has a broadcasting and recording device, but the sound playing quality or recording quality is not as good as that of the electronic device100. In this situation, the broadcasting and recording functions are provided by the electronic device100instead of the user equipment200. In embodiments, the electronic device100is a notebook computer, a tablet computer, a desktop computer, and a mobile phone, which is not limited herein. The user equipment200is a personal digital device, a mobile phone, and a smart watch, which is not limited herein. FIG.2is a block diagram of an electronic device according to an embodiment of the disclosure. In an embodiment, the electronic device100further includes a light-emitting element108. The light-emitting element108is connected to the setting module105. In an embodiment, the light-emitting element108is a light-emitting diode module. When the setting module105receives the input signal, the light-emitting element108selectively displays a first notification signal or a second notification signal according to the input signal. For example, when the setting module105sets the electronic device100to the first state, the light-emitting element108displays the first notification signal. When the setting module105sets the electronic device100to the second state, the light-emitting element108displays the second notification signal. In this way, users can distinguish whether the electronic device100is switched to process the local audio signal or the external audio signal according to the first indication signal or the second indication signal. In embodiments, the first notification signal and the second notification signal are distinguished by luminous intensity, luminous color, or luminous frequency (such as brightness, blinking speed). FIG.3is a block diagram of an electronic device according to an embodiment of the disclosure. In an embodiment, the electronic device100further includes a power source109. The power source109is connected to the external audio processing module102. In the first state, the power source109supplies power to the external audio processing module102. In an embodiment, the setting module105sets the electronic device100to the first state when the electronic device100is in a standby mode. For example, a switch (not shown) is configured between the power source109and the external audio processing module102. In the first state, the setting module105sets the switching module104to conduct the switch (or the setting module105conducts the switch directly) and supplies power to the external audio processing module102. Then, the electronic device100processes the external audio signal.
Therefore, when the electronic device100is set to the first state in a standby mode (such as a sleep state) or a power-off mode, the electronic device100can still process the external audio signal. In the second state, the setting module105sets the switching module104to turn off the switch (or the setting module105directly turns off the switch) and stops supplying power to the external audio processing module102. In embodiments, the power source109is a battery or an external power device (such as an external adapter charging device). In embodiments, the audio module101, the external audio processing module102, the local audio processing module103, the switching module104, the setting module105, the storage unit106, the system processing unit107, and the light-emitting element108are connected to the power source109to perform their functions, respectively. FIG.4is a block diagram of an electronic device according to an embodiment of the disclosure. In an embodiment, the electronic device100further includes a voltage converter110. The voltage converter110is connected between the power source109and the external audio processing module102. In an embodiment, the voltage converter110is a DC-to-DC converter. The voltage converter110adjusts the voltage value of the power source109to an input voltage adapted for the external audio processing module102. In the first state, the power source109supplies power to the voltage converter110. The voltage converter110performs voltage modulation (for example, regulating, stepping down, or stepping up) on the power received from the power source109to adjust the voltage value of the power to be equal to, slightly greater than, or slightly less than the input voltage specified by the external audio processing module102. Then, the power with the adjusted voltage value is outputted to the external audio processing module102as the input voltage. In an embodiment, the voltage converter110is also connected to the setting module105. When the setting module105sets the electronic device100to the second state, the voltage converter110is disabled (for example, the setting module105transmits a disable signal to the voltage converter110) to stop the power source109supplying power to the external audio processing module102. Therefore, the voltage converter110stops modulating the voltage value of the power source109, and the adjusted power is not outputted to the external audio processing module102. As a result, the external audio processing module102does not operate because it receives no power from the power source109. Similarly, when the setting module105sets the electronic device100to the first state, the voltage converter110is enabled (for example, the setting module105transmits an enable signal to the voltage converter110), and the power source109supplies power to the external audio processing module102. Therefore, the voltage converter110is activated to modulate the voltage value of the power source109. In this way, the external audio processing module102receives the power for operation. In an embodiment, the voltage converter110further includes a plurality of output terminals (in the embodiment, the first output terminal111A and the second output terminal111B are taken as an example). The external audio processing module102is connected to the first output terminal111A, and the light-emitting element108is connected to the second output terminal111B.
In the first state, the voltage converter110outputs the adjusted power to the external audio processing module102via the first output terminal111A. The voltage converter110outputs the adjusted power to the light-emitting element108via the second output terminal111B, and then the light-emitting element108emits light (for example, the light-emitting element108displays the first notification signal). In an embodiment, when the driving voltage required by the light-emitting element108is the same as the input voltage required by the external audio processing module102, the first output terminal111A and the second output terminal111B are integrated into one output terminal. In an embodiment, the light-emitting element108is connected to the external audio processing module102(not shown), and the light-emitting element108is powered by the external audio processing module102. In other words, the light-emitting element108turns on and off in synchronization with the startup and shutdown of the external audio processing module102. FIG.5is a block diagram of an electronic device according to an embodiment of the disclosure. In an embodiment, the system processing unit107is also connected to the setting module105. The setting module105outputs a notification signal to the system processing unit107in response to the switching between the first state and the second state. In an embodiment, the setting module105controls the conduction state of the switching module104, and the setting module105also outputs a notification signal to the system processing unit107according to the controlled conduction state. In an embodiment, the setting module105transmits an enable signal to the switching module104in response to the input signal, and the switching module104responds to the change of the conduction state and transmits a signal (hereinafter called a feedback signal) back to the setting module105. The setting module105outputs a notification signal to the system processing unit107when the setting module105detects the feedback signal. Then, the system processing unit107determines whether the electronic device100is switched to process the local audio signal or the external audio signal based on the notification signal. In embodiments, the electronic device100further includes a human-machine interface112. The human-machine interface112is connected to the system processing unit107. In embodiments, the human-machine interface112is a display screen, a touch screen, or a vibrator which can output visual, tactile, or other user-perceivable output signals. The system processing unit107outputs a switching signal via the human-machine interface112in response to the notification signal to remind users that the electronic device100has already switched the audio transmission path. In an embodiment, the switching signal is a text or a graphical prompt message displayed on the screen. In an embodiment, the switching signal is a vibration signal generated by a vibrator. To sum up, the electronic device of the disclosure allows users to switch between the local audio signal and the external audio signal. The function of processing the external audio signal is not affected by the mode of the electronic device. Although the present invention has been described in considerable detail with reference to certain preferred embodiments thereof, the disclosure is not intended to limit the scope of the invention. Persons having ordinary skill in the art may make various modifications and changes without departing from the scope.
Therefore, the scope of the appended claims should not be limited to the description of the preferred embodiments described above. | 17,381 |
11861262 | DETAILED DESCRIPTION The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. Consumer electronics devices, such as smartphones and tablet devices, conventionally implement an audio playback function. Applications and device functions (hereafter referred to as "applications") that are configured to use the audio playback function may request access to the audio playback function. The device may then grant access to the audio playback function for one requesting application at a time. Where such a device simultaneously executes more than one application or device function and each requests access to the audio playback function, the device may grant access to an unexpected or undesirable requesting application or device function. Users experiencing unexpected or undesirable behavior in granting control over the audio playback function may be discouraged from using certain applications entirely. Additional challenges arise where an audio signal generated by one of the applications and device functions is not comprehensible to the user or where information supplementary to the audio signal is desired. Many classes of consumer electronics devices incorporate display devices capable of reproducing a display screen from video signals. Many applications for such devices intrinsically generate a video signal in addition to an audio signal. It is known to provide metadata for visual reproduction in parallel with the audio signal: the metadata may be, for example, the lyrics of a song, a translation of the dialogue into a different language, or a closed caption audio description of the action in multimedia content. The metadata may be synchronized with a video portion of multimedia content so that, for example, the lyrics of a song are arranged to appear on a user display at approximately the time they are sung in the audio portion of the same multimedia content. Furthermore, audio playback is not always desirable, even when an audio signal is generated by an application. It is not uncommon for users to consume multimedia content without audio playback to avoid disrupting other people in their surroundings. Anticipating such operation, it is known to play back multimedia content with overlaid subtitles. Embodiments of the present disclosure address these and other issues. It is a goal to operate a more convenient policy governing access to the audio playback function. It is a further goal to ensure that policies governing the presentation of metadata cooperate effectively with the policy governing access to the audio playback function. Where such a device simultaneously executes more than one application, each requesting access to the audio playback function, the device may grant one or more requesting applications access in ways that are unexpected or undesirable. By default, an audio signal from the most recently instantiated application may be applied exclusively to the audio playback function.
Any audio signal generated by another application may be muted, stopped or paused to allow uninterrupted playback of the audio signal from the most recently instantiated application. Users experiencing unexpected or undesirable behavior in granting control over the audio playback function may be discouraged from using certain applications entirely. For example, a user may be playing back music content from a music streaming application and then open a news application intending to read the news content, but instead the audio signal for multimedia content presented by the news application may be given precedence over the audio signal from the music streaming application. Embodiments of the present disclosure provide a method for determining which audio signal (or signals) from a plurality of applications is applied to an audio playback function in a device, the plurality of applications including a first application and a second application. The first application generates a first audio signal, while the second application generates a second audio signal. In certain embodiments, the first application generates multimedia data including video data, the first audio signal being associated (and optionally synchronized) with the video data. An access policy may require that the first audio signal is no longer to be applied to the audio playback function when the second audio signal is received from the second application, even while the video data from the first application is being reproduced on a display of the device (a minimal sketch of such a policy is given at the end of this passage). In certain embodiments, the first application also generates metadata associated with the first audio signal. Examples of metadata include subtitles, captions, closed caption/audio description ("CC"), translation, etc. The metadata may conveniently be stored in a subtitles file, such as a SubRip Text file (SRT) or Video Timed Text file (VTT). Unless the context indicates otherwise, the term "subtitles" is used in this disclosure to refer generically to metadata such as subtitles, captions and CC that may be presented textually, or otherwise visually, providing a visual alternative for a soundtrack of video footage. Thus, "subtitles" covers not only the spoken words of characters, narrators, and other vocal participants, but may additionally be a supplement to dialogue that includes other relevant parts of the soundtrack describing the lyrics (and phrasing) of songs, background noises, phones ringing, and other audio cues that need to be described. The subtitles may equally be translations of the words used or a version of the soundtrack adapted to the user's preferences (for example, by removing or replacing profanity). In certain embodiments, users may consume content, and specifically videos, reproduced by the first application without audio or sound, and the presentation of subtitles increases the overall enjoyment of the content. When such content consumed without audio does include metadata such as subtitles, the subtitles are only presented when specifically requested by the user. Namely, to view the subtitles, the user has to stop playback of the content, navigate through several menus, activate the presentation of subtitles and then return to viewing the content with the subtitles. These steps place a burden on the user and make the user experience less seamless and enjoyable. In addition, because of this additional burden, users often fail to access the subtitles, which results in wasted resources dedicated to providing the unconsumed subtitles.
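A minimal sketch of such an access policy follows; it is an illustration under assumed interfaces, not the patent's implementation:

```python
# Sketch of the access policy: when the second application's audio signal
# arrives, the first application's audio is dropped from the audio playback
# function while its video keeps playing, and its subtitles metadata (if
# any) is overlaid. All object names are illustrative assumptions.

def on_second_audio_received(device, first_app, second_app):
    device.audio_out.detach(first_app.audio_signal)    # stop first audio
    device.audio_out.attach(second_app.audio_signal)   # play second audio
    device.display.show(first_app.video_data)          # video continues
    if first_app.metadata is not None:                 # e.g. an SRT/VTT file
        device.display.overlay_subtitles(first_app.metadata)
```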
In certain embodiments the device may enable subtitle presentation by default, for instance by placing the subtitle presentation setting of the device in the enabled state. The burden then falls on the user to override that subtitle presentation setting when audio playback from a subtitled video content is preferred to subtitle presentation. An access policy that indicates that the first audio signal (i.e. the audio output of the first application) is no longer to be applied to the audio playback function when the second audio signal (i.e. second audio output) is received from the second application may be defined. The access policy may further indicate that the metadata from the first application is to be presented on a screen of the device while the second audio signal is played back. Thus, the first application may generate video data and subtitles (say) for display while the second application generates audio data for playback by the audio playback function. The disclosed embodiments therefore improve the efficiency of using the electronic device by providing a subtitle control system that provides users with an efficient and easy-to-use interface for providing subtitle content to accompany content played back from a first application when audio/sound content is being played back by a second, different, application. The subtitle control system, according to the disclosed embodiments, also improves the efficiency of using the electronic device by automatically controlling presentation of subtitles (when available for a given video presented by a first application) based on the playback of audio/sound content by a second application. Specifically, according to the disclosed embodiments, a simple and straight-forward user interface is provided that allows a given user to view the visual components of video content from a first application while listening to the audio components of content reproduced by a second application. The given user is presented with subtitles for that video content, where suitable metadata is present. In this way, the disclosed embodiments improve the efficiency of using the electronic device by reducing complexity that a user experiences when executing more than one application having audio content for playback. Subtitle presentation is triggered automatically, where subtitles are present for playback content, thereby reducing the number of screens and interfaces a user has to navigate through to ensure a desired balance of playback of content from the plurality of applications. This reduces the device resources (e.g., processor cycles, memory, and power usage) needed to accomplish a task with the device. In some embodiments, a determination of whether to present subtitles to a given user viewing a video is made on the basis of volume controls. Namely, the disclosed embodiments seamlessly, and without user input, control whether to present subtitles for a video being consumed based on volume settings of the device. The device volume controls include an interface for increasing or decreasing the volume (i.e. volume UP/DOWN controls) and may be provided with a dedicated mute switch: these controls may be implemented in hardware and/or in software. Volume control activity may be used to infer an alternative user playback requirement. Thus, the user pressing the volume UP button may be interpreted as a trigger to discontinue subtitle playback and/or to switch the source of audio content from the second application to the first application.
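A companion sketch of the volume UP override just described, under the same assumed interfaces as the access-policy sketch above:

```python
# Sketch: a volume UP press is treated as a trigger to discontinue the
# subtitles and hand the audio playback function back to the first
# (video-playing) application. All names are illustrative assumptions.

def on_volume_up(device, first_app, second_app):
    device.display.hide_subtitles()
    device.audio_out.detach(second_app.audio_signal)
    device.audio_out.attach(first_app.audio_signal)
```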
The MUTE state of a mute switch may be used to infer that the subtitle presentation setting of the device should be in the enabled state, but pressing the mute switch when the first application is executing may be interpreted differently: a single press may indicate the user's desire for subtitles without altering the audio playback from the second application, while a double press or a press lasting longer than a predetermined time (2 seconds, say) may mute audio playback from all applications. In certain embodiments, the device may detect actuation of an input device, and the actuation of the input device may be interpreted as a request to override a default action (i.e., an action required by default by an access policy). For instance, in certain embodiments, the device may monitor for a user input via the volume UP/DOWN controls, while playing back audio content from a first application and watching a given video reproduced by a second application, to determine whether user playback requirements have changed. Whereas the default action may be to allow the audio signal from a newly opened application to take precedence over that of a previously executing application, the actuation of a volume button may trigger an alternative operation where the audio signal for the previously executing application continues to have access to the audio playback function and the newly opened application executes outputting video data and metadata to a display function without outputting a corresponding audio signal to the audio playback function. As a result, the user may continue to listen to music from a previously executing music application, while executing a messaging client application in a silent/subtitle mode. In this way, the disclosed embodiments improve the efficiency of using the electronic device by reducing the number of screens and interfaces a user has to navigate through to view (or discontinue viewing) subtitles for a given video. This reduces the device resources (e.g., processor cycles, memory, and power usage) needed to accomplish a task with the device. In certain embodiments, at least one of the applications requesting access to the audio playback function is a messaging client application104. FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple instances of a client device102, each of which hosts a number of applications including a messaging client application104. Each messaging client application104is communicatively coupled to other instances of the messaging client application104and a messaging server system108via a network106(e.g., the Internet). A messaging client application104is able to communicate and exchange data with another messaging client application104and with the messaging server system108via the network106. The data exchanged between messaging client applications104, and between a messaging client application104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). The messaging server system108provides server-side functionality via the network106to a particular messaging client application104.
While certain functions of the messaging system100are described herein as being performed by either a messaging client application104or by the messaging server system108, the location of certain functionality either within the messaging client application104or the messaging server system108is a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108, but to later migrate this technology and functionality to the messaging client application104where a client device102has a sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client application104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client application104. This data may include message content, client device information, geolocation information, media annotation and overlays, message content persistence conditions, social network information, texture maps, virtual effects and live event information, as examples. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client application104. Turning now specifically to the messaging server system108, an Application Program Interface (API) server110is coupled to, and provides a programmatic interface to, an application server112. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with messages processed by the application server112. The Application Program Interface (API) server110receives and transmits message data (e.g., commands and message payloads) between the client device102and the application server112. Specifically, the Application Program Interface (API) server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client application104in order to invoke functionality of the application server112. The Application Program Interface (API) server110exposes various functions supported by the application server112, including account registration, login functionality, the sending of messages, via the application server112, from a particular messaging client application104to another messaging client application104, the sending of media files (e.g., images or video) from a messaging client application104to the messaging server application114, and for possible access by another messaging client application104, the setting of a collection of media data (e.g., a story), the retrieval of a list of friends of a user of a client device102, the retrieval of such collections, the retrieval of messages and content, the adding and deletion of friends to a social graph, the location of friends within a social graph, and opening an application event (e.g., relating to the messaging client application104). The application server112hosts a number of applications and subsystems, including a messaging server application114, a location sharing system116, a social network system122and a subtitle control system124. The messaging server application114implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client application104.
As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available, by the messaging server application114, to the messaging client application104. Other processor- and memory-intensive processing of data may also be performed server-side by the messaging server application114, in view of the hardware requirements for such processing. The application server112also includes a location sharing system116that is dedicated to performing various image processing operations, typically with respect to images or video received within the payload of a message at the messaging server application114. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with messages processed by the messaging server application114. The social network system122supports various social networking functions and services and makes these functions and services available to the messaging server application114. To this end, the social network system122maintains and accesses an entity graph206(as shown inFIG.2) within the database120. Examples of functions and services supported by the social network system122include the identification of other users of the messaging system100with which a particular user has relationships or is "following", and also the identification of other entities and interests of a particular user. Examples of functions and services supported by the social network system122also include generating a geographically-based graphical user interface (GUI). This interface may be referred to herein as a "map GUI," and may be used in conjunction with a social media application. In some embodiments, the map GUI may include representations of at least approximate respective positions of a user and a user's friends in a social network graph accessed by the social media application using avatars for each respective user. The social network system122may receive user authorization to use, or refrain from using, the user's location information. In some embodiments, the social network system122may likewise opt to share or not share the user's location with others via the map GUI. In some cases, the user's avatar may be displayed to the user on the display screen of the user's computing device regardless of whether the user is sharing his or her location with other users. In some embodiments, the location sharing for a user can be turned off or on by the user from within the map GUI (e.g., via a setting accessed by a menu presented in conjunction with the map GUI). In some embodiments, the social network system122may still present the user's avatar at the user's current location on the map GUI on the user's own device after the user turns off location sharing. This mode is referred to herein as "ghost mode." In some embodiments, the social network system122may present an icon on the display screen of the user's computing device to indicate the user's location is not currently being shared with others. Note that the ghost mode functionality described herein may be distinguished from turning off location services on a mobile user device. Accordingly, in some embodiments when ghost mode is turned on, the device location services are still functioning, so that the user's location can still be determined.
In some embodiments, when the user turns on ghost mode after previously sharing his or her location and having the user's avatar displayed on the map, the user's avatar disappears from other users' maps. In some embodiments, when in ghost mode, the user may still see anyone on the map who has chosen to share their location with the user. In some embodiments the user may also be provided the option of specifying who will get to see their location, and at what granularity. Examples of granularity options that may be selected by a user include a "precise" option (e.g., the user's location will be presented on the map as accurately as the location information from the user's computing device can provide); and a random location within a predetermined area (e.g. a city) based on the location information from the user's computing device. In some embodiments, when the user (or group of users) selects the random location granularity option, the user's avatar will be shown in the map GUI within a predetermined distance of the user's current location (e.g., within the predetermined area such as a city the user is in), and the position of the user's avatar will not change if the user does not leave that area. In some embodiments, the user's avatar may include a label specifying the geographic area in which the user is located (e.g., "New York City"). In some embodiments, a user can select groups of other users to which his/her location will be displayed and may specify different display attributes for the different respective groups or for different respective individuals. In one example, audience options include: "Best Friends," "Friends," and "Custom" (which is an individual-level whitelist of people). In this example, if "Friends" are selected, all new people added to the user's friends list will automatically be able to see their location. If they are already sharing with the user, their avatars will appear on the user's map. In some embodiments, when viewing the map GUI, the user is able to see the location of all his/her friends that have shared their location with the user on the map, each friend represented by their respective avatar. In some embodiments, if the friend does not have an avatar, the friend may be represented using a profile picture or a default icon displayed at the corresponding location for the friend. In some embodiments, the user can select between friends on the map via a menu, such as a carousel. In some embodiments, selecting a particular friend automatically centers the map view on the avatar of that friend. Embodiments of the present disclosure may also allow the user to take a variety of actions with the user's friends from within the map GUI. For example, the system may allow the user to chat with the user's friends without leaving the map. In one particular example, the user may select a chat icon from a menu presented in conjunction with the map GUI to initiate a chat session. The subtitle control system124controls automatic presentation of subtitles for content being consumed by a given user based on their volume controls. For example, subtitle control system124presents a simple and straight-forward graphical user interface that allows a given user to view video content ("videos"). The given user can universally add subtitles to videos by toggling a subtitles presentation setting to an enabled state.
Alternatively, the user may selectively require that one or more pieces of video content are subtitled by dragging a subtitles file, such as a SubRip Text file (SRT) or Video Timed Text file (VTT), over an icon or representation of the given video and/or by selecting an upload option for the given video. Once added, the subtitles are automatically processed and associated with the given video and made available for consumption to other users when the video is shared on a messaging application. In some embodiments, the subtitle control system124controls whether to present subtitles for a given video being consumed based on volume settings of the device. In particular, the subtitle control system124determines whether a dedicated physical mute switch of the device is currently in the enabled position (meaning that the audio function of the device is to be muted). In response to determining that the physical mute switch is in the enabled position, the subtitle control system124automatically determines whether a subtitles file is associated with the video being consumed and, if so, automatically presents the subtitles with the video being consumed on the device. Also, the subtitle control system124determines whether a subtitles presentation setting of the device is currently in a state where subtitles are presented by default. In response to determining that the default subtitles setting is in the enabled position, the subtitle control system124automatically presents the subtitles for any video the user consumes on the device. It is noted that while the subtitle control system124inFIG.1is described as a component of the messaging server system108, part or all of its functionality may be performed in the messaging client application104of the client device102. FIG.2is a schematic diagram illustrating data structures200which may be stored in the database120of the messaging server system108, according to certain example embodiments. While the content of the database120is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database120includes message data stored within a message table212. An entity table202stores entity data, including an entity graph206. Entities for which records are maintained within the entity table202may include individuals (e.g., users), corporate entities, organizations, objects, places, events, etc. Regardless of type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph206furthermore stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based, or activity-based, merely for example. Message table212may store a collection of conversations between a user and one or more friends or entities. Message table212may include various attributes of each conversation, such as the list of participants, the size of the conversation (e.g., number of users and/or number of messages), the chat color of the conversation, a unique identifier for the conversation, and any other conversation related feature(s). The database120also stores annotation data, in the example form of filters, in an annotation table210. Database120also stores annotated content received in the annotation table210.
Filters for which data is stored within the annotation table210are associated with and applied to videos (for which data is stored in a video table214) and/or images (for which data is stored in an image table208). Filters, in one example, are overlays that are displayed as overlaid on an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a gallery of filters presented to a sending user by the messaging client application104when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters), which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a UI by the messaging client application104, based on geolocation information determined by a Global Positioning System (GPS) unit of the client device102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client application104, based on other inputs or information gathered by the client device102during the message creation process. Examples of data filters include current temperature at a specific location, a current speed at which a sending user is traveling, battery life for a client device102, or the current time. As mentioned above, the video table214stores video data which, in one embodiment, is associated with messages for which records are maintained within the message table212. Similarly, the image table208stores image data associated with messages for which message data is stored in the entity table202. The entity table202may associate various annotations from the annotation table210with various images and videos stored in the image table208and the video table214. Subtitles216stores subtitles for one or more videos available for consumption by the messaging client application104. Namely, subtitles216stores a subtitles file (e.g., an SRT and/or VTT file) and a link to the associated video for the subtitles file. In some implementations, subtitles that are received for a given video are stored in two formats (e.g., SRT and VTT). Specifically, in response to a given user uploading a subtitles file in a first format (e.g., an SRT file), the subtitles file in the first format is stored in association with the corresponding video. Also, the subtitles file in the first format is automatically converted to a subtitles file in a second format (e.g., a VTT file) and also stored in association with the video in the second format. A given request for subtitles for a given video may specify the type of device on which the subtitles are to be presented and the corresponding subtitles in the first or second format are retrieved and returned for presentation with the video. When subtitles for a given video being played or presented are enabled (e.g., a determination is made by the subtitle control system124to automatically present subtitles), the subtitles216for the given video are accessed and retrieved (e.g., by obtaining a title or identifier of the given video being consumed and searching the subtitles216for any subtitles that are linked to the title or identifier of the given video). The subtitles retrieved from subtitles216that are linked to the given video being played are then presented together with the given video being played.
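The following sketch illustrates one way the dual-format storage described above could work. The srt_to_vtt, add_subtitles, and get_subtitles names and the keyed dictionary store are assumptions introduced for illustration; the WEBVTT header and the comma-to-dot timestamp change are standard properties of the SRT and WebVTT formats rather than details taken from this disclosure.

```python
import re

def srt_to_vtt(srt_text: str) -> str:
    """Convert SubRip (SRT) subtitle text to WebVTT.

    A minimal conversion: WebVTT requires a "WEBVTT" header and uses '.'
    rather than ',' as the millisecond separator in cue timestamps.
    """
    body = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt_text)
    return "WEBVTT\n\n" + body

# Hypothetical store mirroring subtitles 216: both formats are kept per video.
subtitle_store: dict[tuple[str, str], str] = {}

def add_subtitles(video_id: str, srt_text: str) -> None:
    """Store the uploaded SRT file and an automatically converted VTT copy."""
    subtitle_store[(video_id, "srt")] = srt_text
    subtitle_store[(video_id, "vtt")] = srt_to_vtt(srt_text)

def get_subtitles(video_id: str, device_format: str) -> str | None:
    """Return subtitles in the format requested for the consuming device."""
    return subtitle_store.get((video_id, device_format))

add_subtitles("vid42", "1\n00:00:01,000 --> 00:00:03,500\nHello\n")
print(get_subtitles("vid42", "vtt").splitlines()[0])  # -> WEBVTT
```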
A story table204stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table202). A user may create a “personal story” in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the UI of the messaging client application104may include an icon that is user-selectable to enable a sending user to add specific content to his or her personal story. Video content played back by applications may include such stories. A collection may also constitute a “live story,” which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a “live story” may constitute a curated stream of user-submitted content from various locations and events. Users whose client devices have location services enabled and are at a common location event at a particular time may, for example, be presented with an option, via a UI of the messaging client application104, to contribute content to a particular live story. The live story may be identified to the user by the messaging client application104based on his or her location. The end result is a “live story” told from a community perspective. A further type of content collection is known as a “location story,” which enables a user whose client device102is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some embodiments, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus). FIG.3is a block diagram300showing an example subtitle control system, according to example embodiments. As noted previously, the subtitle control system124may be implemented in either a user device (such as client device102) or a server device (as illustrated inFIG.1). Subtitle control system124includes a volume control module302, a mute switch module304, and a subtitle enablement and retrieval module126. Volume control module302continuously (or in response to detecting activation of a volume button) monitors volume controls of a user device (e.g., client device102). The volume controls may include physical volume UP/DOWN buttons on the user device. In some cases, the volume control module302monitors the volume controls when audio content is being played back by a first application and a video is being played back by a second application (such as messaging client application104). In response to the volume control module302detecting activation of a volume DOWN button or a volume UP button on the user device (e.g., while audio content is being played back by the first application and video is being played back by the second application along with subtitles), the volume control module302communicates with the subtitle enablement and retrieval module126to toggle the presentation of subtitles to a disabled state. The mute switch module304similarly monitors for user input at the mute switch of the device.
In response to a detected user input at the mute switch, the subtitle enablement and retrieval module126operates to retrieve and display subtitles (when the mute is enabled) and to cease or override previous subtitle display (when the mute is disabled). If entering a mute state, the volume control module302may retrieve an identifier of a current multimedia content item being played back by the display function (e.g., through a graphical user interface of a user device display). The identifier may be provided to a database to search subtitles216for any available subtitles for the content item being played back. If a match is found in subtitles216, the subtitles file is retrieved. In addition, a current playback position is retrieved and used as an index in the subtitle file to access the correct set of subtitles for the current play position of the video (e.g., to access the subtitles corresponding to the 1:30 [minute:second] segment of the video), as illustrated in the sketch following this paragraph. The subtitles are presented simultaneously over or next to the video frames of the video. In some cases, a language setting of the device is checked to determine whether subtitles are desired in a specific language other than a default language. If so, the subtitles in the desired language (if available) are retrieved and displayed. In some embodiments, the subtitle enablement and retrieval module126may access a display characteristics field that is stored on the user device. The subtitle enablement and retrieval module126may modify the display characteristics (e.g., the font size, color, and shape) of the subtitles that are presented with the video being played. The subtitles may be presented on an area of the screen that does not impede any important aspect of the video content. In some embodiments, the subtitle enablement and retrieval module126may monitor user interactions while the video is being played to determine whether to display or to continue to display subtitles. For example, the subtitle enablement and retrieval module126may detect, by receiving an instruction from the mute switch module304, that the mute switch has been moved to the enabled position in which audio of the device is muted (or that the volume controls monitored by the volume control module302have gradually reduced the volume level to 0%). In response, the subtitle enablement and retrieval module126may automatically retrieve and display subtitles for a video being played back and any subsequent videos that are played back. In some embodiments, the subtitle enablement and retrieval module126may detect that a touch and hold action is performed by the user while the video is being played back. For example, the subtitle enablement and retrieval module126may detect physical contact by a user's finger with a display in which the video is being played back. The physical contact may be continuous for more than a threshold period of time (e.g., more than 3 seconds) in which the finger is not lifted or removed from physically contacting the display. In response, the subtitle enablement and retrieval module126may present an overlay on the video being played back that includes a menu of options. The options may include a subtitles option that allows a user to toggle the activation of subtitles (switching between a state where subtitles are turned on/activated for the graphical user interface and a state where they are turned off/deactivated).
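A minimal sketch of using the current playback position as an index into a parsed subtitle file, as described above. The cue list, its (start, end, text) layout, and the cue_at helper are hypothetical; a real implementation would parse the cues from the stored SRT/VTT file.

```python
from bisect import bisect_right

# Hypothetical parsed cue list: (start_seconds, end_seconds, text), sorted by start.
cues = [
    (0.0, 2.5, "Previously..."),
    (88.0, 92.0, "It's a trap!"),   # covers the 1:30 play position
    (120.0, 123.0, "Run!"),
]

def cue_at(position_s: float):
    """Return the cue covering the current playback position, if any.

    Uses the current play position as an index into the subtitle file, so
    subtitles can be joined correctly even when playback starts mid-video.
    """
    starts = [start for start, _, _ in cues]
    i = bisect_right(starts, position_s) - 1   # last cue starting at/before position
    if i >= 0 and cues[i][1] >= position_s:    # still within that cue's window
        return cues[i][2]
    return None

print(cue_at(90.0))   # -> "It's a trap!" (the 1:30 segment)
print(cue_at(50.0))   # -> None (no cue covers this position)
```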
In some embodiments, the subtitle enablement and retrieval module126may access the default global subtitles setting of the user device to determine whether to display subtitles. In response to determining that the default global subtitles setting of the user device is set to the enabled state, the subtitle enablement and retrieval module126may automatically retrieve and display subtitles for a video being played back and any subsequent videos that are played back. FIG.4illustrates the main operational blocks in a typical routine400in accordance with the present disclosure. The routine determines how the presence of respective audio signals generated or otherwise output by a plurality of applications is to be handled, and which one (or ones) of the audio signals are to be applied to an audio playback function in a device. At block402, a first application is executing. The first application generates audio data, which is played back by the audio playback function. The first application may, for example, be a music streaming application that outputs music, via the audio playback function: the user listens to that music through headphones or speakers coupled to the device. There being no other source of audio data, precedence of access to the audio playback function is granted to the first application. At block404, a second application is started. The second application outputs multimedia content that includes video data and audio data. The second application may be at least one of a multimedia playback application, a camera application, or a messaging client application. While certain aspects (i.e., operational modes) of the second application may have no impact on the operation of the audio playback function (e.g., a chat function, a visual augmentation function or a camera function), other operational modes do have an impact (such as when the multimedia content has a sound track of its own). Optionally, the device may detect whether an operational mode of the second application is likely to have an impact on the operation of the audio playback function. At decision block406, the device optionally determines whether the multimedia content from the second application includes metadata (such as subtitle information). If it is determined that the multimedia content from the second application does not include metadata, the device may then grant each of the first and second applications access to the audio playback function, block408. The respective audio signals may be mixed together with no precedence of access or with precedence according to an audio mixing algorithm. If it is determined that the multimedia content from the second application does include metadata, and optionally the operational mode of the second application is determined to be likely to have an impact on audio playback function operation, the device causes the audio playback function to deny the second application access to the audio playback function and instead to display the metadata (e.g., subtitles or audio description information), block410. The second application is executed in silent/subtitle mode. The device is arranged to monitor for key press inputs, and if a key press of a volume button (for example) is detected (decision block412), that keypress is interpreted as a request to toggle the mode of operation of the second application to a default operation, block414.
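The decision logic of routine400can be summarized in a short sketch. The function names, the string mode labels, and the boolean inputs are assumptions made for illustration; they are not part of the disclosed routine.

```python
def handle_second_app_audio(has_metadata: bool, impacts_audio: bool) -> str:
    """Decide audio-playback access for the second application (routine 400).

    Returns the resulting mode: "mixed" when both applications share the
    audio playback function (block 408), or "silent_subtitle" when the
    second application is denied audio access and its metadata (e.g.,
    subtitles) is displayed instead (block 410).
    """
    if has_metadata and impacts_audio:
        return "silent_subtitle"   # block 410: deny audio access, show subtitles
    return "mixed"                 # block 408: mix audio, first app keeps precedence

def on_volume_keypress(current_mode: str) -> str:
    """A volume key press toggles the second application back to its
    default operation (decision block 412 / block 414)."""
    return "default" if current_mode == "silent_subtitle" else current_mode

mode = handle_second_app_audio(has_metadata=True, impacts_audio=True)
print(mode)                      # -> silent_subtitle
print(on_volume_keypress(mode))  # -> default
```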
Turning now toFIG.5, there is shown a diagrammatic representation of a processing environment500, which includes at least a processor502(e.g., a GPU, CPU or combination thereof). The processing environment500may be implemented in a user device, such as client device102, arranged to capture video frames in real-time and to process and display augmented or virtual reality 3D experiences as described below. The processor502is shown to be coupled to a power source504, and to include (either permanently configured or temporarily instantiated) modules, namely a location component508, a GUI component510, a messaging UI component512, and a virtual effect UI component514. The location component508operationally determines location of users based on location information. The GUI component510operationally generates user interfaces and causes the user interfaces to be displayed on client devices. The messaging UI component512operationally generates user interfaces and causes the user interfaces to be displayed on client devices. As illustrated, the processor502may be communicatively coupled to another processor506. In certain embodiments, the virtual effect UI component514performs semantic segmentation upon image frames from an image capture device (i.e., a video stream), as described in detail below, and generates augmented or virtual reality 3D experiences for presentation in user interfaces generated by the GUI component510. In certain embodiments, the virtual effect UI component514is implemented in a graphics processing unit (GPU). In certain embodiments, the processor502is, itself, a GPU. FIG.6is a block diagram600illustrating a software architecture604, which can be installed on any one or more of the devices described herein. The software architecture604is supported by hardware such as a machine602that includes processors620, memory626, and I/O components638. In this example, the software architecture604can be conceptualized as a stack of layers, where each layer provides a particular functionality. The software architecture604includes layers such as an operating system612, libraries610, frameworks608, and applications606. Operationally, the applications606invoke API calls650through the software stack and receive messages652in response to the API calls650. The operating system612manages hardware resources and provides common services. The operating system612includes, for example, a kernel614, services616, and drivers622. The kernel614acts as an abstraction layer between the hardware and the other software layers. For example, the kernel614provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services616can provide other common services for the other software layers. The drivers622are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers622can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. The libraries610provide a low-level common infrastructure used by the applications606.
The libraries610can include system libraries618(e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries610can include API libraries624such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render two-dimensional (2D) and three-dimensional (3D) graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries610can also include a wide variety of other libraries628to provide many other APIs to the applications606. The frameworks608provide a high-level common infrastructure that is used by the applications606. For example, the frameworks608provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. The frameworks608can provide a broad spectrum of other APIs that can be used by the applications606, some of which may be specific to a particular operating system or platform. In an example embodiment, the applications606may include a home application636, a contacts application630, a browser application632, a book reader application634, a location application642, a media application644, a messaging application646, a game application648, and a broad assortment of other applications such as third-party applications640. The applications606are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications606, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party applications640(e.g., applications developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party applications640can invoke the API calls650provided by the operating system612to facilitate functionality described herein. FIG.7is a diagrammatic representation of a machine700within which instructions708(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine700to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions708may cause the machine700to execute any one or more of the methods described herein. The instructions708transform the general, non-programmed machine700into a particular machine700programmed to carry out the described and illustrated functions in the manner described. The machine700may operate as a standalone device or may be coupled (e.g., networked) to other machines.
In a networked deployment, the machine700may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine700may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions708, sequentially or otherwise, that specify actions to be taken by the machine700. Further, while only a single machine700is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions708to perform any one or more of the methodologies discussed herein. The machine700may include processors702, memory704, and I/O components742, which may be configured to communicate with each other via a bus744. In an example embodiment, the processors702(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor706and a processor710that execute the instructions708. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.7shows multiple processors702, the machine700may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The processors502may form a processing environment as illustrated inFIG.5. The memory704includes a main memory712, a static memory714, and a storage unit716, each accessible to the processors702via the bus744. The main memory712, the static memory714, and storage unit716store the instructions708embodying any one or more of the methodologies or functions described herein. The instructions708may also reside, completely or partially, within the main memory712, within the static memory714, within machine-readable medium718within the storage unit716, within at least one of the processors702(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine700. The I/O components742may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components742that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components742may include many other components that are not shown inFIG.7.
In various example embodiments, the I/O components742may include output components728and input components730. The output components728may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components730may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), video input components (e.g., a camera or other image capture device), and the like. In further example embodiments, the I/O components742may include biometric components732, motion components734, environmental components736, or position components738, among a wide array of other components. For example, the biometric components732include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components734include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components736include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components738include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components742further include communication components740operable to couple the machine700to a network720or devices722via a coupling724and a coupling726, respectively. For example, the communication components740may include a network interface component or another suitable device to interface with the network720.
In further examples, the communication components740may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices722may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components740may detect identifiers or include components operable to detect identifiers. For example, the communication components740may include Radio Frequency identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components740, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., memory704, main memory712, static memory714, and/or memory of the processors702) and/or storage unit716may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions708), when executed by processors702, cause various operations to implement the disclosed embodiments. The instructions708may be transmitted or received over the network720, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components740) and using any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions708may be transmitted or received using a transmission medium via the coupling726(e.g., a peer-to-peer coupling) to the devices722. | 57,419 |
11861263 | DETAILED DESCRIPTION FIG.1schematically depicts an example environment in which selected aspects of the present disclosure can be implemented, in accordance with various implementations. Any computing devices depicted inFIG.1or elsewhere in the figures can include logic such as one or more microprocessors (e.g., central processing units or “CPUs”, graphical processing units or “GPUs”, tensor processing units or “TPUs”) that execute computer-readable instructions stored in memory, or other types of logic such as application-specific integrated circuits (“ASIC”), field-programmable gate arrays (“FPGA”), and so forth. Some of the systems depicted inFIG.1, such as application control system120, can be implemented, in whole or in part, using one or more server computing devices that form what is sometimes referred to as a “cloud infrastructure,” although this is not required. The application control system120can be operably coupled with client computing device110(also referred to herein as “client”) via one or more computer networks114to enable robust natural language control, via client110, of one or more applications. For example, the application control system120can enable NL control of application112that is locally installed at the client110and/or of additional locally installed applications. As another example, the application control system120can enable NL control of cloud-based remote applications that are utilized via interactions at the client110(e.g., controlling inputs are received at the client110and corresponding outputs are rendered at the client110), but that may not be installed at the client110or that may only have a thin-client installed at the client110. Although application control system120is depicted inFIG.1as connected to the client110via the network(s)114, in various implementations one or more aspects of control system120can be implemented locally at a client110. For example, one or more of the engines of application control system120can be implemented at the client110. In various implementations, the application control system120includes a request engine122, an action(s) engine124, an action set(s) engine126, a simulation (SIM) engine128, a selection engine130, an implementation engine132, a feedback engine134, and/or machine learning (ML) training engine(s)136. In some implementations, the request engine122can interface with one or more request ML models, such as request ML model152. For example, the request engine122can interface with each of multiple disparate request ML models (or sets of request ML models), and which request ML model (or set) it interfaces with in controlling a given application can be dependent on the given application. For example, the request ML model(s)152ofFIG.1can be specific to a particular application, such as a particular CAD application or specific to a domain of applications, such as any CAD application—and can be used by the request engine122in response to corresponding user input being provided to control the particular application or an application within the domain of applications. For instance, the request ML model(s)152can be used (in lieu of others for other domains) based on NL input, for controlling the particular application, referencing the particular application and/or being provided when the particular application is executing or presented in the foreground when the NL input is received. In some implementations, the action(s) engine124can interface with one or more action ML models, such as action ML model154.
For example, the action(s) engine124can interface with each of multiple disparate action ML models, and which action ML model it interfaces with in controlling a given application can be dependent on the given application. For example, the action ML model154ofFIG.1can be specific to a particular application, such as a particular CAD application or specific to a domain of applications, such as any CAD application—and can be used by the action(s) engine124in response to corresponding user input being provided to control the particular application or an application within the domain of applications. Yet further, in some implementations the action set(s) engine126can optionally interface with one or more action set ML models, such as action set ML model156. For example, the action set(s) engine126can interface with each of multiple disparate action set ML models, and which action set ML model it interfaces with in controlling a given application can be dependent on the given application. For example, the action set ML model156ofFIG.1can be specific to a particular application, such as a particular CAD application or specific to a domain of applications, such as any CAD application—and can be used by the action set(s) engine126in response to corresponding user input being provided to control the particular application or an application within the domain of applications. Moreover, in some implementations the selection engine130can optionally interface with one or more selection ML models, such as selection ML model(s)160. For example, the selection engine130can interface with each of multiple disparate selection ML models (or sets of ML model(s)), and which selection ML model it interfaces with in controlling a given application can be dependent on the given application. For example, the selection ML model(s)160ofFIG.1can be specific to a particular application, such as a particular CAD application or specific to a domain of applications, such as any CAD application—and can be used by the selection engine130in response to corresponding user input being provided to control the particular application or an application within the domain of applications. The machine learning models152,154,156, and160can be of various architectures and trained in various manners. For example, one or more of the models can be a graph-based neural network (e.g., a graph neural network (GNN), graph attention neural network (GANN), or graph convolutional neural network (GCN)), a sequence-to-sequence neural network such as a transformer, an encoder-decoder, or a recurrent neural network (“RNN”, e.g., long short-term memory, or “LSTM”, gated recurrent units, or “GRU”, etc.), or a BERT (Bidirectional Encoder Representations from Transformers) model. Also, for example, reinforcement learning, supervised learning, and/or imitation learning can be utilized in training one or more of the machine learning models. Additional description of some implementations of the machine learning models152,154,156, and160is provided herein. Turning toFIG.2, description is provided of examples of: the engines122,124,126,128,130,132,134, and136of application control system120; the interactions that can occur amongst those engines; and the models152,154,156, and160that can be utilized by the application control system120. InFIG.2, the request engine122processes at least NL input101in generating a request embedding123.
The NL input101is provided by a user, via interaction with user interface input device(s) of client device110(FIG.1), and the NL input101includes a request to control a computer application (e.g., the application112ofFIG.1). For example, the NL input101can be spoken input, of the user, that is detected via microphone(s) of the client device110, and the request engine122can process recognized text, generated based on the spoken input (e.g., using automatic speech recognition (ASR)), in generating the request embedding123. As another example, the NL input101can be typed input provided via a virtual or hardware keyboard of the client110, and the typed text can be processed by the request engine122in generating the request embedding123. For instance, the recognized text or typed text can be processed using an NL ML model152A of request ML model(s)152, to generate an NL embedding. The NL ML model152A can be, for example, an LLM. The request embedding123can be the NL embedding or can be a function of the NL embedding and other NL embeddings. In some implementations, the request engine122additionally utilizes domain specific knowledge (DSK)102and/or context data103in generating the request embedding123. For example, the request engine122can use the DSK102to alter the NL input101, and process the alteration of the NL input101using the NL ML model152A to generate an NL embedding. For instance, the NL input101can be “make the wicket 15% smaller”, the request engine122can use the DSK102to find a domain specific (e.g., specific to a particular application or a class of applications) definition for “wicket” of “small door beside a larger door”, and alter the NL input101to “make the small door 15% smaller” or to “make the wicket, the small door beside the larger door, 15% smaller” (see the sketch following this paragraph). Continuing with the example, the request embedding can be the NL embedding, generated based on the alteration of the NL input101, or can be a function of the NL embedding (generated based on the alteration) and other embedding(s). As another example of using the DSK102in generating the request embedding123, the request engine122can identify, from DSK102, particular domain specific knowledge that relates to term(s) of the NL input101. Further, the request engine122can process the particular domain specific knowledge using one or more of the request ML model(s)152to generate DSK embedding(s). The request engine122can then concatenate or otherwise combine the DSK embedding(s) with the NL embedding and/or a context embedding (e.g., processed together, using an additional neural network, to generate a lower-dimensional embedding)—and the combined embedding used as the request embedding123. For example, the NL input101can be “make the wicket 15% smaller”, and the request engine122can generate an NL embedding based on the NL input101(unaltered). Further, the request engine122can identify, from DSK102, a domain specific NL definition for “wicket” and/or can identify a domain specific image of a “wicket”. The request engine122can process the domain specific NL definition for “wicket”, using the NL ML model152A to generate a domain specific NL definition embedding and/or can process the domain specific “wicket” image using the image ML model152B to generate a domain specific image embedding. The request engine122can then generate the request embedding123as a function of the NL embedding and the domain specific NL definition embedding and/or the domain specific image embedding.
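As a sketch of the alteration-based use of DSK102described above (with the “wicket” example), consider the following; the DSK table and the alter_with_dsk helper are hypothetical stand-ins for whatever lookup the request engine122actually performs.

```python
# Hypothetical domain-specific knowledge table (DSK 102) mapping terms to
# definitions, as in the "wicket" example above.
DSK = {"wicket": "small door beside a larger door"}

def alter_with_dsk(nl_input: str) -> str:
    """Expand domain-specific terms in the request before embedding it.

    A recognized term is annotated inline with its domain definition so
    the NL ML model receives the disambiguated request.
    """
    altered = nl_input
    for term, definition in DSK.items():
        if term in altered:
            altered = altered.replace(term, f"{term}, the {definition},")
    return altered

print(alter_with_dsk("make the wicket 15% smaller"))
# -> make the wicket, the small door beside a larger door, 15% smaller
```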
The image ML model152B can be, for example, a convolutional neural network (CNN) or other neural network trained to generate semantically rich embeddings of images based on processing of those images. As indicated by the double arrow between the feedback engine134and the request engine122, in some implementations the feedback engine134can selectively interact with the request engine122to solicit additional domain specific knowledge for the NL input101. For example, if the request engine122determines that existing DSK102lacks any definition, image, and/or other information for term(s) of NL input101, it can prompt feedback engine134to interact with the user in obtaining additional information. For example, the feedback engine134can generate audible and/or visual prompt(s), that request the user to provide input(s) that define what the user means by certain term(s) of the NL input101, and can cause the prompt to be rendered via the client110. For example, a prompt can request the user to select an area, of the current display of the computer application112, that is currently rendering an object referenced by a term of the NL input101. In response to the user selecting a particular area in response to the prompt, a screenshot of that particular area can be utilized as a domain specific image of the object. As another example, an additional or alternative prompt can request the user to describe what the term is, and the user input provided in response can be utilized as a domain specific definition for the term. The feedback engine134can provide the domain specific image and/or the domain specific definition to the request engine122for use in generating the request embedding123for the current NL input101. Optionally, the feedback engine134can also store the domain specific image and/or the domain specific definition, in association with the term and as part of DSK102, for future use by the request engine122(e.g., for any future NL inputs for the domain and optionally from the same user). As referenced above, the request engine122can optionally utilize context data103in generating the request embedding123. For example, the context data103can include current state data for the application112, the request engine122can process the current state data to generate current state embedding(s), and can generate the request embedding123as a function of the current state embedding(s). For instance, the current state data can include a current screenshot of the application112, and the request engine122can process the current screenshot, using image ML model152B, to generate a current state embedding. Also, for instance, the current state data can include an NL description of the current state of the application112(e.g., an NL description provided by the application112via an API), and the request engine122can process the NL description, using NL ML model152A, to generate a current state embedding. Additional or alternative context data103can be utilized in generating the context embedding(s), such as an indication of the particular application, an indication of a task currently being performed via the particular application, other current state data such as data derived from a current screenshot of the computer application, other application(s) that are executing on the client device along with the particular application, a current time of day or other current temporal data, a current location, and/or other context data.
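A minimal sketch of forming the request embedding as a function of NL, domain specific, and context embeddings, as described above. The concatenate-then-project scheme shown here, with a randomly initialized projection, is only one plausible reading; in an actual system the combination would be performed by a trained network, and all names and dimensions below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def combine_embeddings(nl_emb, dsk_embs, ctx_embs, out_dim=64):
    """Form the request embedding from its component embeddings.

    The NL, domain-specific, and context embeddings are concatenated and
    passed through a stand-in linear projection to a lower-dimensional
    embedding; a real system would use a trained neural network here.
    """
    concat = np.concatenate([nl_emb, *dsk_embs, *ctx_embs])
    projection = rng.standard_normal((out_dim, concat.shape[0])) / np.sqrt(concat.shape[0])
    return projection @ concat

nl = rng.standard_normal(128)        # from the NL ML model (e.g., an LLM)
dsk = [rng.standard_normal(128)]     # e.g., a domain-specific definition embedding
ctx = [rng.standard_normal(128)]     # e.g., a current-screenshot embedding
request_embedding = combine_embeddings(nl, dsk, ctx)
print(request_embedding.shape)       # -> (64,)
```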
The action(s) engine124processes the request embedding123, using an action ML model154, to select a subset of action(s)125from a corpus of candidate actions. For example, processing the request embedding123using the action ML model154can generate output that defines a corresponding measure for each of one or more actions of a corpus of candidate actions (e.g., atomic and/or more coarse actions). For instance, the output can define a probability distribution over the corpus of candidate actions, with each probability of the distribution reflecting a corresponding measure for a corresponding candidate action. The action(s) engine124can select the action(s)125based on the measure(s), as illustrated in the sketch following this paragraph. For example, the action(s) engine124can select the ten, twenty, or other threshold quantity of action(s)125with the highest measures in the distribution. As another example, the action(s) engine124can select the action(s)125that have a corresponding measure that satisfies a threshold. As yet another example, the action(s) engine124can select up to N actions as the action(s)125, but will only include, in the N actions, those that have the highest measures and that also have measures that satisfy a threshold. In some implementations, the action ML model154is a transformer network and/or can be trained, at least in part, in a supervised manner on past positive pairs of request embeddings and actions. For example, the pairs can each include a request embedding paired with a positive ground truth indication of the action(s) that successfully resolved the corresponding request. As indicated by the double arrow between the feedback engine134and the action(s) engine124, in some implementations the feedback engine134can selectively interact with the action(s) engine124to solicit user feedback in selecting the action(s)125. For example, the action(s) engine124can, based on a “rotate right 90 degree” action having a high measure and a conflicting “rotate left 90 degree” action also having a high measure, prompt the feedback engine134to interact with the user in resolving only one of those actions to include in the action(s)125. For example, the feedback engine134can generate audible and/or visual prompt(s), that request the user to select between the two actions, and can cause the prompt to be rendered via the client110. In response to user input that affirmatively selects one of the actions over the other, the selected action can be included amongst the action(s)125and the unselected action excluded from the action(s)125. When multiple actions are included amongst the action(s)125, the action set(s) engine126optionally generates multiple action sets by composing different sequences and/or combinations of the actions. For example, if the action(s)125include A1, A2, A7, and A8, the action set(s) engine126can generate action sets such as (but not limited to): {A1}; {A1, A2}; {A1, A2, A7}; {A1, A2, A7, A8}; {A2, A1, A7}, and {A8, A7, A1, A2}. Note that, in composing action sets, the action set(s) engine126can operate within constraints of some action(s), such as constraint(s) for an action that constrain the action set(s) within which it can be included. For example, an action can have a constraint that it must be preceded by one or more particular actions, can have a constraint that it must be followed by one or more particular actions, and/or can have a constraint that it must be a non-terminal action of an action set.
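A sketch of the measure-based selection referenced above. The action names, the example distribution, and the threshold value are hypothetical; the combined top-N-plus-threshold rule mirrors the last selection example in the preceding paragraph.

```python
def select_actions(measures: dict[str, float], n: int = 3, threshold: float = 0.05):
    """Select up to N candidate actions from a measure distribution.

    Keeps the highest-measure actions, but only those whose measure also
    satisfies the threshold.
    """
    ranked = sorted(measures.items(), key=lambda kv: kv[1], reverse=True)
    return [action for action, m in ranked[:n] if m >= threshold]

# Hypothetical probability distribution over a small corpus of actions.
distribution = {"rotate_right_90": 0.40, "rotate_left_90": 0.35,
                "scale_down": 0.15, "delete": 0.02}
print(select_actions(distribution))
# -> ['rotate_right_90', 'rotate_left_90', 'scale_down']
```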
As another example, a coarse action can have a constraint that it cannot be included in an action set with any other actions; a sketch of such constraint-aware composition follows this paragraph. In some implementations, the action set(s) engine126provides, as action set(s)127and to SIM engine128, all of the generated action sets. In other implementations, the action set(s) engine126can provide, as action set(s)127and to SIM engine128, only a subset of the generated action sets. In some of those other implementations, the action sets can each be processed, using action set ML model156, to generate output that defines a corresponding measure for the processed action set. Each action set can optionally be processed, using action set ML model156, along with the request embedding123. The measures can be utilized to select a subset of the generated action sets to include in the action set(s)127. For example, one hundred action sets can be generated, but only those twenty with the highest measures selected as candidate action sets. This can constrain the quantity of simulations, and/or other analyses of the candidate action sets, that need to be performed. In some implementations, the action set ML model156is a transformer network and/or can be trained, at least in part, in a supervised manner on past training instances that each include training instance input of a corresponding action set (optionally along with a corresponding request embedding) and a training instance output that indicates whether the corresponding action set successfully resolved the corresponding request. As indicated by the double arrow between the feedback engine134and the action set(s) engine126, in some implementations the feedback engine134can selectively interact with the action set(s) engine126to solicit user feedback in selecting the action set(s)127. For example, the action set(s) engine126can, based on a first action set having a high measure and a second action set also having a high measure (e.g., the first action set and the second action set have the two highest measures), prompt the feedback engine134to interact with the user in resolving only one of those action sets to include in the action set(s)127. For example, the feedback engine134can generate audible and/or visual prompt(s), that request the user to select between the two action sets, and can cause the prompt to be rendered via the client110. In response to user input that affirmatively selects one of the action sets over the other, the selected action set can be included amongst the action set(s)127and the unselected action set excluded from the action set(s)127. The SIM engine128, for each of the action set(s)127, simulates performance of controlling the application112using the action set. The SIM engine128can use, for example, an emulator in performing the simulations and can start each of the simulations from the current state of the application112. The SIM engine128can perform multiple simulations in parallel to reduce latency. Further, the SIM engine128generates simulated data129for each of the simulations. The simulated data129is provided to the selection engine130and the selection engine130uses the simulated data129to select, from the action set(s)127, an action set131for actual implementation. For example, the selection engine130can select the action set131for automatic implementation and without first prompting the user for verification.
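The constraint-aware composition referenced above might look like the following sketch. The specific constraints (MUST_PRECEDE, NON_TERMINAL), the action labels, and the maximum set length are illustrative assumptions, not details of the disclosure.

```python
from itertools import permutations

ACTIONS = ["A1", "A2", "A7", "A8"]

# Hypothetical per-action constraints of the kinds described above.
MUST_PRECEDE = {"A7": {"A2"}}      # A7 must be preceded somewhere by A2
NON_TERMINAL = {"A1"}              # A1 may not be the final action of a set

def valid(seq) -> bool:
    """Check a candidate sequence against the action constraints."""
    for i, action in enumerate(seq):
        if not MUST_PRECEDE.get(action, set()).issubset(seq[:i]):
            return False            # a required predecessor is missing
    if seq and seq[-1] in NON_TERMINAL:
        return False                # terminal-position constraint violated
    return True

def compose_action_sets(actions, max_len=3):
    """Enumerate constraint-satisfying sequences of up to max_len actions."""
    sets = []
    for k in range(1, max_len + 1):
        for seq in permutations(actions, k):
            if valid(seq):
                sets.append(seq)
    return sets

candidates = compose_action_sets(ACTIONS)
print(len(candidates), candidates[:3])
```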
As another example, the selection engine130can select the action set131, but can first prompt the user for verification before implementing, and only implement the action set131if affirmative user input is received in response to the prompt. As another example, the selection engine130can first select a small (e.g., two, three, four, or five) subset of the action set(s)127, and interact with the feedback engine134in prompting the user to select the action set131from amongst those of the subset. The subset can be selected based on suitability metrics for the action sets, described in more detail herein. Further, the selection engine130can determine whether to interact with the feedback engine134in prompting the user to select the action set, based on the suitability metrics. For example, a prompt can only be generated when the best suitability metric fails to satisfy a threshold and/or when two or more of the best suitability metrics are within a threshold of one another. As one particular instance of feedback, the feedback engine134can present a video, from the simulated data129, for each action set, and prompt the user to select which most closely aligns with the NL input101. Also, for instance, the feedback engine134can present a final screenshot, from the simulated data129, for each action set, and prompt the user to select which most closely aligns with the NL input101. Also, for instance, the feedback engine134can present a natural language prompt (e.g., an audible prompt) that describes each action set, and prompt the user to select which most closely aligns with the NL input101. In various implementations, the feedback engine134can determine which format(s) to utilize for the prompt based on attributes of the action sets that are the focus of the prompt (see the sketch following this paragraph). For example, the feedback engine134can determine to include final screenshots in the prompt, but not videos, based on the action sets each including less than a threshold quantity of actions. As another example, the feedback engine134can determine to include videos in the prompt based on the action sets each including greater than a threshold quantity of actions. As yet another example, the feedback engine134can determine to include audible natural language output only in the prompt (i.e., no visual content) based on the final screenshot(s), from one or more of the simulations of the action sets, having less than a threshold amount of change relative to a current screenshot of the application112(i.e., no visible change was made to the application as a result of the simulation(s)). In some implementations, the selection engine130eliminates some candidate action set(s) from being selected as the action set131based on their implementation, in simulation, resulting in an error (from the simulated application) during the simulation and/or based on their implementation violating predefined rule(s) for the application. In some implementations, violating predefined rule(s) may not result in elimination of a candidate action set but, instead, can negatively impact the suitability metric (described below) for the candidate action set. In some additional or alternative implementations, the selection engine130selects the action set131based on it having the best suitability metric amongst the suitability metrics for the action set(s)127. The selection engine130can generate suitability metrics for an action set based on the SIM data129that is from the simulation using that action set.
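One possible reading of the prompt-format heuristics described above, as a sketch; the threshold values and the screen_change_ratio input are assumptions introduced for illustration.

```python
def choose_prompt_format(action_counts, screen_change_ratio,
                         video_threshold=5, change_threshold=0.02):
    """Pick the feedback-prompt format from action-set attributes.

    Short action sets get final screenshots, long ones get videos, and if
    the simulations produced no visible change, fall back to audio only.
    """
    if screen_change_ratio < change_threshold:
        return "audible_nl_only"     # nothing visible changed in simulation
    if all(count < video_threshold for count in action_counts):
        return "final_screenshots"   # short sets: a still image suffices
    return "videos"                  # long sets: show the full simulation

print(choose_prompt_format([2, 3], screen_change_ratio=0.3))   # -> final_screenshots
print(choose_prompt_format([8, 6], screen_change_ratio=0.3))   # -> videos
print(choose_prompt_format([2, 3], screen_change_ratio=0.0))   # -> audible_nl_only
```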
In some of those implementations, the selection engine130generates the suitability metric for an action set based on how closely, as indicated by the SIM data129for that action set, the simulation conforms to the request of the NL input. As one example, the selection engine130can process a final state of the simulation to generate an NL description of the final state, and compare that NL description (e.g., an embedding thereof) to the NL input (e.g., an embedding thereof, or the request embedding123) in generating the suitability metric (e.g., “closer” embeddings lead to better suitability metrics). For instance, a screenshot of the simulated final state can be processed using an image captioning model, of selection ML model(s)160, to generate the NL description. An embedding of that NL description can be compared to an embedding of the NL input101(optionally altered based on the DSK102), and a distance between the two embeddings used as a suitability metric. As another example, a video (e.g., series of screenshots) from the simulation can be processed to generate an NL description of the simulation, and that NL description can be compared to the NL input101in generating the suitability metric. The implementation engine132interacts with the application112in implementing the action set131. The control of the application can be through emulated input(s) (e.g., emulated touch input(s)) and/or via an API of the application. In some implementations, after an action set131is implemented in controlling the application112, the implementation engine132prompts the feedback engine134to generate a user interface output feedback prompt that reflects the actions of the action set131and the sequence of the actions in the given action set. The feedback engine134can cause the feedback prompt to be rendered to the user via the client110. The feedback engine134can receive user interface input in response to the user interface output feedback prompt. Further, training of the action ML model154and/or of the action set ML model156can be performed using the user interface input as a supervision signal. For example, the feedback engine134can provide, to the ML training engine(s)136, an indication of the implemented action set, the user feedback, and the request embedding. Based on this data, the ML training engine(s)136can further train the action ML model154and/or the action set ML model156. Further, it is noted that the ML training engine(s)136can additionally or alternatively further train these models based on other feedback received by the feedback engine134. For example, feedback received from the feedback engine134through interaction with the action(s) engine124can be used to train the action ML model154. As another example, feedback received from the feedback engine134through interaction with the action set(s) engine126can be used to train the action set ML model156. FIG.3is a flowchart illustrating an example method300for practicing selected aspects of the present disclosure, according to implementations disclosed herein. For convenience, the operations of the flow chart are described with reference to a system that performs the operations. This system may include various components of various computer systems, such as one or more components of application control system120. Moreover, while operations of method300are shown in a particular order, this is not meant to be limiting. One or more operations may be reordered, omitted or added.
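Before turning to the blocks ofFIG.3, the following minimal Python sketch illustrates the embedding-distance suitability metric described above; the mapping of distance to a score, and the example embeddings, are illustrative assumptions (the image captioning step is omitted and its output embedding is simply given).

import numpy as np

def suitability_metric(description_embedding, request_embedding):
    # "Closer" embeddings lead to better suitability metrics; here the
    # Euclidean distance is mapped to (0, 1], larger being better.
    distance = np.linalg.norm(description_embedding - request_embedding)
    return 1.0 / (1.0 + distance)

# Hypothetical embeddings: one of a caption of the simulated final state,
# one of the NL input (or the request embedding123).
caption_embedding = np.array([0.2, 0.7, 0.1])
request_embedding = np.array([0.25, 0.65, 0.05])
print(round(suitability_metric(caption_embedding, request_embedding), 3))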
At block302, the system receives NL input that includes a request to control a computer application. At block304, the system generates a request embedding based on processing the NL input. For example, the NL input (or an alteration based on DSK) can be processed using an LLM to generate an NL embedding, and the request embedding generated as a function of the NL embedding. Block304optionally includes sub-block304A and/or sub-block304B. At sub-block304A, the system generates the request embedding based on domain specific knowledge (DSK), such as pre-stored DSK. For example, the system can alter the NL input, with DSK, and generate the request embedding based on processing the altered NL input. As another example, the system can separately process DSK that is relevant to the NL input to generate domain specific embedding(s), and generate the request embedding as a function of an NL embedding from processing the NL input and as a function of the domain specific embedding(s). Sub-block304A optionally includes further sub-block304A1, where the system interacts with the user to obtain at least some DSK. For example, the system can interact with the user to obtain at least some DSK in response to determining that there is no pre-stored DSK corresponding to one or more terms of the received NL input. At sub-block304B, the system generates the request embedding based on context data, such as current state data for the application. For example, the system can process the context data to generate context embedding(s), and generate the request embedding as a function of an NL embedding from processing the NL input and as a function of the context embedding(s). At block306, the system processes the request embedding to generate a corresponding measure for each of multiple actions. For example, the system can process the request embedding using a transformer, or other action ML model, to generate a measure distribution over the multiple actions. At block308, the system selects, based on the corresponding measures of block306, a subset of the multiple actions. In some implementations, block308includes sub-block308A in which the system interacts with the user in selecting one or more of the actions for inclusion in the subset. At block310, the system generates multiple candidate action sets using the subset of actions selected at block308. At block312, the system, for each of the action sets generated at block310(or a subset thereof selected using an action set ML model), simulates controlling a computer application using the action set. At block314, the system selects, based on simulated data from the simulations of block312, a given action set. In some implementations, block314includes sub-block314A in which the system interacts with the user in selecting the given action set. At block316, the system interacts with the computer application using the given action set. In some implementations, block316includes sub-block316A in which the system interacts with the user, after the interaction with the computer application, to solicit feedback on the given action set. At block318, the system uses data, from the optional interactions of blocks308A,314A, and/or316A, to further train one or more of the ML model(s) used in the method300, such as an action ML model and/or an action set ML model described herein. FIG.4Aillustrates an example of an interface400A rendered by a computer application (e.g., application112ofFIG.1) on a client device (e.g., client device110ofFIG.1).
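Before turning toFIG.4A, a high-level Python skeleton of method300may aid the reader; every helper passed in is a stand-in stub (the disclosure does not supply implementations), and the top-5 selection at block308is one illustrative possibility, not the patent's stated rule.

def method300(nl_input, embed, score_actions, build_action_sets,
              simulate, select, implement, train):
    request_embedding = embed(nl_input)            # blocks 302-304
    measures = score_actions(request_embedding)    # block 306: action -> measure
    # block 308: select a subset of actions (top-k shown as one possibility)
    subset = sorted(measures, key=measures.get, reverse=True)[:5]
    candidate_sets = build_action_sets(subset)     # block 310
    simulated = [simulate(s) for s in candidate_sets]  # block 312
    chosen = select(candidate_sets, simulated)     # block 314
    implement(chosen)                              # block 316
    train(chosen)                                  # block 318 (feedback-based)
    return chosen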
The computer application is a graphics/drawing application and a cloud and a lightning bolt have been rendered in the application. The cloud and lightning bolt are rendered as a result of prior natural language input of a user and/or prior mouse and/or touch interactions. Further, inFIG.4Athe user has provided the NL input401of “rotate the lightning bolt right 90 degrees and put it at the bottom of the cloud”. FIG.4B1illustrates an example of providing a user interface output prompt, illustrated in interface400B1, that reflects aspects of two different action sets generated based on the natural language user input ofFIG.4A, and solicits the user to provide input to select one of the action sets. For example, FIG.4B1is an example of an output that can be provided in sub-block314A of method300ofFIG.3. More particularly, FIG.4B1illustrates (in “A”) a final screenshot from a first simulation of a first action set and illustrates (in “B”) a final screenshot from a second simulation of a second action set. The first action set and the second action set can include the same set of actions, except the first action set includes an action of “move selected lightning bolt object back”, whereas the second action set does not include that action. FIG.4B1also illustrates a prompt of “which is correct”, graphically (and optionally audibly) provided to the user. In response to the user selecting “A” (e.g., by tapping that area of the interface400B1or by saying “A” or “the left one”), the first action set can be implemented. Further, as described herein, the user's selection of “A” can be used as a positive signal, for the first action set, for further training of an action ML model and/or action set ML model described herein. FIGS.4B2A and4B2B illustrate examples of providing user interface output feedback prompts based on an action set implemented automatically based on the natural language user input ofFIG.4A. For example, FIGS.4B2A and4B2B are examples of output that can be provided in sub-block316A of method300ofFIG.3. More particularly, FIG.4B2A illustrates an interface400B2A of the graphics/drawing application after automatic implementation of an incorrect action set, along with a prompt of “correct?” and options for responding “yes” or “no” (e.g., by selecting or speaking one of the two options). FIG.4B2B illustrates a further interface400B2B that can be rendered, in response to the user selecting “no” in FIG.4B2A, to present details of the incorrect action set and enable the user to provide fine-grained input as to why it was incorrect. More particularly, the further interface400B2B provides a natural language listing of the actions of the incorrect action set, and the order of those actions in the action set. The further interface400B2B also prompts the user to specify what action(s) were errant. In response, the user provides NL input of “two should be rotate right 90 instead of 90 left”. This fine-grained input can be used, for example, to generate a positive action set training example, for further training an action set ML model, that replaces “rotate selected object 90 degrees left” in step two with “rotate selected object 90 degrees right”. This fine-grained input can additionally or alternatively be used to generate a training example, for training an action ML model, that includes a high measure for “rotate selected object 90 degrees right” and a low measure for “rotate selected object 90 degrees left”.
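The construction of a positive training example from this fine-grained feedback can be sketched minimally in Python as follows; the action strings mirror the example above, while the function name and list representation are illustrative assumptions.

def corrected_action_set(action_set, errant_index, replacement):
    # Build a positive training example by swapping the errant action.
    fixed = list(action_set)
    fixed[errant_index] = replacement
    return fixed

incorrect = ["select lightning bolt object",
             "rotate selected object 90 degrees left",
             "move selected object to bottom of cloud"]
# The user indicated step two was errant ("rotate right 90 instead of 90 left").
positive_example = corrected_action_set(
    incorrect, 1, "rotate selected object 90 degrees right")
print(positive_example)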
FIG.5Aillustrates another interface500A rendered by a computer application (e.g., application112ofFIG.1) on a client device (e.g., client device110ofFIG.1). The computer application is a graphics/drawing application and a cloud, a lightning bolt, and a moon have been rendered in the application. The cloud, lightning bolt, and moon are rendered as a result of prior natural language input of a user and/or prior mouse and/or touch interactions. Further, inFIG.5Athe user has provided the NL input501of “make the waning moon smaller and put it closer to the cloud”. FIG.5Billustrates, in interface500B, an example of providing a user interface output prompt that solicits additional detail on the natural language input ofFIG.5A. For example,FIG.5Bis an example of an output that can be provided in further sub-block304A1of method300ofFIG.3. More particularly,FIG.5Billustrates a prompt of “what is a waning moon”, that solicits further details on the term “waning moon” from the NL input501. In response, the user can provide touch input505that encircles the “waning moon” and/or can provide NL input503that defines the “waning” moon. An image, based on the touch input505, can then be used as DSK and/or the definition provided in NL input503can then be used as DSK. FIG.6is a block diagram of an example computing device610that may optionally be utilized to perform one or more aspects of techniques described herein. In some implementations, the client device110, the application control system120, and/or other component(s) can comprise one or more components of the example computing device610. Computing device610typically includes at least one processor614which communicates with a number of peripheral devices via bus subsystem612. These peripheral devices may include a storage subsystem624, including, for example, a memory subsystem625and a file storage subsystem626, user interface output devices620, user interface input devices622, and a network interface subsystem616. The input and output devices allow user interaction with computing device610. Network interface subsystem616provides an interface to outside networks and is coupled to corresponding interface devices in other computing devices. User interface input devices622may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touch screen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computing device610or onto a communication network. User interface output devices620may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computing device610to the user or to another machine or computing device. Storage subsystem624stores programming and data constructs that provide the functionality of some or all of the modules described herein. 
For example, the storage subsystem624may include the logic to perform selected aspects of the method300ofFIG.3and/or other methods described herein. These software modules are generally executed by processor614alone or in combination with other processors. Memory625used in the storage subsystem624can include a number of memories including a main random access memory (RAM)630for storage of instructions and data during program execution and a read only memory (ROM)632in which fixed instructions are stored. A file storage subsystem626can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem626in the storage subsystem624, or in other machines accessible by the processor(s)614. Bus subsystem612provides a mechanism for letting the various components and subsystems of computing device610communicate with each other as intended. Although bus subsystem612is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses. Computing device610can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device610depicted inFIG.6is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computing device610are possible having more or fewer components than the computing device depicted inFIG.6. While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure. In some implementations, a method implemented by one or more processors is provided and includes receiving natural language input that includes a request to control a computer application.
The method further includes generating, based on processing the natural language input, a request embedding. The method further includes processing the request embedding to generate a corresponding measure for each of multiple actions. Each of the actions is optionally specific to a domain of the computer application and is implementable through interaction with the computer application. The method further includes generating, based on the corresponding measures, multiple candidate action sets. Each of the candidate action sets includes a respective one of the actions or a respective ordered sequence of multiple of the actions. Further, each of the action sets is unique relative to all other of the action sets. The method further includes, for each of the candidate action sets: performing a simulation of controlling the computer application using the candidate action set; and generating, based on simulated data from the simulation, a suitability metric for the candidate action set. The method further includes selecting, from the candidate action sets and based on the suitability metrics, a given action set. The method further includes interacting with the computer application to control the computer application using the selected given action set. These and other implementations of the technology disclosed herein can optionally include one or more of the following features. In some implementations, generating the suitability metric for the given action set includes: processing at least some of the simulated data, from the simulation using the given action set, to generate natural language output that describes the processed at least some of the simulated data; comparing the natural language output to the natural language input; and generating the suitability metric based on the comparing. In some versions of those implementations, the natural language output is natural language text or a natural language embedding. In some additional or alternative versions of those implementations, at least some of the simulated data includes a screenshot of a final state, of the computer application, from the simulation using the given action. In yet additional or alternative versions of those implementations, generating the suitability metric for the given action set further includes: determining that the simulation of controlling the computer application, using the given action set, does not cause any error of the computer application to be generated during the simulation and/or does not violate any rules defined for the computer application; and in response to determining that the simulation, using the given action set, does not cause any error and/or does not violate any rules: positively influencing the suitability metric for the given action, or refraining from negatively influencing the suitability metric for the given action. In some implementations, generating the suitability metric for an additional action set, of the action sets, includes: determining that the simulation of controlling the computer application, using the additional action set, causes an error of the computer application to be generated during the simulation. In some of those implementations, the method further includes, in response to determining that the simulation, using the additional action set, causes the error: generating the suitability metric, for the action, as a minimum value suitability metric. 
In some implementations, generating the suitability metric for an additional action set, of the action sets, includes: determining that the simulation of controlling the computer application, using the additional action set, violates one or more rules defined for the computer application. In some of those implementations, the method further includes, in response to determining that the simulation, using the additional action set, violates the one or more rules: negatively influencing the suitability metric for the given action. In some implementations, generating the request embedding includes: generating a natural language embedding based on processing the natural language input using a language model; and generating the request embedding based on the natural language embedding. In some versions of those implementations, the method further includes, prior to processing the natural language input using the language model: modifying and/or supplementing the natural language input using one or more terms from a domain specific knowledge base for the computer application. In some additional or alternative versions of those implementations, generating the request embedding further includes: generating a current state embedding based on processing current state data that reflects a current state of the computer application; and generating the request embedding further based on the current state embedding. The current state data could include, for example, a current screenshot of the computer application, data derived from the current screenshot of the computer application, and/or current state information provided by the computer application via an application programming interface. In some implementations, the given action set includes: a first group of the actions in a first sequence and the action sets further include an additional action set that includes: the first group of the actions in a second sequence; or a second group of the actions in a third sequence. In some implementations, interacting with the computer application to control the computer application using the given action set is automatically performed, is automatically performed in response to the natural language input, and is automatically performed without requiring any further input from the user after providing the natural language input. In some of those implementations, the method further includes: determining that the suitability metric, for the given action set, satisfies an automatic performance threshold; and automatically performing the interacting with the computer application in response to the suitability metric, for the given action set, satisfying the threshold. In some implementations, the method further includes, prior to interacting with the computer application to control the computer application using the given action set: providing a user interface output prompt, that reflects one or more aspects of the given action set; receiving affirmative user interface input in response to providing the user interface output prompt; and in response to receiving the affirmative user interface input, interacting with the computer application to control the computer application using the given action set.
In some versions of those implementations, the user interface output that reflects one or more aspects of the given action set includes: a final simulated state screenshot, from the simulated data from the simulation using the given action set; a video from the simulated data from the simulation using the given action set; natural language output that describes one or more interactions from implementing the given action set, and/or underlying code for the given action set. In some additional or alternative versions of those implementations, the method further includes: determining that the suitability metric, for the given action set, fails to satisfy an automatic performance threshold; and providing the user interface prompt in response to the suitability metric, for the given action set, failing to satisfy the threshold. In some implementations, processing the request embedding to generate the corresponding measure for each of the multiple actions comprises processing the request embedding, using a trained machine learning model, to generate the corresponding measures. In some versions of those implementations, the method further includes, subsequent to interacting with the computer application to control the computer application using the given action set: providing a user interface output feedback prompt that reflects the actions of the given action set and the sequence of the actions in the given action set; receiving user interface input in response to the user interface output feedback prompt; and performing training, of the machine learning model, using the user interface input as a supervision signal. In some versions of those implementations, the method further includes determining that the user interface input indicates an alternate sequence that varies from the sequence and/or an alternate action for at least one of the actions. In some of those versions, performing training, of the machine learning model, using the user interface input as the supervision signal includes, in response to determining that the user interface input indicates the alternate sequence and/or the alternate action: performing supervised training, of the machine learning model, using positive supervision data that includes the alternate sequence and/or the alternate action. In some implementations, a method implemented by one or more processors is provided and includes receiving natural language input, that includes a request to control a computer application, and generating a request embedding based on processing the natural language input. The method further includes processing the request embedding to determine at least a first candidate action set and a second candidate action set. The first candidate action set includes a first group of the actions in a first sequence. The second candidate action set includes the first group of the actions in a second sequence or a second group of the actions in a third sequence. Each of the actions is implementable through interaction with the computer application. The method further includes performing a first simulation of controlling the computer application using the first candidate action set and performing a second simulation of controlling the computer application using the second candidate action set. The method further includes generating user interface output that is based on first simulated data from the first simulation and second simulated data from the second simulation. 
The method further includes causing the user interface output to be rendered in response to receiving the natural language input. The method further includes receiving user interface input in response to rendering the user interface output and determining, based on the user interface input, to select the first candidate action set in lieu of at least the second candidate action set. The method further includes, in response to determining to select the first candidate action set, interacting with the computer application to control the computer application using the first candidate action set. These and other implementations of the technology disclosed herein can optionally include one or more of the following features. In some implementations, the user interface output includes: a final simulated state screenshot, from the first simulated data from the first simulation and/or a second simulated state screenshot, from the second simulated data from the second simulation. In some implementations, the user interface output additionally or alternatively includes a first video, from the first simulated data from the first simulation and/or a second video, from the second simulated data from the second simulation. In some implementations, the user interface output additionally or alternatively includes first natural language output that is generated based on the first simulated data and that describes one or more aspects of the first simulation, and/or second natural language output that is generated based on the second simulated data and that describes one or more aspects of the second simulation. In some of those implementations, the first natural language output is audible output and the second natural language output is audible output. In some implementations, generating the user interface output includes selecting, based on one or more attributes of the first candidate action set and/or the second candidate action set, a format for the user interface output, and generating the user interface output in conformance with the selected format. In some implementations, generating the user interface output includes selecting, based on one or more attributes of the first simulated data and/or the second simulated data, a format for the user interface output, and generating the user interface output in conformance with the selected format. In some implementations, processing the request embedding to determine at least the first candidate action set and the second candidate action set includes: generating, based on processing the request embedding using a machine learning model, a corresponding measure for each of the multiple actions; generating the first candidate action set based on the corresponding measures for the first group of actions; and generating the second candidate action set based on the corresponding measures for the second group of actions. In some implementations, the method further includes: generating, based on the first simulated data from the first simulation, a first suitability metric for the first candidate action set; and generating, based on the second simulated data from the second simulation, a second suitability metric for the second candidate action set. In some of those implementations, generating the prompt and/or causing the prompt to be rendered is based on the first suitability metric and the second suitability metric. 
In some implementations, the method further includes, subsequent to interacting with the computer application to control the computer application using the first candidate action set, and in response to determining to select the first candidate action set: performing training, of the machine learning model, using the first candidate action set as a supervision signal. In some implementations, a method implemented by one or more processors is provided and includes receiving natural language input, that includes a request to control a computer application, and generating a request embedding based on processing the natural language input. The method further includes generating, based on processing the request embedding using a machine learning model, a corresponding measure for each of multiple actions that are each implementable through interaction with the computer application. The method further includes selecting a given action based on the corresponding measure for the given action and generating user interface output, that is based on the given action and that prompts for confirmation of one or more aspects of the given action. The method further includes causing the user interface output to be rendered in response to receiving the natural language input and receiving user interface input in response to rendering the user interface output. The method further includes determining, based on the user interface input, to select the given action or a refinement of the given action. The method further includes, in response to determining to select the given action or the refinement: generating an action set that includes the given action or the refinement, and includes one or more additional actions; and interacting with the computer application to control the computer application using the action set. These and other implementations of the technology disclosed herein can optionally include one or more of the following features. In some implementations, the method further includes performing a simulation of controlling the computer application using the action set. In some versions of those implementations, interacting with the computer application is in response to the simulation satisfying one or more criteria. In some of those versions, the one or more criteria include that a suitability measure, generated based on simulated data from the simulation, satisfies a threshold. | 57,643 |
11861264 | DESCRIPTION OF EMBODIMENTS In the following, examples of embodiments of the present invention will be described using the drawings. Embodiment 1 FIG.1is a block diagram showing an internal configuration example of a portable terminal device100according to an embodiment of the present invention. Here, description will be made using a smart phone as an example. The portable terminal100includes a controller101, a voice recognition unit102, a lip movement recognition unit103, a memory104, a storage105, a GPS (Global Positioning System) receiver106, a geomagnetic sensor107, an acceleration sensor108, a gyro sensor109, a base station communication unit110, a wireless communication unit111, a microphone112, an audio processor113, a speaker114, a voice output unit115, a touch panel116, an operation input unit117, a display118, an image processing unit119, an imaging unit120, and an input/output I/F121, and these units are interconnected via a bus150. The base station communication unit110is a communication interface such as W-CDMA (Wideband Code Division Multiple Access) and GSM (Registered trademark) (Global System for Mobile communications) which executes long distance wireless communication with a base station400. With the base station communication unit110, it is also possible to connect with an external network600through the base station400, and to transmit/receive information. The controller101is formed of a CPU (Central Processing Unit) and the like, and controls respective constituting units and executes various processes by executing programs stored in the memory104. The voice recognition unit102recognizes the voice of the operator captured from the microphone112through the audio processor113, and recognizes the operation instructed by the voice. Also, the lip movement recognition unit103recognizes the images including the lips of the operator captured from the imaging unit120through the image processing unit119, and recognizes the operation instructed by the lip movement of the operator. The controller101selects whether the operation is to be executed by the result recognized from the voice of the operator or the operation is to be executed by the result recognized from the lip movement of the operator, and executes the operation based on the result selected. The memory104is a flash memory and the like, and stores programs, data, and the like. The data used for recognition by the voice recognition unit102and the lip movement recognition unit103described above are stored in predetermined areas104a,104bof the memory104. Also, the portable terminal100includes the storage105such as a memory card, and mail addresses, data of music, video, and photos, and the like can be stored also in the storage105. The programs or the data stored in the memory104or the storage105can be renewed and added from time to time by the base station communication unit110executing wireless communication with the base station400and downloading the programs or the data from an external server or the like (not illustrated). Further, it is also possible to renew and add the data, programs and the like by connecting with an external device300such as a personal computer through the input/output I/F121. The GPS receiver106receives signals from GPS satellites overhead. Thereby, the current position of the portable terminal100can be detected. The geomagnetic sensor107is a sensor that detects the direction in which the portable terminal100faces.
The acceleration sensor108is a sensor that detects the acceleration of the portable terminal100, and the gyro sensor109is a sensor that detects the angular velocity of the portable terminal100. The inclination and movement of the portable terminal100can be detected in detail by them. The wireless communication unit111is a communication interface that executes wireless communication by a wireless LAN of IEEE802.11a/b/n and the like, and can connect with the external network600through a wireless router500. The microphone112inputs the voice from the outside, and the speaker114outputs the voice to the outside. The external voice output unit115outputs the voice when an earphone200is connected. The voice inputted/outputted is subjected to audio processing by the audio processor113. The touch panel116includes the operation input unit117and the display118. The display118is an LCD and the like, displays a picture or image, and includes the operation input unit117such as a touch pad on the display surface thereof. The operation input unit117is a touch pad of a capacitance type for example, and detects the touch operation (hereinafter referred to as “touch”) by a finger, touch pen and the like as an operation input. The imaging unit120is a camera and the like. The image displayed on the display118and the image inputted from the imaging unit120are processed by the image processing unit119. The input/output I/F121is a USB (Universal Serial Bus) and the like for example, and is an interface that transmits/receives data to/from the external device300. Next,FIG.2shows an example of a flowchart of a process of the controller101for executing operation by voice recognition or lip movement recognition in the portable terminal device100. InFIG.2, first, what kind of operation is to be executed is determined from the options of operations executable in the state of the portable terminal device100(S201). An example of a table of the executable operations corresponding to the state of the portable terminal device100is shown inFIG.3. For example, in a state where a home screen is displayed, “music reproduction”, “mail” and the like become operation options, and, in a state where music is reproduced, “stop”, “forward skip” and the like become operation options. Next, a branch process is executed according to whether selection of the object of the operation is needed or not (S202). For example, when “music reproduction” is to be executed as the operation, selection of the object (a music piece and the like) of the operation (music reproduction and the like), such as which music piece is to be reproduced, becomes necessary. Also, when “stop” is to be executed as the operation during music reproduction, selection of the object of the operation (stop and the like) is not necessary. When there is selection of the operation object (Yes), a process S203for determining the operation object is executed, and the operation (music reproduction for example) is executed for the selected operation object (music piece for example) (S204). When there is not selection of the operation object in the branch process S202(No), the operation (stop for example) is executed. The table data of the operation options corresponding to the state of the portable terminal device shown inFIG.3are stored in a memory area104c.FIG.4is a flowchart showing an example of the operation determination process S201.
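Before turning toFIG.4, the following minimal Python sketch illustrates the FIG.3-style table of executable operations per terminal state (S201); the states and options shown are illustrative stand-ins for the table data stored in the memory area104c.

OPERATION_OPTIONS = {
    "home screen": ["music reproduction", "mail", "video reproduction"],
    "music reproduction": ["stop", "forward skip", "backward skip"],
}

def executable_operations(terminal_state):
    # Options of executable operations are determined by the current state
    # of the portable terminal device.
    return OPERATION_OPTIONS.get(terminal_state, [])

print(executable_operations("music reproduction"))  # ['stop', 'forward skip', 'backward skip']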
InFIG.4, first, a voice is captured from the microphone112through the audio processor113, and the image including at least the lip portion of the operator is captured from the imaging unit120through the image processing unit119(S401). Next, a voice recognition operation determination process S402is executed by the voice recognition unit102, and a lip movement recognition operation determination process S403is executed by the lip movement recognition unit103. In a branch process S404, whether the voice recognition has been successful in the voice recognition operation determination process S402is determined by a voice recognition flag. When the voice recognition has been successful (Yes), which operation is to be executed is determined (S405) based on the result recognized in the voice recognition operation determination process S402. Next, in a branch process S406, whether the image has been captured without the lip portion departing from the imaging range and the lip movement data has been successfully acquired is determined by a lip detection flag. When the lip movement data has been successfully acquired (Yes), the lip movement recognition data of the memory area104bis renewed corresponding to the voice recognition result (S407), the manner mode is released (S408), and the process is finished. After the manner mode is released, operation guidance by voice from the speaker114(or from the earphone200through the external voice output unit115when the earphone200is connected), incoming call notification by sound, and the like are executed. On the other hand, when it is determined that acquisition of the lip movement data has failed by the lip detection flag in the branch process S406(No), the lip movement recognition data of the memory area104bis not renewed, the manner mode is released (S408), and the process is finished. When it is determined by the voice recognition flag that the voice recognition has failed in the branch process S404(No), whether the recognition has been successful in the lip movement recognition operation determination process S403is determined by a lip movement recognition flag in a branch process S409. When the lip movement recognition has been successful (Yes), which operation is to be executed is determined based on the result recognized in the lip movement recognition operation determination process S403(S410), the manner mode is set (S411), and the process is finished. In the manner mode, the output from the speaker114is turned off, and operation guidance, incoming call notification, and the like are executed by screen display without sound. On the other hand, when it is determined that the lip movement recognition has failed by the lip movement recognition flag in the branch process S409(No), the process returns again to the process for acquiring the voice and image (S401). By the process described above, when the voice recognition operation determination process has been successful, the operation is determined according to the voice recognition result, and when the voice recognition operation determination process has failed and the lip movement recognition operation determination process has been successful, the operation is determined according to the lip movement recognition. Also, when the voice recognition has been successful and acquisition of the lip movement data has been successful, the lip movement recognition data of the memory area104bis renewed.
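A minimal Python sketch of this voice-first, lip-fallback selection logic (S404 to S411) follows; the renew and manner-mode callables are illustrative stubs, and the tuple-style inputs are assumptions, not the patent's interfaces.

def determine_operation(voice_ok, voice_op, lip_ok, lip_op, lips_detected,
                        renew_lip_data, set_manner_mode):
    if voice_ok:                      # S404 Yes -> S405
        if lips_detected:             # S406 Yes -> S407
            renew_lip_data(voice_op)  # renew templates from the voice result
        set_manner_mode(False)        # S408: release the manner mode
        return voice_op
    if lip_ok:                        # S409 Yes -> S410
        set_manner_mode(True)         # S411: set the manner mode
        return lip_op
    return None                       # both failed: recapture voice/image (S401)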
As described above, even when the voice recognition operation determination process cannot be executed in a very noisy environment such as a bustle, or in an environment such as a library where uttering is not appropriate, which operation is to be executed can be determined by executing the lip movement recognition operation determination process. Also, setting/releasing of the manner mode can be automatically executed by the process of the voice recognition and the lip movement recognition. FIG.5is a flowchart showing an example of the process S401for capturing the voice and the images including the lip portion. InFIG.5, first, start of capturing of the voice and image is determined (S501). The start of capturing is determined, for example, by whether a predetermined portion M of the touch panel116of the portable terminal device100shown inFIG.6(a)has been touched or not. When it is determined that the predetermined portion M has been touched (Yes), capturing of the voice and the image of the lip portion (S502) is started, and the captured image is displayed at a predetermined portion W of the display118of the portable terminal device100(S503). Whether the lip portion has departed from the imaging range is detected by the lip movement recognition unit103based on the captured image (S504). In a branch process S505, branching is executed according to the lip detection result, and, when the lip portion has not departed from the imaging range as shown inFIG.6(a)(Yes), for example, the display frame of the predetermined portion W of the display118is colored blue (S506). When it is determined that the lip portion has departed from the imaging range as shown inFIG.6(b)(No), for example, the display frame of the predetermined portion W of the display118is colored red (S507). In a branch process S508, finish of the capturing of the voice and images is determined. The finish of the capturing is determined by whether the predetermined portion M of the touch panel116of the portable terminal device100has been touched again or not. When it is determined that the predetermined portion M has not been touched (No), the process returns to S502, and the capturing of the voice and images is continued. When it is determined that the predetermined portion M has been touched (Yes), the capturing of the voice and the images of the lip portion is finished (S509), and the process is finished. By the process described above, the voice and the images of the lip portion are captured. Also, by the display of the captured image and the color of the display frame, whether the lip portion has departed from the imaging range can be easily determined, and the operator can correct the imaging position. Further, although the color of the display frame is changed here as the method for notifying the operator of whether the lip portion has departed from the imaging range, the operator may also be notified by other display methods. Next, an example of a flowchart of the voice recognition operation determination process S402in the voice recognition unit102is shown inFIG.7. InFIG.7, voice analysis is executed first, and the time series pattern of the characteristic parameter of an input voice (more specifically, the time series of a spectrum and cepstrum) is extracted (S701).
Next, the likelihood with respect to the voice recognition data, which is stored in the memory area104aas an acoustic model by HMM (Hidden Markov Model) and corresponds to the operation options, is calculated (S702). In a branch process S703, when the maximum probability (maximum likelihood) of the result of the likelihood calculation is equal to or greater than a predetermined value (here, the value is set to 0.6 as an example) (Yes), a voice recognition flag is set to OK (S704), an operation option that gives the maximum probability is determined as the recognition result (S705), and the process is finished. On the other hand, when it is determined to be No in the branch process S703, it is determined that the voice recognition has failed due to noise and the like, the voice recognition flag is set to NG (S706), and the process is finished. Next, the lip movement recognition operation determination process S403in the lip movement recognition unit103will be described using an example ofFIG.8. In the flowchart ofFIG.8, first, a movement of the lips is detected from the image of the lip movement inputted, and the lip movement data is acquired (S801). As the data of the lip movement, for example, a temporal change of the lateral size X of the lips and the vertical size Y of the lips is detected as shown inFIG.9. When the lip portion has departed from the imaging range, the lip portion cannot be detected from the image inputted, and acquisition of the lip movement data fails in the lip movement data acquisition process S801; in this case, it is determined to be No in a branch process S802, the lip detection flag and the lip movement recognition flag are set to NG (S803, S809), and the process is finished. On the other hand, when the lip movement data has been successfully acquired from the image inputted, it is determined to be Yes in the branch process S802, and the lip detection flag is set to OK (S804). Next, the likelihood of the lip movement data acquired and the lip movement recognition data corresponding to the operation options stored in the memory area104bis calculated (S805). In a branch process S806, when the maximum probability (maximum likelihood) of the result of the likelihood calculation is equal to or greater than a predetermined value (here, the value is set to 0.6 as an example) (Yes), the lip movement recognition flag is set to OK (S807), an operation option that gives the maximum probability is determined as the recognition result (S808), and the process is finished. On the other hand, when it is determined to be No in the branch process S806, the lip movement recognition flag is set to NG (S809), and the process is finished. InFIGS.10(a) and10(b), examples of the lip movement recognition data Xr(t), Yr(t) and the acquired lip movement data Xd(t), Yd(t) are shown.FIG.10(a)corresponds to a selection option “o-n-ga-ku-sa-i-se-i (music reproduction)”, andFIG.10(b)corresponds to “bi-de-o-sa-i-se-i (video reproduction)”. X shows the lateral size of the lips, and Y shows the vertical size of the lips. For example, the size of the lips for “ga” and “sa” corresponding to the vowel “a” is large in both X and Y. On the other hand, the lip size X of “i” and “bi” corresponding to the vowel “i” is comparatively large, whereas Y is small.
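Both determination processes share the same threshold decision (S703 and S806); the following minimal Python sketch shows it, with the 0.6 threshold taken from the description above and the dictionary-of-likelihoods input being an illustrative assumption.

def recognize(likelihoods, threshold=0.6):
    # The flag is OK only when the maximum likelihood over the options
    # reaches the predetermined value; the best option is the result.
    best_option = max(likelihoods, key=likelihoods.get)
    if likelihoods[best_option] >= threshold:
        return "OK", best_option
    return "NG", None

print(recognize({"music reproduction": 0.72, "mail": 0.11}))  # ('OK', 'music reproduction')
print(recognize({"music reproduction": 0.41, "mail": 0.38}))  # ('NG', None)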
Thus, from the lip movement data Xd(t), Yd(t) acquired as the lip movement and the lip movement recognition data Xr(t), Yr(t) corresponding to the operation options stored in the memory104, an option whose temporal change of the lip size X, Y is closest can be determined as the recognition result. An example of a lip movement recognition data renewal process S407will be described usingFIGS.11(a)-11(c). FIG.11(a)shows lip movement recognition data before renewal Yr(t),FIG.11(b)shows acquired lip movement data Yd(t), andFIG.11(c)shows lip movement recognition data after renewal Yr(t)′. Here, the lip movement recognition data after renewal Yr(t)′ is defined by an expression below. Yr(t)′=Yr(t)+α·(Yd(t)−Yr(t)) (MATH. 1) The lip movement recognition data after renewal Yr(t)′ is used as the lip movement recognition data Yr(t) in the next lip movement recognition. Here, α is a coefficient that determines the speed at which the lip movement recognition data converges to the acquired lip movement data. When α=1 for example, Yr(t)′=Yd(t) (MATH. 2) is fulfilled, and the acquired lip movement data Yd(t) becomes the lip movement recognition data in the next lip movement recognition. When α=0.5, Yr(t)′=0.5·(Yd(t)+Yr(t)) (MATH. 3) is fulfilled, and the average of the acquired lip movement data Yd(t) and the lip movement recognition data before renewal Yr(t) becomes the lip movement recognition data in the next lip movement recognition. With respect to the range of α, α that fulfils 0<α<1 (MATH. 4) is selected. As α is larger, the lip movement recognition data converges to the acquired lip movement data more quickly.FIG.11(c)shows a case of α=0.5. The lip movement recognition data after renewal Xr(t)′ is also given similarly by the formula below. Xr(t)′=Xr(t)+α·(Xd(t)−Xr(t)) (MATH. 5) By the process described above, the lip movement recognition data after renewal Xr(t)′, Yr(t)′ are renewed to data closer to the actually acquired lip movement than those before renewal, and are used as the lip movement recognition data Xr(t), Yr(t) in the next lip movement recognition. By repeating it, the lip movement recognition data Xr(t), Yr(t) which match the lip movement of the operator more closely can be obtained, and the accuracy of the lip movement recognition can be improved. By the voice recognition operation determination process or the lip movement recognition operation determination process described above, which operation is to be executed can be determined. Next, the process for determining the object of operation (S203) will be described. InFIG.12, an example of a flowchart of the operation object determination process is shown. InFIG.12, first, a voice is captured from the microphone112through the audio processor113, and the image including at least the lip portion of the operator is captured from the imaging unit120through the image processing unit119(S1201). Next, a voice recognition category determination process S1202and a lip movement recognition category determination process S1203are executed. In a branch process S1204, whether the voice recognition has been successful in the voice recognition category determination process S1202is determined by the voice recognition flag. When the voice recognition has been successful (Yes), the category of the operation object is determined based on the result recognized in the voice recognition category determination process S1202(S1205).
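Returning to the renewal formulas MATH. 1 to MATH. 5, a minimal numpy sketch follows; the example time series values are illustrative, and the update directly implements the stated expression.

import numpy as np

def renew(recognition_data, acquired_data, alpha=0.5):
    assert 0.0 < alpha < 1.0  # MATH. 4: the range of the coefficient
    # Xr(t)' = Xr(t) + alpha * (Xd(t) - Xr(t)) (MATH. 1 / MATH. 5);
    # alpha = 1 would copy the acquired data (MATH. 2), and alpha = 0.5
    # averages the stored template and the new observation (MATH. 3).
    return recognition_data + alpha * (acquired_data - recognition_data)

Yr = np.array([2.0, 4.0, 2.0])  # stored template Yr(t)
Yd = np.array([2.4, 3.2, 1.6])  # acquired lip movement Yd(t)
print(renew(Yr, Yd))            # [2.2 3.6 1.8]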
Next, in a branch process S1206, whether the image has been captured without the lip portion departing from the imaging range and acquisition of the lip movement data has been successful is determined by the lip detection flag. When acquisition of the lip movement data has been successful (Yes), the lip movement recognition data of the memory area104bis renewed corresponding to the voice recognition result (S1207), and the process proceeds to the next process S1210. On the other hand, when it is determined that acquisition of the lip movement data has failed by the lip detection flag in the branch process S1206(No), the lip movement recognition data of the memory area104bis not renewed, and the process proceeds to the next process S1210. When it is determined that the voice recognition has failed by the voice recognition flag in the branch process S1204(No), whether recognition has been successful in the lip movement recognition category determination process S1203is determined by the lip movement recognition flag in a branch process S1208. When the lip movement recognition has been successful (Yes), the category of the operation object is determined based on the result recognized in the lip movement recognition category determination process S1203(S1209), and the process proceeds to the next process S1210. On the other hand, when it is determined that the lip movement recognition has failed by the lip movement recognition flag in the branch process S1208(No), the process returns again to the process for acquiring the voice and image (S1201). In S1210, a voice and an image including at least the lip portion of the operator are acquired again. A voice recognition operation object determination process S1211and a lip movement recognition operation object determination process S1212are executed based on the voice and image acquired. In a branch process S1213, whether recognition has been successful in the voice recognition operation object determination process S1211is determined by the voice recognition flag. When the voice recognition has been successful (Yes), the operation object is determined based on the result recognized in the voice recognition operation object determination process S1211(S1214). Next, in a branch process S1215, whether the image has been acquired without the lip portion departing from the imaging range and acquisition of the lip movement data has been successful is determined by the lip detection flag. When acquisition of the lip movement data has been successful (Yes), the lip movement recognition data of the memory area104bis renewed corresponding to the voice recognition result (S1216), and the process is finished. On the other hand, when it is determined that acquisition of the lip movement data has failed by the lip detection flag in the branch process S1215(No), the lip movement recognition data of the memory area104bis not renewed, and the process is finished. When it is determined that the voice recognition has failed by the voice recognition flag in the branch process S1213(No), whether the lip movement recognition has been successful in the lip movement recognition operation object determination process S1212is determined by the lip movement recognition flag in a branch process S1217. When the lip movement recognition has been successful (Yes), the operation object is determined based on the result recognized in the lip movement recognition operation object determination process S1212(S1218), and the process is finished.
On the other hand, when it is determined that the lip movement recognition has failed by the lip movement recognition flag in the branch process S1217(No), the process returns to the process for capturing the voice and image again (S1210). FIG.13is a flowchart showing an example of a lip movement recognition category determination process. In the flowchart ofFIG.13, first, the lip movement is detected from the image of the lip movement inputted, and the lip movement data is acquired (S1301). When the lip portion has departed from the imaging range, the lip cannot be detected, and acquisition of the lip movement data has failed in the lip movement data acquisition process S1301, it is determined to be No in the branch process S1302, the lip detection flag and the lip movement recognition flag are set to NG (S1303, S1309), and the process is finished. On the other hand, when acquisition of the lip movement data from the image of the lips inputted in the lip movement data acquisition process S1301has been successful, it is determined to be Yes in the branch process S1302, and the lip detection flag is set to OK (S1304). Next, the likelihood between this acquired lip movement data and the lip movement recognition data corresponding to the operation options stored in the memory area104bis calculated (S1305). InFIG.14, an example of a table of category options corresponding to operations is shown. The attribute of the metadata imparted to data such as music and photos is equivalent to the category. For example, to respective music data, data on attributes (categories) such as the name of the music piece, artist, and album are imparted as the metadata. In a branch process S1306, when the maximum probability (maximum likelihood) of the result of the likelihood calculation is equal to or greater than a predetermined value (here, the value is set to 0.6 as an example) (Yes), the lip movement recognition flag is set to OK (S1307), the operation category that gives the maximum probability is determined as the recognition result (S1308), and the process is finished. On the other hand, when it is determined to be No in the branch process S1306, the lip movement recognition flag is set to NG (S1309), and the process is finished. Next, the voice recognition category determination process S1202will be described. FIG.15is a flowchart showing an example of the voice recognition category determination process. InFIG.15, first, the voice inputted from the microphone112through the audio processor113is analyzed, and the time series pattern of the characteristic parameter of the input voice is extracted (S1501). Next, the likelihood for the voice recognition data corresponding to the category options, stored in the memory area104aas an acoustic model by HMM, is calculated (S1502). In a branch process S1503, when the maximum probability (maximum likelihood) of the result of the likelihood calculation is equal to or greater than a predetermined value (here, the value is set to 0.6 as an example) (Yes), the recognition flag is set to OK (S1504), the category option that gives the maximum probability is determined as the recognition result (S1505), and the process is finished. On the other hand, when it is determined to be No in the branch process S1503, it is determined that the voice recognition has failed due to noise and the like, the recognition flag is set to NG (S1506), and the process is finished.
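Both determination flowcharts share the same acceptance rule: take the maximum-likelihood option only if it meets the 0.6 example threshold, otherwise set the flag to NG. A small sketch of that rule (the dictionary keys are illustrative):

```python
def select_by_likelihood(likelihoods, threshold=0.6):
    """Pick the option with the maximum likelihood if it clears the
    threshold (branches S1306/S1503); otherwise report failure (NG).

    likelihoods: dict mapping option name -> likelihood in [0, 1].
    Returns ("OK", best option) or ("NG", None).
    """
    best = max(likelihoods, key=likelihoods.get)
    if likelihoods[best] >= threshold:
        return "OK", best
    return "NG", None

# Example: select_by_likelihood({"music piece": 0.7, "artist": 0.2})
# -> ("OK", "music piece")
```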
By the lip movement recognition category determination process or the voice recognition category determination process described above, to which category an operation object belongs can be determined. Next, a lip movement recognition operation object determination process and a voice recognition operation object determination process for determining an operation object that belongs to a determined category will be described. FIG.16is a flowchart showing an example of the lip movement recognition operation object determination process. In the flowchart ofFIG.16, first, movement of the lips is detected from the image of the lip movement inputted, and the lip movement data is acquired (S1601). When the lip portion has departed from the imaging range, the lip cannot be detected, and acquisition of the lip movement data has failed in the lip movement data acquisition process S1601, it is determined to be No in a branch process S1602, the lip detection flag and the lip movement recognition flag are set to NG (S1603, S1611), and the process is finished. On the other hand, when the lip movement data has been successfully acquired from the image of the lip portion inputted in the lip movement data acquisition process S1601, it is determined to be Yes in the branch process S1602, and the lip detection flag is set to OK (S1604). To the data such as music and photos stored in the storage105, metadata on attributes such as the title, artist, and filming date have been imparted. In S1605, the likelihood between the acquired lip movement data and the lip movement recognition data corresponding to the description of the attribute portion of the selected category (for example, the lip movement recognition data corresponding to the titles of music pieces recorded as the metadata of each music data when the name of a music piece is selected as the category) is calculated. In a branch process S1606, when the maximum probability (maximum likelihood) of the result of the likelihood calculation is equal to or greater than a predetermined value (here, the value is set to 0.6 as an example) (Yes), whether there are plural candidates, namely whether there are plural data whose likelihood is equal to or greater than the predetermined value, is determined, and a branch process is executed (S1607). When there is only one candidate (Yes), the lip movement recognition flag is set to OK (S1608), the operation object that gives the maximum probability is determined as the recognition result (S1609), and the process is finished. On the other hand, when it is determined that there are plural candidates in the branch process S1607(No), an operation object selection process (S1610) is executed, and the process is finished. On the other hand, when it is determined to be No in the branch process S1606, the lip movement recognition flag is set to NG (S1611), and the process is finished. The operation object selection process S1610will be described using the flowchart ofFIG.17. InFIG.17, first, plural candidates are displayed on the display118of the touch panel116(S1701). An example of the display is shown inFIG.18. Here, the example shows a case in which there are three candidate music pieces. Also, the lip movement recognition data and the acquired lip movement data corresponding to them are shown inFIGS.19(a)-19(c). In this case, portions where the lip movement recognition data Xr(t), Yr(t) are almost the same as each other are included, and the operation object cannot be determined by the lip movement alone.
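Branch S1607 hinges on how many candidates clear the threshold, not only on the single best one. A sketch of that test, with made-up likelihood values:

```python
def candidates_above_threshold(likelihoods, threshold=0.6):
    """Return all candidates whose likelihood clears the threshold
    (branch S1607). More than one hit triggers the operation object
    selection process S1610."""
    return [name for name, p in likelihoods.items() if p >= threshold]

hits = candidates_above_threshold(
    {"Song A": 0.82, "Song B": 0.80, "Song C": 0.78, "Song D": 0.10})
if len(hits) == 1:
    result = hits[0]   # S1608/S1609: unique recognition result
else:
    pass               # S1610: disambiguate among `hits` (see below)
```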
Therefore, characters with different lip shapes are added to the names of the music pieces for selection and displayed ("a", "i", and "u" here). Next, the image including at least the lip portion of the operator is captured from the imaging unit120through the image processing unit119(S1702). First, the lip movement is detected from the image of the lip movement inputted, and the lip movement data is acquired (S1703). When the lip portion has departed from the imaging range, the lip cannot be detected, and acquisition of the lip movement data has failed in the lip movement data acquisition process S1703, it is determined to be No in a branch process S1704, the lip detection flag and the lip movement recognition flag are set to NG (S1705, S1711), and the process is finished. On the other hand, when the lip movement data has been successfully acquired from the image inputted in the lip movement data acquisition process S1703, it is determined to be Yes in the branch process S1704, and the lip detection flag is set to OK (S1706). Next, the likelihood between the lip movement data corresponding to this inputted image and the lip movement recognition data corresponding to the characters added for selection ("a", "i", and "u" in the example ofFIG.18) is calculated (S1707). In a branch process S1708, when the maximum probability (maximum likelihood) of the result of the likelihood calculation is equal to or greater than a predetermined value (here, the value is set to 0.6 as an example) (Yes), the lip movement recognition flag is set to OK (S1709), the option that gives the maximum probability is determined as the operation object (S1710), and the process is finished. On the other hand, when it is determined to be No in the branch process S1708, the lip movement recognition flag is set to NG (S1711), and the process is finished. As described above, even when there are plural candidates whose lip movements are generally the same as each other, the operation object can be determined by adding characters or a character string with a different lip shape. FIG.20is a flowchart showing an example of a voice recognition operation object determination process S1211. InFIG.20, first, the voice inputted from the microphone112through the audio processor113is analyzed, and the time series pattern of the characteristic parameter of the input voice is extracted (S2001). With respect to the description of the attribute portion of the data such as music and photos (for example, when the name of the music piece is selected as the category, the title of the music piece stored as the metadata of respective music data), the likelihood for the voice recognition data stored as the acoustic model in the memory area104ais calculated (S2002). In a branch process S2003, when the maximum probability (maximum likelihood) of the result of the likelihood calculation is equal to or greater than a predetermined value (here, the value is set to 0.6 as an example) (Yes), the voice recognition flag is set to OK (S2004), the operation option that gives the maximum probability is determined as the recognition result (S2005), and the process is finished. On the other hand, when it is determined to be No in the branch process S2003, the voice recognition flag is set to NG, and the process is finished. Another embodiment of the lip movement recognition operation determination process S405is shown inFIG.21. In the present embodiment, the lip shape is made to correspond to the vowel, and the lip movement is recognized as a sequence of vowels.
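The selection process S1610 can be sketched as follows, assuming a hypothetical match_lip_shape(observed, char) function that returns a likelihood for one selection character; the mapping of candidates to "a"/"i"/"u" follows the example of FIG. 18:

```python
SELECTION_CHARS = ["a", "i", "u"]   # characters with distinct lip shapes

def select_operation_object(candidates, observed_lip_data, match_lip_shape):
    """Disambiguate candidates whose lip movements are nearly identical:
    each candidate is labeled with a selection character, the operator
    mouths one character, and the closest lip shape wins (S1707-S1710)."""
    labels = dict(zip(SELECTION_CHARS, candidates))
    scores = {ch: match_lip_shape(observed_lip_data, ch) for ch in labels}
    best = max(scores, key=scores.get)
    if scores[best] >= 0.6:          # branch S1708
        return labels[best]          # S1710: determined operation object
    return None                      # S1711: recognition NG
```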
InFIG.21, first, the syllable number N is determined from the image of the lip movement inputted (S2101). Next, to which vowel the lip shape corresponding to each syllable corresponds is determined by a syllable and vowel sequence conversion process, and the lip movement is converted to a vowel sequence corresponding to the N syllables (S2102). The likelihood between this vowel sequence corresponding to the inputted image and the lip movement recognition data expressed by the vowel sequences corresponding to the operation options stored in the memory area104bis calculated (S2103). In a branch process S2104, when the maximum probability (maximum likelihood) of the result of the likelihood calculation is equal to or greater than a predetermined value (here, the value is set to 0.6 as an example) (Yes), the recognition flag is set to OK (S2105), the operation option that gives the maximum probability is determined as the recognition result (S2106), and the process is finished. On the other hand, when it is determined to be No in the branch process S2104, the recognition flag is set to NG (S2107), and the process is finished. An example of the table of vowel sequences corresponding to the operation options stored beforehand in the memory area104bis shown inFIG.22. For example, the vowel sequence corresponding to "ongakusaisei (music reproduction)" becomes "o-a-u-a-i-e-i". The likelihood between each vowel sequence corresponding to an operation option and the vowel sequence corresponding to the inputted image is calculated, and the operation option with the largest likelihood is determined as the recognition result. By differentiating the vowel sequences of the character strings of the operation options, the operation options and the vowel sequences can be placed in one-to-one correspondence, and the operation option can be determined from the vowel sequence. When recognition is performed by the vowel sequence, only the vowel sequence for each operation option is stored; therefore, the temporal change of the lip sizes X and Y shown inFIGS.10(a) and10(b)is not required to be stored as the lip movement recognition data for the operation option, and the usage of the memory area104bcan be reduced. InFIG.23, an example of a flowchart of the syllable and vowel sequence conversion process (S2102) is shown. InFIG.23, first, a loop process is started in which a parameter I, designating the syllable to be compared with the vowel lip shapes, runs from 1 to the syllable number N (S2301), and a process of S2302is repeated up to a loop finishing process of S2303. In S2302, the lip shape corresponding to the Ith syllable of the inputted image and the lip shapes corresponding to the vowels in the vowel recognition data stored in the memory area104bare compared to each other, and the vowel corresponding to the Ith syllable is determined. By the processes described above, the N syllables corresponding to the inputted image are converted to a vowel sequence. InFIG.24, an example of the lip shapes corresponding to the vowels is shown. Here, the lip shapes that correspond to the vowels "a", "i", "u", "e", and "o" of the Japanese language are shown. For example, as shown in the table ofFIG.25, the sizes of the vertical width X and the lateral width Y of the lips are expressed in three steps and related to each vowel. Thereby, the vertical width X and the lateral width Y of the lip shape of the inputted image can be obtained, and the corresponding vowel can be determined according to the table ofFIG.25.
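A sketch of the conversion S2102 under the FIG. 25 scheme: each syllable's lip shape, coded as (X, Y) width steps, is looked up in a vowel table. The step values in the table below are illustrative placeholders, since the source only states that each width is coded in three steps:

```python
# Assumed (vertical X step, lateral Y step) -> vowel coding; values are
# placeholders standing in for the actual table of FIG. 25.
VOWEL_TABLE = {
    (3, 2): "a",   # mouth wide open
    (1, 3): "i",   # narrow and spread
    (1, 1): "u",   # narrow and rounded
    (2, 2): "e",
    (2, 1): "o",
}

def to_vowel_sequence(syllable_shapes):
    """Convert per-syllable lip shapes (X, Y width steps) to vowels,
    as in the loop S2301-S2303; e.g. "ongakusaisei" -> o-a-u-a-i-e-i."""
    return [VOWEL_TABLE.get(shape, "?") for shape in syllable_shapes]
```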
The recognition method by the vowel sequence described above may be applied not only to the lip movement recognition operation determination process S405but also to the lip movement recognition category determination process S1203and the lip movement recognition operation object determination process S1212. FIG.26is a flowchart showing another embodiment of the process for capturing a voice and an image including the lip portion. InFIG.26, the point different from the voice/image capturing process ofFIG.5is that an option display process S510is added. InFIGS.27(a)-27(b), an example of the display in the process for capturing the voice and the image including the lip portion is shown.FIG.27(a)is the display of an operation option in a home state in the operation determination process, andFIG.27(b)is the display of a category option in reproducing music in the operation object determination process. Characters or a character string with different vowels are added for selection and displayed, and an operation selection process is executed by voice recognition or lip movement recognition with respect to the added characters or character string portion. Thereby, because recognition can be executed on a short character or character string with distinct vowels, recognition can be executed easily and reliably. As described above, by displaying the options on the display118, the operation option or the category option is not required to be remembered each time, and can be reliably selected. However, whether the options are to be displayed may be made settable so that, for example, the options are not displayed when the operator is accustomed to operating the portable terminal. Embodiment 2 FIG.28is a block diagram showing a configuration example of the second embodiment of the portable terminal device100. A same reference sign is given to a portion having the same function as in the configuration example ofFIG.1, and description thereof will be omitted. In comparison with the configuration example ofFIG.1, the present embodiment is provided with an operator recognition unit122and operator recognition data, the latter being stored in a predetermined area104dof the memory104, and copes with a case in which plural operators use the portable terminal device100. FIG.29is a flowchart showing an example of an operation determination process S201according to Embodiment 2, and a same reference sign is given to a process that is the same as in the flowchart ofFIG.4. InFIG.29, first, a voice is captured from the microphone112through the audio processor113, and an image including at least the lip portion of the operator is captured from the imaging unit120through the image processing unit119(S401). Next, the operator N who operates is recognized by the operator recognition unit122based on the voice and/or image captured in the voice/image capturing process S401and the operator recognition data stored in the memory area104d(S420). As the operator recognition data, for example, voice recognition data or face recognition data registered beforehand for logging in to the portable terminal device100can be used. After recognizing the operator, a voice recognition operation determination process S402is executed by the voice recognition unit102, and a lip movement recognition operation determination process S403is executed by the lip movement recognition unit103.
In a branch process S404, whether the voice recognition has been successful in the voice recognition operation determination process S402is determined by the voice recognition flag. When the voice recognition has been successful (Yes), which operation is to be executed is determined (S405) based on the result recognized in the voice recognition operation determination process S402. Next, in a branch process S406, whether the image has been captured without the lip portion departing from the imaging range and the lip movement data has been successfully acquired is determined by the lip detection flag. When the lip movement data has been successfully acquired (Yes), the lip movement recognition data corresponding to the operator N in the memory area104bare renewed corresponding to the voice recognition result (S421), the manner mode is released (S408), and the process is finished. In operations after the manner mode release, operation guidance by voice from the speaker114(or from the earphone200through the external voice output unit115when the earphone200has been connected), incoming call guidance by sound, and the like are executed. On the other hand, when it is determined that acquisition of the lip movement data has failed by the lip detection flag in the branch process S406(No), the lip movement recognition data of the memory area104bare not renewed, the manner mode is released (S408), and the process is finished. When it is determined by the voice recognition flag that the voice recognition has failed in the branch process S404(No), whether the recognition has been successful in the lip movement recognition operation determination process S403is determined by the lip movement recognition flag in a branch process S409. When the lip movement recognition has been successful (Yes), which operation is to be executed is determined based on the result recognized in the lip movement recognition operation determination process S403(S410), the manner mode is set (S411), and the process is finished. In the manner mode, the output from the speaker114is turned off, and operation guidance, incoming call guidance, and the like are executed by the screen display without sound. On the other hand, when it is determined that the lip movement recognition has failed by the lip movement recognition flag in the branch process S409(No), the process returns again to the process for acquiring the voice and image (S401). By the processes described above, the lip movement recognition data are renewed for each operator, and lip movement recognition data that deal with individual differences in lip movement can be obtained. Therefore, because the lip movement recognition is executed using the lip movement recognition data renewed corresponding to the operator in the lip movement recognition operation determination process S403, even when plural persons use the portable terminal device, the accuracy of lip movement recognition can be improved. Further, the lip movement recognition data renewal process corresponding to the operator may be applied not only to the operation determination process S201but also to the operation object determination process S203in a similar manner. Embodiment 3 FIG.30is a block diagram showing a configuration example of the third embodiment of the portable terminal device100. A same reference sign is given to a portion having the same function as in the configuration example ofFIG.28, and description thereof will be omitted.
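A sketch of the per-operator renewal (S420/S421), assuming operators are identified from pre-registered login voice or face data and that traces are fixed-length arrays; the class and field names are illustrative:

```python
import numpy as np

class LipRecognitionStore:
    """Per-operator lip movement recognition data (memory area 104b),
    keyed by (operator, option) so each operator's individual lip
    movement differences are learned separately."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.data = {}   # (operator, option) -> reference trace

    def renew(self, operator, option, acquired):
        key = (operator, option)
        trace = np.asarray(acquired, dtype=float)
        ref = self.data.get(key)
        if ref is None:
            self.data[key] = trace                       # first sample
        else:
            self.data[key] = ref + self.alpha * (trace - ref)  # MATH. 1
```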
Also,FIG.31is a block diagram showing a schematic configuration of a voice/lip movement recognition information processing system including the portable terminal device100of the present embodiment and a voice/lip movement recognition server700. In comparison with the configuration example ofFIG.28, in the portable terminal device100of the present embodiment, the voice recognition unit102, the lip movement recognition unit103, and the voice recognition data area and the lip movement recognition data area of the memory104are not arranged, and portions corresponding to them are arranged in the voice/lip movement recognition server700. InFIG.31, the voice/lip movement recognition server700includes a controller701, a voice recognition unit702, a lip movement recognition unit703, a memory704, a storage705, and a communication unit706, each of which is connected to a bus710. The communication unit706is an interface for connecting to the external network600, and is connected to the portable terminal device100through the base station400or the wireless router500. The controller701is formed of a CPU and the like, and controls the respective constituent units and executes various processes by executing programs stored in the memory704. The voice recognition unit702recognizes the voice data of the operator of the portable terminal device100obtained through the communication unit706, and converts it to a character string corresponding to the voice data. Also, the lip movement recognition unit703recognizes the lip movement from the image data of the operator of the portable terminal device100obtained through the communication unit706, and converts it to a character string corresponding to the image data. The controller701transmits the result recognized from the voice of the operator or the result recognized from the lip movement of the operator to the portable terminal device100through the communication unit706. The memory704is a flash memory or the like, and stores programs, data, and the like. The storage705is an SSD (Solid State Drive) or a hard disk, and the data used for recognition in the voice recognition unit702and the lip movement recognition unit703described above are stored in predetermined areas705aand705bof the storage705. FIG.32is a flowchart showing an example of the process of the controller101and the controller701in the information processing system that includes the portable terminal device100and the voice/lip movement recognition server700ofFIG.31. InFIG.32, first, in the portable terminal device100, the voice is captured from the microphone112through the audio processor113, and the image including at least the lip portion of the operator is captured from the imaging unit120through the image processing unit119(S3201). The operator N who operates is recognized by the operator recognition unit122based on the voice and/or image captured in the voice/image capturing process S3201and the operator recognition data stored in the memory area104d(S3202). Next, the data of the voice and image captured are transmitted to the voice/lip movement recognition server700through the base station communication unit110or the wireless communication unit111(S3203). In the voice/lip movement recognition server700, a voice and lip movement recognition process S3204is executed based on the data of the voice and image received, and the recognition result is transmitted to the portable terminal device100through the communication unit706(S3205).
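The terminal-server exchange S3203-S3205 could look like the following sketch; the JSON-over-HTTP transport and the response keys are assumptions, since the source does not specify the protocol:

```python
import json
import urllib.request

def recognize_remotely(server_url, voice_bytes, image_bytes):
    """Send captured voice and lip image data to the recognition server
    and return its result (a sketch of S3203-S3205)."""
    payload = json.dumps({
        "voice": voice_bytes.hex(),
        "image": image_bytes.hex(),
    }).encode()
    req = urllib.request.Request(
        server_url, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        # Assumed keys: voice_flag, lip_flag, recognized_text.
        return json.load(resp)
```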
In S3206, a branch process is executed based on the voice recognition flag and the lip movement recognition flag of the recognition result transmitted from the voice/lip movement recognition server700, and, when both the voice recognition and the lip movement recognition have failed (No), the process returns to the voice/image capturing process S3201. When the voice recognition or the lip movement recognition has been successful (Yes), an operation determination process S3207is executed; which operation is to be executed is determined based on the likelihood between the option data on the operations shown inFIG.3stored in the memory104cand the recognition result, and, when there exists an operation option corresponding to the recognition result, a determination success/failure flag is set to OK. In a branch process S3208, a branch process is executed by the determination success/failure flag, and, when an operation option corresponding to the recognition result does not exist (No), the process returns to the voice/image capturing process S3201. When there exists an operation option corresponding to the recognition result (Yes), a branch process is executed based on the voice recognition flag in a branch process S3209. The manner mode is released (S3210) when the voice recognition has been successful (Yes), and the manner mode is set (S3211) when the voice recognition has failed (No). Next, a branch process is executed according to whether selection of the object of the operation is needed or not (S3212). For example, when "music reproduction" is to be executed as the operation, selection of the object (a music piece or the like) of the operation (music reproduction or the like) becomes necessary, such as which music piece is to be reproduced. Also, when "stop" is to be executed as the operation during music reproduction, selection of the object of the operation (stop or the like) is not necessary. When selection of an operation object is not needed (No), the determined operation is executed (S3228). When it is determined in the branch process S3212that selection of an operation object is needed (Yes), a voice/image capturing process S3213is executed, and the data of the voice and image captured are transmitted to the voice/lip movement recognition server700(S3214). In the voice/lip movement recognition server700, a voice and lip movement recognition process S3215is executed based on the data of the voice and image received, and the recognition result is transmitted to the portable terminal device100(S3216). In S3217, a branch process is executed based on the voice recognition flag and the lip movement recognition flag of the recognition result transmitted from the voice/lip movement recognition server700, and, when both the voice recognition and the lip movement recognition have failed (No), the process returns to the voice/image capturing process S3213. When the voice recognition or the lip movement recognition has been successful (Yes), an operation category determination process S3218is executed; the category of the operation object is determined based on the likelihood between the category option data corresponding to the operation as shown inFIG.14stored in the memory104cand the recognition result, and, when there exists a category option corresponding to the recognition result, the determination success/failure flag is set to OK.
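The branches S3209-S3212 reduce to a small decision: voice success releases the manner mode, lip-movement-only success sets it, and the next step depends on whether the operation needs an object. A schematic sketch:

```python
def after_operation_determined(voice_ok, needs_object_selection):
    """Branches S3209-S3212: voice success releases the manner mode
    (S3210); lip-movement-only success sets it (S3211), since the
    operator is presumably somewhere speech is inappropriate."""
    manner_mode = not voice_ok
    if not needs_object_selection:
        return manner_mode, "execute"        # S3228: run the operation
    return manner_mode, "select_object"      # proceed to S3213
```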
In a branch process S3219, a branch process is executed by the determination success/failure flag, and, when the category option corresponding to the recognition result does not exist (No), the process returns to the voice/image capturing process S3213. When there exists the category option corresponding to the recognition result (Yes), a voice/image capturing process S3220is executed, and the data of the voice and image captured are transmitted to the voice/lip movement recognition server700(S3221). In the voice/lip movement recognition server700, a voice and lip movement recognition process S3222is executed based on the data of the voice and image received, and the recognition result is transmitted to the portable terminal device100(S3223). In S3224, a branch process is executed based on the voice recognition flag and the lip movement recognition flag of the recognition result transmitted from the voice/lip movement recognition server700, and, when both the voice recognition and the lip movement recognition have failed (No), the process returns to the voice/image capturing process S3220. When the recognition has been successful (Yes), an operation object determination process S3225is executed. In a memory area104eof the portable terminal device100, history data of each operator are stored, including the history of the words and the like that were retrieval objects when Internet retrieval was performed by voice recognition. Also, to the data such as music and photos stored in the storage105of the portable terminal device100, metadata on attributes such as the title, artist, and filming date have been imparted. The operation object is determined based on the likelihood between the recognition result and the descriptions in the history data stored in the memory area104eand in the attribute portion of the category determined by the operation category determination process S3218(for example, the description corresponding to the title of the music piece recorded as the metadata of each music data when the name of the music piece has been selected as the category), and, when there exists an operation object corresponding to the recognition result, the determination success/failure flag is set to OK. In a branch process S3226, a branch process is executed by the determination success/failure flag, and, when an operation object corresponding to the recognition result does not exist (No), the process returns to the voice/image capturing process S3220. When there exists an operation object corresponding to the recognition result (Yes), the determined operation object is added to or renewed in the history data corresponding to the operator N stored in the memory area104e(S3227), and the operation is executed for the determined operation object (S3228). An example of the flowchart of the voice/lip movement recognition processes S3204, S3215, and S3222is shown inFIG.33. InFIG.33, first, a voice recognition process S3301is executed by the voice recognition unit702based on the voice data of the operator and the image data including at least the lip portion acquired through the communication unit706, and a lip movement recognition process S3302is executed by the lip movement recognition unit703. In a branch process S3303, whether the voice recognition has been successful in the voice recognition process S3301is determined by the voice recognition flag. When the voice recognition has failed (No), the process is finished.
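The object determination S3225 matches the recognition result against both the category metadata and the operator's history, then renews the history on success (S3227). A sketch with a hypothetical likelihood function `score`:

```python
def determine_operation_object(recognized, metadata_values, history,
                               score, threshold=0.6):
    """A sketch of S3225/S3227: pick the best-matching object among
    category metadata (e.g. music titles) and the operator's history
    (a list), and append it to the history on success."""
    candidates = set(metadata_values) | set(history)
    best = max(candidates, key=lambda c: score(recognized, c))
    if score(recognized, best) >= threshold:
        history.append(best)     # S3227: add/renew the history data
        return "OK", best
    return "NG", None
```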
When the voice recognition has been successful (Yes), whether the image has been captured without the lip portion departing from the imaging range and acquisition of the lip movement data has been successful is determined by the lip detection flag in a branch process S3304. When acquisition of the lip movement data has been successful (Yes), a branch process is executed according to whether or not there is lip recognition data corresponding to the character string obtained by the voice recognition (S3305). When there is lip recognition data corresponding to the character string obtained by the voice recognition (Yes), the lip recognition data of the storage area705bcorresponding to the character string are renewed (S3306); when the lip recognition data do not exist (No), the lip recognition data corresponding to the character string obtained by the voice recognition are added to the storage area705b(S3307), and the process is finished. On the other hand, when it is determined in the branch process S3304that acquisition of the lip movement data has failed (No), the lip movement recognition data are not renewed, and the process is finished. By the processes described above, when the voice recognition has been successful and acquisition of the lip movement data has been successful, renewal and addition of the lip movement recognition data corresponding to the voice recognition result are executed. An example of the flowchart of the voice recognition process S3301is shown inFIG.34. InFIG.34, first, the voice analysis is executed, and the time series pattern of the characteristic parameter of the input voice is extracted (S3401). Next, the likelihood for the voice recognition data stored in the predetermined area705aof the storage as an acoustic model by HMM is calculated (S3402). In a branch process S3403, when the maximum probability (maximum likelihood) of the result of the likelihood calculation is equal to or greater than a predetermined value (here, the value is set to 0.6 as an example) (Yes), the voice recognition flag is set to OK (S3404), the voice recognition data that gives the maximum probability is taken as the recognition result, and the process is finished. On the other hand, when it is determined to be No in the branch process S3403, it is determined that the voice recognition has failed due to noise and the like, the voice recognition flag is set to NG (S3405), and the process is finished. Next, the lip movement recognition process S3302will be described using the example ofFIG.35. In the flowchart ofFIG.35, first, the lip movement is detected from the image inputted, and the lip movement data is acquired (S3501). When the lip portion has departed from the imaging range, the lip portion cannot be detected from the image inputted, and acquisition of the lip movement data has failed in the lip movement data acquisition process S3501, it is determined to be No in a branch process S3502, the lip detection flag and the lip movement recognition flag are set to NG (S3503, S3508), and the process is finished. On the other hand, when acquisition of the lip movement data from the image inputted has been successful, it is determined to be Yes in the branch process S3502, and the lip detection flag is set to OK (S3504). Next, the likelihood between the acquired lip movement data and the lip movement recognition data stored in the predetermined area705bof the storage is calculated (S3505).
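The branch S3305-S3307 amounts to an add-or-renew on a dictionary keyed by the recognized character string, reusing the MATH. 1 renewal. A minimal sketch:

```python
import numpy as np

def renew_or_add(store, text, lip_trace, alpha=0.5):
    """store maps recognized text -> lip recognition data (a sketch of
    storage area 705b). Renew if present (S3306), add if not (S3307)."""
    trace = np.asarray(lip_trace, dtype=float)
    if text in store:
        store[text] = store[text] + alpha * (trace - store[text])
    else:
        store[text] = trace
```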
In a branch process S3506, when the maximum probability (maximum likelihood) of the result of the likelihood calculation is equal to or greater than a predetermined value (here, the value is set to 0.6 as an example) (Yes), the lip movement recognition flag is set to OK (S3507), the lip movement recognition data that gives the maximum probability is taken as the recognition result, and the process is finished. On the other hand, when it is determined to be No in the branch process S3506, the lip movement recognition flag is set to NG (S3508), and the process is finished. Although the embodiment described above is configured so that the voice and image data captured in the portable terminal device100are transmitted to the voice/lip movement recognition server700, it may also be configured so that the voice analysis for extracting the time series pattern of the characteristic parameter of the input voice, and the detection of the lip movement from the inputted image to acquire the lip movement data, are executed in the portable terminal device100, and only the results are transmitted to the voice/lip movement recognition server700. Thereby, the data amount transmitted from the portable terminal device100to the voice/lip movement recognition server700can be reduced, and the processing time can be reduced. In the embodiments described above, by executing the voice recognition and the lip movement recognition on the voice/lip movement recognition server700, the lip movement recognition data are renewed based on the voice and lip movement data of a large number of operators, and therefore the accuracy of the lip movement recognition can be further improved. Also, by arranging the history data for each operator, adding the words and the like newly used in the voice recognition to the history data, and utilizing the history data in the lip movement recognition, lip movement recognition of the words with high use frequency for each operator of the portable terminal device becomes possible. Also, the embodiments described above were described in detail in order to facilitate easy understanding of the present invention, and the present invention is not necessarily limited to those including all configurations described. For example, although the voice recognition data, lip movement recognition data, and option data were stored in the memory104in the first and second embodiments, they may be stored in the storage105. Further, a part of the configuration of an embodiment can be replaced by a configuration of another embodiment, and a configuration of another embodiment can be added to the configuration of an embodiment. Furthermore, for a part of the configuration of each embodiment, addition, deletion, or replacement of other configurations is possible. Also, a part or all of each configuration, function, processor, processing means, and the like described above may be achieved by hardware, for example by designing an integrated circuit. Further, each configuration, function, and the like described above may be achieved by software, in which a processor interprets and executes a program that achieves those functions. Information such as programs, tables, and files achieving each function can be placed in the memory104and the storage105.
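One possible shape of the bandwidth-reducing variant: extract compact features on the terminal and transmit only those. The per-frame energy used here is a stand-in for the unspecified characteristic parameters:

```python
import numpy as np

def extract_voice_features(samples, frame=256):
    """Reduce raw voice samples to a per-frame energy time series (a
    placeholder for richer characteristic parameters). Transmitting
    these features and the lip width traces X(t), Y(t), instead of raw
    audio and image data, shrinks the payload sent to the server."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples) // frame * frame
    frames = samples[:n].reshape(-1, frame)
    return (frames ** 2).mean(axis=1)
```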
Also, with respect to the control lines and information lines, those considered to be necessary for explanation have been shown, and not all control lines and information lines applicable to products have necessarily been shown. In practice, almost all configurations may be considered to be connected to each other. REFERENCE SIGNS LIST 100: portable terminal device,101: controller,102: voice recognition unit,103: lip movement recognition unit,104: memory,105: storage,110: base station communication unit,111: wireless communication unit,112: microphone,113: audio processor,114: speaker,115: external voice output unit,116: touch panel,117: operation input unit,118: display,119: image processing unit,120: imaging unit,122: operator recognition unit,400: base station,500: wireless router,600: external network,700: voice/lip movement recognition server,701: controller,702: voice recognition unit,703: lip movement recognition unit,705: storage,706: communication unit | 63,098
11861265 | DETAILED DESCRIPTION The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments. FIGS.1A and1Bdepict exemplary system100for providing audio information to a user, according to various embodiments. In some embodiments, as illustrated inFIG.1A, system100includes device100a. Device100aincludes various components, such as processor(s)102, RF circuitry(ies)104, memory(ies)106, image sensor(s)108, orientation sensor(s)110, microphone(s)112, location sensor(s)116, speaker(s)118, display(s)120, and touch-sensitive surface(s)122. These components optionally communicate over communication bus(es)150of device100a. In some embodiments, elements of system100are implemented in a base station device (e.g., a computing device, such as a remote server, mobile device, or laptop) and other elements of the system100are implemented in an auxiliary device (such as an audio playback device, television, monitor, or head-mounted display (HMD) device), where the auxiliary device is in communication with the base station device. In some embodiments, device100ais implemented in a base station device or an auxiliary device. As illustrated inFIG.1B, in some embodiments, system100includes two (or more) devices in communication, such as through a wired connection or a wireless connection. First device100b(e.g., a base station device) includes processor(s)102, RF circuitry(ies)104, memory(ies)106. These components optionally communicate over communication bus(es)150of device100b. Second device100c(e.g., an auxiliary device) includes various components, such as processor(s)102, RF circuitry(ies)104, memory(ies)106, image sensor(s)108, orientation sensor(s)110, microphone(s)112, location sensor(s)116, speaker(s)118, display(s)120, and touch-sensitive surface(s)122. These components optionally communicate over communication bus(es)150of device100c. System100includes processor(s)102and memory(ies)106. Processor(s)102include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some embodiments, memory(ies)106are one or more non-transitory computer-readable storage mediums (e.g., flash memory, random access memory) that store computer-readable instructions configured to be executed by processor(s)102to perform the techniques described below. System100includes RF circuitry(ies)104. RF circuitry(ies)104optionally include circuitry for communicating with electronic devices, networks, such as the Internet, intranets, and/or a wireless network, such as cellular networks and wireless local area networks (LANs). RF circuitry(ies)104optionally includes circuitry for communicating using near-field communication and/or short-range communication, such as Bluetooth®. System100includes display(s)120. In some embodiments, display(s)120include a first display (e.g., a left eye display panel) and a second display (e.g., a right eye display panel), each display for displaying images to a respective eye of the user. Corresponding images are simultaneously displayed on the first display and the second display. 
Optionally, the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides a user with the illusion of depth of the objects on the displays. In some embodiments, display(s)120include a single display. Corresponding images are simultaneously displayed on a first area and a second area of the single display for each eye of the user. Optionally, the corresponding images include the same virtual objects and/or representations of the same physical objects from different viewpoints, resulting in a parallax effect that provides a user with the illusion of depth of the objects on the single display. In some embodiments, system100includes touch-sensitive surface(s)122for receiving user inputs, such as tap inputs and swipe inputs. In some embodiments, display(s)120and touch-sensitive surface(s)122form touch-sensitive display(s). System100includes image sensor(s)108. Image sensor(s)108optionally include one or more visible light image sensors, such as charge-coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real environment. Image sensor(s)108also optionally include one or more infrared (IR) sensor(s), such as a passive IR sensor or an active IR sensor, for detecting infrared light from the real environment. For example, an active IR sensor includes an IR emitter, such as an IR dot emitter, for emitting infrared light into the real environment. Image sensor(s)108also optionally include one or more event camera(s) configured to capture movement of physical objects in the real environment. Image sensor(s)108also optionally include one or more depth sensor(s) configured to detect the distance of physical objects from system100. In some embodiments, system100uses CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around system100. In some embodiments, image sensor(s)108include a first image sensor and a second image sensor. The first image sensor and the second image sensor are optionally configured to capture images of physical objects in an environment from two distinct perspectives. In some embodiments, system100uses image sensor(s)108to receive user inputs, such as hand gestures. In some embodiments, system100uses image sensor(s)108to detect the position and orientation of system100and/or display(s)120in the environment. For example, system100uses image sensor(s)108to track the position and orientation of one or more objects in the environment. In some embodiments, system100includes microphone(s)112. System100uses microphone(s)112to detect sound from the user and/or the environment of the user. In some embodiments, microphone(s)112includes an array of microphones (including a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space of the environment. System100includes orientation sensor(s)110for detecting orientation and/or movement of system100and/or display(s)120. For example, system100uses orientation sensor(s)110to track changes in the position and/or orientation of system100and/or display(s)120, such as with respect to physical objects in the environment. Orientation sensor(s)110optionally include one or more gyroscopes and/or one or more accelerometers. In some embodiments, system100implements a digital assistant.
The digital assistant interprets natural language input in spoken and/or textual form and determines one or more instructions based on the input. The digital assistant then performs actions based on the instructions. In some embodiments, the actions include providing audio information and/or performing tasks responsive to the instructions. The term “digital assistant” can refer to any information processing system capable of interpreting natural language input and performing actions responsive to the input. Typically, the natural language input requests either an informational answer or performance of a task by the digital assistant. The digital assistant responds to the input by providing requested information in an audio format and/or by performing the requested task. For example, when a user asks the digital assistant “What is the weather forecast for tomorrow?”, the digital assistant may respond with the audio answer of “Tomorrow is forecast to be sunny, with a high of 75 degrees and a low of 60 degrees”. As another example, when a user requests “Set an alarm for 6:00 am tomorrow”, the digital assistant performs the task of setting a respective alarm and provides an audio confirmation of “An alarm has been set for 6:00 am tomorrow”. In some embodiments, visual information is provided in addition to or instead of audio information (e.g., text, video, animations, etc.). Furthermore, in some embodiments, the provided information includes media content (e.g., music or video content) and the digital assistant controls playback of the media content (e.g., starting and stopping the music or video content). In some cases, it would be advantageous to interrupt the provision of audio information by the digital assistant. For example, if a user begins speaking to another person while the digital assistant is providing audio information, then the user may not hear the information being provided by the digital assistant. In this case, system100stops providing the audio information until the conversation between the user and the other person has concluded. In this way, system100provides audio information with the digital assistant in a more polite manner. Furthermore, in some embodiments, before providing audio information (or resuming the provision of stopped audio information), system100detects visual characteristics that indicate it is appropriate for the audio information to be provided by the digital assistant. For example, when a user provides a request but stops speaking to think (e.g., “Schedule a meeting for Monday at 9:00 am with Tom and also . . . ”), system100detects that additional speech is expected and waits to provide audio information. FIG.2depicts an example of electronic device200providing audio information202in an environment210, according to various embodiments. In some embodiments, electronic device200is an embodiment of system100, as described in reference toFIGS.1A-1B. Audio information202is provided using speaker(s)218in response to a received input. In some embodiments, the received input is a natural language input in spoken or textual form that includes one or more instructions for a digital assistant implemented by electronic device200. Electronic device200determines the one or more instructions based on the received input and provides audio information202based on the one or more instructions. In some embodiments, the received input includes a triggering command (e.g., “Hello Computer”) that identifies the input as instructions for the digital assistant. 
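As a toy illustration of this request-response loop (the intents and replies below are invented for the example, not the assistant's actual behavior):

```python
def handle_request(text):
    """Interpret a natural language request and return an audio-format
    answer or a task confirmation, in the spirit of the examples above."""
    lowered = text.lower()
    if "weather" in lowered:
        return "Tomorrow is forecast to be sunny, with a high of 75 degrees."
    if lowered.startswith("set an alarm"):
        return "An alarm has been set."   # perform the task, then confirm
    return "Sorry, I didn't understand that."
```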
In some embodiments, after the input from the user has stopped, electronic device200determines whether visual characteristics of the user indicate that further input is expected before providing audio information202. Examples of visual characteristics include eye gaze, facial expressions, and/or hand gestures. For example, if electronic device200detects a person's eyes gazing upward after they have stopped speaking, then electronic device200determines that further speech is expected from the person because the upward gaze indicates the person is thinking. In some embodiments, after determining that further input is expected, electronic device200waits for a predetermined time. If no further input is provided during the predetermined time, then electronic device200proceeds with providing audio information202. If the visual characteristics of the user do not indicate that further input is expected, then electronic device200proceeds with providing audio information202after the input from the user has stopped. If electronic device200detects external sound206from external sound source204while providing audio information202, then electronic device200determines whether external sound206warrants stopping the provision of the audio information202based on the type of external sound206. For some types of external sounds206, stopping audio information202is unnecessary. For example, conversational sounds that indicate a person is listening or thinking, such as "hmm", "um", "okay", "uh huh", "yes", "I see", and the like, would not warrant stopping the provision of audio information202. Other types of external sounds206also would not warrant stopping the provision of audio information202, such as external sounds206that are compressed audio (e.g., sounds from media content such as music or video) or speech being reproduced by an electronic device (e.g., lexical utterances emitted by a television). In some embodiments, if electronic device200determines that external sound206has characteristics consistent with compressed audio, then electronic device200continues providing audio information202(e.g., compressed audio is a type of external sound that does not warrant stopping audio information202). In other embodiments, when electronic device200determines that external sound206has characteristics consistent with compressed audio, electronic device200further determines characteristics of the external sound source204and/or the content of the compressed audio. Based on the characteristics of the external sound source204emitting the compressed audio and/or the content of the compressed audio, electronic device200can continue providing audio information202or stop the audio information202. For example, if electronic device200determines external sound source204is a television or other device emitting low-priority audio, then electronic device200continues providing audio information202. Examples of low-priority audio include pre-recorded audio such as music or movies, television programs, or radio broadcasts. However, if electronic device200determines external sound source204is a telephone or other device emitting high-priority audio, then electronic device200can stop providing audio information202so as not to distract from the high-priority audio. Examples of high-priority audio include audio of a person speaking in approximately real-time (e.g., a telephone conversation), an alarm, or a warning message.
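The stop/continue rule can be sketched as a classification over sound types; the labels and their grouping below are illustrative, not an API of the device:

```python
CONTINUE_TYPES = {
    "conversational_filler",      # "hmm", "uh huh", "I see", ...
    "compressed_low_priority",    # TV programs, music, radio broadcasts
}
STOP_TYPES = {
    "direct_speech",              # a person speaking in the environment
    "compressed_high_priority",   # phone call, alarm, warning message
}

def should_stop_audio(sound_type):
    """Return True if the external sound warrants stopping the audio
    information; no silencing command from the user is required."""
    return sound_type in STOP_TYPES
```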
Generally, external sounds206of a type that convey more substantial information, are conversations between people, or otherwise include high-priority audio warrant stopping the provision of audio information202. These types of external sounds206include directly-vocalized lexical utterances (e.g., external sound206emitted by a person speaking in the environment210). For example, if a person begins speaking to another person in the environment210while audio information202is being provided, then electronic device200can stop the provision of audio information202upon detecting the speech. Stopping the provision of audio information202allows the two people to have a conversation without being distracted or interrupted by audio information202. Similarly, a person in the environment210making a follow-up request to the digital assistant or otherwise conveying substantial speech would also warrant stopping the provision of audio information202. Notably, audio information202is stopped without a user needing to say a silencing or triggering command, such as "stop", "quiet", "end", and the like. In some embodiments, stopping the audio information202includes fading out the audio information202. In some embodiments, electronic device200determines the type of external sound206based at least in part on a location of the external sound source204in the environment210. In some embodiments, the location of the external sound source204is determined using a microphone array capable of detecting a direction and/or distance of a sound source. If the location of external sound source204corresponds to a person (and, optionally, the external sound206is not a conversational sound indicating the person is listening or thinking), then electronic device200determines that external sound206is substantial and stops the provision of audio information202. However, if the location of external sound source204is determined to correspond to an electronic device (e.g., a television or loudspeaker), then electronic device200continues to provide audio information202. In this way, electronic device200does not stop providing audio information202even when the external sound206being emitted by the electronic device sounds like human speech (e.g., a lexical utterance being spoken in a television program). In some embodiments, after stopping the provision of audio information202, electronic device200waits to resume the audio information202until an appropriate time. For example, if a person is speaking to another person in the environment210, electronic device200waits to resume audio information202until further communication between the two people is no longer expected. In some embodiments, electronic device200detects that further communication is expected based on visual characteristics of one or more people making the external sounds206, such as eye gaze, facial expressions, and/or hand gestures. For example, if electronic device200detects a person's eyes gazing upward after they have stopped speaking, then electronic device200determines that further speech is expected from the person because the upward gaze indicates the person is thinking. Once electronic device200determines it is appropriate for the audio information202to continue, electronic device200provides resumed audio information202. In some embodiments, electronic device200determines it is appropriate for audio information202to continue based on visual characteristics of one or more people, such as eye gaze, facial expressions, and/or hand gestures.
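A sketch of location-based source attribution with a directional microphone array; the bearing representation and the tolerance value are assumptions:

```python
def classify_source(sound_bearing, person_bearings, device_bearings,
                    tolerance_deg=10.0):
    """Attribute an external sound to a person or an electronic device
    by matching its bearing (degrees) against known source bearings;
    a person match may warrant stopping the audio information."""
    def near(a, b):
        return abs((a - b + 180) % 360 - 180) <= tolerance_deg
    if any(near(sound_bearing, d) for d in person_bearings):
        return "person"    # substantial speech: stop audio information
    if any(near(sound_bearing, d) for d in device_bearings):
        return "device"    # e.g. TV speech: continue audio information
    return "unknown"
```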
For example, if electronic device200detects a person's eyes gazing in the direction of speaker(s)218, then electronic device200determines that providing resumed audio information is appropriate. In some embodiments, the audio information202is divided into predefined segments, and the resumed audio information begins with the segment where the audio information202was stopped. In this way, the resumed audio information can begin with a full phrase or word. In some embodiments, the resumed audio information includes a rephrased version of a previously provided segment of audio information202. The rephrased version of the previously provided segment of audio information202reminds the listener of where the audio information202was stopped without repeating the same (e.g., verbatim) audio information. Turning now toFIG.3, a flow chart of exemplary process300for providing audio information is depicted, according to various embodiments. Process300can be performed using an electronic device (e.g.,100a,200). The electronic device is, for example, a desktop computer, a laptop computer, a handheld mobile device, an audio playback device, a television, a monitor, a head-mounted display (HMD) device, or a heads-up display device. It should be recognized that, in other embodiments, process300is performed using two or more electronic devices, such as a user device that is communicatively coupled to another device, such as a base device. In these embodiments, the operations of process300are distributed in any manner between the user device and the other device. Although the blocks of process300are depicted in a particular order inFIG.3, it should be appreciated that these blocks can be performed in other orders. Further, one or more blocks of process300can be partially performed, optionally performed, combined with another block(s), and/or additional blocks can be performed. At block302, audio information (e.g.,202) responsive to received input is provided using a speaker (e.g.,118,218). In some embodiments, the received input includes a triggering command. At block304, while providing the audio information, an external sound (e.g.206) is detected. At block306, in accordance with a determination that the external sound is a communication of a first type, the provision of the audio information is stopped. In some embodiments, stopping the provision of the audio information includes fading out the audio information. In some embodiments, the communication of the first type includes a directly-vocalized lexical utterance. Optionally, the directly-vocalized lexical utterance excludes silencing commands. In some embodiments, the external sound is determined to be a directly-vocalized lexical utterance by determining a location corresponding to a source of the external sound (e.g.,204). In some embodiments, the location corresponding to the source of the external sound is determined with a directional microphone array. At block308, after stopping the provision of the audio information, one or more visual characteristics associated with the communication of the first type are detected. The one or more visual characteristics include eye gaze, facial expression, hand gesture, or a combination thereof. At block310, the communication of the first type is detected to have stopped. At block312, in response to detecting the communication of the first type has stopped, a determination is made whether the one or more visual characteristics indicate that further communication of the first type is expected.
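Segment-based resume, as described above, can be sketched as an index into predefined segments that only advances when a segment finishes uninterrupted; the class and callback names are illustrative:

```python
class SegmentedAudioInfo:
    """Audio information divided into predefined segments; resuming
    restarts at the segment where provision was stopped, so playback
    always begins with a full phrase or word."""

    def __init__(self, segments):
        self.segments = segments
        self.index = 0

    def provide(self, emit):
        """emit(segment) returns False if a communication of the first
        type interrupted the segment; that segment is replayed on the
        next call to provide()."""
        while self.index < len(self.segments):
            if not emit(self.segments[self.index]):
                return "stopped"
            self.index += 1
        return "finished"

info = SegmentedAudioInfo(["Tomorrow is forecast to be sunny,",
                           "with a high of 75 degrees."])
```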
At block314, in accordance with a determination that further communication of the first type is not expected, resumed audio information is provided. In some embodiments, the resumed audio information is provided after stopping the provision of the audio information and in accordance with a determination that the communication of the first type has stopped. In some embodiments, the audio information is divided into predefined segments, and the resumed audio information begins with the segment where the audio information was stopped. In some embodiments, the resumed audio information includes a rephrased version of a previously provided segment of the audio information. At block316, in accordance with a determination that further communication of the first type is expected, the provision of the audio information continues to be stopped. At block318, in accordance with a determination that the external sound is a communication of a second type, the provision of the audio information is continued. In some embodiments, the communication of the second type includes conversational sounds (e.g., sounds that indicate a person is listening or thinking, such as “hmm”, “um”, “okay”, “uh huh”, “yes”, “I see”, and the like). In some embodiments, the communication of the second type includes compressed audio. In some embodiments, the communication of the second type includes a lexical utterance (e.g., speech) reproduced by an electronic device. In some embodiments, the external sound is determined to be a lexical utterance reproduced by an electronic device by determining a location corresponding to a source of the external sound (e.g.,204). In some embodiments, the location of the source of the external sound is determined with a directional microphone array. Turning now toFIG.4, a flow chart of exemplary process400for providing audio information is depicted, according to various embodiments. Process400can be performed using an electronic device (e.g.,100a,200). The electronic device is, for example, a desktop computer, a laptop computer, a handheld mobile device, an audio playback device, a television, a monitor, a head-mounted display (HMD) device, or a heads-up display device. It should be recognized that, in other embodiments, process400is performed using two or more electronic devices, such as a user device that is communicatively coupled to another device, such as a base device. In these embodiments, the operations of process400are distributed in any manner between the user device and the other device. Although the blocks of process400are depicted in a particular order inFIG.4, it should be appreciated that these blocks can be performed in other orders. Further, one or more blocks of process400can be partially performed, optionally performed, combined with another block(s), and/or additional blocks can be performed. At block402, speech input including one or more instructions is received from a source. At block404, one or more visual characteristics associated with the source of the speech input are detected. The one or more visual characteristics include eye gaze, facial expression, hand gesture, or a combination thereof. At block406, the speech input is detected to have stopped. At block408, in response to detecting the speech input has stopped, a determination is made whether the one or more visual characteristics associated with the source indicate that further speech input from the source is expected. 
At block410, in accordance with a determination that further speech input from the source is not expected, a response to the one or more instructions is provided. At block412, in accordance with a determination that further speech input from the source is expected, a response to the one or more instructions is not provided. In some embodiments, in accordance with the determination that further speech input from the source is expected, the response to the one or more instructions is not provided for a predetermined time. After the predetermined time, and in accordance with a determination that the speech input from the source has not resumed, a response to the one or more instructions is provided. Executable instructions for performing the features of processes300and/or400described above are, optionally, included in a transitory or non-transitory computer-readable storage medium (e.g., memory(ies)106) or other computer program product configured for execution by one or more processors (e.g., processor(s)102). Further, some operations in process300are, optionally, included in process400and some operations in process400are, optionally, included in process300. The foregoing descriptions of specific embodiments have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed, and it should be understood that many modifications and variations are possible in light of the above teaching. | 25,836 |
11861266 | DETAILED DESCRIPTION OF THE DISCLOSURE The illustrative embodiments provide a system, method, wireless earpieces, and personal area network for providing a virtual assistant. In one embodiment, the wireless earpieces may independently execute a virtual assistant available to the user with or without a connection to another wireless device, such as a cell phone. In another embodiment, the virtual assistant may be accessed through a separate wireless device with the wireless earpieces acting as an input/output device for providing voice, gesture, touch, or other input to control, manage, or interact with the virtual assistant. The virtual assistant may operate actively or passively to perform any number of tasks, features, and functions based on a user request, user preferences, or so forth. The virtual assistant may represent hardware, software, firmware, or a combination thereof that may include systems of the wireless earpieces that may be utilized to implement the embodiments herein described. The virtual assistant may also be an integrated part of a virtual reality or augmented reality system. The virtual assistant of the wireless earpieces may be utilized to play music or audio, track user biometrics, perform communications (e.g., two-way, alerts, etc.), provide feedback/input, or any number of other tasks. The virtual assistant may manage execution of software or sets of instructions stored in an on-board memory of the wireless earpieces to accomplish numerous tasks. The virtual assistant may also be utilized to control, communicate, manage, or interact with a number of other computing, communications, or wearable devices, such as smart phones, laptops, personal computers, tablets, holographic displays, virtual reality systems, gaming devices, projection systems, vehicles, smart glasses, helmets, smart glass, watches or wrist bands, chest straps, implants, displays, clothing, or so forth. In one embodiment, the virtual assistant of the wireless earpieces may be integrated with, control, or otherwise communicate with a personal area network. A personal area network is a network for data transmissions among devices, such as personal computing, communications, camera, vehicles, entertainment, and medical devices. The personal area network may utilize any number of wired, wireless, or hybrid configurations and may be stationary or dynamic. For example, the personal area network may utilize wireless network protocols or standards, such as INSTEON, IrDA, Wireless USB, near field magnetic induction (NFMI), Bluetooth, Z-Wave, ZigBee, Wi-Fi, ANT+ or other applicable radio frequency signals. In one embodiment, the personal area network may move with the user. Any number of conditions, factors, and so forth may be utilized to determine a response or implementation of a command that is communicated to one or more of the wireless earpieces. The virtual assistant may provide a hands free way of receiving information (e.g., applicable to the user, user's environment, wireless earpieces, connected devices, etc.) and implementing and controlling functions and features. The wireless earpieces may include any number of sensors for reading user biometrics, such as pulse rate, blood pressure, blood oxygenation, temperature, orientation, calories expended, blood or sweat chemical content, voice and audio output, impact levels, and orientation (e.g., body, head, etc.). The sensors may also determine the user's location, position, velocity, impact levels, and so forth. 
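As a rough illustration of how such readings might be represented and compared against control data, consider the sketch below. The fields, the ten-sample window, and the 15% deviation threshold are invented for illustration and are not values from the disclosure.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class BiometricSample:
    pulse_rate: float         # beats per minute
    blood_oxygenation: float  # percent SpO2
    temperature: float        # degrees Celsius

def deviates_from_control(history, control, threshold=0.15):
    """Flag when the recent average pulse drifts more than `threshold`
    (as a fraction) away from the user's control/baseline value."""
    recent = mean(s.pulse_rate for s in history[-10:])
    return abs(recent - control.pulse_rate) / control.pulse_rate > threshold

control = BiometricSample(pulse_rate=65.0, blood_oxygenation=98.0, temperature=36.6)
history = [BiometricSample(80.0 + i, 97.5, 36.8) for i in range(10)]
print(deviates_from_control(history, control))  # True: pulse trending high
```

A comparable trend check could be applied to any of the biometrics listed above.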
The sensors may also receive user input and convert the user input into commands or selections made across the personal devices of the personal area network. For example, the user input detected by the wireless earpieces may include voice commands, head motions, finger taps, finger swipes, motions or gestures, or other user inputs sensed by the wireless earpieces. The user input may be received, parsed, and converted into commands associated with the input that may be utilized internally by the wireless earpieces or sent to one or more external devices, such as a tablet computer, smart phone, or so forth. The wireless earpieces may perform sensor measurements for the user to read any number of user biometrics. The user biometrics may be analyzed including measuring deviations or changes of the sensor measurements over time, identifying trends of the sensor measurements, and comparing the sensor measurements to control data for the user. The wireless earpieces may also measure environmental conditions, such as temperature, location, barometric pressure, humidity, radiation, wind speed, and other applicable environmental data. The wireless earpieces may also communicate with external devices to receive additional sensor measurements. The wireless earpieces may communicate with external devices to receive available information, which may include information received through one or more networks, such as the Internet. FIG.1is a pictorial representation of a communications environment100in accordance with an illustrative embodiment. The wireless earpieces102may be configured to communicate with each other and with one or more wireless devices, such as a wireless device104or a personal computer118. The wireless earpieces102may be worn by a user106and are shown both as worn and separately from their positioning within the ears of the user106for purposes of visualization. A block diagram of the wireless earpieces102is further shown inFIG.2to illustrate components and operation of the wireless earpieces102including the virtual assistant. In one embodiment, the wireless earpieces102include a frame108shaped to fit substantially within the ears of the user106. The frame108is a support structure that at least partially encloses and houses the electronic components of the wireless earpieces102. The frame108may be composed of a single structure or multiple structures that are interconnected. An exterior portion of the wireless earpieces102may include a first set of sensors shown as infrared sensors109. The infrared sensors109may include emitters and receivers that detect and measure infrared light radiating from objects in their field of view. The infrared sensors109may detect gestures, touches, or other user input against an exterior portion of the wireless earpieces102that is visible when worn by the user106. The infrared sensors109may also detect infrared light or motion. The infrared sensors109may be utilized to determine whether the wireless earpieces102are being worn, moved, approached by a user, set aside, stored in a smart case, placed in a dark environment, or so forth. The frame108defines an extension110configured to fit substantially within the ear of the user106. The extension110may include one or more speakers or vibration components for interacting with the user106. The extension110may be removably covered by one or more sleeves. The sleeves may be changed to fit the size and shape of the user's ears.
The sleeves may come in various sizes and have extremely tight tolerances to fit the user106and one or more other users that may utilize the wireless earpieces102during their expected lifecycle. In another embodiment, the sleeves may be custom built to support the interference fit utilized by the wireless earpieces102while also being comfortable while worn. The sleeves are shaped and configured to not cover various sensor devices of the wireless earpieces102. In one embodiment, the frame108or the extension110(or other portions of the wireless earpieces102) may include sensors112for sensing pulse, blood oxygenation, temperature, voice characteristics, skin conduction, glucose levels, impacts, activity level, position, location, orientation, as well as any number of internal or external user biometrics. In other embodiments, the sensors112may be positioned to contact or be proximate the epithelium of the external auditory canal or auricular region of the user's ears when worn. For example, the sensors112may represent various metallic sensor contacts, optical interfaces, or even micro-delivery systems for receiving, measuring, and delivering information and signals. Small electrical charges or spectroscopy emissions (e.g., various light wavelengths) may be utilized by the sensors112to analyze the biometrics of the user106including pulse, blood pressure, skin conductivity, blood analysis, sweat levels, and so forth. In one embodiment, the sensors112may include optical sensors that may emit and measure reflected light within the ears of the user106to measure any number of biometrics. The optical sensors may also be utilized as a second set of sensors to determine when the wireless earpieces102are in use, stored, charging, or otherwise positioned. The sensors112may be utilized to provide relevant information that may be communicated through the virtual assistant. As described, the sensors112may include one or more microphones that may be integrated with the frame108or the extension of the wireless earpieces102. For example, an external microphone may sense environmental noises as well as the user's voice as communicated through the air of the communications environment100. An ear-bone or internal microphone may sense vibrations or sound waves communicated through the head of the user106(e.g., bone conduction, etc.). In some applications, temporary adhesives or securing mechanisms (e.g., clamps, straps, lanyards, extenders, etc.) may be utilized to ensure that the wireless earpieces102remain in the ears of the user106even during the most rigorous physical activities or to ensure that if they do fall out they are not lost or broken. For example, the wireless earpieces102may be utilized during marathons, swimming, team sports, biking, hiking, parachuting, or so forth. In one embodiment, miniature straps may attach to the wireless earpieces102with a clip on the strap securing the wireless earpieces to the clothes, hair, or body of the user. The wireless earpieces102may be configured to play music or audio, receive and make phone calls or other communications, determine ambient environmental conditions (e.g., temperature, altitude, location, speed, heading, etc.), read user biometrics (e.g., heart rate, motion, temperature, sleep, blood oxygenation, voice output, calories burned, forces experienced, etc.), and receive user input, feedback, or instructions. The wireless earpieces102may also execute any number of applications to perform specific purposes.
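To make the optical measurement concrete, the following simplified sketch estimates pulse rate by counting peaks in a reflected-light signal. It assumes a clean synthetic waveform and a fixed sample rate; a real earpiece would first filter motion artifacts, ambient light, and sensor noise.

```python
import math

def estimate_pulse_bpm(samples, sample_rate_hz):
    """Very simplified photoplethysmography sketch: count local maxima
    in the reflected-light signal and scale to beats per minute."""
    peaks = 0
    for prev, cur, nxt in zip(samples, samples[1:], samples[2:]):
        if cur > prev and cur > nxt:  # local maximum = one pulse wave
            peaks += 1
    duration_s = len(samples) / sample_rate_hz
    return 60.0 * peaks / duration_s

# Synthetic 1.2 Hz (72 bpm) reflected-light waveform sampled at 50 Hz.
sig = [math.sin(2 * math.pi * 1.2 * t / 50.0) for t in range(500)]
print(round(estimate_pulse_bpm(sig, 50.0)))  # ~72
```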
The wireless earpieces102may be utilized with any number of automatic assistants, such as Siri, Cortana, Alexa, Google, Watson, or other smart assistants/artificial intelligence systems. The communications environment100may further include the personal computer118. The personal computer118may communicate with one or more wired or wireless networks, such as a network120. The personal computer118may represent any number of devices, systems, equipment, or components, such as a laptop, server, tablet, medical system, gaming device, virtual/augmented reality system, or so forth. The personal computer118may communicate utilizing any number of standards, protocols, or processes. For example, the personal computer118may utilize a wired or wireless connection to communicate with the wireless earpieces102, the wireless device104, or other electronic devices. The personal computer118may utilize any number of memories or databases to store or synchronize biometric information associated with the user106, data, passwords, or media content. The wireless earpieces102may determine their position with respect to each other as well as the wireless device104and the personal computer118. For example, position information for the wireless earpieces102and the wireless device104may be utilized to determine the proximity of the devices in the communications environment100. For example, global positioning information or signal strength/activity may be utilized to determine proximity and distance of the devices to each other in the communications environment100. In one embodiment, the distance information may be utilized to determine whether biometric analysis may be displayed to a user. For example, the wireless earpieces102may be required to be within four feet of the wireless device104and the personal computer118in order to display biometric readings or receive user input. The transmission power or amplification of received signals may also be varied based on the proximity of the devices in the communications environment100. In one embodiment, the wireless earpieces102and the corresponding sensors112(whether internal or external) may be configured to take a number of measurements or log information and activities during normal usage. This information, including data, values, and determinations, may be reported to the user or otherwise utilized as part of the virtual assistant. The sensor measurements may be utilized to extrapolate other measurements, factors, or conditions applicable to the user106or the communications environment100. For example, the sensors112may monitor the user's usage patterns or light sensed in the communications environment100to enter a full power mode in a timely manner. The user106or another party may configure the wireless earpieces102directly or through a connected device and app (e.g., mobile app with a graphical user interface) to set power settings (e.g., preferences, conditions, parameters, settings, factors, etc.) or to store or share biometric information, audio, and other data. In one embodiment, the user may establish the light conditions or motion that may activate the full power mode or that may keep the wireless earpieces102in a sleep or low power mode. As a result, the user106may configure the wireless earpieces102to maximize the battery life based on motion, lighting conditions, and other factors established for the user.
For example, the user106may set the wireless earpieces102to enter a full power mode only if positioned within the ears of the user106within ten seconds of being moved; otherwise the wireless earpieces102remain in a low power mode to preserve battery life. This setting may be particularly useful if the wireless earpieces102are periodically moved or jostled without being inserted into the ears of the user106. The user106or another party may also utilize the wireless device104to associate user information and conditions with the user preferences. For example, an application executed by the wireless device104may be utilized to specify the conditions that may “wake up” the wireless earpieces102to automatically or manually communicate information, warnings, data, or status information to the user. In addition, the enabled functions (e.g., sensors, transceivers, vibration alerts, speakers, lights, etc.) may be selectively activated based on the user preferences as set by default, by the user, or based on historical information. In another embodiment, the wireless earpieces102may be adjusted or trained over time to become even more accurate in adapting to habits, requirements, requests, activations, or other processes or functions performed by the virtual assistant. The wireless earpieces102may utilize historical information to generate default values, baselines, thresholds, policies, or settings for determining when and how the virtual assistant performs various communications, actions, and processes. As a result, the wireless earpieces102may effectively manage the automatically and manually performed processes of the wireless earpieces102based on automatic detection of events and conditions (e.g., light, motion, user sensor readings, etc.) and user-specified settings. The wireless earpieces102may include any number of sensors112and logic for measuring and determining user biometrics, such as pulse rate, skin conduction, blood oxygenation, temperature, calories expended, blood or excretion chemistry, voice and audio output, position, and orientation (e.g., body, head, etc.). The sensors112may also determine the user's location, position, velocity, impact levels, and so forth. Any of the sensors112may be utilized to detect or confirm light, motion, or other parameters that may affect how the wireless earpieces102manage, utilize, and initialize the virtual assistant. The sensors112may also receive user input and convert the user input into commands or selections made across the personal devices of the personal area network. For example, the user input detected by the wireless earpieces102may include voice commands, head motions, finger taps, finger swipes, motions or gestures, or other user inputs sensed by the wireless earpieces. The user input may be determined by the wireless earpieces102and converted into authorization commands that may be sent to one or more external devices, such as the wireless device104, the personal computer118, a tablet computer, or so forth. For example, the user106may create a specific head motion and voice command that, when detected by the wireless earpieces102, are utilized to send a request to the virtual assistant (implemented by the wireless earpiece or wireless earpieces102/wireless device104) to tell the user106her current heart rate, speed, and location. Any number of actions may also be implemented by the virtual assistant in response to specified user input.
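Both behaviors sketched above, the power-mode preference and the mapping of sensed input to assistant actions, can be pictured as a preference check plus a dispatch table. The event names, the ten-second rule, and the action labels below mirror the examples in the text and are otherwise invented for illustration.

```python
# Hypothetical user preference from the example above: enter full power
# only if the earpieces are worn within ten seconds of being moved.
def power_mode(seconds_since_moved: float, worn: bool) -> str:
    return "full" if worn and seconds_since_moved <= 10 else "low"

# Hypothetical mapping of sensed input events to virtual assistant actions.
ACTIONS = {
    ("head_nod", "voice:status"): "report_heart_rate_speed_location",
    ("double_tap",): "play_pause_music",
    ("swipe_forward",): "next_track",
}

def dispatch(*events: str) -> str:
    # Unrecognized combinations are ignored rather than guessed at.
    return ACTIONS.get(tuple(events), "ignore")

print(power_mode(seconds_since_moved=4.0, worn=True))  # full
print(dispatch("head_nod", "voice:status"))            # report_heart_rate_speed_location
```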
The sensors112may make all of the measurements with regard to the user106and communications environment100or may communicate with any number of other sensory devices, components, or systems in the communications environment100. In one embodiment, the communications environment100may represent all or a portion of a personal area network. The wireless earpieces102may be utilized to control, communicate, manage, or interact with a number of other wearable devices or electronics, such as smart glasses, helmets, smart glass, watches or wrist bands, other wireless earpieces, chest straps, implants, displays, clothing, or so forth. A personal area network is a network for data transmissions among devices, components, equipment, and systems, such as personal computers, communications devices, cameras, vehicles, entertainment/media devices, and medical devices. The personal area network may utilize any number of wired, wireless, or hybrid configurations and may be stationary or dynamic. For example, the personal area network may utilize wireless network protocols or standards, such as INSTEON, IrDA, Wireless USB, Bluetooth, Z-Wave, ZigBee, Wi-Fi, ANT+ or other applicable radio frequency signals. In one embodiment, the personal area network may move with the user106. In other embodiments, the communications environment100may include any number of devices, components, or so forth that may communicate with each other directly or indirectly through a wireless (or wired) connection, signal, or link. The communications environment100may include one or more networks and network components and devices represented by the network120, such as routers, servers, signal extenders, intelligent network devices, computing devices, or so forth. In one embodiment, the network120of the communications environment100represents a personal area network as previously disclosed. The virtual assistant herein described may also be utilized for any number of devices in the communications environment100with commands or communications being sent to and from the wireless earpieces102, wireless device104, personal computer118or other devices of the communications environment100. Communications within the communications environment100may occur through the network120or a Wi-Fi network or may occur directly between devices, such as the wireless earpieces102and the wireless device104. The network120may communicate with or include a wireless network, such as a Wi-Fi, cellular (e.g., 3G, 4G, 5G, PCS, GSM, etc.), Bluetooth, or other short range or long range radio frequency networks, signals, connections, or links. The network120may also include or communicate with any number of hard wired networks, such as local area networks, coaxial networks, fiber-optic networks, network adapters, or so forth. Communications within the communications environment100may be operated by one or more users, service providers, or network providers. The wireless earpieces102may play, display, communicate, or utilize any number of alerts or communications to indicate the actions, activities, communications, mode, or status in use or being implemented by the virtual assistant. For example, one or more alerts may indicate when virtual assistant processes automatically or manually selected by the user are in process, authorized, and/or changing with specific tones, verbal acknowledgements, tactile feedback, or other forms of communicated messages.
For example, an audible alert and LED flash may be utilized each time the wireless earpieces102activate the virtual assistant to receive user input. Verbal or audio acknowledgements, answers, and actions utilized by the wireless earpieces102are particularly effective because of user familiarity with such interactions on standard smart phones and personal computers. The corresponding alert may also be communicated to the user106, the wireless device104, and the personal computer118. In other embodiments, the wireless earpieces102may also vibrate, flash, play a tone or other sound, or give other indications of the actions, status, or process of the virtual assistant. The wireless earpieces102may also communicate an alert to the wireless device104that shows up as a notification, message, or other indicator indicating changes in status, actions, commands, or so forth. The wireless earpieces102as well as the wireless device104may include logic for automatically implementing the virtual assistant in response to motion, light, user activities, user biometric status, user location, user position, historical activity/requests, or various other conditions and factors of the communications environment100. The virtual assistant may be activated to perform a specified activity or to “listen” or be prepared to “receive” user input, feedback, or commands for implementation by the virtual assistant. The logic may provide for natural language processing so that when the virtual assistant is listening, the virtual assistant may determine what is a command or collect context which may be used in interpreting or executing a future command. The wireless device104may represent any number of wireless or wired electronic communications or computing devices, such as smart phones, laptops, desktop computers, control systems, tablets, displays, gaming devices, music players, personal digital assistants, vehicle systems, or so forth. The wireless device104may communicate utilizing any number of wireless connections, standards, or protocols (e.g., near field communications, NFMI, Bluetooth, Wi-Fi, wireless Ethernet, etc.). For example, the wireless device104may be a touch screen cellular phone that communicates with the wireless earpieces102utilizing Bluetooth communications. The wireless device104may implement and utilize any number of operating systems, kernels, instructions, or applications that may make use of the available sensor data sent from the wireless earpieces102. For example, the wireless device104may represent any number of Android, iOS, Windows, open platform, or other systems and devices. Similarly, the wireless device104or the wireless earpieces102may execute any number of applications that utilize the user input, proximity data, biometric data, and other feedback from the wireless earpieces102to initiate, authorize, or process virtual assistant processes and perform the associated tasks. As noted, the layout of the internal components of the wireless earpieces102and the limited space available for a product of limited size may affect where the sensors112may be positioned. The positions of the sensors112within each of the wireless earpieces102may vary based on the model, version, and iteration of the wireless earpiece design and manufacturing process. FIG.2is a block diagram of a wireless earpiece system200in accordance with an illustrative embodiment. As previously noted, the wireless earpieces202may be referred to or described herein as a pair (wireless earpieces) or singularly (wireless earpiece).
The description may also refer to components and functionality of each of the wireless earpieces202collectively or individually. In one embodiment, the wireless earpiece system200may enhance communications and functionality of the wireless earpieces202. In one embodiment, the wireless earpieces202may operate a virtual assistant independently. In another embodiment, the wireless earpieces202and a computing device204may implement a virtual assistant jointly or separate instances that work together as part of the wireless earpiece system200. As shown, the wireless earpieces202may be wirelessly linked to the computing device204. For example, the computing device204may represent a wireless tablet computer. The computing device204may also represent a gaming device, cell phone, vehicle system (e.g., GPS, speedometer, pedometer, entertainment system, etc.), media device, smart watch, laptop, smart glass, or other electronic devices. User input and commands may be received from either the wireless earpieces202or the computing device204for implementation on either of the devices of the wireless earpiece system200(or other externally connected devices). In some embodiments, the computing device204may act as a logging tool for receiving information, data, or measurements made by the wireless earpieces202. For example, the computing device204may download data from the wireless earpieces202in real-time. As a result, the computing device204may be utilized to store, display, and synchronize data for the wireless earpieces202. For example, the computing device204may display pulse, proximity, location, oxygenation, distance, calories burned, and so forth as measured by the wireless earpieces202. The computing device204may be configured to receive and display an interface, selection elements, and alerts that indicate conditions to implement the virtual assistant. For example, the wireless earpieces202may utilize factors, such as changes in motion or light, distance thresholds between the wireless earpieces202and/or computing device204, signal activity, user orientation, user speed, user location, environmental factors (e.g., temperature, humidity, noise levels, proximity to other users, etc.) or other automatically determined or user specified measurements, factors, conditions, or parameters to implement various features, functions, and commands. The computing device204may also include a number of optical sensors, touch sensors, microphones, and other measurement devices that may provide feedback or measurements that the wireless earpieces202may utilize to determine an appropriate mode, settings, or enabled functionality to be utilized by the virtual assistant. The wireless earpieces202and the computing device204may have any number of electrical configurations, shapes, and colors and may include various circuitry, connections, and other components. In one embodiment, the wireless earpieces202may include a battery208, a logic engine210, a memory212, a user interface214, a physical interface215, a transceiver216, sensors217, and a virtual assistant218. The computing device204may have any number of configurations and include components and features similar to the wireless earpieces202as are known in the art. The virtual assistant218may be implemented as part of the logic engine210, user interface, or other hardware, software, or firmware of the wireless earpieces and/or computing device204. The battery208is a power storage device configured to power the wireless earpieces202. 
In other embodiments, the battery208may represent a fuel cell, thermal electric generator, piezo electric charger, solar charger, ultra-capacitor, or other existing or developing power storage technologies. The logic engine210preserves the capacity of the battery208by reducing unnecessary utilization of the wireless earpieces202in a full-power mode when there is little or no benefit to the user (e.g., the wireless earpieces202are sitting on a table or temporarily lost). The battery208or power of the wireless earpieces202is preserved for when the wireless earpieces202are being worn or operated by the user. As a result, user satisfaction with the wireless earpieces202is improved and the user may be able to set the wireless earpieces202aside at any moment knowing that battery life is automatically preserved by the logic engine210and functionality of the wireless earpieces202. The logic engine210is the logic that controls the operation and functionality of the wireless earpieces202. The logic engine210may include circuitry, chips, and other digital logic. The logic engine210may also include programs, scripts, and instructions that may be implemented to operate the logic engine210. The logic engine210may represent hardware, software, firmware, or any combination thereof. In one embodiment, the logic engine210may include one or more processors. The logic engine210may also represent an application specific integrated circuit (ASIC) or field programmable gate array (FPGA). In one embodiment, the logic engine210may execute instructions to manage the virtual assistant218including interactions with the components of the wireless earpieces202, such as the user interface214and sensors217. The logic engine210may utilize measurements from two or more of the sensors217to determine whether the virtual assistant is being requested or is otherwise needed. The logic engine210may control actions implemented by the virtual assistant218in response to any number of measurements from the sensors217, the transceiver216, the user interface214, or the physical interface215as well as user preferences220that may be user entered or default preferences. For example, the logic engine210may initialize or otherwise use the virtual assistant218in response to any number of factors, conditions, parameters, measurements, data, values, or other information specified within the logic engine210or by the user preferences220. The logic engine210may also determine whether the wireless earpieces202are actively performing any user-requested functions that may require activation of the virtual assistant or that the virtual assistant be ready to receive a request. For example, the logic engine210may determine whether music is being played, communications are being received, processed, or sent, noise-cancellation is being performed, and so forth. Utilizing the user preferences, the logic engine210may execute instructions to initiate and implement the virtual assistant. If user input, feedback, or communications are detected or received, the logic engine210may initiate the virtual assistant to perform a task associated with the input. For example, the virtual assistant may utilize the wireless earpieces202to answer questions, provide user biometrics, answer activity-related questions (e.g., how fast am I going, what is my average speed, where is the closest McDonald's, etc.), manage features, functions, or components, answer general questions, and so forth. The wireless earpieces202may be configured to work together or completely independently based on the needs of the user.
For example, the wireless earpieces202may be used by two different users at one time. The logic engine210may also process user input to determine commands implemented by the wireless earpieces202or sent to the computing device204through the transceiver216. Specific actions may be associated with user input (e.g., voice, tactile, orientation, motion, gesture, etc.). For example, the logic engine210may implement a macro allowing the user to associate frequently performed actions with specific commands/input implemented by the virtual assistant218. In one embodiment, a processor included in the logic engine210is circuitry or logic enabled to control execution of a set of instructions. The processor may be one or more microprocessors, digital signal processors, application-specific integrated circuits (ASIC), central processing units, or other devices suitable for controlling an electronic device including one or more hardware and software elements, executing software, instructions, programs, and applications, converting and processing signals and information, and performing other related tasks. The processor may be configured to perform natural language processing (NLP) for the earpiece in order to map user voice input into executable commands. The processor may also implement any number of artificial intelligence techniques including machine learning algorithms. The memory212is a hardware element, device, or recording medium configured to store data or instructions for subsequent retrieval or access at a later time. The memory212may represent static or dynamic memory. The memory212may include a hard disk, random access memory, cache, removable media drive, mass storage, or configuration suitable as storage for data, instructions, and information. In one embodiment, the memory212and the logic engine210may be integrated. The memory212may use any type of volatile or non-volatile storage technique and medium. The memory212may store information related to the status of a user, wireless earpieces202, computing device204, and other peripherals, such as a wireless device, smart glasses, a smart watch, a smart case for the wireless earpieces202, a wearable device, and so forth. In one embodiment, the memory212may store instructions, programs, drivers, or an operating system for controlling the user interface214including one or more LEDs or other light emitting components, speakers, tactile generators (e.g., vibrator), and so forth. The memory212may also store thresholds, conditions, signal or processing activity, proximity data, and so forth. The transceiver216is a component comprising both a transmitter and receiver which may be combined and share common circuitry within a single housing. The transceiver216may communicate utilizing Bluetooth, Wi-Fi, ZigBee, Ant+, near field communications, wireless USB, infrared, mobile body area networks, ultra-wideband communications, cellular (e.g., 3G, 4G, 5G, PCS, GSM, etc.), or other suitable radio frequency standards, networks, protocols, or communications. The transceiver216may also be a hybrid or multi-mode transceiver that supports a number of different communications. For example, the transceiver216may communicate with the computing device204or other systems utilizing wired interfaces (e.g., wires, traces, etc.), NFC, or Bluetooth communications as well as with the other wireless earpiece utilizing NFMI.
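One way to picture such a multi-mode transceiver is as a simple transport-selection routine: NFMI for the earpiece-to-earpiece link, Bluetooth for a paired phone or tablet, and another radio otherwise. The peer labels and the Wi-Fi fallback below are assumptions made for illustration.

```python
def select_transport(peer: str) -> str:
    # NFMI carries the low-power link between the two earpieces.
    if peer == "other_earpiece":
        return "NFMI"
    # Bluetooth serves paired consumer devices such as phones and tablets.
    if peer in ("smart_phone", "tablet"):
        return "Bluetooth"
    # Fallback radio for everything else (an assumption for this sketch).
    return "Wi-Fi"

assert select_transport("other_earpiece") == "NFMI"
assert select_transport("smart_phone") == "Bluetooth"
```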
The transceiver216may also detect amplitudes and signal strength to infer distance between the wireless earpieces202as well as the computing device204. The components of the wireless earpieces202may be electrically connected utilizing any number of wires, contact points, leads, busses, wireless interfaces, or so forth. In addition, the wireless earpieces202may include any number of computing and communications components, devices, or elements, which may include busses, motherboards, printed circuit boards, circuits, chips, sensors, ports, interfaces, cards, converters, adapters, connections, transceivers, displays, antennas, and other similar components. The physical interface215is a hardware interface of the wireless earpieces202for connecting and communicating with the computing device204or other electrical components, devices, or systems. The physical interface215may include any number of pins, arms, or connectors for electrically interfacing with the contacts or other interface components of external devices or other charging or synchronization devices. For example, the physical interface215may be a micro USB port. In one embodiment, the physical interface215is a magnetic interface that automatically couples to contacts or an interface of the computing device204. In another embodiment, the physical interface215may include a wireless inductor for charging the wireless earpieces202without a physical connection to a charging device. The physical interface215may allow the wireless earpieces202to be utilized when not worn as a remote microphone and sensor system (e.g., seismometer, thermometer, light detection unit, motion detector, etc.). For example, measurements, such as noise levels, temperature, movement, and so forth, may be detected by the wireless earpieces even when not worn. The wireless earpieces202may be utilized as a pair, independently, or when stored in a smart case. Each of the wireless earpieces202may provide distinct sensor measurements as needed. The user interface214is a hardware interface for receiving commands, instructions, or input through the touch (haptics) of the user, voice commands, or predefined motions. The user interface214may further include any number of software and firmware components for interfacing with the user. In one embodiment, the user interface214may be integrated with the virtual assistant218. The user interface214may be utilized to manage and otherwise control the other functions of the wireless earpieces202. The user interface214may include the LED array, one or more touch sensitive buttons or portions, a miniature screen or display, or other input/output components (e.g., the user interface214may interact with the sensors217extensively). The user interface214may be controlled by the user or based on commands received from the computing device204or a linked wireless device. For example, the user may turn on, reactivate, or provide feedback for the virtual assistant218or other features, functions, and components of the wireless earpieces202utilizing the user interface214. In one embodiment, the user may provide user input for the virtual assistant218by tapping the user interface214once, twice, three times, or any number of times. Similarly, a swiping motion may be utilized across or in front of the user interface214(e.g., the exterior surface of the wireless earpieces202) to implement a predefined action.
Swiping motions in any number of directions or gestures may be associated with specific virtual assistant controlled activities or actions, such as play music, pause, fast forward, rewind, activate a virtual assistant, listen for commands, report sports measurements or biometrics, and so forth. The swiping motions may also be utilized to control actions and functionality of the computing device204or other external devices (e.g., smart television, camera array, smart watch, etc.). The user may also provide user input by moving his head in a particular direction or motion or based on the user's position or location. For example, the user may utilize voice commands, head gestures, or touch commands to change the processes implemented by the virtual assistant218as well as the content displayed by the computing device204. The user interface214may also provide a software interface including any number of icons, soft buttons, windows, links, graphical display elements, and so forth. In one embodiment, the sensors217may be integrated with the user interface214to detect or measure the user input. For example, infrared sensors positioned against an outer surface of the wireless earpieces202may detect touches, gestures, or other input as part of a touch or gesture sensitive portion of the user interface214. The outer or exterior surface of the user interface214may correspond to a portion of the wireless earpieces202accessible to the user when the wireless earpieces are worn within the ears of the user. In addition, the sensors217may include pulse oximeters, accelerometers, thermometers, barometers, radiation detectors, gyroscopes, magnetometers, global positioning systems, beacon detectors, inertial sensors, photo detectors, miniature cameras, and other similar instruments for detecting user biometrics, environmental conditions, location, utilization, orientation, motion, and so forth. The sensors217may provide measurements or data that may be utilized to select, activate, or otherwise utilize the virtual assistant218. Likewise, the sensors217may be utilized to awake, activate, initiate, or otherwise implement actions and processes utilizing conditions, parameters, values, or other data within the user preferences220. For example, the optical biosensors within the sensors217may determine whether the wireless earpieces202are being worn and when a selected gesture to activate the virtual assistant218is provided by the user. The computing device204may include components similar in structure and functionality to those shown for the wireless earpieces202. The computing device may include any number of processors, batteries, memories, busses, motherboards, chips, transceivers, peripherals, sensors, displays, cards, ports, adapters, interconnects, and so forth. In one embodiment, the computing device204may include one or more processors and memories for storing instructions. The instructions may be executed as part of an operating system, application, browser, or so forth to implement the features herein described. In one embodiment, the wireless earpieces202may be magnetically or physically coupled to the computing device204to be recharged or synchronized or to be stored. In one embodiment, the computing device204may include a virtual assistant that is compatible with the virtual assistant218. As a result, the separate instances may function as a single virtual assistant to enhance functionality. 
In addition, the seamless integration may appear to the user as a single virtual assistant (even though multiple instances may be involved across a number of different wireless and wired electronic devices). In another embodiment, the wireless earpieces202and computing device204may still communicate effectively to perform the methods and processes herein described even if a virtual assistant for the computing device204may be different from the virtual assistant218. For example, distinct virtual assistants may still communicate and interact based on developing interfaces, protocols, or standards from different service providers, manufacturers, and developers. For example, the wireless earpieces202or the computing device204may utilize data mashup technologies to interface with third-party web services, such as Google, Microsoft, Facebook, Yelp, Twitter, and others to perform actions, search requests, look up information, question answering, and other relevant services. The virtual assistant may also transform output from third-party web services back into natural language (e.g., heart bpm 80 to “your heart rate is 80 beats per minute”, or based on the weather report “the weather will be sunny today”). Virtual assistants of the wireless earpieces202or the computing device204may utilize text-to-speech (TTS) technologies or logic to transform natural language or to parse text as herein described. The computing device204may also execute a virtual assistant that may utilize information, data, and resources from the wireless earpieces202and the virtual assistant218to implement user-requested actions. The computing device204may be utilized to adjust the user preferences220including settings, thresholds, activities, conditions, environmental factors, and so forth utilized by the virtual assistants of both the wireless earpieces202and the computing device204. For example, the computing device204may utilize a graphical user interface that allows the user to more easily specify any number of conditions, values, measurements, parameters, and factors that are utilized. In another embodiment, the computing device204may also include sensors for detecting the location, orientation, and proximity of the wireless earpieces202to the computing device204. The wireless earpieces202may turn off communications to the computing device204in response to losing a status or heartbeat connection to preserve battery life and may only periodically search for a connection, link, or signal to the computing device204. The wireless earpieces202may also turn off components, enter a low power or sleep mode, or otherwise preserve battery life in response to no interaction with the user for a time period, no detection of the presence of the user (e.g., touch, light, conductivity, motion, etc.), or so forth. As originally packaged, the wireless earpieces202and the computing device204may include peripheral devices such as charging cords, power adapters, inductive charging adapters, solar cells, batteries, lanyards, additional light arrays, speakers, smart case covers, transceivers (e.g., Wi-Fi, cellular, etc.), or so forth. In one embodiment, the wireless earpieces202may include a smart case (not shown). The smart case may include an interface for charging the wireless earpieces202from an internal battery as well as through a plugged connection. The smart case may also utilize the interface or a wireless transceiver to log utilization, biometric information of the user, and other information and data.
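The natural-language transformation mentioned above, such as turning heart bpm 80 into a spoken sentence, can be pictured as simple template filling ahead of any text-to-speech stage. The template table below is invented for illustration and is not taken from the disclosure.

```python
# Hypothetical templates rendering raw key/value output from a sensor or
# web service as a spoken sentence, mirroring the examples in the text.
TEMPLATES = {
    "heart_bpm": "your heart rate is {value} beats per minute",
    "weather": "the weather will be {value} today",
}

def to_natural_language(key: str, value) -> str:
    # Unknown keys fall back to a generic phrasing.
    template = TEMPLATES.get(key, "{key} is {value}")
    return template.format(key=key, value=value)

print(to_natural_language("heart_bpm", 80))     # your heart rate is 80 beats per minute
print(to_natural_language("weather", "sunny"))  # the weather will be sunny today
```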
FIG.3is a pictorial representation of some of the sensors301of the wireless earpieces302in accordance with illustrative embodiments. As previously noted, the wireless earpieces302may include any number of internal or external sensors. In one embodiment, the sensors301may be utilized to determine whether the virtual assistant is activated, utilized, or listening for user input. Similarly, any number of other components or features of the wireless earpieces302may be managed based on the measurements made by the sensors301to preserve resources (e.g., battery life, processing power, etc.). The sensors301may make independent measurements or combined measurements utilizing the sensory functionality of each of the sensors to measure, confirm, or verify sensor measurements. In one embodiment, the sensors301may include optical sensors304, contact sensors306, infrared sensors308, and microphones310. The optical sensors304may generate an optical signal that is communicated to the ear (or other body part) of the user and reflected back. The reflected optical signal may be analyzed to determine blood pressure, pulse rate, pulse oximetry, vibrations, blood chemistry, and other information about the user. The optical sensors304may include any number of sources for outputting various wavelengths of electromagnetic radiation and visible light. Thus, the wireless earpieces302may utilize spectroscopy, as known in the art or as it develops, to determine any number of user biometrics. The optical sensors304may also be configured to detect ambient light proximate the wireless earpieces302. For example, the optical sensors304may detect light and light changes in an environment of the wireless earpieces302, such as in a room where the wireless earpieces302are located. The optical sensors304may be configured to detect any number of wavelengths including visible light that may be relevant to light changes, approaching users or devices, and so forth. In another embodiment, the contact sensors306may be utilized to determine that the wireless earpieces302are positioned within the ears of the user. For example, conductivity of skin or tissue within the user's ear may be utilized to determine that the wireless earpieces are being worn. In other embodiments, the contact sensors306may include pressure switches, toggles, or other mechanical detection components for determining that the wireless earpieces302are being worn. The contact sensors306may measure or provide additional data points and analysis that may indicate the biometric information of the user. The contact sensors306may also be utilized to apply electrical, vibrational, motion, or other input, impulses, or signals to the skin of the user. The wireless earpieces302may also include infrared sensors308. The infrared sensors308may be utilized to detect touch, contact, gestures, or other user input. The infrared sensors308may detect infrared wavelengths and signals. In another embodiment, the infrared sensors308may detect visible light or other wavelengths as well. The infrared sensors308may be configured to detect light or motion or changes in light or motion. Readings from the infrared sensors308and the optical sensors304may each indicate light or motion. The readings may be compared to verify or otherwise confirm light or motion.
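A minimal sketch of that cross-verification, assuming hypothetical boolean readings from the two sensor arrays, might require agreement between the arrays before an event is confirmed:

```python
# An event such as "light" or "motion" is only confirmed when both the
# infrared and the optical readings agree, reducing false triggers from
# any single sensor. The dict-of-booleans layout is an assumption.
def confirmed(ir_reading: dict, optical_reading: dict, event: str) -> bool:
    return bool(ir_reading.get(event)) and bool(optical_reading.get(event))

ir = {"light": True, "motion": False}
optical = {"light": True, "motion": True}
print(confirmed(ir, optical, "light"))   # True: both arrays agree
print(confirmed(ir, optical, "motion"))  # False: only one array detected it
```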
As a result, virtual assistant decisions regarding user input, biometric readings, environmental feedback, and other measurements may be effectively implemented in accordance with readings from the sensors301as well as other internal or external sensors and the user preferences. The wireless earpieces302may include microphones310. The microphones310may represent external microphones as well as internal microphones. The external microphones may be positioned exterior to the body of the user as worn. The external microphones may sense verbal or audio input, feedback, and commands received from the user. The external microphones may also sense environmental, activity, and external noises and sounds. The internal microphone may represent an ear-bone or bone conduction microphone. The internal microphone may sense vibrations, waves, or sound communicated through the bones and tissue of the user's body (e.g., skull). The microphones310may sense content that is utilized by the virtual assistant of the wireless earpieces302to implement the processes, functions, and methods herein described. The audio input sensed by the microphones310may be filtered, amplified, or otherwise processed before or after being sent to the logic of the wireless earpieces302. In another embodiment, the wireless earpieces302may include chemical sensors (not shown) that perform chemical analysis of the user's skin, excretions, blood, or any number of internal or external tissues or samples. For example, the chemical sensors may determine whether the wireless earpieces302are being worn by the user. The chemical sensors may also be utilized to monitor important biometrics that may be more effectively read utilizing chemical samples (e.g., sweat, blood, excretions, etc.). In one embodiment, the chemical sensors are non-invasive and may only perform chemical measurements and analysis based on the externally measured and detected factors. In other embodiments, one or more probes, vacuums, capillary action components, needles, or other micro-sampling components may be utilized. Minute amounts of blood or fluid may be analyzed to perform chemical analysis that may be reported to the user and others. The sensors301may include parts or components that may be periodically replaced or repaired to ensure accurate measurements. In one embodiment, the infrared sensors308may be a first sensor array and the optical sensors304may be a second sensor array. FIG.4is a flowchart of a process for utilizing a virtual assistant for wireless earpieces in accordance with an illustrative embodiment. The process ofFIG.4may be implemented by one or more wireless earpieces, such as the wireless earpieces102ofFIG.1. The process ofFIG.4may be implemented by a virtual assistant of the wireless earpieces. The virtual assistant may operate independently from the virtual assistants of other wireless or computing devices. In an alternative embodiment, one or more steps or portions of the process ofFIG.4may be implemented by a wireless device, computing device, wearable devices, or any number of other devices communicating directly or through a network with the wireless earpieces. The processes and steps ofFIGS.4-7may be combined as well as performed in any order. In one embodiment, the process may begin with the wireless earpieces receiving a request to be implemented by the wireless earpieces (step402). The request may represent a command, input, feedback, or measurements indicating that instructions, commands, or input are forthcoming to the virtual assistant.
For example, the request may specify that a reporting command for the virtual assistant of the wireless earpieces is immediately or subsequently forthcoming. The request may also put the virtual assistant in a “listen” mode. In another embodiment, the request may represent the actual instructions, commands, or input the user is communicating for implementation by the virtual assistant of the wireless earpieces. For example, the user may ask, “what is my heart rate and average heart rate for the last 20 minutes?” The request may be received in any number of ways associated with the components of the wireless earpieces. In one embodiment, the request may be a verbal request, such as “tell me my current speed.” In another embodiment, the request may be a tactile request, such as a tap, swipe, or other input detected by the wireless earpieces. In another embodiment, the request may be a gesture sensed by the wireless earpieces, such as a hand motion or shape made proximate the wireless earpieces, a head nod, or so forth. In another embodiment, the request may be a position, location, or orientation of the user. For example, in response to determining the user is oriented to ride a bike, the virtual assistant of the wireless earpieces may be configured to receive commands reporting biometric or cycling information to the user without delay. Next, the wireless earpieces execute a virtual assistant (step404). In one embodiment, the virtual assistant may be activated as requested by the user. For example, the request may be converted into a command executed by the logic or processor of the wireless earpieces to activate the virtual assistant. In other embodiments, the virtual assistant may always run as a background program. Next, the wireless earpieces implement an action to fulfill the request utilizing the virtual assistant of the wireless earpieces (step406). The virtual assistant may implement any number of commands, input, or feedback. In one embodiment, the virtual assistant may implement the actions without requiring a connection to one or more networks, communications connections, signals, or other devices. The autonomous operation of the virtual assistant of the wireless earpieces may be particularly useful when the user is without a network or device connection, actively engaged in a sport or other activity, or so forth. The virtual assistant may provide sports, biometric, environmental, and other information to the user. The virtual assistant may also initiate, open, close, control, or execute any number of applications, logic, components, features, and functions of the wireless earpieces. For example, a sports application specific to running may be opened in response to the user saying “open I jog.” The virtual assistant retrieves the applicable information from the logic, sensors, memory, and other components of the wireless earpieces to immediately provide the answer to the user. In additional embodiments, the wireless earpieces may have databases, logic, or additional sensors that allow the wireless earpieces to independently answer questions related to location, fitness, sports activities, proximity to users and locations, and general knowledge questions (e.g., the types of answers that existing smart assistants provide). In one embodiment, the user may specify types of databases or information available through the virtual assistant. In one embodiment, the action of step406may implement a process that requires additional feedback, steps, or so forth.
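As a minimal sketch of the flow of steps402-406, assuming a hypothetical intent table and sensor-state dictionary (none of the names below are taken from the embodiments), the request handling may be pictured as follows:

# Hypothetical intent table mapping keywords to actions on sensor state.
INTENT_HANDLERS = {
    "heart rate": lambda s: f"Your heart rate is {s['heart_rate']} bpm.",
    "speed": lambda s: f"Your current speed is {s['speed_mph']} mph.",
}

def handle_request(utterance, sensor_state, assistant_active=False):
    # Step 402: a request arrives (verbal, tactile, gesture, or positional).
    if not assistant_active:
        assistant_active = True  # step 404: activate (or confirm background execution)
    utterance = utterance.lower()
    for keyword, action in INTENT_HANDLERS.items():
        if keyword in utterance:
            return action(sensor_state)  # step 406: fulfilled locally, no network needed
    return "I did not understand the request."

state = {"heart_rate": 112, "speed_mph": 6.2}
print(handle_request("What is my heart rate?", state))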
Although not specifically shown, the wireless earpieces may be linked with communications devices. The wireless earpieces may be linked with the communications device, such as a smart phone, utilizing any number of communications standards or protocols. For example, the wireless earpieces may be linked with a cell phone by a Bluetooth connection. The process may require that the devices be paired utilizing an identifier, such as a passcode, password, serial number, voice identifier, radio frequency, or so forth. The wireless earpieces may be linked with the communications device and any number of other devices directly or through one or more networks, such as a personal area network. The wireless earpieces may be linked so that actions or commands implemented by the wireless earpieces may also be implemented or communicated across one or more wireless device(s) (e.g., for reporting, synchronization, process management, etc.). In addition, any number of alerts, messages, or indicators may be sent between the two devices to present information to the user. The information utilized by the wireless earpieces may come from any number of sensor components, arrays, or aspects of the wireless earpieces. Any number of optical, infrared, touch, motion, orientation, and location sensors may be utilized whether internally or externally positioned (e.g., when the wireless earpieces are worn by the user). The sensor measurements may be processed or otherwise evaluated by the wireless earpieces for implementing various processes. For example, one or more processors of the wireless earpieces may process the incoming data measurements from first and second sensor arrays so that sport reporting may be quickly reported to the user when asked (e.g., how fast am I going, how long have I been running, etc.). The wireless earpieces may utilize predictive logic to determine the most common requests received by the wireless earpieces so that the applicable data, measurements, or processing are already completed or ready to be completed without delay based on a request received by the virtual assistant. Additional optical, chemical, mechanical, and/or electrical sensors of the wireless earpieces or a connected wireless device may also be utilized. The sensor measurements are processed for subsequent analysis, determinations, or decisions implemented by the wireless earpieces. FIG.5is a flowchart of a process for utilizing a virtual assistant for wireless earpieces and a wireless device in accordance with an illustrative embodiment. In one embodiment, the process ofFIG.5may be implemented by wireless earpieces502in communication with the wireless device504(jointly the “system”). The wireless earpieces502and wireless device504may represent devices, such as those shown inFIGS.1&2. The method ofFIG.5may also be performed independently by each of the left wireless earpiece and the right wireless earpiece. The process may begin with the wireless earpieces502or the wireless device504activating a virtual assistant (step506). The virtual assistant may be automatically or manually activated based on a request from the user, user preferences, location, activity, or any number of other factors, conditions, parameters, feedback, or so forth. As noted, the wireless earpieces502and the wireless device504may individually or collectively implement or execute a virtual assistant. The virtual assistant may represent a single instance executed across both devices, common or similar virtual assistants, or distinct virtual assistants.
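The predictive logic described above may be pictured as a small frequency-driven cache. The following Python sketch is illustrative only; the class and method names are hypothetical assumptions rather than part of the disclosure:

from collections import Counter

class PredictiveRequestCache:
    # Track request frequency and precompute answers for the most common
    # requests so responses are ready without delay.
    def __init__(self, top_n=3):
        self.counts = Counter()
        self.precomputed = {}
        self.top_n = top_n

    def record(self, intent):
        self.counts[intent] += 1

    def refresh(self, compute_fn, sensor_state):
        # Periodically precompute answers for the most frequent intents.
        for intent, _ in self.counts.most_common(self.top_n):
            self.precomputed[intent] = compute_fn(intent, sensor_state)

    def answer(self, intent, compute_fn, sensor_state):
        # Serve the cached answer if available, else compute on demand.
        return self.precomputed.get(intent) or compute_fn(intent, sensor_state)

In such a sketch, record would be called on each fulfilled request, refresh would run periodically in the background, and answer would serve precomputed results without delay.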
Next, the wireless earpieces502determine whether a request received by the virtual assistant is implementable by the wireless earpieces502(step508). The wireless earpieces502determine whether the request is implementable based on the information, applications, processes, and methodologies available to the wireless earpieces502. In one embodiment, the request may be received audibly from the user. In other embodiments, the request may be automatically or manually received alphanumerically, tactilely, based on historical requests, based on user preferences, or so forth. Reception of the request may occur as part of step506or may alternatively represent a different step altogether. In response to determining the request is implementable by the wireless earpieces502during step508, the wireless earpieces502retrieve information and data to fulfill the request (step510). In one embodiment, the virtual assistant of the wireless earpieces502may retrieve the information. In other embodiments, additional applications, databases, logic, processes, or methods may be utilized by the wireless earpieces502to fulfill the request. In one embodiment, the wireless earpieces502may request additional information, clarification, or input in order to fulfill the request. Next, the wireless earpieces502implement an action to fulfill the request utilizing the virtual assistant (step512). As noted, the action may be performed by the virtual assistant or other components, modules, functions, or other portions of the wireless earpieces502. The sensors of the wireless earpieces502may be utilized to provide biometric, user, and environmental measurements applicable to the request. In response to determining the request is not implementable (e.g., entirely) by the wireless earpieces502during step508, the request is processed by the virtual assistant of the wireless device504(step514). In one embodiment, some requests made by the user may require processing power, information, connections, signals, and networks, or other resources that may be beyond those available to the wireless earpieces502alone. As a result, the request may be implemented in part by the wireless device504with or without additional communications with the wireless earpieces502. Next, the wireless device504retrieves information and data from the wireless earpieces to fulfill the request (step510). In one embodiment, the wireless device504may send a request for applicable information to the wireless earpieces502. For example, the wireless device504may request user biometrics and sports information that may be communicated from the wireless earpieces502to the wireless device at least in part to respond to the request. If information is not required from the wireless earpieces502, the wireless device504may process the request without retrieving information as is described in step510. For example, biometric data may be periodically communicated or synchronized between the wireless earpieces502and the wireless device504, and, as a result, the wireless device504may not require additional information or communications with the wireless earpieces502. Next, the wireless device implements an action to fulfill the request utilizing the virtual assistant (step516). FIG.6is a flowchart of a process for automatically implementing a virtual assistant in accordance with an illustrative embodiment. In one embodiment, the process ofFIGS.6and7may be implemented by wireless earpieces, individually, or as a set.
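Returning briefly toFIG.5, the local-versus-remote routing of steps508-516reduces to a short dispatch. A minimal Python sketch follows, assuming hypothetical stand-in classes for the earpieces and the wireless device; the real capability checks would consult sensors, storage, and processing limits as described for step508:

class Earpieces:
    LOCAL_INTENTS = {"heart rate", "speed", "timer"}  # hypothetical local capabilities

    def can_implement(self, request):                 # step 508
        return request["intent"] in self.LOCAL_INTENTS

    def retrieve(self, request):                      # step 510
        return {"heart_rate": 118}

    def act(self, request, data):                     # step 512
        return f"earpieces answer using {data}"

class WirelessDevice:
    def needs_earpiece_data(self, request):
        return request["intent"] == "workout summary"

    def act(self, request, data):                     # steps 514/516
        return f"wireless device answer using {data}"

def fulfill(request, earpieces, wireless_device):
    if earpieces.can_implement(request):              # implementable locally?
        return earpieces.act(request, earpieces.retrieve(request))
    # Beyond local resources: delegate, retrieving earpiece data only if needed.
    data = earpieces.retrieve(request) if wireless_device.needs_earpiece_data(request) else None
    return wireless_device.act(request, data)

print(fulfill({"intent": "heart rate"}, Earpieces(), WirelessDevice()))
print(fulfill({"intent": "directions"}, Earpieces(), WirelessDevice()))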
The wireless earpieces may be utilized as stand-alone devices or may communicate with one or more devices (e.g., a smart phone) through a connection, signal, or network. The process may begin by receiving user preferences associated with the wireless earpieces (step602). In one embodiment, the user preferences may be provided directly through the wireless earpieces. For example, an interactive audio menu may audibly present a number of options to a user in order to receive various selections or feedback. The information may be presented by one or more speakers and user input may be received through one or more microphones of the wireless earpieces. The user may also provide the user preferences utilizing free form text, such as “track my heart rate at all times” or “automatically prepare biking information when I am biking.” In another embodiment, the user preferences may be selected utilizing a graphical user interface, web interface, or other interface available through a smart case, wireless device (e.g., app in communication with the wireless earpieces), a computing device, or other electronics configured to communicate with the wireless earpieces through a physical or wireless connection. Any number of menus, pages, icons, scroll options, radio buttons, and so forth may be utilized to provide the user preferences. User preferences received through a separate device may be synchronized to the wireless earpieces. Next, the wireless earpieces capture data and information about the user and the environment of the user based on the user preferences (step604). The wireless earpieces include a number of sensors for measuring user biometrics, the user's environment, and other applicable information. The user preferences may specify when the distinct sensor arrays are activated to perform measurements. For example, the user preferences may specify that pulse information, including applicable statistics and other mathematical analysis, is available to the user anytime the wireless earpieces are worn by the user. The user preferences may also set the wireless earpieces to monitor the user's words and actions to anticipate potential needs. The data and information may be utilized to perform analysis or calculations to provide valuable information, suggestions, recommendations, alerts, or other information to the user before even being requested. In one embodiment, the wireless earpieces may specifically monitor the health condition of the user. Next, the wireless earpieces determine whether to provide automatic assistance through the virtual assistant (step606). In one embodiment, the determination of step606may be performed automatically in response to the user preferences provided by the user. In another embodiment, the wireless earpieces may prompt the user with a question whether the user would like assistance from the virtual assistant. User input may also be received through tactile input, gestures near the wireless earpieces, or so forth. In one embodiment, the user preferences may specify a user location, orientation, determined action/activity, or user input that may be detected by the sensors of the wireless earpieces to automatically provide assistance through the virtual assistant of the wireless earpieces. In one embodiment, the wireless earpieces may detect that the user is jogging in a park close to his home.
As a result, the virtual assistant may have specific user biometrics, such as time jogging, heart rate, average heart rate, cadence, and steps per minute ready should the user provide a specified keyword, such as “work out status.” The user preferences may specify any number of keywords, gestures, head movements, or tactile input that may be utilized to provide the specified user biometrics. The user preferences may also include a timer or time period, such as every 10 minutes when the user's heart rate is over 120 bpm, to provide the specified user biometrics regardless of other selections that may be made utilizing the wireless earpieces or a connected wireless device. In another embodiment, the wireless earpieces may have an order for hot chocolate ready for electronic transfer to a nearby restaurant/shop based on the previous behavior of the user. In another embodiment, the wireless earpieces may detect the user is swimming or performing yoga and may automatically begin playing a preselected playlist of music while reporting user specified biometrics. In another embodiment, the wireless earpieces may automatically prepare a message, such as a text message indicating “I am on my way home” in response to the location of the user (e.g., at the end of a jog or bike ride, or when leaving the gym, etc.). The user preferences may be utilized to provide enhanced communication as well as a safety measure for the user. For example, the wireless earpieces may also text or post the user's last known location and activity for specified individuals that are trusted with that information (e.g., immediate family, friends, etc.). If the wireless earpieces determine to not provide automatic assistance through the virtual assistant during step606, the wireless earpieces continue to capture data and information about the user and the environment of the user based on the user preferences (step604). User preferences establishing how and when the virtual assistant of the wireless earpieces is utilized may be updated at any time as shown in step602. If the wireless earpieces determine to provide automatic assistance through the virtual assistant during step606, the wireless earpieces generate automatic assistance through the virtual assistant utilizing the data and information (step608). The virtual assistant may function in accordance with the user preferences previously established by the user. Next, the wireless earpieces communicate the automatic assistance to the user through the virtual assistant (step610). In one embodiment, the virtual assistant may automatically report sports statistics (e.g., distance travelled, steps, current heart rate, average heart rate, maximum heart rate, average speed, etc.) in response to determining the user has stopped or changed speeds (e.g., changes from jogging to running). The virtual assistant may also periodically report custom information to the user based on the user preferences. For example, the custom information may include a timer, user's temperature, and an environmental temperature. In one embodiment, the virtual assistant of the wireless earpieces may interject to provide warnings based on determined user biometrics that are associated with a user health condition. For example, if the virtual assistant determines, based on the user's biometrics, that she may be overheating, the virtual assistant may provide a warning to the user and encourage that the user rest, cool down, drink lots of water, and seek out medical attention as needed/available.
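One possible way to picture the preference-driven reporting described above (for example, every 10 minutes while the heart rate exceeds 120 bpm) is a small rule table evaluated against the current sensor state. The rule format and field names below are hypothetical assumptions for illustration:

import time

# Hypothetical rule format drawn from the example above: report specified
# biometrics every `interval` seconds while a condition holds.
RULES = [
    {
        "condition": lambda s: s["heart_rate"] > 120,
        "interval": 600,  # every 10 minutes
        "report": lambda s: f"Heart rate {s['heart_rate']} bpm, cadence {s['cadence']} steps per minute.",
    },
]

def evaluate_rules(sensor_state, last_fired, now=None):
    # Return the reports that are due under the user-preference rules.
    now = time.time() if now is None else now
    reports = []
    for i, rule in enumerate(RULES):
        due = now - last_fired.get(i, 0) >= rule["interval"]
        if due and rule["condition"](sensor_state):
            reports.append(rule["report"](sensor_state))
            last_fired[i] = now
    return reports

state = {"heart_rate": 131, "cadence": 172}
fired = {}
print(evaluate_rules(state, fired, now=600))  # rule fires: interval elapsed, condition holds
print(evaluate_rules(state, fired, now=900))  # []: only 300 seconds since the last report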
In other embodiments, the wireless earpieces may be utilized to provide marketing or business information to the user. For example, in response to the user approaching a retail mall, applicable coupons, discounts, promotions, incentives, or other communications may be played to the user so that at least the user may be made aware that such information is available. The wireless earpieces may include default preferences controlling how such information may be communicated to the user. The user preferences may also specify when, how, and where the user may be alerted of such information. The user may also allow the wireless earpieces to “listen” to applicable conversations to suggest potential shopping, marketing, or business information. In other embodiments, the wireless earpieces may implement an action or provide automatic assistance to address a health or medical status issue associated with the user. The sensors may read various user biometrics that may be utilized by the logic (e.g., processing and comparison against user supplied or predefined thresholds) to determine the health or medical status of the user. For example, the wireless earpieces may determine the user is overheating, passed out, lethargic, drunk, slurring speech, in shock, hypertensive, in diabetic shock, dehydrated, in pain, stressed, or so forth. Any number of health or medical conditions or states may be detected by the wireless earpieces based on the applicable health factors and parameters that may be ascertained by the sensors (e.g., pulse rate, respiration rate, temperature, position, orientation, voice characteristics, blood pressure, blood chemical content, skin measurements, impact/force levels, and associated statistics, trends, etc.). The sensors of the wireless earpieces (e.g., microphone, blood monitor, optical scanners, accelerometer, gyroscope, potentiometer, heart rate monitor, or other monitoring devices) may capture the applicable measurements. The wireless earpieces may identify warning signs as well as conditions to notify the user, guardians, administrators, caregivers, or so forth. FIG.7is a flowchart of a passive process for utilizing a virtual assistant in accordance with an illustrative embodiment. The process ofFIG.7may begin by executing a virtual assistant for the wireless earpieces (step702). The virtual assistant may represent common virtual or digital assistants, such as Siri, Alexa, Cortana, OK Google, Watson, or so forth provided by any number of service providers or companies. In one embodiment, the virtual assistant may run as a background process on the wireless earpieces that may be utilized at any time. The virtual assistant may also be activated based on user input, such as a voice command, tactile input, gesture, user movement, user preferences, or so forth. In other embodiments, the virtual assistant may be integrated with an operating system, kernel, or set of instructions available to the wireless earpieces. The virtual assistant may also represent an application executed by the wireless earpieces. Next, the wireless earpieces passively collect information and data utilizing sensors of the wireless earpieces (step704). The wireless earpieces may collect information in accordance with user preferences, settings, or other permissions of the wireless earpieces. As a result, the user may not feel that the wireless earpieces are invading the privacy of the user. The user may also specify how the information and data is saved, archived, or otherwise communicated with a wireless device or other applicable devices or systems.
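A minimal sketch of the threshold comparison described above follows; the threshold values and field names are hypothetical and illustrative only, not medical guidance or part of the embodiments:

# Hypothetical, illustrative thresholds; real values would come from the
# user-supplied or predefined medical parameters described above.
CONDITIONS = [
    ("overheating", lambda s: s["body_temp_f"] >= 103.0),
    ("dehydration", lambda s: s["hydration_pct"] < 40 and s["pulse_rate"] > 110),
    ("possible shock", lambda s: s["blood_pressure_sys"] < 90 and s["pulse_rate"] > 120),
]

def assess_health(sensor_state):
    # Compare biometrics against thresholds and return any detected conditions,
    # which could then be reported to the user, guardians, or caregivers.
    return [name for name, test in CONDITIONS if test(sensor_state)]

sample = {"body_temp_f": 103.4, "hydration_pct": 55, "pulse_rate": 118, "blood_pressure_sys": 112}
print(assess_health(sample))  # ['overheating']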
In one embodiment, the wireless earpieces may analyze the speech patterns of the user. For example, the wireless earpieces may be utilized to provide feedback for users that are learning a new language, trying to improve their grammar, vocabulary, or accent, or otherwise trying to enhance their speech and language characteristics. The wireless earpieces may also be utilized for medical purposes, such as helping a disabled user develop new speech or motor skills. Similarly, the wireless earpieces may be utilized to help a user regain speech and motor functions after a stroke, heart attack, or other medical condition. For example, the user may be prompted to say a number of words, phrases, or sentences and may then be coached, corrected, or otherwise guided to make improvements based on the voice input read from the user by the microphones of the wireless earpieces. In another embodiment, the wireless earpieces may analyze the speech of the user to determine applicable questions that the user may have. The applicable virtual assistant may utilize automatic speech recognition to transcribe human speech (e.g., commands, questions, dictation, etc.) into text or other formats for subsequent analysis. The virtual assistant may also perform natural language processing (e.g., speech tagging, noun-phrase chunking, dependency and constituent parsing, etc.) to translate transcribed text into parsed text. During step704, the virtual assistant may also perform question and intent analysis to analyze parsed text. For example, parsed text may be associated with particular user commands and actions (e.g., “Tell me my heart rate”, “How far have I run?”, “Set a timer for five minutes”, “Tell me when I have swum 500 meters”, etc.). Next, the wireless earpieces provide feedback to the user utilizing the virtual assistant (step706). In one embodiment, the feedback may be provided in response to a user input or request. In another embodiment, the feedback may be automatically provided to the user. In one example, the feedback of step706may be applicable to the language analysis performed during step704. For example, the virtual assistant may indicate that the correct saying in English is “for all intents and purposes” and not “for all intensive purposes” as it is commonly misstated. Similarly, the user may receive audible instructions on how to roll “r”s when speaking in Spanish, such as “in Spanish the word arriba sounds almost like uh-rd-rd-rd-iba” or other grammatical, vocabulary, phonetic, accent, or pronunciation instructions. A phonetic spelling may also be sent to a wireless device in communication with the wireless earpieces (e.g., (ré-b)). In another example, if the user asks in conversation, “Where is Julie?”, the virtual assistant may look up applicable mapping information during step704that may have been previously shared with the user by Julie (e.g., Find Friends, Glympse, Google Maps, Waze, etc.) for communication to the user, such as “Julie is 2.3 miles away and headed in your direction at 35 mph.” The illustrative embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.)
or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, embodiments of the inventive subject matter may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium. The described embodiments may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computing system (or other electronic device(s)) to perform a process according to embodiments, whether presently described or not, since every conceivable variation is not enumerated herein. A machine-readable medium includes any mechanism for storing or transmitting information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic instructions. In addition, embodiments may be embodied in an electrical, optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.), or wireline, wireless, or other communications medium. Computer program code for carrying out operations of the embodiments may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN), a personal area network (PAN), or a wide area network (WAN), or the connection may be made to an external computer (e.g., through the Internet using an Internet Service Provider). FIG.8depicts a computing system800in accordance with an illustrative embodiment. For example, the computing system800may represent a device, such as the wireless device204ofFIG.2. The computing system800includes a processor unit801(possibly including multiple processors, multiple cores, multiple nodes, and/or implementing multi-threading, etc.). The computing system includes memory807. The memory807may be system memory (e.g., one or more of cache, SRAM, DRAM, zero capacitor RAM, Twin Transistor RAM, eDRAM, EDO RAM, DDR RAM, EEPROM, NRAM, RRAM, SONOS, PRAM, etc.) or any one or more of the above already described possible realizations of machine-readable media. The computing system also includes a bus803(e.g., PCI, ISA, PCI-Express, HyperTransport®, InfiniBand®, NuBus, etc.), a network interface806(e.g., an ATM interface, an Ethernet interface, a Frame Relay interface, SONET interface, wireless interface, etc.), and a storage device(s)809(e.g., optical storage, magnetic storage, etc.). The system memory807embodies functionality to implement all or portions of the embodiments described above.
The system memory807may include one or more applications or sets of instructions for implementing a virtual assistant to communicate with one or more wireless earpieces. The virtual assistant may be stored in the system memory807and executed by the processor unit801. As noted, the virtual assistant may be similar or distinct from a virtual assistant utilized by the wireless earpieces. Code may be implemented in any of the other devices of the computing system800. Any one of these functionalities may be partially (or entirely) implemented in hardware and/or on the processing unit801. For example, the functionality may be implemented with an application specific integrated circuit, in logic implemented in the processing unit801, in a co-processor on a peripheral device or card, etc. Further, realizations may include fewer or additional components not illustrated inFIG.8(e.g., video cards, audio cards, additional network interfaces, peripheral devices, etc.). The processor unit801, the storage device(s)809, and the network interface806are coupled to the bus803. Although illustrated as being coupled to the bus803, the memory807may be coupled to the processor unit801. The computing system800may further include any number of optical sensors, accelerometers, magnetometers, microphones, gyroscopes, temperature sensors, and so forth for verifying user biometrics, or environmental conditions, such as motion, light, or other events that may be associated with the wireless earpieces or their environment. The features, steps, and components of the illustrative embodiments may be combined in any number of ways and are not limited specifically to those described. In particular, the illustrative embodiments contemplate numerous variations in the smart devices and communications described. The foregoing description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. It is contemplated that other alternatives or exemplary aspects are considered included in the disclosure. The description is merely examples of embodiments, processes or methods of the invention. It is understood that any other modifications, substitutions, and/or additions may be made, which are within the intended spirit and scope of the disclosure. From the foregoing, it can be seen that the disclosure accomplishes at least all of the intended objectives. The previous detailed description is of a small number of embodiments for implementing the invention and is not intended to be limiting in scope. The following claims set forth a number of the embodiments of the invention disclosed with greater particularity. | 82,667 |
11861267 | DETAILED DESCRIPTION Exemplary systems, devices and methods may include an interactive design tool for real-time architectural adaptation. This may include a user device including a hardware processor, physical memory and a user interface. The user device may provide operations to generate a virtual reality (VR) architectural session including a toolbelt with a virtual selection tool for adaptation of at least one of an environment, an object and an avatar. Operations may include to receive or select a selection spot on the object by a projection between the virtual selection tool and the object, receive or select an adaptation relative to at least one of the object, the environment and the avatar; and display the adaptation to the at least one of the object, the environment and the avatar in real-time during the VR architectural session. These may further include to receive a second adaptation relative to the at least one of the object, the environment and the avatar, receive a third adaptation relative to the at least one of the object, the environment and the avatar, change one or more portions of the at least one of the object, the environment and the avatar according to one or more of the first, second and third adaptations, move one or more portions of the at least one of the object, the environment and the avatar according to one or more of the first, second and third adaptations, and build one or more new portions of the at least one of the object, the environment and the avatar according to one or more of the first, second and third adaptations. FIG.1illustrates an exemplary system100, for example, an interactive design tool including a system for real-time architectural adaptation. System100may take many different forms and include multiple and/or alternate components, structures, and arrangements. While an exemplary system100is shown, the exemplary components are not intended to be limiting, and additional or alternative components and/or implementations may be used and are contemplated. FIG.1illustrates an exemplary system100, e.g., a network system for an interactive design tool. System100may include user101(e.g., admin user101aand observer users101b,c,d,e), user device102(e.g., devices102a,b,c,d,e), processor103(e.g., hardware processor), memory105(e.g., physical memory), user interface107(e.g., devices107a,b,c,d,e), server108(e.g., external modeling tools108aand/or cloud server108b), program109, database111(e.g., databases111a,b), export113(e.g., digital model export), toolbelt115, authenticate119(e.g., check local license key against license key database and permit use of application if authenticated), environment121, avatar123(e.g., avatars123a,b,c), and object125(e.g., objects125a,bsuch as, but not limited to, architectural elements like architectural finishes, architectural geometry (walls, ceilings, floors, and doors) or furnishings like windows, doors, chairs, lighting fixtures, or kitchen equipment, to name a few). System100, by way of user interface107and program109in communication with one or more device102, processor103, memory105, server108, and/or database111, may include one or a combination of input-output, display and/or hardware devices such as a mobile, headset, handheld and/or touchscreen device for providing a virtual session (e.g., architectural session) using virtual reality, augmented reality, audiovisual, and/or tactile inputs and outputs.
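The selection-and-adaptation operations summarized above may be pictured as a small session interface. The following Python sketch is illustrative only; every class, method, and field name is a hypothetical assumption, and the projection between the virtual selection tool and an object is reduced to a point-to-ray distance test:

import math
from dataclasses import dataclass, field

def distance_to_ray(point, origin, direction):
    # Perpendicular distance from a point to a ray; direction is unit length.
    v = [p - o for p, o in zip(point, origin)]
    t = max(0.0, sum(vi * di for vi, di in zip(v, direction)))
    closest = [o + t * d for o, d in zip(origin, direction)]
    return math.dist(point, closest)

@dataclass
class SceneObject:
    name: str
    position: tuple
    material: str = "default"

@dataclass
class ArchitecturalSession:
    objects: list = field(default_factory=list)

    def select(self, ray_origin, ray_direction):
        # Projection between the virtual selection tool and scene objects.
        return min(self.objects, key=lambda o: distance_to_ray(o.position, ray_origin, ray_direction))

    def adapt(self, obj, **changes):
        # Apply an adaptation (move, material change, etc.) and show it immediately.
        for attr, value in changes.items():
            setattr(obj, attr, value)
        print(f"updated {obj.name}: position={obj.position}, material={obj.material}")

session = ArchitecturalSession([SceneObject("wall", (2.0, 0.0, 0.0))])
target = session.select((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
session.adapt(target, material="brushed concrete")

In such a sketch, select resolves the selection spot for the virtual selection tool and adapt mutates the object and refreshes the display immediately, mirroring the real-time behavior described.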
System100may adapt, by user interface107, program109and/or processor103, one or more portions of the operations, user interfaces, and modeling information (e.g., one or more of environment121, avatar123, and object125). System100may adapt to include additional plug-ins for other modeling software to facilitate additional connectivity between System100and other software. FIG.2illustrates an exemplary system200including, for example, a back-end architecture for providing operations of an interactive design tool190. System200may take many different forms and include multiple and/or alternate components, structures, and arrangements. While an exemplary system is shown, the exemplary components are not intended to be limiting, and additional or alternative components and/or implementations may be used. System200may include program109configured to provide operations of an interactive design tool190. System200may include program109stored on memory105to provide the operations, user interfaces and modeling information by way of execution by processor103, display by user interface107, transfer by network117, and/or adaptation by any systems or components herein. System200may include plugin manager device201, toolbelt importer device215, toolbelt logic device217, front end controls/input device241, a user interface device249, canvas controller master class device251, and external libraries280. Plugin manager device201may receive and perform operations on modeling information of server108. Plugin manager device201may perform operations including, but not limited to, group objects203(e.g., by identifiers), divide meshes205(e.g., divide meshes by material identifiers including object id, material id, object type, or material type), clone textures207(e.g., clone material textures by finding their source location in the object's metadata and duplicating the textures to a user-specified file-save location), pair209(e.g., pair textures with meshes by binding geometry properties to effect and material properties via the mesh's unique object id and the material's unique material id), and export format211(e.g., an interchange file format for interactive and/or 3D applications). Plugin manager device201may transfer modeling information by way of exporter213. Rather than export each polymesh individually, the mesh exporter213batches polymeshes by their object id, making the process more efficient and allowing for objects to be moved and otherwise transformed more easily in the system200. Toolbelt importer device215may receive and perform operations on modeling information that are received from exporter213. Toolbelt importer device215may provide operations including, but not limited to, analyze material references217, cache material references219, import meshes221(e.g., with material references and other related properties stored in the mesh's metadata), pair223(e.g., textures with meshes), and export225(e.g., in an interchange file format). Toolbelt importer device215may transfer adapted modeling information with front end controls, or other input devices241. Devices201and215are operable to collect object properties such as, but not limited to, name, address, manufacturer, price, dimensions and other parameters of an object. Toolbelt logic device217may exchange modeling information218with front end controls/input241. Toolbelt logic device217may provide operations including to exchange modeling information218between virtual reality master controller class219and master shared asset class239.
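A minimal sketch of the export-side batching and texture pairing described above follows, assuming a hypothetical dictionary representation of polymeshes (the field names object_id, material_id, and texture_path are illustrative and not taken from the disclosure):

from collections import defaultdict

def export_scene(polymeshes, save_dir):
    # Batch polymeshes by object id, record a cloned-texture destination, and
    # pair each mesh with its material via object id and material id.
    batches = defaultdict(list)
    for mesh in polymeshes:
        batches[mesh["object_id"]].append(mesh)  # group objects / divide meshes

    records = []
    for object_id, meshes in batches.items():
        for mesh in meshes:
            texture_src = mesh["metadata"]["texture_path"]  # source location in metadata
            texture_dst = f"{save_dir}/{texture_src.split('/')[-1]}"  # user-specified save location
            records.append({
                "object_id": object_id,               # binds geometry to its batch
                "material_id": mesh["material_id"],   # binds material to geometry
                "texture": texture_dst,
                "geometry": mesh["vertices"],
            })
    return records  # would be serialized to an interchange format by the exporter

meshes = [{"object_id": "chair_01", "material_id": "oak", "vertices": [(0, 0, 0)],
           "metadata": {"texture_path": "C:/textures/oak.png"}}]
print(export_scene(meshes, "C:/export/chair_01"))

Batching by object id means an imported object arrives as a single movable unit, which is consistent with the move and undo behavior described later.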
Toolbelt logic device217may provide operations on modeling information218including to teleport221(e.g., moves the user avatar through the environment), move223(e.g., moves objects around the user using user inputs), measure225(e.g., adds measurements and annotations as a 3D line to geometry), camera227(e.g., takes a screenshot of the user's view for later documentation), place object229(e.g., adds new geometry from a library of objects), draw231(e.g., adds 3D lines in a freeform style), create object233(e.g., creates new geometry in a freeform style), change material235(e.g., changes the object's finish material from a library of materials), and build or modify wall237(e.g., adds a wall geometry from a library of walls). Toolbelt logic device217may exchange adapted modeling information218with front end controls/input241. Front end controls241may exchange inputs and outputs to adapt modeling information218. Front end controls241may be in communication with and/or include user device102, user interface107, server108, database111, environment121, or a combination thereof. Front end controls241may exchange information with toolbelt logic device217and user interface device249. User interface device249may include canvas controller251(e.g., canvas controller master class) and provide operations including, but not limited to, import253, message post255, change camera257, layers259(e.g., allowing users to turn on or off objects by category, for example turn off all floors, or all furniture, or isolate mechanical geometry, etc.), settings261(e.g., allows users to change visual appearance such as bloom, lighting intensity, sun position, saturation, and contrast), and network263(e.g., controls how to join an online multi-player session or host a new session for others to join). The external libraries280include a library manager282for sorting, sourcing, storing, and making available data and other information that a user may desire to access to aid them in the design process. Examples of the information that is available in the library manager282include, but are not limited to, place object library284, build all library286and material library288. It will be appreciated that the system200may have other libraries. The system200also permits multiple users to interact with the same computer using various output screens. If the computer102is connected to a VR headset107b(user interface107), a user may engage with the avatar123a, environment121, or object125through the VR Master Controller Class219user interface249while simultaneously another user101c,101dand/or101emay interact with the avatar123, environment121, or object125through the Canvas Controller Device251user interface via a keyboard102b, mouse, and standard monitor. In other iterations this may also involve multiple users101c,101d,101ewith remotely connected devices such as an AR headset or other devices. Thus, the present system200is operable to permit more than one user to simultaneously interact with other users to collaborate in a design session. FIG.3illustrates an exemplary system300including, for example, a front-end architecture302for providing operations of an interactive design tool190. System300may take many different forms and include multiple and/or alternate components, structures, and arrangements. While an exemplary system300is shown, the exemplary components are not intended to be limiting, and additional or alternative components and/or implementations may be used.
System300may include program109configured to provide operations of an interactive design tool190. System300may include program109stored on memory105to provide the operations, user interfaces and modeling information by way of execution by processor103, display by user interface107, transfer by network117, and/or adaptation by any systems or components herein. System300may include observer component301and administrator component303to generate one or more sessions of environment121by way of one or more processor103, memory105, user interface107and environment121. Observer component301may include inputs, outputs, buttons and/or selections such as, but not limited to, a first observer user input305(e.g., which is operable to permit the user to click a “selection button,” “menu buttons,” “undo buttons,” or “scroll buttons” to engage with virtual content around them), ray clickable tool selection306(e.g., a three dimensional user interface composed of buttons hovering around the VR user in a toolbelt-like orientation, following their avatar123position), a second observer user input307(e.g., which may be operable to allow another observer to provide inputs to engage the virtual content around them), and tool-specific settings and options308(e.g., when the material tool is selected, the material library editor appears on the controller wand in the user's hand. When the place object tool229is selected, the place object library284editor appears in the user's hand. Also, different “help” flags appear on the controllers depending on the tool selected. For example, when the tape measure tool is selected, a “tool tip” appears on the controller that says “click once to begin a measurement, click again to save the measurement”), tool execution309, selection tool311, and environment implementation and response313(e.g., environment121). In this iteration we show a VR headset and controllers with specific button mappings; however, the invention described here could include any future iterations of VR and AR hardware implementing clickable tool selection with physical controller buttons that can be used to implement our ray clickable tool selection. Administrator component303may include inputs, outputs, buttons and/or selections such as administrator inputs321(e.g., camera modes, camera settings, import tools, and other menu-based controls of the visual, network or import environment), visualization settings/controls323, and observer inputs325(e.g., tools for importing new geometries and saving or loading notes drawn by the users. These notes serve as ‘meeting minutes’ for any changes that a user wants to make or notes they want to leave a client or other architect). The administrator inputs321may also include 3D input buttons317, network settings327, observer view preferences329(e.g., fly-through mode, VR mode, third person VR mode (where the camera is behind the VR user's avatar), or shoulder VR mode (where the camera is located right behind the VR user's head)), a fly-through mode331(e.g., which allows users that don't have a VR headset to move through the space using the WASD keyboard inputs, mouse inputs, or arrow keyboard inputs), and a tool selection335that is accessible to an observer101. The administrator inputs321may also include “override” commands for the virtual reality user in which the “administrator” can change the observer's tool with a 2D button in case they are lost. Thus, the system300is operable to provide a virtual reality tool override.
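The tool override may be pictured as a shared tool-state object that both the observer's controller and the administrator's 2D button can set. A minimal sketch with hypothetical names:

class ToolController:
    # Shared active-tool state; the admin's 2D button changes the observer's
    # active VR tool when the observer is lost.
    TOOLS = {"teleport", "move", "measure", "camera", "place_object",
             "draw", "create_object", "change_material", "build_wall"}

    def __init__(self):
        self.active_tool = "teleport"

    def select(self, tool, by_admin=False):
        if tool not in self.TOOLS:
            raise ValueError(f"unknown tool: {tool}")
        source = "administrator override" if by_admin else "VR controller"
        self.active_tool = tool
        print(f"active tool set to {tool} via {source}")

controller = ToolController()
controller.select("measure")                  # observer picks from the toolbelt
controller.select("teleport", by_admin=True)  # administrator rescues a lost user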
The administrator component303may further include user interfaces107, and server108(e.g., cloud server108b). This system300is designed to work through any combination of the following user groups: administrator only, observer only, or administrator and observer working together. FIG.4illustrates an exemplary process400of program109including, for example, export401with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including, but not limited to, viewport403, toolbelt VR plug-in405, export dialog407, save as dialog409, exported file411, or a combination thereof. These may include toolbelt115to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. The export process401automatically packages all image textures used on materials in the architectural model and places them in a folder that correlates to the exported geometry file's name. This folder is automatically parsed through during the import process in order to reconstruct every material and properly match it to the geometry it is attached to within our software. FIG.5illustrates an exemplary process500of program109including, for example, toolbelt VR import501with operations provided by way of one or more processor103, memory105, user interface107and environment121. Process500may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including, but not limited to, toolbelt VR viewport503, import dialog505, import file507, import file509, import file511, import object layers513, or a combination thereof. These may include toolbelt115to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. Besides the manual import button, an introductory user prompt may be displayed on startup asking the user if they would like to begin their project by importing a file. FIG.6illustrates an exemplary process600of program109including, for example, VR tools teleport601with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used.
For example, processor103may execute VR tools teleport601to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including select object603(e.g., select object to teleport), select location605(e.g., destination for object), or a combination thereof. These may include toolbelt115with a virtual selection tool607for providing projection609onto a selection spot611to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. Teleport “regions” may be locked and unlocked to control where users go. For example, if object “types” are tagged as “floor” or “stair” the teleport function is turned on when the user points at them. If the user tries to teleport into a wall, they will not be able to do so. The process600controls the user experience and prevents the user from having an unrealistic or uncomfortable experience in the VR headset. FIG.7illustrates an exemplary process700of program109including, for example, VR tools move701with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate a plurality of operations, user interfaces and modeling information for display by way of user interface107including select move operations703, move object705, or a combination thereof. These may include toolbelt115with first and second virtual-in-virtual (VIV) user interfaces707a,bfor controlling virtual selection tool607, projection609and selection spot611to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. Additionally, moving an object125will register its previous position to an “undo” index. If control+z on the keyboard or the “undo buttons” on the controller are clicked, the object125will move back to its last position. This list of previous positions is additive, so the user could move an object125ten times and hit undo ten times to return it sequentially back to its previous positions until it returns to its original position. FIG.8illustrates an exemplary process800of program109including, for example, VR tools geometry creator801with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including select origin803, define x, y and z-axis sizes/dimensions805, or a combination thereof.
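A minimal sketch of the additive “undo” index described above for the move tool follows (the geometry creator discussed next pushes properly created geometry onto the same register); the class and field names are hypothetical:

class UndoRegister:
    # Additive history of previous positions, most recent last.
    def __init__(self):
        self._history = []  # (object, previous_position) pairs

    def record_move(self, obj, previous_position):
        self._history.append((obj, previous_position))

    def undo(self):
        # Bound to control+z on the keyboard or the controller's undo buttons.
        if not self._history:
            return None
        obj, previous_position = self._history.pop()
        obj["position"] = previous_position
        return obj

register = UndoRegister()
chair = {"name": "chair", "position": (0, 0, 0)}
for step in range(1, 4):                      # move the chair three times
    register.record_move(chair, chair["position"])
    chair["position"] = (step, 0, 0)
for _ in range(3):                            # three undos walk it back sequentially
    register.undo()
print(chair["position"])  # (0, 0, 0): returned to its original position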
The geometry creation operations may include toolbelt115with first and second VIV user interfaces707a,bfor controlling virtual selection tool607, projection609and selection spot611for moving, changing, adapting and resizing a first footprint origin area807to a second footprint destination area809to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. If a user mistake occurs during the creation of a new geometry, the operation can be cancelled by pressing the undo button. If an invalid geometry is created, such as one with a height, width, or length dimension of zero, the geometry creation operation is cancelled. If a geometry is properly created, the geometry is added to the “undo” register, as described above for the move tool. FIG.9illustrates an exemplary process900of program109including, for example, VR tools material painter901with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process900is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including, but not limited to, select material903, object paint905, or a combination thereof. These may include toolbelt115with first, second and third VIV user interfaces707a,b,cfor controlling object paint907by way of virtual selection tool607, projection609and selection spot611for moving, changing, adapting and resizing a first object paint area907to a second object paint area911to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. A library288of materials which is tied to architectural finishes may be provided in the cloud108b; however, users may also create their own materials by uploading images and changing properties. The system and process900also allow outside vendors to include material packages as plug-ins. FIG.10illustrates an exemplary process1000of program109including, for example, computer user interface camera tools1001with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, including but not limited to, user interfaces and modeling information for display by way of user interface107including toolbelt115, third person camera1003, shoulder camera1005, first person VR camera1007, fly-through camera1009, or a combination thereof.
These may include environment121, avatar123, object125, and adaptation menu1110including corresponding first, second, third and fourth view selections1111a,b,c,dto provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. The fly-through camera1009can be used by an administrator while the VR camera1007may be used by an operator. If no VR headset is detected on the user's computer, the fly-through camera1009will be the default camera. FIG.11illustrates an exemplary process1100aof program109including, for example, computer user interface interaction tools1101with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including toolbelt115, 3D pointer tool1103, layer management dialog1105, 3D pointer tool1111, import geometry dialog1113or a combination thereof. Toolbelt115may provide associated indicator menus1104,1106,1112, and1114to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. The process1100ais operable to provide a multi-user/multiplayer mode. When in multiplayer mode, the avatar123positions and orientations, as well as all avatar tools such as pointer, drawing tool, etc. are shared among all users on the network. Every multiplayer user can see the position, gestures, and tools of all other users. Any changes to geometry, notes, or drawings are also translated in their proper position to every other user. FIG.12illustrates an exemplary process1100bof program109including, for example, computer user interface interaction tools1101with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including projection609, selection spot611, place note dialog1107, place note in space1109, network connect dialog1115, plan view window1117or a combination thereof. This may provide associated indicator menus1108,1110,1116, and1118to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. The network button brings up a dialog that allows users to “host” or “join” a network session. Creating a session hosts the user's external and internal IP addresses on a server and facilitates the connection to the host's device from any number of external users.
Joining a session connects your computer to the user that hosted an already existing session. If the session name is incorrect or a user is unable to connect, details about the connection problem are displayed to the user. The host is informed of any users that join their session and can see their names in the network dialog and their positions in VR space. FIG.13illustrates an exemplary process1200of program109including, for example, VR user interface toolbelt and controllers1201with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process is illustrated through a first-person perspective of the VR user looking down at the virtual tools hovering around their waist as three dimensional buttons that are ray clickable through controller input or override commands entered by the administrator at the computer terminal. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107for providing toolbelt115with first and second virtual selection tools607a,b, projection609, selection spot611, VIV selector orientation viewers1205a,b,c, selector buttons1203a-j, and VIV selection spot viewers1207a,bto provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. FIG.14illustrates an exemplary process1300of program109including, for example, VR tools place object1301with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including toolbelt115for providing select object1303including projection609, selection spot611, VIV user interfaces707a,b,cto provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. This is a library of pre-loaded objects that comes with the toolbelt software. It is categorized into object types such as small seating, large seating, tables, kitchen equipment, storage, etc. The system is able to expand to accommodate objects supplied by outside vendors through plug-ins to the software. FIG.15illustrates an exemplary process1400of program109including, for example, VR tools tape measurer1401with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. 
While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including set origin point1403providing toolbelt115with virtual selection tool607, projection609and selection spot611to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. In addition to placing a measurement and viewing its annotation, there are settings accessible using the VR controller's scroll buttons. These settings switch the “snap” mode of the measurer to snap to inches or feet. There is also an “orthographic” or freeform measurement option where you either draw lines on the xy, xz, or zy planes, or you draw between any two points. These settings are very useful for understanding architectural spaces through aligned or unaligned measurements and snapping to different scales. FIG.16illustrates an exemplary process1500of program109including, for example, VR tools camera1501with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including click-to-take snapshot1503providing toolbelt115, virtual selection tool607and VIV user interface707to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. This snapshot tool gives a real-time preview of the image you are going to take through the 3D camera object, like a virtual camera in the user's hand with a viewfinder. The user can change visual settings for the 3D camera snapshot only and preview them inside the 3D camera object. Once the user hits the snapshot button, it automatically encodes the snapshot and saves it to a predefined folder location. FIG.17illustrates an exemplary process1600of program109including, for example, VR tools build wall1601with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used.
For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including set origin point1603for building and modifying object125(e.g., a wall or doorway) using toolbelt115with virtual selection tool607to provide user inputs, outputs and corresponding operations for real-time feedback, movement, adaptation, measurement, viewing and image capture of object125, avatar123and/or environment121. The build wall tool1601also has a library of objects broken down by category and viewable through a controller-based interface. These include, for example, glass partition systems, solid walls, doorways, windows, etc. The system is able to expand to accommodate wall types supplied by outside vendors through plug-ins to the software. FIG.18illustrates an exemplary process1700of program109including, for example, VR tools doodle1701with operations provided by way of one or more processor103, memory105, user interface107and environment121. This process may take many different forms and include multiple and/or alternate steps, components, and arrangements. While an exemplary process is shown, the exemplary steps are not intended to be limiting, and additional or alternative steps, components and/or implementations may be used. For example, processor103may execute program109to generate one or a plurality of operations, user interfaces and modeling information for display by way of user interface107including doodle tool 3D mouse position1703and a 3D drawing1705made with the doodle tool's 3D mouse position. In order to create drawings, the user positions the 3D mouse position of the doodle tool1707in space and clicks the trigger on the controller. The user then moves the 3D mouse position1707to another position in space and a 3D drawing1709will be drawn in between the two positions. Users may use this tool to sketch out more gestural 3D notes in a space for documentation, communication, and later correction. With regard to the processes, systems, methods, heuristics, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain embodiments and should in no way be construed so as to limit the claims. Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent upon reading the above description. The scope should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the technologies discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the application is capable of modification and variation.
All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those knowledgeable in the technologies described herein unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary. The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter. | 38,847 |
11861268 | DETAILED DESCRIPTION Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the present embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the embodiments are merely described below, by referring to the figures, to explain aspects of the present description. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Hereinafter, the present disclosure will be described in detail with reference to the drawings so that those of ordinary skill in the art to which the present disclosure belongs can easily understand and reproduce it. FIG.1illustrates the internal configuration of an apparatus for auto-generating an AutoCAD® drawing according to an embodiment. An apparatus100for auto-generating an AutoCAD® drawing and a preprocessor110may use different terminals or may be integrated to use a single terminal. Terminals may include computers, laptop computers, smartphones, tablets, handheld devices, and wearable devices. Terminals may refer to devices that may run applications or programs, including a processor and a display. The apparatus100for auto-generating an AutoCAD® drawing may include a receiver120, a loading unit130, a display unit140, and an AutoCAD® drawing automatic creation interface150. The apparatus100for auto-generating an AutoCAD® drawing may receive input data110bfrom the preprocessor110. The preprocessor110may extract only the input data110bfrom strength calculation data110aon equipment provided by a strength calculation program. In the present disclosure, the input data110bmay refer to data required for drawing all components that constitute the equipment. In the apparatus100for auto-generating an AutoCAD® drawing, the loading unit130may load the input data received by the receiver120, and the loaded input data may be displayed on the display unit140and then, the AutoCAD® drawing automatic creation interface150may be activated to automatically generate the AutoCAD® drawing about the equipment. Referring toFIG.2, the preprocessor110may extract only the input data110bfrom the strength calculation data110areceived from the strength calculation program. To this end, the preprocessor110may include a data extracting unit112, a mapping table generating unit114, and an input data generating unit116. Hereinafter, each component of the preprocessor110will be described with reference toFIGS.3through6and then, the receiver120, the loading unit130, the display unit140, and the AutoCAD® drawing automatic creation interface150will be described with reference toFIGS.7through16. The data extracting unit112may receive strength calculation data (see320ofFIG.3) on certain equipment provided by the strength calculation program (see310ofFIG.3). An example of the strength calculation program310may include a compress program. The strength calculation data320may have an XML format, but this is only an example, and various modifications may be made. The data extracting unit112may extract input data in the format shown in an embodiment ofFIGS.4and5from the received strength calculation data320,320a, and320b.
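The extraction performed by the data extracting unit112can be pictured as reading values at preset positions of the XML strength calculation data, either singly or combined from several positions. The following is an illustrative sketch only: the element paths, field names and sample document are hypothetical, since the actual preset positions depend on the strength calculation program and are not specified in this description.

```python
import xml.etree.ElementTree as ET

# Hypothetical preset positions: each input-data field is read from a fixed
# location in the strength calculation XML. Paths are illustrative only.
PRESET_POSITIONS = {
    "design_pressure": "./Design/Pressure",
    "design_temperature": "./Design/Temperature",
    "shell_material": "./Materials/Shell",
}

def extract_single_values(xml_text: str) -> dict:
    """Extract a data value at each preset position (one source per field)."""
    root = ET.fromstring(xml_text)
    row = {}
    for field, path in PRESET_POSITIONS.items():
        node = root.find(path)
        row[field] = node.text if node is not None else None
    return row

def combine_values(xml_text: str) -> dict:
    """Combine data values taken from a plurality of preset positions
    into a single input-data field (illustrative combination only)."""
    root = ET.fromstring(xml_text)
    size = root.findtext("./Nozzle/Diameter", default="")
    unit = root.findtext("./Nozzle/DiameterUnit", default="")
    return {"nozzle_size": f"{size} {unit}".strip()}

if __name__ == "__main__":
    sample = """<Strength>
      <Design><Pressure>1.2</Pressure><Temperature>150</Temperature></Design>
      <Materials><Shell>SA-516-70</Shell></Materials>
      <Nozzle><Diameter>6</Diameter><DiameterUnit>in</DiameterUnit></Nozzle>
    </Strength>"""
    print(extract_single_values(sample))
    print(combine_values(sample))
```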
In this case, input data330aand330bmay be a data sheet having an XLS format, but one or more embodiments are not limited thereto, and various modifications may be made. The input data330aand330bmay be a data sheet including at least one of general information (see330cofFIG.6), design data330aand330b, a material list, a nozzle list, a nozzle load, foundation loading data, head data, shell data, cone data, girth flange data, stiffener ring data, skirt data, baseblock data, support lug data, support leg data, and saddle data. The general information (see330cofFIG.6) may include vessel position information, support base elevation information, C. O. G. information, and information such as a pressure unit, a temperature unit, a length unit, a weight unit, and a velocity unit. An example of the design data330aand330bmay be seen by referring toFIGS.3and4. The material list may include information about a shell, a head, a skirt, a support lug, a support leg, a saddle, a nozzle neck, a nozzle flange, a nozzle blind, and nozzle fitting. Referring back toFIG.2, the data extracting unit112may extract a data value at a preset position of the strength calculation data320ato extract input data330ahaving an XLS format, as shown in an example ofFIG.4. The data extracting unit112may extract data values at a plurality of preset positions of the strength calculation data320b, combine the extracted data values and extract the input data330bhaving an XLS format. Referring toFIG.6, the mapping table generating unit114may generate a mapping table340so as to provide the general information (see330cofFIG.6) including unit information corresponding to a preset item for drawing each of all components that constitute equipment. The input data generating unit116may generate input data by using the input data330aand330bextracted by the data extracting unit112in the manner as inFIGS.4and5and the mapping table340ofFIG.6. Referring back toFIG.1, the receiver120may receive the input data110bfrom the preprocessor110, and the loading unit130may load the input data110binto the apparatus100for auto-generating an AutoCAD® drawing.FIG.7illustrates an example in which loading is performed by a loading unit700. Referring toFIG.7, before loading of the input data110bis completed, the apparatus100for auto-generating an AutoCAD® drawing has no information on components constituting the equipment for which the AutoCAD® drawing is to be generated, and may provide only an icon bar (see810ofFIG.8) and an AutoCAD® drawing automatic creation interface (see840ofFIG.8). The AutoCAD® drawing automatic creation interface may be activated only after loading is completed. When loading of the input data110bis completed, the shape of the equipment and information about each component constituting the equipment may be displayed on the display unit140. Referring toFIG.8, when loading of the input data110bis completed, a display unit800may include a shape display unit850for displaying the shape of the equipment, a component icon bar820for displaying a list of components of the equipment, in which all components are each displayed as an icon, and a nozzle icon bar830for displaying a list of nozzles of the equipment, in which all the nozzles are each displayed as an icon.
According to an embodiment, the list of components and the list of nozzles respectively displayed on the component icon bar820and the nozzle icon bar830may be linked with the input data110b, and information provided by information providing units850aand850bon components and nozzles included in each of the list of components and the list of nozzles may be also linked with the input data110band automatically generated. Referring toFIG.12, input data330dmay be linked with data1210described in AutoCAD® drawings1220and1230. For example, when the input data330dis a nozzle list, the input data330dmay be linked with the AutoCAD® drawing1220(a side view), in which the data1210about the nozzle list is displayed, and with the AutoCAD® drawing1230(a cross-sectional view), in which the nozzles1221to1225and1231to1234on the nozzle list are displayed. A list of all components constituting the equipment shown in the shape display unit850may be displayed on the component icon bar820. The user may check the component icon bar820and may check whether there is a list of components that are omitted or included in error. A list of nozzles constituting the equipment shown in the shape display unit850may be displayed on the nozzle icon bar830, and the user may check the nozzle icon bar830and check whether there is a list of nozzles that are omitted or included in error. The display unit140may further include information providing units850aand850bthat, when an arbitrary component icon in the component icon bar820or an arbitrary nozzle icon in the nozzle icon bar830is selected, load the identifier, type, diameter, size, thickness, material or other information of the corresponding component or nozzle from the input data so as to provide the input data through a separate window. FIG.9illustrates an example in which, when an icon “Top Head”821in the component icon bar820is selected and activated, the information providing unit850aprovides information including information860such as an identifier, type, material, inner radius, used thickness, minimum thickness, and length of the icon “Top Head”821and shape information870of the icon “Top Head”821. In an embodiment, when there is an error in each item provided by the information providing unit850a, the user may manually correct and store the error. FIG.10illustrates an example in which, when a nozzle “N1”831in the nozzle icon bar830is selected and activated, the information providing unit850bprovides information such as a mark, service usage, code, Flange Type, Flange Rating, Flange Size, Flange Face, Flange Material, nozzle neck, Schedule, and Material information, information such as radius and thickness, and information including first shape information870aand second shape information870b. In an embodiment, when there is an error in each item provided by the information providing unit850b, the user may manually correct and store the error. FIG.11illustrates an example in which, when the AutoCAD® drawing automatic creation interface840is activated, an AutoCAD® drawing on the equipment shown in the shape display unit850is generated. The AutoCAD® drawing automatic creation interface840may be activated at any time after loading of the input data is completed.
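The icon-to-data linkage just described can be modeled as a keyed lookup into the loaded input data, with manual corrections written back to the same record. This is a minimal sketch only; the record keys and field names are hypothetical and merely mirror the items listed above (identifier, type, material, thickness, flange data, and so on).

```python
# Hypothetical in-memory form of the loaded input data: one record per
# component or nozzle icon, keyed by its mark (e.g., "Top Head", "N1").
input_data = {
    "Top Head": {"type": "Ellipsoidal", "material": "SA-516-70",
                 "inner_radius_mm": 950.0, "used_thickness_mm": 12.0},
    "N1": {"flange_type": "WN", "flange_rating": "150#",
           "flange_size_in": 6, "neck_schedule": "40"},
}

def show_info(icon_key: str) -> dict:
    """Information providing unit: load the record linked to a selected icon."""
    record = input_data[icon_key]
    for item, value in record.items():
        print(f"{item}: {value}")
    return record

def correct_and_store(icon_key: str, item: str, value) -> None:
    """Manually correct an erroneous item and store it back to the input data."""
    input_data[icon_key][item] = value

show_info("N1")
correct_and_store("N1", "flange_rating", "300#")  # user fixes an error
show_info("N1")
```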
After the procedures ofFIGS.7through10are performed, the user may check whether there is an error or omission in the information regarding the components constituting the equipment displayed on the shape display unit (see850ofFIG.8) or in the nozzle information; the AutoCAD® drawing automatic creation interface840may then be activated so that the AutoCAD® drawing on the equipment shown in the shape display unit850is generated, as shown inFIG.11. The generated AutoCAD® drawing on the equipment shown in the shape display unit850may have an AutoCAD® format. Also, the generated AutoCAD® drawing may simultaneously display at least one of a front view, a side view, a rear view and a cross-sectional view of the equipment, a nozzle list1110of the equipment generated by using the input data, design data1120preset for each of the components constituting the equipment, a material specification1130, and detail drawing information1140. Also, the generated AutoCAD® drawing may further include general note1150information. FIG.13illustrates an example of the nozzle list1110displayed on the AutoCAD® drawing,FIG.14illustrates an example of the design data1120displayed on the AutoCAD® drawing,FIG.15illustrates an example of the material specification1130displayed on the AutoCAD® drawing, andFIG.16illustrates an example of the detail drawing information1140displayed on the AutoCAD® drawing. FIG.17is a flowchart illustrating a method for auto-generating an AutoCAD® drawing according to an embodiment. A preprocessor may obtain strength calculation data about equipment having a first format from a strength calculation program (S1710) and may extract only input data required for drawing all of the components constituting the equipment from the strength calculation data to convert the extracted input data into a second format (S1720). After the loading unit loads input data having the second format (S1730), the display unit may display the shape of the equipment and information about each of the components constituting the equipment based on the loaded input data (S1740). Subsequently, when the AutoCAD® drawing automatic creation interface is activated, the AutoCAD® drawing may be automatically generated by using the input data (S1750). Methods according to an embodiment of the present disclosure may be implemented in the form of program instructions that can be executed through various computer units and recorded in a computer-readable recording medium. The computer-readable recording medium may include program instructions, data files, data structures, and the like singly or in combination. The program instructions stored in the computer-readable recording medium may be specially designed and configured for the present invention, or may be known and usable to those skilled in the art of computer software. In the method and apparatus for auto-generating an AutoCAD® drawing according to one or more embodiments, the problem that a user directly checks strength calculation data using a document or electronic file and manually creates drawings by directly inputting the confirmed strength calculation data using a drawing tool provided by an AutoCAD® program may be solved, and the problem that an error occurs due to omitted data in the process of manually creating a drawing may be solved.
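As an illustration of the flowchart ofFIG.17, steps S1710 through S1750 can be strung together as a simple pipeline. The function names and placeholder data below are hypothetical stand-ins for the preprocessor, loading unit, display unit and creation interface; they are not taken from the disclosure.

```python
def obtain_strength_data() -> str:
    """S1710: obtain strength calculation data in a first format (e.g., XML)."""
    return "<Strength><Design><Pressure>1.2</Pressure></Design></Strength>"

def extract_input_data(raw: str) -> dict:
    """S1720: extract only the data needed for drawing and convert it to a
    second format (an XLS-like sheet, shown here as a dict)."""
    return {"design": {"pressure": 1.2}, "nozzles": ["N1", "N2"]}

def load_input_data(sheet: dict) -> dict:
    """S1730: load the converted input data."""
    return sheet

def display_equipment(data: dict) -> None:
    """S1740: display the equipment shape and per-component information."""
    print("displaying:", data)

def generate_drawing(data: dict) -> None:
    """S1750: generate the drawing once the creation interface is activated."""
    print("drawing generated for nozzles:", data["nozzles"])

if __name__ == "__main__":
    data = load_input_data(extract_input_data(obtain_strength_data()))
    display_equipment(data)
    generate_drawing(data)  # interface activated after loading completes
```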
In addition, the problem that the user extracts and inputs the data required for drawings one by one, delaying the design period, may be reduced, so that the total manufacturing time may be reduced. It should be understood that embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in other embodiments. While one or more embodiments have been described with reference to the figures, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the following claims. | 14,357 |
11861269 | DETAILED DESCRIPTION The present invention provides for a spatially self-verifying array of Nodes. Specifically, Nodes may include devices capable of wireless communication transmission in logical communication with a processor and a digital storage. A position for each Node may be generated based upon values for position determination variables. By comparing the values for position determination variables between a single Node and multiple disparate Nodes, a position of respective Nodes in the array may be determined and verified. In some embodiments, Nodes are co-located with Sensors to quantify conditions within or proximate to Structures. Such Structures use Sensor groups to periodically and/or continuously quantify and transmit a current condition of the Structure. Sensor readings may be associated with a time index. Various embodiments include methods and apparatus for construction, Deployment and maintenance of a Structure with Intelligent Automation (device, system, machine or equipment item) engaged in logical processes and Structural Messaging to communicate conditions within or proximate to the Structure. Structural Messaging includes logical communications generated by the Intelligent Automation (such as a Sensor or machine) incorporated into, affixed to, or operated within or proximate to a Structure. In some aspects, a Sensor cluster (or a Sensor gateway, which may be a Sensor cluster connected to a communications array) may be embedded into a wall or other surface, such as an architectural aspect (e.g., a baseboard). The Sensors may be capable of quantifying a condition by generating a digital value based upon an environment in which the Sensor is placed. For example, the Sensors may detect vibration patterns, chemicals, temperatures, water, light waves or other indicia of a condition present. A remedial action device may, based upon a reading from the Sensors, be actuated in response to a quantified condition. In general, various embodiments of the present invention enable a Structure, such as a building or infrastructure, to be active as opposed to the former passive state. The active state enables the Structure to generate data descriptive of one or more of: a condition within a Structure; a condition proximate to the Structure; and an event experienced by the Structure; and in some embodiments an active state Structure is enabled to execute an action via automation based upon a Structural Message. The action based upon a Structural Message may be executed independent of a user intervention, or based upon approval of a user, such as via an app on a Smart Device. The present invention references prior applications and issued patents owned by the applicant relating to automated apparatus and methods for generating improved Augmented Virtual Models (sometimes referred to herein as an “AVM”) of a Structure. The AVM of the Property may include a conceptual model and progress through one or more of: a) a design stage; b) a build stage; c) a Deployment stage; d) a service stage; e) a modification stage; and f) a dispensing stage. As discussed more fully herein, an AVM according to the present invention includes original design data matched to As Built data captured via highly accurate geolocation, direction and elevation determination. As Built data is matched with a time and date of data acquisition and presented in two-dimensional (2D) and three-dimensional (3D) visual representations of the Property.
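One way to picture the spatially self-verifying array introduced above: inter-Node distance is one example of a position determination variable, and Node positions can be accepted only when measurements taken from multiple disparate Nodes agree with the modeled geometry within a tolerance. The sketch below is illustrative only and is not the specific verification logic of this disclosure; the measured values and node names are hypothetical.

```python
import math

# Hypothetical measured inter-Node distances, in meters; in practice these
# would come from wireless timing or signal-strength measurements, with each
# direction of a pair measured by a different Node.
measured = {("A", "B"): 4.02, ("B", "A"): 3.98,
            ("A", "C"): 3.01, ("C", "A"): 2.97}

positions = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (0.0, 3.0)}

def verify(tolerance: float = 0.1) -> bool:
    """Cross-check measurements from disparate Nodes against the geometry."""
    ok = True
    for (a, b), d_ab in measured.items():
        (xa, ya), (xb, yb) = positions[a], positions[b]
        modeled = math.hypot(xb - xa, yb - ya)
        if abs(d_ab - modeled) > tolerance:
            print(f"{a}->{b}: measured {d_ab} vs modeled {modeled:.2f} disagrees")
            ok = False
    return ok

print("array verified:", verify())
```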
The augmented models additionally include data relating to features specified in a Property design and data collected during building, Deployment, maintenance and modifications to the Property. In some embodiments, a fourth dimension of time may also be included. An AVM includes a three- or four-dimensional model in a virtual environment that exists parallel to physical embodiments modeled in the Augmented Virtual Model. Details of one or more physical Structures and other features within a real estate parcel are generated and quantified and represented in the Augmented Virtual Model. The AVM exists in parallel to a physical Structure in that the AVM includes virtual representations of physical Structures and additionally receives and aggregates data relevant to the Structures over time. The aggregation of data may be one or more of: a) according to an episode (e.g., onsite inspection, repair, improvement etc.); b) periodic; and c) in real time (without built-in delay). The experience of the physical Structure is duplicated in the virtual Augmented Virtual Model. The AVM may commence via an electronic model generated via traditional CAD software or other design type software. In addition, the AVM may be based upon values for variables, including one or more of: usage of a Structure; usage of components within the Structure; environmental factors encountered during a build stage or Deployment stage; and metrics related to Performance of the Structure. The metrics may be determined, for example, via measurements performed by Sensors located in and proximate to Structures located on the Property. In some embodiments, a technical library specific to a particular Property and location within the Property may be maintained for each Property and made accessible to an onsite technician and/or remote expert. The library may include but is not limited to details descriptive of: a Structure design, utilities, architectural and structural history, equipment/machinery manuals, repair bulletins, and repair/maintenance. Appropriate how-to videos may also be made available based upon an AVM with As Built and Experiential Data. In another aspect, a parts ordering function may be included in the Augmented Virtual Model. Augmented parts ordering may allow a technician to view an ordered part and view a virtual demonstration of the part in use and procedures for replacing the part. Aspects of the AVM may be presented via a user interface that may display on a tablet or other flat screen, or in some embodiments be presented in a virtual reality environment, such as via a virtual reality headset. Some exemplary embodiments may include updates to an AVM that include changes to: items or persons within the Structure, architectural or structural aspects; time and date notation of a change in location specific data; a location of an item or person updated according to coordinates such as X,Y,Z and distance data and/or an angle and distance data (or other information pertinent to a chosen coordinate system); X,Y data may include high level location designation within the street address via triangulation (e.g., a street address) and highly specific position designation (e.g., particular room and wall); combination of two types of position data; GPS, Differential GPS; references used during triangulation; aggregate data across multiple Structures for reference; designs that perform well; designs that fail; popularity of various aspects; access to and/or generation of multiple Augmented Virtual Models.
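The update entries listed above (time and date notation, X,Y,Z coordinate data, triangulation references) suggest a simple record shape for each location-specific change to an AVM. A minimal sketch with hypothetical field names; the actual record layout is not specified in the description.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AVMUpdate:
    """One location-specific change noted in the model (fields illustrative)."""
    x: float                      # position coordinates within the structure
    y: float
    z: float
    description: str              # e.g., "outlet moved", "wall repaired"
    references: list = field(default_factory=list)  # triangulation refs used
    timestamp: datetime = field(default_factory=datetime.now)  # time and date

log = [AVMUpdate(12.4, 3.1, 1.5, "data outlet added", ["REF-1", "REF-3"])]
print(log[0])
```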
In some preferred embodiments, the geographic location will be provided with accurately placed location reference points. The location reference points may be accessed during activities on a Property within or close to a Structure. (While accuracy may degrade outside the Property, the location reference points maintain accuracy within the Property.) Preferred embodiments may also include reference points accurately placed within a Structure located on the Property. As further discussed below, the reference points may include, by way of non-limiting example, a wireless transmission data transmitter operative to transmit an identifier and location data; a visual identifier, such as a hash code, bar code, color code or the like; an infrared transmitter; a reflective surface, such as a mirror; or other means capable of providing a reference point to be utilized in a triangulation process that calculates a precise location within the Structure or other Structure. Highly accurate location position may be determined via automated apparatus and multiple levels of increasingly accurate location determination. A first level may include use of a GPS device providing a reading to first identify a Property. A second level may use position transmitters located within, or proximate to, the Property to execute triangulation processes in view of on-site location references. A GPS location may additionally be associated with a high-level general description of a Property, such as one or more of: an address, a unit number, a lot number, a tax map number, a county designation, plat number or other designator. On-site location references may include one or more of: near field radio communication beacons at known X-Y position reference points; line of sight with physical reference markers; coded via ID such as bar code, hash tag, and alphanumeric or other identifier. In some embodiments, triangulation may calculate a position within a boundary created by the reference points to within millimeter range. In some embodiments, Differential GPS may be used to accurately determine a location of a Smart Device with a sub-centimeter accuracy. In addition to a position determination, such as latitude and longitude, or other Cartesian Coordinate (which may sometimes be indicated as an “X” or “Y” coordinate), Polar Coordinate, or GPS coordinate, the present invention provides for a direction (sometimes referred to herein as a “Z” direction and elevation or “r”) of a feature for which As Built data is captured and imported into the AVM. According to the present invention, a direction dimension may be based upon a movement of a device. For example, a device with a controller and an accelerometer, such as mobile Smart Device, may include a user display that allows a direction to be indicated by movement of the device from a determined location acting as a base position towards an As Built feature in an extended position. In some implementations, the Smart Device may first determine a first position based upon triangulation with the reference points and a second position (extended position) also based upon triangulation with the reference points. The process of determination of a position based upon triangulation with the reference points may be accomplished, for example via executable software interacting with the controller in the Smart Device, such as, for example via running an app on the Smart Device.
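The position-then-direction determination described above can be illustrated with planar trilateration: solve for a location from distances to known reference points, repeat for the base position and the extended position, and take the difference as the direction of interest. The sketch below assumes exact two-dimensional distances to three reference points; a real implementation would have to handle measurement noise, three dimensions and elevation.

```python
import numpy as np

def trilaterate(refs, dists):
    """Solve for (x, y) from distances to three known reference points by
    subtracting the circle equations to obtain a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = refs
    r1, r2, r3 = dists
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Hypothetical reference points and measured distances (meters).
refs = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
base = trilaterate(refs, [7.0711, 7.0711, 7.0711])       # first position (~5, 5)
extended = trilaterate(refs, [7.8102, 6.4031, 7.8102])   # extended position (~6, 5)

direction = extended - base
direction /= np.linalg.norm(direction)  # unit vector toward the As Built feature
print("base:", base, "direction:", direction)
```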
In combination with, or in place of, directional movement of a device utilized to quantify a direction of interest to a user, some embodiments may include an electronic and/or magnetic Directional Indicator that may be aligned by a user in a direction of interest. Alignment may include, for example, pointing a specified side of a device, or pointing an arrow or other symbol displayed upon a user interface on the device towards a direction of interest. In a similar fashion, triangulation may be utilized to determine a relative elevation of the Smart Device as compared to a reference elevation of the reference points. It should be noted that although a Smart Device is generally operated by a human user, some embodiments of the present invention include a controller, accelerometer, data storage medium, Image Capture Device, such as a Charge-Coupled Device (“CCD”) capture device or an infrared capture device being available in a handheld or unmanned vehicle or other Agent. An unmanned vehicle may include for example, an unmanned aerial vehicle (“UAV”) or ground level unit, such as a unit with wheels or tracks for mobility and a radio control unit for communication. In some embodiments, multiple unmanned vehicles may capture data in a synchronized fashion to add depth to the image capture and/or a three-dimensional and four-dimensional (over time) aspect to the captured data. In some implementations, UAV position will be contained within a perimeter and the perimeter will have multiple reference points to help each UAV (or other unmanned vehicle) determine a position in relation to static features of a building within which it is operating and also in relation to other unmanned vehicles. Still other aspects include unmanned vehicles that may not only capture data but also function to perform a task, such as painting a wall, drilling a hole, cutting along a defined path, or performing another function. As stated throughout this disclosure, the captured data may be incorporated into the virtual model of a Structure. In another aspect, captured data may be compared to a library of stored data using image recognition software to ascertain and/or affirm a specific location, elevation and direction of an image capture location and proper alignment with the virtual model. Still other aspects may include the use of a compass incorporated into a Smart Device. In still other implementations, a line of sight from a Smart Device, whether user operated or deployed in an unmanned vehicle, may be used to align the Smart Device with physical reference markers and thereby determine an X,Y position as well as a Z position. Electronic altitude measurement may also be used in place of, or to supplement, a known altitude of a nearby reference point. This may be particularly useful in the case of availability of only a single reference point. Reference points may be coded via identifiers, such as a UUID (Universally Unique Identifier), or other identification vehicle. Visual identifiers may include a bar code, hash tag, alphanumeric or other symbol. Three-dimensional markers may also be utilized. By way of non-limiting example, on site data capture may include designation of an X,Y,Z reference position and one or more of: image capture; infrared capture; temperature; humidity; airflow; pressure/tension; electromagnetic reading; radiation reading; sound readings (e.g., level of noise, sound pattern to ascertain equipment running and/or state of disrepair), and other vibration or Sensor readings (such as an accelerometer or transducer).
In some embodiments, vibration data may be used to profile use of the Structure and/or equipment and machinery associated with the Structure. For example, vibration detection may be used to determine a presence of a person or vehicle; a type of activity taking place; and machine operation, including automated determination between proper operation of a piece of equipment and/or machinery and faulty operation of the equipment and/or machinery. Glossary “Agent” as used herein refers to a person or automation capable of supporting a Smart Device at a geospatial location relative to a Ground Plane. “Ambient Data” as used herein refers to data and data streams captured in an environment proximate to a Vantage Point and/or an equipment item that are not audio data or video data. Examples of Ambient Data include, but are not limited to, Sensor perception of: temperature, humidity, particulate, chemical presence, gas presence, light, electromagnetic radiation, electrical power, Moisture and mineral presence. “Analog Sensor” and “Digital Sensor” as used herein include a Sensor operative to quantify a state in the physical world in an analog or digital representation, respectively. “As Built” as used herein refers to details of a physical Structure associated with a specific location within the physical Structure or parcel and empirical data captured in relation to the specific location. “As Built Features” as used herein refers to a feature in a virtual model or AVM that is based at least in part upon empirical data captured at or proximate to a correlating physical location of the feature. Examples of As Built Features include placement of structural components such as a wall, doorway, window, plumbing, electrical utility, machinery and/or improvements to a parcel, such as a well, septic, electric or water utility line, easement, berm, pond, wet land, retaining wall, driveway, right of way and the like. “As Built Imagery” (Image Data) as used herein means image data generated based upon a physical aspect. “Augmented Virtual Model” (sometimes referred to herein as “AVM”) as used herein means a digital representation of a real Property parcel including one or more three-dimensional representations of physical Structures suitable for use and As Built data captured that is descriptive of the real Property parcel. An AVM includes As Built Features of the Structure and may include improvements and features contained within a Structure. “Bluetooth” as used herein means the Wireless Personal Area Network (WPAN) standards managed and maintained by Bluetooth Special Interest Group (SIG). Unless otherwise specifically limited to a subset of all Bluetooth standards, the term Bluetooth will encompass all Bluetooth standards (including, without limitation, Bluetooth 4.0; 5.0; 5.1 and BLE versions). “Deployment” as used herein means the placement into operation of one or more of: a Structure, machinery and an equipment item. “Deployment Performance” as used herein means one or both of: objective and subjective quantification of how one or more of: Structure, machinery and an equipment item operated, which may be depicted in an AVM. “Design Feature” as used herein, means a value for a variable descriptive of a specific portion of a Property.
A Design Feature may include, for example, a size and shape of a structural element or other aspect, such as a doorway, window, or beam; a material to be used; an electrical service; a plumbing aspect; a data service; placement of electrical and data outlets; a distance, a length, a number of steps; an incline; or other discernable value for a variable associated with a Structure or Property feature. “Digital Sensor” as used herein includes a Sensor operative to quantify a state in the physical world in a digital representation. “Directional Indicator” as used herein means a quantification of a direction generated via one or both of: analog and digital indications. “Experiential Data” as used herein means data captured on or proximate to a subject Structure, such data descriptive of a condition realized by the Structure. Experiential Data is generated by one or more of: Digital and/or Analog Sensors, transducers, Image Capture Devices, microphones, accelerometers, compasses and the like. “Experiential Sensor Reading” as used herein means a value of a Sensor output generated within or proximate to a subject Structure, such output descriptive of a condition realized by the Structure. An Experiential Sensor Reading may be generated by one or more of: digital and/or Analog Sensors, transducers, Image Capture Devices, microphones, accelerometers, compasses and the like. “Ground Plane” as used herein refers to a locally horizontal (or nearly horizontal) plane from which a direction of interest may be projected. An example of a Ground Plane is a floor of a Structure. “Image Capture Device” or “Scanner” as used herein refers to apparatus for capturing digital or analog image data. An Image Capture Device may be one or both of: a two-dimensional camera or a three-dimensional camera. In some examples an Image Capture Device includes a charge-coupled device (“CCD”) camera. “Intelligent Automation” as used herein refers to a logical processing by a device, system, machine or equipment item (such as data gathering, analysis, artificial intelligence, and functional operation) and communication capabilities. “Moisture” as used herein means a quantity of water, which may also mean a quantity of water relative to a larger volume (e.g., amount of water relative to air). “Multi-modal” as used herein refers to the ability of a device to communicate using multiple protocols and/or bandwidths. Examples of multimodal may include being capable of communication using two or more of: Bluetooth; Bluetooth Low Energy; WiFi; WiFi RT; GPS; ultrasonic; infrared protocols and/or mediums. “Node” as used herein means a device including at least a processor, a digital storage and a wireless transceiver. “Performance” as used herein may include a metric of an action or quantity. Examples of Performance may include metrics of: number of processes completed; energy efficiency; length of service; cost of operation; quantity of goods processed or manufactured; quality of goods processed or manufactured; yield; and human resources required. “Performance Level” as used herein means one or both of a quantity of actions executed and a quality of actions. “Property” as used herein shall mean one or more real estate parcels suitable for a deployed Structure that may be modeled in an AVM. “Ray” as used herein refers to a straight line including a starting point and extending indefinitely in a direction.
“Sensor” as used herein refers to one or more of a solid state, electro-mechanical, and mechanical device capable of transducing a physical condition or Property into an analogue or digital representation and/or metric. “Smart Device” as used herein includes an electronic device including, or in logical communication with, a processor and digital storage and capable of executing logical commands. “Structure” as used herein refers to a manmade assembly of parts connected in an ordered way. Examples of a Structure in this disclosure include a building; a sub-assembly of a building; a bridge, a roadway, a train track, a train trestle, an aqueduct; a tunnel, a dam, and a retainer berm. “Structural Message” as used herein refers to a logical communication generated by automation (such as a Sensor or machine) incorporated into, affixed to or operated within or proximate to a Structure. “Structural Messaging” as used herein refers to an action that generates and/or transmits a Structural Message. “Total Resources” as used herein shall mean an aggregate of one or more types of resources expended over a time period. “Transceive” as used herein refers to an act of transmitting and receiving data. “Transceiver” as used herein refers to an electronic device capable of one or both of transmitting and receiving data. “Vantage Point” as used herein refers to a specified location which may be an actual location within a physical Structure or a virtual representation of the actual location within a physical Structure. “Vector” as used herein refers to a magnitude and a direction as may be represented and/or modeled by a directed line segment with a length that represents the magnitude and an orientation in space that represents the direction. “Virtual Structure” (“VS”) as used herein shall mean a digital representation of a physical Structure suitable for use. The VS may include Design Features and As Built Features. The VS may be included as part of an AVM. According to the present invention, multiple Nodes are deployed in or proximate to a Structure to provide data quantifying positions of the Nodes relative to each other and/or aspects of a Structure. In addition, Sensors may be deployed with known positions relative to one or more Nodes; the Sensors are operative to quantify conditions in an environment available to the Sensor. The data quantifying respective conditions registered by the Sensors may be referenced to generate a status and/or condition of one or more of: a deployed Structure, a Structure in the process of being built; and/or a Structure in the process of being retrofitted with a position of quantified conditions determined based upon use of a self-verifying array of Nodes. In some embodiments, a location of one or more Sensors may be generated according to the methods herein. The location may be in relation to one or more of: a home position; a position of an Agent; and a position of one or more Reference Position Transceivers. An Agent may be guided to a Sensor and/or an area of interest based upon a Sensor reading using orienteering methods and apparatus presented herein. For example, a controller may receive Sensor data quantifying temperature and humidity that exceed an optimal range of temperature and humidity (e.g., the data quantifying temperature and humidity may indicate an environment conducive to termites in the Structure, or simply inefficient insulation from an outside environment).
Using Orienteering, an Agent may be guided to one or both of the Sensors that generated the data and an area of interest indicated by the measured data. A user interface may include human ascertainable indications of the conditions quantified and/or the location of the conditions quantified. Additional examples may include guiding an Agent to a Sensor to replace a power source, such as a battery or battery pack. Other exemplary power sources include an antenna or array of antennas tuned to receive ambient energy and recharge an energy storage device (such as a battery). Referring now toFIG.1A, a block diagram illustrates various aspects of the present invention and interactions between the respective aspects. The present invention includes an AVM111of a Structure that includes As Built Features; the generation and inclusion of As Built Features, based upon location and direction-specific data capture, is discussed more fully below. Data may be transmitted and received via one or both of digital and analog communications, such as via a wireless communication medium117. According to the present invention, one or more Deployment Performance Metrics112are entered into automated apparatus in logical communication with the AVM111. The Deployment Performance Metric112may essentially include a purpose to be achieved during Deployment of a modeled Structure. By way of non-limiting example, a Deployment Performance Level may include one or more of: a production or quantity; quality; yield; scalability; a level of energy efficiency; a level of water consumption; mean time between failure for equipment included in the Structure; mean time between failure for machinery installed in the Structure; a threshold period of time between repairs on the Structure; a threshold period of time between upgrades of the Structure; a target market value for a Property; a target lease or rental value for a Property; a cost of financing for a Property; Total Cost of Ownership of a Property; Total Cost of Deployment of a Property or other quantifiable aspect. In some embodiments, Deployment Performance Metrics may be related to a fungible item, such as a measurement of energy (e.g., kWh of electricity, gallon of fuel oil, cubic foot of gas, etc.); man-hours of work; trade medium (e.g., currency, bitcoin, stock, security, option etc.); parts of manufactured volume of material processed or other quantity. Relating multiple disparate Deployment Performance Metrics to a fungible item allows disparate Performance Metrics to be compared for relative value. Modeled Performance Levels113may also be entered into the automated apparatus in logical communication with the AVM111. The Modeled Performance Levels113may include an appropriate level of Performance of an aspect of the Structure in the AVM affected by the Deployment Performance Metric112. For example, a Performance Level113for energy efficiency for a Structure modeled may include a threshold of kilowatt-hours of electricity consumed by the Structure on a monthly basis. Similarly, a target market value or lease value may be a threshold pecuniary amount. In some embodiments, a pecuniary amount may be according to a period of time, such as monthly, or a term of years. Empirical Metrics Data114may be generated and entered into the automated apparatus on an ongoing basis. The Empirical Metrics Data114will relate to one or more of the Deployment Performance Metrics and may be used to determine compliance with a Deployment Performance Level and/or a Performance Level.
Empirical Metrics Data114may include, by way of non-limiting example, one or more of: a unit of energy; a unit of water; a number of service calls; a cost of maintenance; a cost of upgrades; equipment details, design details, machinery details, identification of human resources deployed; identification of organizations deployed; number of human resources; demographics of human resources (e.g., age, gender, occupations, employment status, economic status, requiring assistance with basic living necessities; and the like); percentage of time Structure is occupied; purpose of occupancy (e.g., primary residence, secondary residence, short-term rental, long-term lease, etc.); Sensor readings (as discussed more fully below); man-hours required for Structure repair, maintenance, or upgrades; and total currency (or other fungible pecuniary amount) expended on behalf of a Structure or Property. In addition to Empirical Metrics Data114, Lead Actions and expected Lag Benefits115that may cause an effect on one or both of a Deployment Performance Level112and a Performance Level113, may be entered into the automated apparatus. A Lead Action may include an action expected to raise, maintain or lower an Empirical Metrics Data114. For example, an action to install water efficient plumbing fixtures may be scheduled in order to improve water consumption metrics. Similar actions may relate to electrically efficient devices, or automatic electric switches being installed; preventive maintenance being performed; Structure automation devices being installed and the like. Other Lead Actions may include limiting a demographic of occupants of a Structure to a certain demographic, such as senior citizens. An expected benefit may be measured in Lag Benefit measurements, such as those described as Empirical Metrics Data114, or less tangible benefits, such as occupant satisfaction. The automated apparatus may also be operative to calculate Future Performance116based upon one or more of: AVM Model with As Built Data111; Deployment Performance Metrics112; Modeled Performance Levels113and Empirical Metrics Data114. Future Performance may be calculated in terms of an appropriate unit of measure for the aspect for which Performance is calculated, such as, for example: an energy unit; man hours; mean time between failures and dollar or other currency amount. Calculation of Future Performance116may be particularly useful to calculate Total Resources calculated to be required to support a particular Structure, group of Structures, properties and/or group of properties over a term of years (“Total Resources Calculated”). Total Resources Calculated may therefore be related to calculations of Future Performance116and include, for example, one or more of: energy units; water units; man hours; equipment; machinery and dollars (or other currency or fungible item). In some embodiments, calculations of Future Performance may include a Total Cost of Ownership for a term of years. For example, a Total Cost of Ownership for a Property may include a purchase amount and amounts required for maintenance, repair and upgrades from day one of Deployment through twenty years of Deployment (a shorter or longer term of years may also be calculated). Accordingly, some embodiments may include a calculation of Total Resources required that includes a purchase price of a Property with a Structure that incorporates a total cost associated with the Property over a specified term of years. 
The total cost will be based upon the AVM with As Built Data111; Deployment Performance Metrics112; Modeled Performance Levels113; and Empirical Metrics Data114. Moreover, Total Resources required may be aggregated across multiple Properties and Structures. Aggregation of Properties may be organized into Property pools to mitigate risk of anomalies in the Calculation of Future Performance. Of course, the benefits of Property ownership and/or management may also be pooled and compared to the Total Resources required. In various embodiments, different aspects of calculated Future Performance116may be aggregated and allocated to disparate parties. For example, a first aggregation may relate to man hours of technician time for Structure repair and maintenance, and the fulfillment of obligations related to the aggregation may be allocated to a first party. A second aggregation may relate to machinery Performance, with obligations allocated to a second party. A third aggregation may relate to equipment Performance, with obligations allocated to a third party. Other aggregations may similarly be allocated to various parties. In some embodiments, financial obligations incorporating one or both of acquisition cost and ongoing Deployment costs may be allocated and financed as a single loan. Other embodiments include a calculated Future Performance cost being incorporated into a purchase price. An important aspect of the present invention includes definition and execution of Lead Actions based upon one or more of: the AVM Model with As Built Data111; Deployment Performance Metrics112; Modeled Performance Levels113; Empirical Metrics Data114; and Calculations of Future Performance116. Referring now toFIG.1B, an AVM is generally associated with a Property that includes real estate parcels140-143. In some embodiments, one or more of the following are performed on the Property: monitoring; a service call; an improvement; a repair; maintenance; and an upgrade. The Property is identified according to an automated determination of a location, and a particular position, elevation, and direction are further determined automatically within the Property. Smart Devices may be used to access data records stored in an AVM according to a unique identifier of a physical location of the real estate parcels140-143. As illustrated, a map of real estate parcels140-143is shown with icons140A-142A indicating parcels140-142that have virtual Structures140A-142A included in a virtual model associated with the parcels. Other parcels143have an indicator143A indicating that a virtual model is in process of completion. In some methods utilized by the present invention, data in an AVM may be accessed via increasingly more accurate determinations. A first level of geospatial location determinations may be based upon the real estate parcels140-143themselves, and a second geospatial determination may be made according to Reference Position Transceivers (discussed more fully below) included within the boundaries of the real estate parcels140-143. A still more accurate location position may be calculated according to one or both of a direction determination and an accelerometer or other location determination technology. Accordingly, it is within the scope of the present invention to access a record of a design model for a specific wall portion within a Structure based upon identification of a particular parcel of real estate parcels140-143and a location, height, and direction within a Structure situated within the real estate parcels140-143.
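The chain of increasingly accurate determinations described above can be illustrated with a minimal, hypothetical sketch; the stage names and accuracy figures are illustrative assumptions, not parameters of the disclosed system.

```python
# Hypothetical sketch of coarse-to-fine location resolution: each stage
# refines the previous one, so finer stages depend on the coarser ones.
STAGES = [
    ("parcel",      "real estate parcel geolocation",         10.0),   # meters
    ("transceiver", "Reference Position Transceiver ranging",  0.5),
    ("orientation", "direction + accelerometer refinement",    0.05),
]

def resolve_location(available_stages: set[str]) -> tuple[str, float]:
    """Return the most accurate stage that can be applied, walking the
    chain from coarse to fine."""
    best = ("none", float("inf"))
    for name, _desc, accuracy_m in STAGES:
        if name in available_stages:
            best = (name, accuracy_m)
        else:
            break   # finer stages depend on the coarser ones
    return best

print(resolve_location({"parcel", "transceiver"}))  # ('transceiver', 0.5)
```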
Likewise, the present invention provides for accessing As Built data and the ability to submit As Built data for a specific portion of a Structure based upon an accurate position and direction determination. For example, in some embodiments, a first level of location identification may include a real estate parcel140-143identified based upon a first wireless communication modality, such as a GPS communication or a sub-GHz wavelength communication. A second level of location identification may include a Structure141A-143A identified via one or more of: GPS; UWB; Wi-Fi; sonic communications; a sub-GHz wavelength communication; and Bluetooth communications. A third level of location identification may include an Agent position within a Structure (or Property) based upon logical communications via one or more of: UWB; Wi-Fi; sonic communications; and Bluetooth communications. A fourth level of location identification may include a determination of a distance from an item to a Smart Device borne by an Agent; the distance determination may be based upon transceiving in a SVAN operating in a Bluetooth wavelength, a WiFi wavelength or a sub-GHz wavelength. In some implementations of the present invention, a Property-unique identifier may be assigned by the AVM and adhere to a standard for universally unique identifiers (UUID); other unique identifiers may be adopted from, or be based upon, an acknowledged standard or value. For example, in some embodiments, a unique identifier may be based upon Cartesian Coordinates, such as global positioning system (GPS) coordinates. Other embodiments may identify a Property according to one or both of: a street address and a tax map number assigned by a county government or other authority. In some embodiments, an AVM may also be associated with a larger group of Properties, such as a manufacturing plant, a research and development facility, an assembly facility, a complex, or other defined arrangement. As illustrated, in some preferred embodiments, an electronic record correlating with a specific Property may be identified and then accessed based upon coordinates generated by a GPS device, or other electronic location device. The GPS device may determine a location and correlate the determined location with an AVM record listing model data, As Built data, improvement data, Performance data, maintenance data, cost-of-operation data, return-on-investment data and the like. In another aspect, data generated by Sensors deployed in a Structure may be aggregated and analyzed according to a Property location and/or Structure location associated with the Sensor/Sensor Cluster/Sensor Gateway. In this manner, an event may be tracked in a larger geographic area with numerous data points. For example, an event such as the launch of a rocket may cause data to be generated by multiple Sensor/Sensor Cluster/Sensor Gateways and tracked across a geographic area. Similarly, a natural event, such as an earthquake, hurricane, wildfire and the like, may be tracked with highly accurate Sensor data across tens, hundreds or many thousands of data points. Still other events may include, for example, power usage, power generation, water flow in a hydroelectric system, water management in a reservoir system, flooding, release of toxic components into the environment, etc. Referring now toFIG.1C, a relational view of an AVM100with a VS102B is illustrated, as well as a physical Structure102A.
The AVM100includes a virtual model stored in digital form with a design aspect that allows for a physical Structure102A suitable for use to be designed and modeled in a virtual environment. The design aspect may reference Performance data of features to be included in a VS102B and also reference variables quantifying an intended use of the VS102B. The Virtual Structure102B and the AVM100may reside in a virtual setting via appropriate automated apparatus108. The automated apparatus108will typically include one or more computer servers and automated processors as described more fully below and may be accessible via known networking protocols. The Physical Structure102A may include Transceivers120or another type of Node, which may incorporate or be co-located with a Sensor or transmitter(s) or receiver(s) that monitor or otherwise quantify one or more conditions in a specified area, which may include, for example, an area of ingress and egress122, such as a doorway, elevator and/or loading dock. Reference Point Transceivers121A may be used as wireless references of a geospatial position. A wireless Node123may also link logical infrastructure within the Physical Structure102A with a digital communications network. In correlation with the design aspect, the present invention includes an As Built Model101that generates a Virtual Structure102B in the context of the AVM100. The As Built Model101includes virtual details based upon As Built data captured on or proximate to a physical site of a related physical Structure102A. The As Built data may be captured, for example, during construction or modification of a physical Structure102A. The As Built Model101may include detailed data including image captures via one or more Image Capture Devices107and physical measurements of features included in the physical Structure102A. The physical measurements may be taken during a build phase of the physical Structure or subsequent to the build phase of the physical Structure. In some embodiments, original As Built measurements may be supplemented with additional data as repairs or improvements are made to the physical Structure. Details of recordable build aspects are placed as digital data on a recordable medium104included in the automated apparatus108. The digital data included on a recordable medium104may therefore include, for example, one or more of: physical measurements capturing Experiential Data; image data (e.g., digital photos captured with a CCD device); laser scans; infrared scans; and other measurement mediums. One or more records on the recordable medium104of an As Built Structure may be incorporated into the AVM100, thereby maintaining the parallel nature of the AVM100with the physical Structure102A. In some embodiments, As Built data on a recordable medium104may be generated and/or captured via an Image Capture Device107. As the physical Structure is deployed for use, subsequent measurements that generate and/or capture Experiential Data may be made and incorporated into the AVM100. In addition, a user may access and update103the AVM100to ascertain features of the physical Structure102A that have been virtually incorporated into the AVM100. In some examples, a tablet, handheld network access device (such as, for example, a mobile phone) or other device with automated location service may be used to determine a general location of a physical Structure102A.
For example, a smart phone with global positioning system (GPS) capabilities may be used to determine a physical address of a physical Structure, such as 123 Main Street. Stored records containing data relating to 123 Main Street may be accessed via the Internet or other distributed network. In addition to the use of GPS to determine a location of a User Device, the present invention provides for a real estate parcel with a physical Structure102A that includes one or more radio frequency (or other mechanism) location identifiers121A. Location identifiers121A may include, for example, radio transmitters at a defined location that may be used to accurately identify, via triangulation, a position of a user device106, such as a tablet, smart phone, or virtual reality device. The position may be determined via triangulation, signal strength, time delay determination or other process. In some embodiments, triangulation may determine a location of a user device within millimeters of accuracy. Other location identifiers may include, by way of non-limiting example, RFID chips, visual markings (e.g., a hash tag or barcode), pins, or other accurately placed indicators. Placement of the location identifiers may be included in the AVM and referenced as the location of the physical user device is determined. As described above, specific location identifiers may be referenced in the context of GPS coordinates or other more general location identifiers. Based upon the calculated location of the user device106, details of the physical Structure102A may be incorporated into the Virtual Structure102B and presented to a user via a graphical user interface (GUI) on the user device106. For example, a user may approach a physical Structure and activate an app on a mobile user device106. The app may cause the user device106to activate a GPS circuit included in the user device and determine a general location of the user device106, such as a street address designation. The general location will allow a correct AVM100to be accessed via a distributed network, such as the Internet. Once accessed, the app may additionally search for one or more location identifiers121A of a type and in a location recorded in the AVM. An AVM may indicate that one or more RFID chips are accessible in a kitchen, a living room and each bedroom of a Structure. The user may activate appropriate Sensors to read the RFID chips and determine their location. In another aspect, an AVM100may indicate that location identifiers121A are placed at two or more corners (or other placement) of a physical Structure102A, and each of the location identifiers121A may include a transmitter with a defined location and at a defined height. The user device106, or other type of controller, may then triangulate with the location identifiers121A to calculate a precise location and height within the physical Structure. Similarly, a direction may be calculated via a prescribed movement of the user device106during execution of code that will record a change in position relative to the location identifiers121A. For example, a user Smart Device, such as a smart phone or user device106, may be directed towards a wall or other Structure portion and, upon execution of executable code, the Smart Device may be moved in a generally tangential direction towards the wall. The change in direction of the user device106relative to the location identifiers121A may be used to calculate a direction.
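The position calculation against fixed location identifiers can be illustrated with a minimal, hypothetical two-dimensional trilateration sketch; the coordinates and ranges below are illustrative assumptions, and a real deployment would have to handle measurement noise and a third dimension.

```python
# Hypothetical sketch of trilateration against three Reference Point
# Transceivers (cf. location identifiers 121A) with known positions.
def trilaterate(p1, p2, p3, r1, r2, r3):
    """Solve for (x, y) given three known transceiver positions p1..p3
    and measured ranges r1..r3, by linearizing the circle equations."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    # Subtract circle 1 from circles 2 and 3 to get two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("reference points are collinear")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Transceivers at three corners of a room; ranges in meters.
# Result is approximately (3.0, 4.0).
print(trilaterate((0, 0), (10, 0), (0, 8), r1=5.0, r2=8.0623, r3=5.0))
```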
Based upon a recorded position within the Physical Structure102A and the calculated direction, a data record may be accessed in the AVM100and a specific portion of the AVM100and/or the Virtual Structure102B may be presented on the user device106. In other embodiments, a direction may be chosen, or verified, via a mechanism internal to the Smart Device, such as a compass or accelerometer. In still another aspect of the present invention, in some embodiments, transmissions from one or more location identifiers121A may be controlled via one or more of: encryption; encoding; password protection; private/public key synchronization; or other signal access restriction. Control of access to location identifiers121A may be useful in multiple respects; for example, a location identifier may additionally function to provide access to data, a distributed network and/or the Internet. The Virtual Structure102B may include one or both of: historical data and most current data relating to aspects viewable or proximate to the user device106while the user device is at the calculated location in the physical Structure102A. In this way, the parallel virtual world of the AVM100and the Virtual Structure102B may present data from the virtual world that emulates aspects in the physical world and may be useful to the user accessing the user device106while the user device is at a particular physical location. As discussed within this document, data presented via the AVM100may include one or more of: design data; As Built data; Experiential Data; Performance data relating to machinery and/or features of the AVM100or physical Structure; maintenance data; and annotations. Annotations may include, for example, a user's or designer's note recorded at a previous time, a service bulletin, a maintenance log, operation instructions or a personal note to a subsequent user, such as a virtual "John Smith was here" guest log indicating who had frequented the location. Annotations may include one or both of text and image data. For example, an annotation may include an image of the location captured at a given time and date. The image may be of a personal nature, e.g., the living room while the Smiths owned the Structure, or a professional nature, e.g., the living room after being painted by XYZ Contractor on a recorded date. In some embodiments, annotations may be used to indicate completion of a work order. Recordation of completion of a work order may in turn trigger a payment mechanism for paying an entity contracted to complete the work order. In another aspect, annotations may relate to an AVM or a Virtual Structure as a whole, or to a particular aspect that is proximate to a location of the user device within the Virtual Structure. In some embodiments, details of a proposed use of a Structure and parcel may be input into a design module and used to specify or recommend features to be included in an AVM100. According to the present invention, features of a Structure and parcel are generated within a digital design model and then tracked as the features are implemented in a build process and further tracked in Performance of the Structure as it is placed into use. To the extent available, Performance is tracked in the context of variables relating to use.
Variables may include, for example: a use of the Structure, such as manufacturing and/or processing; a number of resources accessing a Structure; demographics of the human resources; number of months per year the Structure is deployed for use; which months of the year a Structure is deployed for use; which hours of the day the Structure is occupied; and other relevant information. As Experiential Sensor Readings are generated, they may be memorialized to generate Experiential Data associated with a physical Structure102A. The Experiential Data is collected and analyzed via structured queries and may also be analyzed with artificial intelligence processes such as unstructured queries to derive value. In some embodiments, Experiential Data may also be associated with a human and/or an animal interacting with the Physical Structure102A. This may be particularly useful for Structures that are processing plants. Whereas former processing plants were generally designed and built to mitigate against variability in a human118and between disparate humans118, the present invention allows for human variability to be monitored via Sensors within device119. Moreover, the Structure may be modified to optimally interrelate with the values for variables attributable to a human118that will inhabit or otherwise interact with the Physical Structure102A. A human (and/or animal) may be quantified with Sensors within device119installed on or proximate to the Human118. Alternatively, Sensors124located in, or proximate to, a Physical Structure102A may be used to monitor human variability. Biosensors may be used to provide empirical data of humans118interacting with a Structure, which may be analyzed using structured or unstructured queries to derive relationships between Structure Performance and human biometrics. Accordingly, Sensors may be used to quantify interaction between a human118and an As Built Structure101according to physiological and behavioral data, social interactions, and environmental factors within the Structure, actions undertaken, movements, and almost any quantifiable aspect. As Built Features and biometrics may be further utilized to control various Structure automation devices. Structure automation devices may include, by way of non-limiting example, one or more of: automated locks or other security devices; thermostats; lighting; heating; chemical processing; cutting; molding; laser shaping; 3D printing; assembly; cleaning; packaging; and the like. Accordingly, a Structure with recorded As Built Design Features and vibration Sensors may track activities in a Structure and determine that a first occupant associated with a first vibration pattern of walking is in the Structure. Recorded vibration patterns may indicate that person one is walking down a hallway and automatically turn on appropriate lighting and adjust one or more of: temperature, sound, and security. Security may include locking doors for which person one is not programmed to access. For example, once As Built data has been collected, a first pattern of vibration may be used to automatically ascertain that a person is traversing an area of a Structure for which a high level of security is required or an area that is designated for limited access due to safety concerns. Other Structure automation may be similarly deployed according to As Built data, occupant profiles, biometric data, time of day, or other combination of available Sensor readings.
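The vibration-pattern-based occupant recognition and access gating just described can be illustrated with a minimal, hypothetical sketch; the gait features, occupant names, and authorization table are illustrative assumptions, not the disclosed implementation.

```python
import math

# Hypothetical sketch: match a sensed vibration signature to recorded
# occupant gait patterns, then gate access to a secured area.
GAIT_PATTERNS = {                      # occupant -> simplified feature vector
    "occupant_1": [1.8, 0.42, 75.0],   # step rate (Hz), amplitude, weight proxy
    "occupant_2": [2.3, 0.31, 62.0],
}
AUTHORIZED = {"secure_lab": {"occupant_1"}}

def identify_occupant(sensed: list[float]) -> str:
    """Return the occupant whose recorded pattern is nearest the reading."""
    return min(GAIT_PATTERNS,
               key=lambda o: math.dist(GAIT_PATTERNS[o], sensed))

def door_should_unlock(area: str, sensed: list[float]) -> bool:
    return identify_occupant(sensed) in AUTHORIZED.get(area, set())

print(door_should_unlock("secure_lab", [1.75, 0.40, 74.2]))  # True
```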
Referring now toFIG.1D, according to the present invention a virtual model is generated that correlates with a physical Structure102A and includes virtual representations of As Built features and Experiential Data. As discussed more fully herein, the virtual model may include an AVM with As Built data, such as image data and measurements, included within the model. In addition, Sensor data may be collected over time and incorporated into the AVM. The AVM may include virtual representations of one or more of: Sensors155; equipment156-158; controls161; infrastructure159, such as HVAC; utilities, such as electric and water, gas lines, data lines, etc.; and Vantage Points151. In some implementations, a virtual reality headset may be worn by a user to provide an immersive experience from a Vantage Point151such that the user will experience a virtual representation of what it would be like to be located at the Vantage Point151within the Structure152at a specified point in time. The virtual representation may include a combination of Design Features, As Built data and Experiential Data. A virtual representation may therefore include a virtual representation of image data via the visual light spectrum, image data via the infrared light spectrum, and noise and vibration reenactment. Although some specific types of exemplary Sensor data have been described, the descriptions are not meant to be limiting unless specifically claimed as a limitation, and it is within the scope of this invention for a virtual representation based upon other types of captured Sensor data to also be included in the AVM virtual reality representation. Referring now toFIG.1E, a user131is illustrated situated within an AVM111. The user131will be virtually located at a Vantage Point137and may receive data136, including, but not limited to, one or more of: image data134, audio data135and Ambient Data136. The user131may also be provided with controls133. Controls133may include, for example, zoom, volume, scroll of data fields and selection of data fields. Controls may be operated based upon an item of Equipment132within a Field of View138of the User131located at a Vantage Point137and viewing a selected direction (Z axis). The user is presented with Image Data from within the AVM111that includes As Built data and virtual design data. Additional examples may include Sensor arrays, audio capture arrays and camera arrays with multiple data collection angles, which may be complete 360 degree camera arrays or directional arrays; for example, in some examples, a Sensor array (including image capture Sensors) may include at least 120 degrees of data capture; additional examples include a Sensor array with at least 180 degrees of image capture; and still other examples include a Sensor array with at least 270 degrees of image capture. In various examples, data capture may include Sensors arranged to capture image data in directions that are planar, oblique, or perpendicular in relation to one another. Referring now toFIG.2, a functional block diagram illustrates various components of some implementations of the present invention. According to the present invention, automated apparatus included in the AVM201are used to generate a model of a Virtual Structure and may also incorporate a model of an associated real estate parcel. One or more pieces of equipment that will be deployed in the Property may be included in the AVM201. This equipment may include, for example: machinery211; building support items212; and utilities support213.
The AVM201may model operational levels204during deployment of a Structure and associated machinery and equipment included in the AVM201. Machinery211may include, for example, manufacturing tools, robots or other automation, transport tools, chemical processing machines, physical processing machines, assembly machines, heat processing machines, cooling machines, deposition devices, etching devices, welding apparatus, cutting apparatus, forming tools, drilling tools, shaping tools, transport machines, Structure automation, air purification or filter systems, noise containment devices and the like. Utility support equipment may include cabling, dish antennas, Wi-Fi, water softeners, water filters, power, chemical supply, gas supply, compressed air supply and the like, as well as uptime and downtime associated with a Structure utility and uptime and downtime243of one or more aspects of the Structure. The AVM201calculates a predicted Performance of the AVM and generates Operational Levels204based upon the Performance222, wherein "Performance" may include one or more of: total cost of Deployment214; operational experience203, which may include one or both of objective empirical measurements and satisfaction of operators' use of an As Built physical Structure based upon the AVM; operational expectations204; total maintenance cost206; and residual value of an As Built Structure following a term-of-years of occupation and use of an As Built Structure based upon the AVM. Performance221may also be associated with a specific item of machinery211. In another aspect, actual Operational Experience203may be monitored, quantified and recorded by the AVM201. Data quantifying the Operational Experience203may be collected, by way of non-limiting example, from one or more of: Sensors incorporated into an As Built Structure; maintenance records; utility records indicating an amount of energy202(electricity, gas, heating oil) consumed; water usage; periodic measurements of an As Built Structure, such as an infrared scan of climate containment, air flow through air handlers, water flow, water quality and the like; user surveys; and maintenance and replacement records. In still another aspect, a warranty205covering one or both of parts and labor associated with an As Built Structure may be tracked, including replacement materials207. The warranty205may apply to an actual Structure, or to one or more of: machinery211; a building support212item; and a utility support item213. The AVM201may consider a proposed usage of a Deployment of a Structure based upon values for Deployment variables and specify aspects of one or more of: machines211; building support212; and utility support213based upon one or both of a proposed usage and values for Deployment variables. Proposed usage may include, for example: how many human resources will occupy a Structure; demographics of the resources that will occupy the Structure; percentage of time that the Structure will be occupied; whether the Structure is a primary residence; whether the Structure is a leased Property and the typical duration of leases entered into; and environmental conditions experienced by the Structure, such as exposure to ocean salt, winter conditions, desert conditions, high winds, heavy rain, high humidity, or other weather conditions. In another aspect, Deployment may relate to biometrics or other data associated with specific occupants of a Structure. Accordingly, in some embodiments, Sensors may monitor biologically related variables of occupants and/or proposed occupants.
The biometric measurements may be used to determine one or both of Lead Actions and Lag Metrics. Lead Actions may include one or more of: use of specific building materials; selection of design aspects; Deployment of Structure equipment; Deployment of machinery; terms of a lease; length of a lease; terms of a maintenance contract; and Structure automation controls. According to the present invention, design aspects and Structure materials210may also be based upon the proposed usage and values for Deployment variables. For example, a thicker exterior wall with a higher insulation value may be based upon a Structure's location in an adverse environment. Accordingly, various demographic considerations and proposed usage of a Structure may be used as input in specifying almost any aspect of a Structure. In still another consideration, a monetary value for one or more of: a Total Cost of Deployment ("TCD"); a Total Maintenance Cost ("TMC"); and a desired return on investment ("ROI") for a Property may be used as input for one or more design aspects included in an AVM System200. Total Cost of Ownership, TCD, TMC, and ROI may be used to determine optimal values of variables202-205,210-213specified in an AVM System200and incorporated into an As Built Structure, and other improvements to a real estate parcel. A Total Cost of Deployment214may change based upon a time period215used to assess the Total Cost of Deployment214. An ROI may include one or more of: a rental value that may produce a revenue stream, a resale value, a cost of operation, real estate taxes based upon Structure specifications and almost any other factor that relates to one or both of a cost and value. Desirable efficiency and Performance may be calculated according to one or more of: established metrics, measurement protocols, and past experience. The AVM201and associated technology and software may be used to support a determination of a TCD. In another aspect, a TCD may be based upon an assembly of multiple individual metrics, procedures to assess metrics, procedures to adjust and optimize metrics, and procedures to apply best results from benchmark operations. In the course of managing Total Cost of Ownership, in some examples, initial steps may include design aspects that model an optimal design based upon Total Cost of Ownership metrics. In the following examples, various aspects of Total Cost of Deployment214, Total Maintenance Costs, and associated metrics are considered in the context of calculating a target Total Cost of Deployment214. Accordingly, the AVM may be used to attempt to optimize TCD based on one or more measured variables. A designed Structure is ultimately built at a site on a real estate parcel. A build process may be specified, which may provide metrics that may be used in a process designed by an AVM201and also used as a physical build proceeds. In some examples, time factors associated with a physical build may be important, and in some examples time factors associated with a physical build may be estimated, measured, and acted upon as they are generated in a physical build process.
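The interaction between build-time factors and the time value of money can be illustrated with a minimal, hypothetical sketch; the phase names, outlays, and cost of capital are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch: interest accrued on each build-phase outlay from
# the month it is incurred until revenue begins to flow.
def capital_carrying_cost(phases: list[tuple[str, float, int]],
                          annual_rate: float,
                          revenue_start_month: int) -> float:
    """phases: (name, outlay, month_incurred), months from project start."""
    monthly = annual_rate / 12.0
    cost = 0.0
    for _name, outlay, month in phases:
        months_carried = max(revenue_start_month - month, 0)
        cost += outlay * ((1 + monthly) ** months_carried - 1)
    return cost

phases = [("site prep", 150_000.0, 1), ("foundation", 220_000.0, 3),
          ("structure", 600_000.0, 6), ("equipment", 900_000.0, 12)]
print(f"Carrying cost: ${capital_carrying_cost(phases, 0.06, 18):,.0f}")
```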
Examples of time factors may include one or more of: a time to develop and approve site plans; a time to prepare the site and locate community-provided utilities or site-provided utilities; a time to lay foundations; a time to build the Structure; a time to finish the Structure; a time to install internal utilities and facilities-related aspects; a time to install, debug, qualify and release equipment; and times to start production runs and to certify compliance of production; all are examples of times that can be measured by various techniques and sensing equipment on a Structure's site. Various time factors for a build are valuable and may become increasingly valuable as a physical build proceeds, since the monetary investment in the project builds before revenue flows, and monetary investments have clearly defined cost of capital aspects that scale with the time value of money. Various build steps may include material flows of various types. Material flow aspects may be tracked and controlled for cost and efficiency. Various materials may lower a build materials cost but raise time factors to complete the build. Logical variations may be calculated and assessed in an AVM201, and optimal build steps may be generated and/or selected based upon a significance placed upon various benefits and consequences of a given variable value. Physical build measurements or Sensor data on physical build projects may also be used as input in an assessment of economic trade-offs. The equipment deployed may incur a majority of a build cost, depending upon user-defined target values. The AVM may model and present alternatives including one or more of: cost versus efficiency; quality240; time to build; life expectancy; and market valuation over time. A cost to build may be correlated with cost to deploy and eventual resale. An overall model of a Total Cost of Deployment214may include any or all such aspects and may also include external factors. In some examples, the nature of equipment trade-offs may be static, and estimations may be made from previous results. In some other examples, changes in technology, strategic changes in sourcing, times of acquisition, and the like may play into models of Total Cost of Deployment214. In some examples, an initial efficiency of design that incurs large costs at early stages of a project may have a dominant impact on Total Cost of Deployment214when time factors are weighted to real costs. In other examples, the ability of a Structure to be flexible in its deployment or build order over time and to be changed in such flexible manners, where such changes are efficiently designed, may dominate even if the initial cost aspects may be less efficient due to the need to design-in flexibility. As a Structure is built, and as it is operated, the nature of changing customer needs may create dynamic aspects to estimations of Total Cost of Deployment214. Therefore, in some examples, estimates on the expected dynamic nature of demands on a Structure may be modeled against the cost aspects of flexibility to model expectations of Total Cost of Deployment214given a level of change. In some examples, factors that may be less dependent on extrinsic factors, such as product demand and the like, may still be important metrics in Total Cost of Deployment214. Included in the As Built factors may be calculations such as HVAC temperature load, in which personnel and seasonal weather implications may be important. AVM models may include a user interface to receive values useful in the AVM models.
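Values received through such a user interface might, for instance, feed a simple ROI computation of the kind discussed above relating TCD, TMC, and revenue; the following sketch is purely hypothetical, and every figure and function name is an illustrative assumption.

```python
# Hypothetical sketch relating Total Cost of Deployment (TCD), Total
# Maintenance Cost (TMC), and return on investment (ROI).
def simple_roi(annual_rental_income: float, resale_value: float,
               tcd: float, tmc_annual: float, years: int) -> float:
    """ROI as net gain over total cost across the assessment period."""
    total_in = annual_rental_income * years + resale_value
    total_out = tcd + tmc_annual * years
    return (total_in - total_out) / total_out

roi = simple_roi(annual_rental_income=36_000.0, resale_value=520_000.0,
                 tcd=480_000.0, tmc_annual=9_000.0, years=15)
print(f"15-year ROI: {roi:.1%}")   # roughly 72% under these assumptions
```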
In addition, electronic monitoring via Sensors may determine energy consumption, including, for example, monitoring any of: electricity, fuel oil, natural gas, propane and the like. Temperatures may be monitored by thermocouples, semiconductor-junction-based devices or other such direct-measurement techniques. In other examples, temperature and heat flows may be estimated or derived from photon-based measurement, such as surveying the Structure with infrared imaging or the like. Utility load may be monitored on a Structure-wide basis and/or via point-of-use monitoring equipment located at hubs or at individual pieces of equipment themselves. Flow meters may be inline with, or external to, features such as pipes, wires, or conduits. Gas and liquid flows may be measured with physical flow measurements or sound-based measurement. In other examples, electricity may be monitored via direct current measurements or inferred inductive current measurement. In some examples, the nature and design of standard usage patterns of a Structure and an associated environment may have relevance to Total Cost of Ownership. For example, usage that includes a larger number of ingress and egress events will expose an HVAC system to increased load, and usage that includes a significant number of waking hours with inhabitants in the building may incur increased usage of one or more of: machinery211; building support devices212; and utilities234. The nature and measurement aspects of vibration in the Structure may also be modeled and designed as the Structure is built. There may be numerous means to measure vibrations, from capacitive- and resistive-based measurements to optical-based measurements that measure a subtle change in distance scale as a means of detecting vibration. Vibration may result from a Structure being located proximate to a roadway, train, subway, airport, tidal flow, or other significant source of relatively consistent vibration. Vibration may also be more periodic, such as earthquake activity. In still another aspect, vibration may result from human traffic within the Property. The use of vibration-monitoring Sensors may indicate various activities that take place within the Structure and facilitate more accurate modeling of a life expectancy of various aspects of the Structure as well as machines located within the Structure. Noise levels are another type of vibrational measurement, one focused on transmission through the atmosphere of the Structure. In some cases, noise may emanate from one location after moving through solid Structure from its true source at another location. Thus, measurement of ambient sound with directional microphones or other microphonic sensing types may be used to elucidate the nature and location of noise emanations. In some cases, further study of the noise emanations may lead to establishment of vibrational measurement of different sources of noise. Floors, ceilings, doorways, countertops, windows, and other aspects of a Structure may be monitored in order to quantify and extrapolate noise levels. Noise and vibrational measurement devices may be global and monitor a region of a Structure, or they may be inherently incorporated into or upon individual equipment of the Structure. In some examples, models of a Structure (including original models and As Built models) may include routings of pipes, wires, conduits and other features of a Structure and the equipment installed in the Structure.
Together with models of the building Structure and the equipment placed in the building, the various routed Structures may be married in a detailed AVM201. In another aspect, with an AVM201, conflicts between the physical Structures may be detected and avoided in the design stage at far improved cost (a simplified conflict-detection sketch follows this passage). In some examples, a designer may virtually ascertain a nature of the conflict and alter a design in virtual space to optimize operational aspects. Additionally, in some embodiments, an As Built model may be generated during and after a Structure is built for various purposes. In some examples, a technician may inspect a Structure for conformance of the build to the designed model. In other examples, as an As Built Structure is altered to deal with needed changes, the changes will be captured and included in the As Built AVM201. In another aspect of the present invention, the AVM201may be used to generate a virtual reality model of a Property, including one or more Structures, that may be displayed via a user interface that includes an immersion of the user into a virtual setting. Immersion may be accomplished, for example, via use of a virtual reality headset in which visual input other than a display screen is limited. In some embodiments, a virtual setting may be generated based upon a location of the user. For example, GPS coordinates may indicate a Property, and a user may wear a headset that immerses the user in a virtual reality setting. The virtual reality setting may display one or more virtual models of Structures that may be potentially constructed on the Property. Embodiments may include models generated using, for example, standard modeling software such as BIM 360™ field, which may support the display of a Structure design in a very complete level of detail. Modeling of a Structure in its location or proposed location, or in multiple proposed locations, may be useful from a Total Cost of Ownership perspective, especially from an evaluation of the nature of a site layout including real estate Property parcel options and the like. In some examples, a virtual display observed in the field at the site of an As Built or proposed build may allow for design changes and design evaluations to be viewed in a space before the build is completed. For example, a Structure may be completed to the extent that walls, floors, and ceilings are in place. A user may utilize a virtual display to understand the layout difference for different designs. Designs may be iterated from designs with the least flexibility to more flexible (yet more complex) designs. In some examples, the design systems may include various types of features such as building Structure, walls, ducts, utilities, pipes, lighting, and electrical equipment. The design systems are augmented with As Built Data and Experiential Data. The design and modeling systems may be utilized to simulate and project cost spending profiles and budgeting aspects. The modeling systems may therefore be useful during the course of an audit, particularly when comparing actual versus projected spending profiles. The comparison of various spend sequencing may be used to optimize financing costs, maintenance, refurbishing and sequencing. The AVM201may be useful to provide early estimates and for cost tracking against projections. Such tracking may be visualized as displays across a virtual display of the building, facilities and equipment.
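As referenced above, design-stage conflict detection between routed Structures can be illustrated with a minimal, hypothetical sketch using axis-aligned bounding boxes; the geometry, names, and coordinates are illustrative assumptions, not the disclosed method.

```python
from dataclasses import dataclass

# Hypothetical sketch: detect overlap between routed building elements.
@dataclass
class Box:
    name: str
    min_pt: tuple[float, float, float]
    max_pt: tuple[float, float, float]

def boxes_conflict(a: Box, b: Box) -> bool:
    """True when the two bounding boxes overlap on all three axes."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
               for i in range(3))

duct = Box("HVAC duct",  (0.0, 2.4, 0.0), (6.0, 2.8, 0.5))
pipe = Box("water pipe", (3.0, 2.6, 0.2), (3.2, 3.5, 0.4))
if boxes_conflict(duct, pipe):
    print(f"Conflict: {duct.name} intersects {pipe.name}; reroute in design.")
```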
As described above, facing a Node (e.g., a Smart Device) towards an area in a Structure and moving the mobile device in a particular pattern may be used to ascertain a specific area of the Structure for which AVM201data should be accessed. A combination of one or more of: image, location, orientation, and other Sensors may also be used to identify to the mobile device specifically which wall segment, building aspect, machinery, or equipment the device is pointed towards. A location of the mobile device, a height, and an angle of view may also be utilized to determine aspects of the Structure for which a virtual model is being requested. In some embodiments, a user may be presented with various layers of data, including, for example, one or more of: structural aspects of the Structure, plumbing, electrical, data runs, material specifications, or other documentation, including, but not limited to: basic identifying information, installation information, service records, safety manuals, process records, and expected service schedule, among many other possibilities. As an additional non-limiting example, data aggregation may include Sensors generating data that is associated with an IoT (Internet of Things)-based identification. Various IoT devices (or Sensors) may include a digital storage, processor, and transmitter for storing and conveying identifying information. Upon request, an IoT device may relay identifying information of itself to a human via a communications device, or to the IoT device's neighbors. It may also possibly convey information received from and/or sent to other internet connected devices as well. As per the above listing, functionality may therefore include modeled and tracked Performance of a Structure and equipment contained within the Structure, including consumables233used and timing of receipt and processing of consumables, as well as modeled and actual maintenance232, including quality of maintenance performed, and equipment Performance including yields. Consumables233tracking may include a frequency of replacement and quantity of replaced consumables; Utilities234tracking may include projected and actual units of energy consumed. In one aspect of the present invention, data related to the position and identity of substantial elements of a Structure is captured first as designed and then as recorded in their actual placement and installation. This may include locations of building features, such as beams, walls, electrical junctions, plumbing, etc., as the Structure is designed and constructed. As part of the Structure model, laser scanning may be performed on site at various disparate times during construction. An initial scan may provide general information relating to the location of the Structure in relationship to elements on the Property such as roadways and utilities such as electricity, water, gas, and sewer, to identify non-limiting examples. Additional events for scanning may occur during the construction process to capture accurate, three-dimensional As Built point-cloud information. A point cloud may include an array of points determined from image capture and/or laser scanning or other data collection technique of As Built features. In some examples, captured data may be converted into a 3D model and saved within a cloud-based data platform. In some examples, other methods of capturing spatially accurate information may include the use of drones and optical scanning techniques, which may include high-resolution imagery obtained from multiple viewpoints.
Scanning may be performed with light-based methods such as a CCD camera. Other methods, including infrared, ultraviolet, acoustic, and magnetic- and electric-field mapping techniques, may also be utilized. Structure-related information may include physical features generally associated with an exterior of a Structure such as geolocation, elevation, surrounding trees and large landscaping features, underground utility locations (such as power, water, sewer, sprinkler system, and many other possible underground utility features), paving, and pool or patio areas. Structure-related information may also include features generally related to a Structure such as underground plumbing locations, stud locations, electrical conduit and wiring, vertical plumbing piping, and HVAC systems or other duct work. The acquisition of the data may allow the model system to accurately locate these interior and exterior features. Acquisition of As Built data during different points of the construction completion allows measurements to be taken prior to aspects involved in a measurement process being concealed by concrete, drywall or other various building materials. Data is acquired that is descriptive of actual physical features as the features are built and converted into a 3D model, which may be referred to as the "As Built" model. The As Built model will include key components of the Structure and be provided with a level of artificial intelligence that fully describes the key component. In some embodiments, the As Built model may be compared to a design model. In some implementations, intelligent parameters are associated with key components within the 3D model. For example, key components and associated information may further be associated with intelligent parameters. Intelligent parameters for the key components may include the manufacturer, model number, features, options, operational parameters, whether or not an option is installed (and if so, its features and dimensions), any hardware associated with the key component (and its manufacturer and serial number), an owner's manual, and service contract information, as non-limiting examples. Intelligent parameters associated with a functional key component, such as HVAC Equipment, may include the manufacturer name, model number, capacity, efficiency rating, serial number, warranty start date, motor size, SEER rating, an owner's manual associated with the equipment, and service contract information (a simplified data-structure sketch of such parameters follows this passage). In another aspect, the AVM system can autonomously and/or interactively obtain, store, and process data that is provided to it by Sensors located in, on or proximate to components of the Structure, as the Structure is built or when additions are made to the Structure. The generation, modeling, capture, use, and retention of data relating to Performance of specific equipment or, in some cases, aspects relating to the design of a Structure, may be monitored by the system. A Structure may be represented by a three-dimensional model, which may be integrated with information related to the key components and laser-scanned location information. This information may be made available to the Structure owner/Structure builder through a computer, an iPad or tablet, or a Smart Device. The resulting system may be useful to support virtual maintenance support. The three-dimensional model may support enhancement to the two-dimensional views that are typical of paper-based drawings.
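As referenced above, intelligent parameters attached to a key component can be illustrated with a minimal, hypothetical sketch; every field name, value, and the URI scheme below are illustrative assumptions, not data from the disclosure.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a key component carrying intelligent parameters.
@dataclass
class KeyComponent:
    component_type: str
    manufacturer: str
    model_number: str
    serial_number: str
    intelligent_parameters: dict = field(default_factory=dict)

hvac = KeyComponent(
    component_type="HVAC",
    manufacturer="ExampleCo",            # illustrative placeholder
    model_number="XC-2100",
    serial_number="SN-0042",
    intelligent_parameters={
        "capacity_tons": 3.5,
        "seer_rating": 16,
        "warranty_start": "2024-05-01",
        "owners_manual_uri": "avm://docs/hvac/XC-2100",  # hypothetical URI
        "service_contract": "annual",
    })
print(hvac.intelligent_parameters["seer_rating"])
```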
Although three-dimensional renderings are within the scope of information delivered in paper format, a three-dimensional electronic model may render dynamic views from a three-dimensional perspective. In some examples, the viewing may be performed with a viewing apparatus that allows for virtual reality viewing. In some examples, a viewing apparatus, such as a tablet or a virtual reality headset, may include orienting features that allow a user such as a Structure owner, Structure builder, inspector, engineer, designer or the like to view aspects of a model based upon a location, a direction, a height and an angle of view. A current view may be supplemented with various other information relating to features presented in the view. In some examples, the interface may be accessible through a virtual reality headset, computer, or mobile device (such as an iPad, tablet, or phone), as non-limiting examples. Utilizing a device equipped with an accelerometer, such as a virtual reality headset or mobile device, as non-limiting examples, a viewable section of the model may be displayed through the viewing medium (whether on a screen or through a viewing lens), where the viewer's perspective changes as the accelerometer-equipped device moves, allowing the user to change their view of the model. The viewer's Vantage Point may also be adjusted through a certain user input method, or by physical movement of the user, as non-limiting examples. The presented view may be supplemented with "hidden information", which may include, for example, depictions of features that were scanned before walls were installed. This hidden information may include information about pipes, conduits, ductwork and the like. Locations of beams, headers, studs and building Structure may be depicted. In some examples, depiction in a view may include a superposition of an engineering drawing with a designed location; in other examples, images of an actual Structure may be superimposed upon the image based upon As Built scans or other recordations. In a dynamic sense, the display may be used to support viewing of hypothetical conditions such as rerouted utilities and rebuilt walls and other such Structure. In some examples, graphical- or text-based data may be superimposed over an image and be used to indicate specifications, Performance aspects, or other information not related to location, shape and size of features in the image. As presented above, an image may allow for a user to "see through walls" as the augmented reality viewing device simulates a section of a model associated with a space displayed via the virtual reality viewing device. The viewer's perspective may change as an accelerometer in the virtual reality viewing device moves. A user may also change a view of the AVM to include different layers of data available in the AVM. The viewer's Vantage Point may also be adjusted by moving about a physical space that is represented by the model. To achieve this, it may be possible to incorporate positioning hardware directly into a building represented by the virtual model. The positioning hardware may interface with an augmented reality device for positioning data to accurately determine the viewing device's orientation and location with millimeter precision. The positioning hardware may include, for example, a radio transmitter associated with a reference position and height. Height is differentiated from altitude unless altitude is specifically referenced, since relative height is typically more important.
Accordingly, a user may access the AVM on site and hold up a Smart Device, such as an iPad or other tablet, and use the Smart Device to generate a view inside a wall in front of which the Smart Device is positioned, based upon the AVM and the location, height and direction of the Smart Device position. In some examples, through the use of an augmented reality device, it may also be possible to view data, such as user manuals, etc., of associated devices in the view of a user, simply by looking at them in the viewing interface. In other examples, there may be interactive means to select what information is presented on the view. Various electronic-based devices implementing the present invention, such as a laptop or personal computer without an accelerometer, may also be used to view a virtual reality environment. A viewable section of a model may be displayed on a Graphical User Interface (GUI) and the viewer's Vantage Point may be adjusted through a user input device. The ability to track machinery and other components of a system and store the components' associated information—such as, for example, user manuals, product specifications, and part numbers—may allow for much more efficient use and maintenance of the components included within a Structure. Additionally, the system model may also maintain Structure owner manuals and warranties and eliminate the need for storage and tracking of hard copy manuals. In a non-limiting example, a user may access information related to machinery via a Smart Device acting as a Node, positioned in proximity to the machinery and accessing the parallel model in the Virtual Structure. This access may occur, for example, by clicking on the machinery in the Virtual Structure model or by scanning a Code label attached to the machinery. In some examples, an IoT-accessible machine may have the ability to pair with a user's viewing screen and allow the system model to look up and display various information. Thus, the user may have access to various intelligent parameters associated with that machinery such as service records, a manual, service contract information, warranty information, consumables recommended for use such as detergents, installation-related information, power supply information, and the like. In some examples, an AVM system may include interfaces of various kinds to components of the system. Sensors and other operational parameter-detection apparatus may provide routine feedback of information to the model system. Therefore, by processing the data-stream with various algorithms, autonomous characterization of operating condition may be made. The AVM system may thereby provide a user with alerts when anomalies in system Performance are recognized. In some examples, standard Structure maintenance requirements may be sensed or tracked based on usage and/or time, and either notification or, in some cases, scheduling of a service call may be made. In some examples, the alert may be sent via text, email, or both. The Structure user may, accordingly, log back into the Virtual Structure to indicate completion of a maintenance task. Additionally, if appropriate, a vendor of such service or maintenance may indicate a nature and completion of work performed. By detecting operational status, a Virtual Structure may take additional autonomous steps to support optimal operation of a system.
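The data-stream processing that characterizes operating condition and raises alerts can be illustrated with a minimal, hypothetical sketch; the rolling-window approach, window size, and threshold are illustrative assumptions, not the disclosed algorithm.

```python
from collections import deque
from statistics import mean, stdev

# Hypothetical sketch: flag readings far outside a rolling window as
# anomalies that would trigger a text/email alert to the user.
def detect_anomalies(readings, window=20, sigmas=3.0):
    """Yield (index, value) for readings deviating beyond `sigmas`
    standard deviations from the rolling-window mean."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sd = mean(recent), stdev(recent)
            if sd > 0 and abs(value - mu) > sigmas * sd:
                yield i, value          # candidate alert
        recent.append(value)

stream = [20.1, 19.8, 20.3] * 10 + [35.7]   # sudden discharge-temp spike
for idx, val in detect_anomalies(stream):
    print(f"Anomaly at sample {idx}: {val}")
```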
A Virtual Structure may take steps to order and facilitate shipping of anticipated parts needed for a scheduled maintenance event ahead of a scheduled date for the maintenance event (for example, shipping a filter ahead of time so the filter arrives prior to the date it is scheduled to be changed). In another example, a Virtual Structure may recall notes from an Original Equipment Manufacturer (OEM) that could be communicated to a user through the Virtual Structure. In still further examples, a Virtual Structure may support a user involved in a real estate transaction by quantifying service records and Performance of a real Property. In still another aspect, the AVM may establish a standard maintenance and warranty program based on manufacturers' published data and the system's ability to advise Structure owners of upcoming needs and/or requirements. In other examples, the model system may facilitate allowing Structure builders, rental companies, or maintenance companies to consolidate information for volume discounts on parts or maintenance items. The model system may also facilitate minimizing unnecessary time expenditure for Structure builders hoping to minimize needless service calls for warranty issues. This may also allow Structure builders and rental companies attempting to sell a Structure or a rental to demonstrate that care has been taken to maintain the Structure. Benefits derived from monitoring and tracking maintenance with a Virtual Structure may include positively reassuring and educating lenders and/or lien holders that their investment is being properly cared for. In addition, insurance companies may use access to a Virtual Structure to provide factual support that their risk is properly managed. In some examples, a data record in a Virtual Structure model system documenting how an owner has cared for their Structure may be used by insurance companies or lenders to ensure that good care is being taken. Maintenance records demonstrating defined criteria may allow insurance companies to offer a Structure owner a policy discount. Such criteria may include, for example, installation of an alarm system. Additionally, access to a Virtual Structure may allow municipalities and utilities to use the information for accurate metering of utility usage without having to manually check a meter. In the aggregate across multiple Structures, peaks in utility demand may then be more accurately anticipated. In some examples, a Virtual Structure may also be used to assist with Structure improvement projects of various types. In some examples, the Structure improvement projects may include support for building larger additions and modifications and implementing landscaping projects. Smaller projects may also be assisted, including, as a non-limiting example, a project such as hanging a picture, which may be made safer and easier with the 3D "as-built" point cloud information. Hidden water piping, electrical conduits, wiring, and the like may be located, or virtually "uncovered", based on the model database. During construction of a Structure corresponding to a Virtual Structure, discrete features of the As Built Structure may be identified via an identification device such as an IoT device or a QR code label. The ID device may be integrated into the feature or added during the build scope. Performance monitors may also be simultaneously installed to allow monitoring of Key Performance Indicators (KPIs) for selected features.
In an example, an HVAC system may be added to a Structure during construction and, simultaneously, a Performance monitor may be added to the HVAC system. The Performance monitor may be used to monitor various KPIs for an HVAC system. These KPIs may include outdoor air temperature, discharge air temperature, discharge air volume, electrical current, and the like. Similar monitoring capabilities may be installed on all machinery and utilities systems in a Structure. The combination of these numerous system monitors may allow for a fuller picture of the efficiency of operations of various systems.

Use of the Virtual Structure, which may include data values contributed from communication of data from the various monitoring systems, may allow owners to receive periodic reports, such as, in a non-limiting sense, monthly emails which may show their current total energy consumption as well as a breakdown of what key components are contributing to the current total energy consumption. The systems presented herein may be used by owners and Structure managers to make decisions that may improve the cost effectiveness of the system. An additional service for owners may allow the Structure owner to tap into energy-saving options as their Structure ages. As an example, if a more efficient HVAC system comes on the market, which may include perhaps a new technology Node, the user may receive a "Savings Alert". Such an alert may provide an estimated energy savings of the recommended modification along with an estimate of the cost of the new system. These estimates may be used to generate a report to the owner of an estimated associated return on investment (ROI) or estimated payback period should the Structure owner elect to replace their HVAC system (a minimal sketch of such a calculation appears following this passage). In some examples, a user of an AVM of a Virtual Structure may set a threshold value for the required ROI, above which the user may be interested in receiving such an alert when that ROI is achieved. This information will be based on data derived from actual operating conditions and actual historical usage as well as current industry information. Predictive maintenance and energy savings may thereby be provided to key systems via Smart Structure Total Cost of Ownership ("TCO") branded Sensors.

With the ability to collect and utilize relevant Structure information with the model system, the aggregation of data and efficiency experience from numerous systems may allow for analysis of optimization schemes for various devices, machinery, and other Structure components that includes real installed-location experience. Analysis from the aggregated data may be used to provide feedback to equipment manufacturers, building materials fabricators, and similar suppliers.

Referring to FIGS. 3A-3D, an illustration of the collection of data by scanning a Structure during its construction is provided. In FIG. 3A, a depiction of a site for building a Structure is illustrated. The depiction may represent an image that may be seen from above the site. Indications of Property boundaries such as corners 301 and Property borders 302 are represented and may be determined based on site scanning with Property markings from site surveys or may be entered based on global coordinates for the Property lines. An excavated location 303 may be marked out. Roadways, parking and/or loading areas 304 may be located. Buried utilities such as buried telephone 305, buried electric 306, and buried water and sewer 307 are located in the model as illustrated. In some examples, other site services such as a buried sprinkler system 308 may also be located.
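Returning to the Savings Alert described above, the following is a minimal sketch, in Python, of how an estimated payback period might be compared against an owner-set threshold; the function names and the dollar figures are illustrative assumptions and not part of the specification:

    # Hypothetical sketch of a "Savings Alert" decision; all names and
    # figures are illustrative assumptions, not the specification's method.

    def payback_period_years(system_cost: float, annual_savings: float) -> float:
        """Estimated years until cumulative energy savings repay the system cost."""
        if annual_savings <= 0:
            return float("inf")
        return system_cost / annual_savings

    def should_issue_savings_alert(system_cost: float,
                                   annual_savings: float,
                                   max_payback_years: float) -> bool:
        """Issue an alert only when the estimated payback beats the owner's threshold."""
        return payback_period_years(system_cost, annual_savings) <= max_payback_years

    # Example: a $6,000 replacement HVAC system saving an estimated $1,500 per
    # year meets an owner threshold of 5 years (payback = 4.0 years).
    assert should_issue_savings_alert(6000.0, 1500.0, 5.0)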
Referring to FIG. 3B, the excavated location 303 may be scanned or imaged to determine the location of foundation elements. In some non-limiting examples, a foundational footing 321 along with buried utilities 322 is illustrated. The buried utilities may include utilities such as electric lines, water supply (whether from a utility or a well on location), sewer or septic system lines, and telecommunications lines such as telephone, cable, and internet. Other footing elements 323 may be located at structurally required locations as they are built. In some examples, a scanning system may provide the locational orientation relative to site-orientation markings. In other examples, aerial imagery such as may be obtained with a drone may be used to convert features to accurate location imagery.

Referring to FIG. 3C, a wall 331 of the Structure in the process of being built is illustrated. The Structure may be scanned by a scanning element 330. In some examples, a laser three-dimensional Scanner may be used. The wall may have supporting features like top plates 333, headers 336, and studs 332, as well as internal items such as pipes 334, electrical conduits, and wires 335. There may be numerous other types of features within walls that may be scanned as they occur, such as air ducts, data cables, video cables, telephone cables, and the like.

Referring to FIG. 3D, the wall may be completed, with Structure components behind the wall facing 340 no longer visible. Electrical outlets 341 and door Structures 342 may be scanned by a scanning element 330.

Referring to FIG. 3E, a wireless Node may be fixedly attached to a position in or proximate to a Structure. In some embodiments, attachment may be accomplished during construction and/or retrofitting of a structure, in which case the functionality described herein may be made operational to track Agents, materials, equipment, and the like during a construction phase, and also to track a location of materials and equipment included in the structure. Nodes may be installed as Reference Point Transceivers or be attached to items that dynamically change positions, such as, by way of non-limiting example, one or more of: Agents, building materials, structural components, electrical components, plumbing components, equipment, machines, and architectural aspects (e.g., a corner, an arch, an extremity, and the like).

In some non-limiting examples of a wireless Node, a Bluetooth communications hub compatible with a standard such as, for example, BLE5.1 (Bluetooth Low Energy 5.1) or Wi-Fi RTT may be fixedly attached to a structural component, such as a door header 336, as Node 350 acting as a Reference Point Transceiver. In another example, a Node 351 may act as a Reference Point Transceiver and be attached to a wall stud, preferentially one that has electrical conduit 335 running along it. In some embodiments, the electrical conduit 335 may supply power to the Node 351. Alternatively, a Node 350 may be configured as part of a receptacle box. In some examples, one or more Nodes 350-351 may be battery powered. One or more Nodes 350-351 may be powered via electrical supply wiring 353 from a nearby power conduit 335 so that the Node 350-351 may be tied into a centrally powered electrical system. Moreover, the Nodes may be adapted to de-power and de-couple from a network based on a power supply status or a power drain change.
FIG. 3F illustrates an exemplary Agent 365 supporting a Smart Device 366 with wireless communications components enabling RF communications such as one or more of: Cellular, Wi-Fi, Bluetooth, Zigbee, and other wireless capabilities. The Smart Device 366 may also include devices capable of receiving and/or transmitting with infrared capabilities. The Smart Device 366 may also include, or be in logical communication with, transducers capable of emitting sound, and in some examples, infrasound and/or ultrasonic sound, as well as microphones capable of detecting ultrasonic sound and/or infrasound.

An Agent 365 may become positioned proximate to a door Structure 342 such that the Agent 365 supported Smart Device 366 may wirelessly communicate with a Node 362 fixedly attached to the Structure 342. The Node 362 may be in electrical communication with one or more of: a set of protruding antennas 360 and an antenna array device 361 (which may include a multitude of antennas separated at distances efficient for communication and/or location determination). A wireless Node with antennas 362 may be located proximate to a typical wall outlet Structure. Any of these Nodes 360-362 may communicate with the Smart Device for location protocols such as RSSI, Time of Flight, and Angle of Arrival, as non-limiting examples. The Nodes 360-362 may have a carefully measured distance characterization for each of the antennas that they employ, and one of the antennas involved in wireless communication may be further characterized as being a local or global origin point (0,0,0 in Cartesian notation). In other examples, none of the antenna locations may be located at a local or global origin point, but rather a known offset from a specified origin point 370 may be characterized for each of the hub antenna locations.

The Agent 365 may proceed through a threshold of the door Structure 342 and be located on the other side. Nodes 360-362 may each protrude from both sides of a wall and/or may have a second set of antennas located on a distal side of the wall. In other examples, materials used in wall construction may be configured to provide minimal interference with wireless signals travelling through the wall materials. For configurations with a second set of antennas, as the user passes through the door, a communication between the Smart Device 366 and the Node 360-362 may transfer from antennas protruding on a proximate wall side to antennas protruding on a distal wall side.

A geographic position of a Structure may be calculated via wireless communications, such as those using sub-GHz wavelengths, GPS, or other longer-range wavelengths, received by a Smart Device from within the Structure. The geographic position may be used to indicate a Structure identification. A position within the Structure may be determined based upon one or more of: an angle of arrival and angle of departure of a wireless signal and one or more timing signals used to determine a distance of the Smart Device from: a) a Node acting as a Reference Point Transceiver; or b) a dynamic position Node. In some embodiments, an angle of departure or an angle of arrival is not necessary, and a position may be determined by measuring a distance to three or more positioning reference devices. However, in some embodiments, it may still be useful to compute an angle between the positioning reference devices and/or the Node.
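As a minimal illustration of determining a position from measured distances to three positioning reference devices, the following Python sketch solves the standard trilateration geometry for three non-collinear reference points at known coordinates; the coordinates and distances shown are hypothetical, and a deployed controller might instead use a noise-tolerant least-squares variant:

    import math

    def _sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def _add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
    def _scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)
    def _dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def _cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def _norm(a): return math.sqrt(_dot(a, a))

    def trilaterate(p1, p2, p3, r1, r2, r3):
        """Return the two candidate positions at distances r1, r2, r3 from
        three non-collinear reference points p1, p2, p3 (Cartesian 3-tuples)."""
        ex = _scale(_sub(p2, p1), 1.0 / _norm(_sub(p2, p1)))
        i = _dot(ex, _sub(p3, p1))
        ey_raw = _sub(_sub(p3, p1), _scale(ex, i))
        ey = _scale(ey_raw, 1.0 / _norm(ey_raw))
        ez = _cross(ex, ey)
        d = _norm(_sub(p2, p1))
        j = _dot(ey, _sub(p3, p1))
        x = (r1*r1 - r2*r2 + d*d) / (2*d)
        y = (r1*r1 - r3*r3 + i*i + j*j) / (2*j) - (i/j)*x
        z = math.sqrt(max(r1*r1 - x*x - y*y, 0.0))  # clamp noise-induced negatives
        base = _add(p1, _add(_scale(ex, x), _scale(ey, y)))
        return _add(base, _scale(ez, z)), _add(base, _scale(ez, -z))

    # Hypothetical reference points (meters) and measured distances; both
    # candidates collapse to approximately (5, 5, 0).
    print(trilaterate((0, 0, 0), (10, 0, 0), (0, 10, 0), 7.071, 7.071, 7.071))

In practice, the three distances might come from disparate modalities, consistent with the multi-modality triangulation described elsewhere herein.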
Additional aspects that may be referenced to determine a location of a Node or Smart Device accurately may include one or more of: relative signal strength received from wireless transmissions emanating from another Node; time of arrival of radio signals of wireless transmissions emanating from another Node; generating a distance to another Node based upon a time difference of arrival of radio signals of wireless transmissions emanating from another Node; or an angle of arrival and/or angle of departure of a wireless transmission from another Node. The above steps may be repeated for multiple Nodes of various types, including both Reference Point Transceiver Nodes and dynamic position Nodes.

As mentioned above, in some embodiments, wireless communications may include a quantification of a condition within or proximate to a Structure. The condition may include, for example, one or more of: a vibration measured with an accelerometer; a temperature of at least a portion of the Structure; an electrical current measurement to equipment installed in the Structure; a number of cycles of operation of equipment installed in the Structure; a number of cycles of operation of machinery installed in the Structure; an electrical current measurement to an electrical device located within the Structure; a vibration or other sensor measurement associated with movement of an Agent or person within the Structure; or presence of water and/or humidity within the Structure. A vibration pattern may be associated with a specific occupant, and tracking the movement of the specific occupant through the Structure may be based upon measured vibration patterns. Similarly, a vibration pattern may be associated with a particular activity of a specific occupant, and the activity of the specific occupant may be tracked within the Structure based upon measured vibration patterns.

Referring now to FIG. 4, according to the present invention, an Agent 400 may support a Node with one or more Transceivers. The Transceivers may include one or more of: a Multi-modality Transceiver 401; Transceivers having a same modality 402; Transceivers of different modalities 403; transmitters of a single modality 404; transmitters of multiple modalities 405; receivers of a single modality 406; and receivers of multiple modalities 407. Similarly, a Node deployed as a Reference Point Transceiver may include multiple Transceivers, transmitters, and receivers 401-408. The multiple Transceivers, transmitters, and receivers 401-408 may include one or both of: transmitters and receivers of a same modality; and transmitters and receivers of different modalities. A modality, as used in conjunction with a Transceiver, transmitter, and/or receiver, refers to one or both of a bandwidth of wireless communication and a protocol associated with a bandwidth. By way of non-limiting example, a modality, as used in relation to a Transceiver, transmitter, and/or receiver, may include: Wi-Fi; Wi-Fi RTT; Bluetooth; UWB; ultrasonic; sonic; infrared; or another logical communication medium.

FIG. 5 illustrates Nodes with Reference Point Transceivers 501-504 that may be deployed in a defined area 506, such as a Structure, to determine a location 507 of an Agent 500 supporting a Node 505. Nodes with Reference Point Transceivers 501-504 may be fixed in a location and wirelessly communicate in a manner suitable for determination of a position of the Node Transceiver 505 supported by the Agent 500.
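As a minimal numeric illustration of the time-of-arrival aspect listed above, the Python sketch below converts a radio transmission's travel time into a distance; the synchronized-clock assumption and the example flight time are hypothetical:

    SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

    def distance_from_time_of_flight(t_departure_s: float, t_arrival_s: float) -> float:
        """Distance implied by the one-way travel time of a radio transmission.
        Assumes synchronized clocks; round-trip-time schemes halve the product instead."""
        return SPEED_OF_LIGHT_M_PER_S * (t_arrival_s - t_departure_s)

    # A 20-nanosecond flight time corresponds to roughly 6 meters.
    print(distance_from_time_of_flight(0.0, 20e-9))  # ~5.996 m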
Transceiving may be via wireless transmission using one or more bandwidths and communication protocols by a Node Transceiver 505 supported by the Agent 500. By way of non-limiting example, Node Transceivers 505 supported by the Agent 500 may be included in, or be in logical communication with, a Smart Device, such as a smart phone, tablet, or other Agent-supportable device, such as a headgear, ring, watch, wand, or pointer, with Node Transceivers 505 able to Transceive with the Reference Point Transceivers 501-504. The Reference Point Transceivers 501-504 may include devices such as, for example, a radio transmitter, a radio receiver, a light generator, or an image-recognizable device. A radio transmitter may include a router or other Wi-Fi, Bluetooth, or other communication device for entering into logical communication with a controller. In some embodiments, Reference Point Transceivers 501-504 may include a Wi-Fi router that additionally provides access to a distributed network, such as the Internet. Transceiving may yield Cartesian Coordinates, Polar Coordinates, Vector values, a GPS position, or other data that may be utilized for one or more of: locating one or both of an Agent 500 and a supported Node 505; indicating a direction of interest; and identifying a Structure or defined area 506.

A precise location may be determined based upon wireless transmissions between Nodes. Timing determinations (as well as angle of arrival, angle of departure, transmission strength, transmission noise, and transmission interruptions) may be considered in generating relative positions of Nodes. Additional considerations may include AI and unstructured queries of transmissions between Nodes and triangulation logic based upon a measured distance from three or more Reference Point Nodes 501-504. For example, a radio transmission or light emission may be measured, and timing associated with the radio transmission or light emission may be used to determine a distance between Nodes. Distances from three reference position identifiers 501-503 may be used to generate a position of a Node in consideration. Other methodologies include determination of a distance from one or more Nodes and a respective angle of arrival and/or angle of departure of a radio or light transmission between the Node in consideration and another Node (Reference Point Node or dynamic position Node).

Other embodiments may include a device recognizable via image analysis, and a camera or other Image Capture Device, such as a CCD device, may capture an image of three or more Reference Point Nodes 501-504. Image analysis may recognize the identification of each of three or more of the Reference Point Transceivers 501-504, and a size ratio of the respective image-captured Reference Point Transceivers 501-504 may be utilized to calculate a precise position. Similarly, a height designation may be made via triangulation using the position identifiers as reference to a known height or a reference height.

Triangulation essentially includes determining an intersection of three distances 508-510, each distance 508-510 calculated from a reference point 501-504 to an Agent-supported device 505. The present invention allows for a first distance 508 to be determined based upon a wireless communication in a first modality, and a second distance 509 and a third distance 510 determined based upon a wireless communication in a same or different modality as the first modality.
For example, a first distance 508 may be determined based upon a wireless communication using Wi-Fi; a second distance 509 may be determined based upon a wireless communication using Bluetooth; and a third distance 510 may be determined based upon a wireless communication using ultrasonic communication (other combinations of same and/or different communication modalities are also within the scope of the present invention).

Referring now to FIG. 6, an automated controller is illustrated that may be used to implement various aspects of the present invention in various embodiments. Controller 600 may be included in one or more of: a wireless tablet or handheld smart device, a server, or an integrated circuit incorporated into a Node, appliance, equipment item, machinery, or other automation. The controller 600 includes a processor unit 602, such as one or more semiconductor-based processors, coupled to a communication device 601 configured to communicate via a communication network (not shown in FIG. 6). The communication device 601 may be used to communicate, for example, with one or more online devices, such as a smart device, a Node, a personal computer, a laptop, or a handheld device. The processor 602 is also in communication with a storage device 603. The storage device 603 may comprise any appropriate information storage device, including combinations of digital storage devices (e.g., an SSD), optical storage devices, and/or semiconductor memory devices such as Random Access Memory (RAM) devices and Read Only Memory (ROM) devices. The storage device 603 can store a software program 604 with executable logic for controlling the processor 602. The processor 602 performs instructions of the software program 604, and thereby operates in accordance with the present invention. The processor 602 may also cause the communication device 601 to transmit information, including, in some instances, timing transmissions, digital data, and control commands to operate apparatus to implement the processes described above. The storage device 603 can additionally store related data in a database 605 and database 606, as needed.

Referring now to FIG. 6A, an illustration of an exemplary wireless Node 610 configured with a transceiver 624 to wirelessly communicate via one or more wireless communication Modalities, including a bandwidth and protocol, such as the Bluetooth 5.1, BLE5.1, Wi-Fi RTT, and/or GPS standards, is provided. As discussed, many different Modalities of wireless technology may be utilized with the content presented herein, but a BLE5.1 "radio" module is an interesting example since its standards provide for angle of arrival (AoA) capability as well as angle of departure (AoD) and a distance determination based upon a timing signal. With AoA/AoD, a designed antenna array 625 can be used by an RF Transceiver 624 to measure a phase shift amongst multiple antenna elements to estimate distance differences between the antennas and to extract an angle from the antenna array to the source of radiation. A BLE5.1-consistent multichip transceiver 624 may include circuitry and software code to perform the acquisition of data and determine the angle of arrival in some examples. In other examples, a BLE5.1-consistent multichip transceiver 624 may control the acquisition of data from an antenna array while streaming the data to off-module processing capabilities. The BLE5.1-consistent Node 610 may contain functional blocks of circuitry for peripheral 620 control.
The peripherals may include a connection to external host controllers/MCUs 621. The peripheral 620 control may also interact with peripheral and IoT Sensors and other devices 622. The BLE5.1-consistent Node 610 may include a processing element 623 which may have its own memory of different types as well as capabilities for encryption of data. The BLE5.1-consistent Node 610 may also have a Transceiver 624. This circuitry may include Baseband and RF functions as well as control of the AoA functions and the self-verifying array functions. The Bluetooth communications 624 may receive signals through an on-module antenna 625, or an external antenna or array of antennas may provide external RF input 626. The BLE5.1-consistent Node 610 may include functional circuitry blocks for control of Security functions 627, cryptogeneration, random number generation, and the like. The BLE5.1-consistent Node 610 may include functional blocks for power management 628. The BLE5.1-consistent Node 610 may be operative for quantification of temperature aspects of the Node 610, battery-control functions, and power-conversion functions. An external power source 633 may be included to provide electrical energy to a power management unit 628, which, in some examples, may be supplied from a battery unit or, in other examples, from a grid-connected power supply source. The BLE5.1-consistent Node 610 may include functions for control of timing and triggering 629. In a related sense, the BLE5.1-consistent Node 610 may include functions for clock management 630 within the module. The BLE5.1-consistent Node 610 may also include circuit elements that are always on 631 to allow external connections 632 to interact with the device and perhaps awaken it from a dormant state. There may also be other customized and/or generic functions that are included in a BLE5.1-consistent Node 610 and/or multichip module.

Referring now to FIG. 6B, a Node 610 included in a higher-order deployment assembly is illustrated. A deployment Node 650 may be in logical communication with one or more of: sensors, customized control commands, antenna array designs, and the like. A Node 650 may include multiple antennas or antenna arrays 651-656. As described previously, the Node 650 may include a transceiver module 610, and in some examples, the transceiver module may include Bluetooth-adherent aspects. Communications received via an antenna 651-656 may be directly ported into the transceiver module 610. Embodiments may also include routing particular antenna/antenna array outputs to the transceiver module 610 in a controlled and timed sequence. A processing Module 670 may coordinate a connection of the Node 650 to external peripherals. In some examples, circuitry 680 may logically communicate with one or more of: a Peripheral, a data Connection, Camera and Sensor controllers, and components to perform data and image acquisition of various kinds, or it may interface external components with the Node 650. The Node 650 may also include its own power management unit 660, which may take connected power or battery power or both and use it to supply the various power needs of the components of the assembly. The Node 650 may have its own processing modules 670 or collections of different types of processing functions which may have dedicated memory components 671.
In some examples, specialized processing chips of various kinds, such as Graphical Processing Units and fast mathematics function calculators as well as dedicated artificial intelligence processing chips, may be included to allow the Node 650 to perform various computational functions, including location determination of wirelessly connected devices, amongst other functions. There may be numerous other functions to include in a Node 650 and alternative types of devices to perform the functions presented herein.

In some examples, as illustrated in FIG. 6C, antenna arrays 690, 691 may be assembled into a "Puck", shown as Node 650, wherein the antenna arrays are configured with antenna designs which have directional aspects to them. Directional aspects may mean that the antennas may be sensitive to incident radiation coming from a certain direction but not sensitive to radiation coming from a different direction. Antenna arrays 690, 691 may include antennas that may have maximized signals for a particular incident waveform, the identification of which antenna may provide or supplement angle-of-incidence calculations. A directional antenna may include, for example, an antenna with RF shielding over some portion of the antenna's circumference, for example 270° (or some other subset of a 360° circumference of an antenna); alternatively, an antenna array may have RF shielding to block and/or reflect an RF signal back towards the antenna-receiving portion. Other directional antennas may include a shield blocking less than 360° of RF transmissions that rotates around a receiving portion of an antenna and only receives RF communications from a direction of an opening in the shield. Shielded antennas may provide improved determination of a direction from which a wireless transmission is being received, since RF noise is blocked from a significant portion of a reception sphere.

Referring now to FIG. 7, a block diagram of an exemplary mobile device 702 is illustrated. The mobile device 702 comprises an optical capture device 708 to capture an image and convert it to machine-compatible data, and an optical path 706, typically a lens, an aperture, or an image conduit, to convey the image from the rendered document to the optical capture device 708. The optical capture device 708 may incorporate a CCD, a Complementary Metal Oxide Semiconductor (CMOS) imaging device, or an optical Sensor 724 of another type. A microphone 710 and associated circuitry may convert the sound of the environment, including spoken words, into machine-compatible signals. Input facilities may exist in the form of buttons, scroll wheels, or other tactile Sensors such as touchpads. In some embodiments, input facilities may include a touchscreen display. Visual feedback to the user is possible through a visual display, touchscreen display, or indicator lights. Audible feedback 734 may come from a loudspeaker or other audio transducer. Tactile feedback may come from a vibrate module 736. A motion Sensor 738 and associated circuitry convert the motion of the mobile device 702 into machine-compatible signals. The motion Sensor 738 may comprise an accelerometer that may be used to sense measurable physical acceleration, orientation, vibration, and other movements. In some embodiments, motion Sensor 738 may include a gyroscope or other device to sense different motions. A location Sensor 740 and associated circuitry may be used to determine the location of the device.
The location Sensor 740 may detect Global Position System (GPS) radio signals from satellites or may use assisted GPS, where the mobile device may use a cellular network to decrease the time necessary to determine location. In some embodiments, the location Sensor 740 may use radio waves to determine the distance from known radio sources such as cellular towers to determine the location of the mobile device 702. In some embodiments these radio signals may be used in addition to GPS.

The mobile device 702 comprises logic 726 to interact with the various other components, possibly processing the received signals into different formats and/or interpretations. Logic 726 may be operable to read and write data and program instructions stored in associated storage or memory 730 such as RAM, ROM, flash, or other suitable memory. It may read a time signal from the clock unit 728. In some embodiments, the mobile device 702 may have an on-board power supply 732. In other embodiments, the mobile device 702 may be powered from a tethered connection to another device, such as a Universal Serial Bus (USB) connection.

The mobile device 702 also includes a network interface 716 to communicate data to a network and/or an associated computing device. Network interface 716 may provide two-way data communication. For example, network interface 716 may operate according to the internet protocol. As another example, network interface 716 may be a local area network (LAN) card allowing a data communication connection to a compatible LAN. As another example, network interface 716 may be a cellular antenna and associated circuitry which may allow the mobile device to communicate over standard wireless data communication networks. In some implementations, network interface 716 may include a Universal Serial Bus (USB) to supply power or transmit data. In some embodiments other wireless links may also be implemented.

As an example of one use of mobile device 702, a reader may scan some coded information from a location marker in a Structure with the mobile device 702. The coded information may be included on apparatus such as a hash code, bar code, RFID, or other data storage device. In some embodiments, the scan may include a bit-mapped image via the optical capture device 708. Logic 726 causes the bit-mapped image to be stored in memory 730 with an associated timestamp read from the clock unit 728. Logic 726 may also perform optical character recognition (OCR) or other post-scan processing on the bit-mapped image to convert it to text. Logic 726 may optionally extract a signature from the image, for example by performing a convolution-like process to locate repeating occurrences of characters, symbols, or objects, and determine the distance or number of other characters, symbols, or objects between these repeated elements. The reader may then upload the bit-mapped image (or text or other signature, if post-scan processing has been performed by logic 726) to an associated computer via network interface 716.

As an example of another use of mobile device 702, a reader may capture some text from an article as an audio file by using microphone 710 as an acoustic capture port. Logic 726 causes the audio file to be stored in memory 730. Logic 726 may also perform voice recognition or other post-scan processing on the audio file to convert it to text. As above, the reader may then upload the audio file (or text produced by post-scan processing performed by logic 726) to an associated computer via network interface 716.
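The signature extraction described above might, under one illustrative reading, be sketched in Python as follows: repeated tokens in recognized text are located and the gaps between their successive occurrences recorded; the function name and the approach are assumptions for illustration, not the specification's method:

    from collections import defaultdict

    def text_signature(text: str) -> dict:
        """Illustrative sketch: locate repeated tokens in OCR-recognized text
        and record the gaps (in tokens) between successive occurrences."""
        positions = defaultdict(list)
        for index, token in enumerate(text.split()):
            positions[token].append(index)
        # Keep only tokens that repeat; the gap pattern serves as a crude signature.
        return {token: [b - a for a, b in zip(idx, idx[1:])]
                for token, idx in positions.items() if len(idx) > 1}

    # Example: "the" repeats with gaps of 5 and 4 tokens in this fragment.
    print(text_signature("the quick brown fox saw the lazy dog near the gate"))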
A directional Sensor 741 may also be incorporated into the mobile device 702. The directional Sensor may be a compass and produce data based upon a magnetic reading or based upon network settings.

Referring now to FIG. 8, additional apparatus and methods for determining a geospatial location and determination of a direction of interest may include one or both of an enhanced Smart Device and a Smart Device in logical communication with wireless position devices 803-810. The importance of geospatial location and determination of a direction of interest is discussed in considerable detail above. As illustrated, a Smart Device 801 may be in logical communication with one or more wireless position devices 803-810 strategically located in relation to the physical dimensions of the Smart Device. For example, the Smart Device 801 may include a smart phone or tablet device with a user interface surface 820 that is generally planar. The user interface surface 820 will include a forward edge 818 and a trailing edge 819.

In some preferred embodiments, the Smart Device will be fixedly attached to a smart receptacle 802. The smart receptacle 802 may have the appearance of a passive case, such as the type typically used to protect the Smart Device 801 from a damaging impact. However, according to the present invention, the smart receptacle 802 will include digital and/or analog logical components, such as wireless position devices 803-810. The wireless position devices 803-810 include circuitry capable of receiving wireless transmissions from multiple wireless positional reference Transceivers 811-814. The wireless transmissions will include one or both of analog and digital data suitable for calculating a distance from each respective reference point 811-814.

In some embodiments, the smart receptacle 802 will include a connector 815 for creating an electrical path for carrying one or both of electrical power and logic signals between the Smart Device 801 and the smart receptacle 802. For example, the connector 815 may include a mini-USB connector or a Lightning connector. Additional embodiments may include an inductive coil arrangement for transferring power. Embodiments may also include wireless transmitters and receivers to provide logical communication between the wireless position devices 803-810 and the Smart Device 801. Logical communication may be accomplished, for example, via one or more of: Bluetooth, ANT, and infrared media.

Reference Transceivers 811-814 provide wireless transmissions of data that may be received by wireless position devices 803-810. The wireless transmissions are utilized to generate a position of the respective wireless position devices 803-810 in relation to the reference Transceivers 811-814 providing the wireless transmissions to the wireless position devices 803-810. The wireless position devices 803-810 are associated with one or more of: a position in a virtual model; a geographic position; a geospatial position in a defined area, such as a Structure; and a geospatial position within a defined area (such as, for example, a Property). According to the present invention, a Smart Device may be placed into a case, such as a smart receptacle 802, that includes two or more wireless position devices 803-810. The wireless position devices 803-810 may include, for example, one or both of a receiver and a transmitter, in logical communication with an antenna configured to communicate with reference Transceivers 811-814.
Communications relevant to location determination may include, for example, one or more of: timing signals; SIM information; received signal strength; GPS data; raw radio measurements; Cell-ID; round-trip time of a signal; phase and angle of a received/transmitted signal; time of arrival of a signal; a time difference of arrival; and other data useful in determining a location.

The Nodes 803-810 may be located strategically in the case 802 to provide intuitive direction to a user holding the case 802, and also to provide a most accurate determination of direction. Accordingly, a forward Node 803 may be placed at a top of a Smart Device case and a rearward Node 804 may be placed at a bottom of a Smart Device case 802. In some embodiments, each of four corners of a case may include a Node 805, 806, 807, 808. Still other embodiments may include a Node 809 and 810 on each lateral side. The present invention provides for determination of a location of two or more wireless positioning devices 803-810 and generation of one or more directional Vectors 817 and/or Rays based upon the relative position of the wireless positioning devices 803-810. For the sake of convenience, in this specification, where discussion of a Vector does not include specific limitations as to a length of the Vector and is primarily concerned with a direction, a Ray of unlimited length may also be utilized. In some embodiments, multiple directional Vectors 817 are generated and a direction of one or more edges, such as a forward edge, is determined based upon the multiple directional Vectors 817.

According to the present invention, a geospatial location relative to one or more known reference points is generated. The geospatial location in space may be referred to as having an X,Y position indicating a planar designation (e.g., a position on a flat floor), and a Z position (e.g., a level within a Structure, such as a second floor) may be generated based upon indicators of distance from reference points. Indicators of distance may include a comparison of timing signals received from wireless references. A geospatial location may be generated relative to the reference points. In some embodiments, a geospatial location with reference to a larger geographic area is associated with the reference points; however, in many embodiments, the controller will generate a geospatial location relative to the reference point(s), and it is not relevant where the position is located in relation to a greater geospatial area.

In some embodiments, a position of a Smart Device may be ascertained via one or more of: triangulation, trilateration, and multilateration (MLT) techniques. A geospatial location based upon triangulation may be generated based upon a controller receiving a measurement of angles between the position and known points at either end of a fixed baseline. A point of a geospatial location may be determined based upon generation of a triangle with one known side and two known angles. A geospatial location based upon trilateration may be generated based upon a controller receiving wireless indicators of distance and geometry of geometric shapes, such as circles, spheres, triangles, and the like. A geospatial location based upon multilateration may be generated based on a controller receiving a measurement of a difference in distance to two reference positions, each reference position being associated with a known location. Wireless signals may be available periodically, within determined timespans, or continually.
The determination of the difference in distance between two reference positions provides multiple potential locations at the determined distance. A controller may be used to generate a plot of potential locations. In some embodiments, the potential determinations generally form a curve. Specific embodiments will generate a hyperbolic curve. The controller may be programmed to execute code to locate an exact position along a generated curve, which is used to generate a geospatial location. The multilateration thereby receives as input multiple measurements of distance to reference points, wherein a second measurement taken to a second set of stations (which may include one station of a first set of stations) is used to generate a second curve. A point of intersection of the first curve and the second curve is used to indicate a specific location (a sketch of such an intersection search appears following this passage).

In combination with, or in place of, directional movement of a Smart Device 801 in order to quantify a direction of interest to a user, some embodiments may include an electronic and/or magnetic directional indicator that may be aligned by a user in a direction of interest. Alignment may include, for example, pointing a specified side of a device, or pointing an arrow or other symbol displayed upon a user interface on the device, towards a direction of interest. In a similar fashion, triangulation may be utilized to determine a relative elevation of the Smart Device as compared to a reference elevation of the reference points.

It should be noted that although a Smart Device is generally operated by a human user, some embodiments of the present invention include a controller, accelerometer, data storage medium, and Image Capture Device, such as a Charge Coupled Device ("CCD") capture device and/or an infrared capture device, being available in a handheld or unmanned vehicle or other Agent. An unmanned vehicle may include, for example, an unmanned aerial vehicle ("UAV") or an unmanned ground vehicle ("UGV"), such as a unit with wheels or tracks for mobility. A radio control unit may be used to transmit control signals to a UAV and/or a UGV. A radio control unit may also receive wireless communications from the unmanned vehicle. In some embodiments, multiple unmanned vehicles may capture data in a synchronized fashion to add depth to the image capture and/or a 3-dimensional and 4-dimensional (over time) aspect to the captured data. In some implementations, a UAV position will be contained within a perimeter and the perimeter will have multiple reference points to help each UAV (or other unmanned vehicle) determine a position in relation to static features of a building within which it is operating and also in relation to other unmanned vehicles. Still other aspects include unmanned vehicles that may not only capture data but also function to perform a task, such as painting a wall, drilling a hole, cutting along a defined path, or another function. As stated throughout this disclosure, the captured data may be incorporated into an AVM.

In still other embodiments, captured data may be compared to a library of stored data using recognition software to ascertain and/or affirm a specific location, elevation, and direction of an image capture location and proper alignment with the virtual model. Still other aspects may include the use of a compass incorporated into a Smart Device.
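Returning to multilateration, the following Python sketch numerically searches for the planar point whose distance differences to pairs of reference stations best match measured differences, in effect locating the intersection of the hyperbolic curves described above; the station coordinates are hypothetical, and a brute-force grid search stands in for whatever solver a controller might actually execute:

    import math

    def _dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def locate_by_tdoa(stations, pairs, bounds, step=0.1):
        """Brute-force sketch: find the planar point whose distance differences
        to station pairs best match the measured differences.

        stations: dict name -> (x, y)
        pairs:    list of (name_a, name_b, measured_difference), where the
                  measurement is distance(point, a) - distance(point, b)
        bounds:   (xmin, xmax, ymin, ymax) search window
        """
        xmin, xmax, ymin, ymax = bounds
        best, best_err = None, float("inf")
        y = ymin
        while y <= ymax:
            x = xmin
            while x <= xmax:
                err = sum((_dist((x, y), stations[a]) - _dist((x, y), stations[b]) - dd) ** 2
                          for a, b, dd in pairs)
                if err < best_err:
                    best, best_err = (x, y), err
                x += step
            y += step
        return best

    # Hypothetical stations and difference measurements for a point near (3, 4):
    stations = {"s1": (0.0, 0.0), "s2": (10.0, 0.0), "s3": (0.0, 10.0)}
    true = (3.0, 4.0)
    pairs = [("s1", "s2", _dist(true, stations["s1"]) - _dist(true, stations["s2"])),
             ("s1", "s3", _dist(true, stations["s1"]) - _dist(true, stations["s3"]))]
    print(locate_by_tdoa(stations, pairs, (0, 10, 0, 10)))  # approximately (3.0, 4.0)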
By way of non-limiting example, functions of the methods and apparatus presented herein may include one or more of the following factors that may be modeled and/or tracked over a defined period of time, such as, for example, an expected life of a build (such as 10 years or 20 years).

Referring now to FIG. 8A, in some embodiments, Nodes 803A-810A may be incorporated into a Smart Device 801A and not require a smart receptacle to house Nodes 803-810. Nodes 803A-810A that are incorporated into a Smart Device, such as a smart phone or smart tablet, will include internal power and logic connections and therefore not require wireless communication between the controller in the Smart Device 801A and the Nodes 803A-810A. A Smart Device 801A with integrated Nodes 803A-810A and a Smart Device 801 with Nodes 803-810 in a smart receptacle 802 may provide a directional indication, such as a directional Vector 817, 817A, without needing to move the Smart Device from a first position to a second position, since a directional Vector may be determined from the relative position of a first Node and a second Node.

In exemplary embodiments, as described herein, the distances may be triangulated based on measurements of Wi-Fi strength at two points. A Wi-Fi signal propagates outward as a wave, ideally according to an inverse square law. Ultimately, a feature of the present invention relies on measuring relative distances at two points. In light of the speed of Wi-Fi waves and the real-time computations involved in orienteering, these computations need to be as computationally simple as possible. Thus, depending upon a specific application and mechanism for quantifying a condition, such as a measurement, various coordinate systems may be desirable. In particular, if the Smart Device moves only in a planar direction while the elevation is constant, or only at an angle relative to the ground, the computation is simpler.

One exemplary coordinate system includes a polar coordinate system. One example of a three-dimensional polar coordinate system is a spherical coordinate system. A spherical coordinate system typically comprises three coordinates: a radial coordinate, a polar angle, and an azimuthal angle (r, θ, and φ, respectively, though θ and φ are occasionally swapped conventionally). By way of non-limiting example, suppose Point 1 is considered the origin for a spherical coordinate system (i.e., the point (0, 0, 0)). Each Wi-Fi emitter e1, e2, e3 can be described as points (r1, θ1, φ1), (r2, θ2, φ2), and (r3, θ3, φ3), respectively. Each of the ri (1 ≤ i ≤ 3) represents the distance between the respective Wi-Fi emitter and the Wi-Fi receiver on the Smart Device.

It is understood that in some embodiments, an azimuth may include an angle, such as a horizontal angle determined in an arcuate manner from a reference plane or other base direction line, such as an angle formed between a reference point or reference direction and a line (Ray or Vector) generated from, or continuing to, a Smart Device or a positional Sensor in logical communication with a Smart Device or other controller.
In preferred embodiments, the Ray or Vector may be generally directed from a reference point Transceiver towards, and/or intersect, one or more of: an item of interest; a point of interest; an architectural aspect (such as a wall, beam, header, corner, arch, doorway, window, etc.); an installed component that may act as a reference in an AVM (such as, for example, an electrical outlet, a light fixture, a plumbing fixture, an architectural aspect, an item of equipment, an appliance, a multimedia device, etc.); or another reference point Transceiver or other identifiable destination.

Embodiments include a position of the Transceiver being determined via use of a polar coordinate system. The polar coordinate system may include a spherical coordinate system or a cylindrical coordinate system. Accordingly, in some embodiments, a spherical coordinate system may include a reference point Transceiver that is capable of determining an angle of departure of a location signal and a Transceiver that is capable of determining an angle of arrival of the location signal, one or both of which may be used to facilitate determination of an applicable azimuth. According to various embodiments of the present invention, one or both of an angle of departure and an angle of arrival may therefore be registered by a Transceiver that is transmitting and/or receiving wireless signals (e.g., radio frequency, Bluetooth 5.1, sonic frequency, or light frequency).

In some embodiments, orienteering occurs in a Structure in which Transceivers (including, for example, one or more of: Wi-Fi Transceivers, UWB Transceivers, Bluetooth Transceivers, infrared Transceivers, and ultrasonic Transceivers) may be located above and/or below an Agent. In these embodiments, a cylindrical coordinate system may be more appropriate. A cylindrical coordinate system typically comprises three coordinates: a radial coordinate, an angular coordinate, and an elevation (r, θ, and z, respectively). A cylindrical coordinate system may be desirable where, for example, all Wi-Fi emitters have the same elevation.

Referring now to FIG. 8B, in some embodiments, one or both of a Smart Device 801 and a smart receptacle 802 may be rotated in a manner (such as, for example, in a clockwise or counterclockwise movement 820, 822 relative to a display screen) that repositions one or more Nodes 803-810 from a first position to a second position. A Vector 826 may be generated at an angle that is perpendicular 825 or some other designated angle in relation to the Smart Device 801. In some embodiments, an angle in relation to the Smart Device is perpendicular 825 and thereby viewable via a forward-looking camera on the Smart Device. A user may position the Smart Device 801 such that an object in a direction of interest is within the camera view. The Smart Device may then be moved to reposition one or more of the Nodes 803-810 from a first position to a second position and thereby capture the direction of interest via a generation of a Vector in the direction of interest.

Referring now to FIG. 8C, as illustrated, a Vector 825 indicative of a direction of interest may be based upon a rocking motion 823-824 of the Smart Device 801, such as a movement of an upper edge 818 in a forward arcuate movement 823. The lower edge 819 may also be moved in a complementary arcuate movement 824 or remain stationary. The movement of one or both of the edges 818-819 also results in movement of one or more Nodes 803-810.
The movement of the Nodes 803-810 will be a sufficient distance to register two geospatial positions based upon wireless transmissions. A required distance will be contingent upon a type of wireless transmission referenced to calculate the movement. For example, an infrared beam may require less distance than a Wi-Fi signal, and a Wi-Fi transmission may require less distance than a cell tower transmission, which in turn may require less distance than a GPS signal. In some embodiments, as discussed further below, hybrid triangulation may include one or more distances based upon wireless transmissions of different bandwidths or modalities. For example, a first modality may include Wi-Fi transmissions and a second modality may include Bluetooth transmissions; still another modality may include infrared or ultrasonic modalities.

Referring to FIG. 8D, line segments 831-838 are illustrated that intersect various generated position points (PP1-PP8) for Transceivers 803-810. Position points PP1-PP8 may be generated according to the methods and apparatus presented herein, including a mathematical average, median, weighted average, or other calculation of multiple positions determined via triangulation techniques. In addition, a Vector 839 or Ray may be generated based upon one or more of the lines 831-838. In some embodiments, position points may be recorded in high numbers based upon thousands of logical communications per second, and a virtual representation of the position points PP1-PP8 may be generated based upon the recorded position points PP1-PP8. Some embodiments may also include a cloud point type representation of a device that comprises the Transceivers used to record position points PP1-PP8, wherein the cloud point representation is based upon the multiple positions calculated.

Directional Wireless Modalities

Some modalities, such as those modalities that adhere to the Bluetooth 5.1 or BLE5.1 standards, allow a Node to determine an angle of arrival (AoA) or an angle of departure (AoD) for a wireless transmission. An array of antennas may be used to measure aspects of the Bluetooth signaling that may be useful to calculate these AoA and AoD parameters. By calibrating an antenna system, the system may be used to determine angles in one or two dimensions depending on the design of the antenna. The result may be significant improvement in pinpointing the location of origin of a signal. An array of antennas may be positioned relative to each other and a transmitting transceiver to allow for extraction of an AoA/AoD. Such an array may include a rectangular array; a polar or circular array; a linear array; and a patterned array, where a number of antennas are deployed in a pattern conducive to a particular environment for transceiving. Antennas may be separated by characterized distances from each other, and in some examples, a training protocol for the antenna array results in antenna positioning incorporating superior angle and location precision. Some Nodes may transceive in the 2.4-2.482 GHz frequency bands, and thus the RF transmissions may have wavelengths at the roughly 125 mm length scale. A collection of antennas separated by significantly less than the wavelength may function by comparing the phase of RF transmissions arriving at the antennas. An accurate extraction of phase differences can yield a difference in path length that, when accumulated, can lead to a solution for the angles involved.
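As a minimal sketch of the phase-comparison principle just described, the Python below recovers an angle of arrival from the phase difference measured between two antennas separated by less than a wavelength; the quarter-wavelength spacing and the example phase difference are hypothetical values, and the 0.125 m wavelength corresponds to the 2.4 GHz band mentioned above:

    import math

    def angle_of_arrival(phase_diff_rad: float,
                         antenna_spacing_m: float,
                         wavelength_m: float) -> float:
        """Recover the arrival angle (radians from broadside) for a plane wave,
        given the phase difference measured between two antennas.
        Unambiguous when antenna spacing is at most half a wavelength."""
        sin_theta = phase_diff_rad * wavelength_m / (2.0 * math.pi * antenna_spacing_m)
        return math.asin(max(-1.0, min(1.0, sin_theta)))

    # 2.4 GHz band: wavelength ~0.125 m; hypothetical quarter-wavelength spacing.
    wavelength = 0.125
    spacing = wavelength / 4.0
    # A measured phase difference of pi/4 radians implies arrival ~30 deg off broadside.
    print(math.degrees(angle_of_arrival(math.pi / 4.0, spacing, wavelength)))  # ~30.0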
Referring to FIGS. 9A-9D, a series of exemplary devices employing matrices of antennas, for use with Nodes that communicate via a Bluetooth standard, a Wi-Fi standard, or another modality, is illustrated. Linear antenna arrays 910 are illustrated in FIG. 9A. Rectangular antenna arrays 920 are illustrated in FIG. 9B. Rectangular antenna arrays 930 are illustrated in FIG. 9C. Nodes may include antenna arrays combined with batteries and circuitry to form complete self-contained devices. The Nodes or a controller may determine an AoA and/or AoD or other related angular determinations based upon values for variables involved in wireless communications. In an example, a composite device 940 may be formed when a Node 942 with a circular configuration of antenna elements 943 is attached to an exemplary Smart Device 941. The Node 942 attached to the Smart Device 941 may communicate information from and to the Smart Device 941, including calculated results received from or about another Node, such as a Node fixed as a Reference Point Transceiver or a Node with dynamic locations, wherein the wireless communications are conducive to generation of reference angles of transmission and/or receiving.

Referring to FIG. 10A, a Smart Device 1020 may be equipped with a Node 1030 that includes a self-contained Bluetooth 5.1 antenna matrix. In the example, the matrix of antennas in the Node 1030 may be configured in a circular pattern. Electronics in the device may capture communication signals sent from a wireless access point 1010. Each of the paths from the wireless access point to the various antennas of the Node 1030 has a slightly different path through air from the wireless access point 1010 to the Smart Device. This may give each of the signals a slightly different phase alignment with each other. The electronics of the Node 1030 may include both hardware and software, along with training history of the antenna array for the device, and may be able to use the different phase measurements and training history to determine both an azimuthal angle 1040 and an altitude angle 1050, as an example. The resulting direction pinpoints a significantly improved understanding of the location of the Smart Device 1020. In some examples, in desirable noise and signal situations, the calculated result may localize the Smart Device 1020 relative to the wireless access point with an accuracy of 50 cm or better.

Referring to FIG. 10B, a combination of antenna arrays and electronics to determine the angle of arrival or angle of departure may be placed in proximity to the Smart Device. In some examples, a combination of two or more antenna array devices 1021 may be configured to independently sit in a plane proximate to the Node 1030, such as a Smart Device. The antenna arrays may interact with two or more wireless access points 1010 and 1060, which may also be called locators. When the multiple Rays are calculated from each of the locators 1010 and 1060 to each of the antenna array devices 1021, a set of positional points for the two antenna array devices may result. These positions may again be used to calculate a Ray 1070 of direction between the two points. This Ray may represent the direction that the Smart Device is positioned in at a particular time. More complex combinations of the arrays of antennas may be configured to increase the signal to noise of the system and improve accuracy. In a non-limiting example, three arrays of antennas 1021A, 1021B, and 1021C may be found by referencing FIG. 10C.
In some examples, the size of the antenna devices may be such that a combination of them may be larger than a Smart Device that they are associated with. In some examples, such as the illustrated example in FIG. 10C, the arrays of antennas 1021A, 1021B, and 1021C may be overlapped in space. The result may physically relate to multiple overlapped regions of the antenna Structure. The resulting interaction of the Structures may be very complex, and training of the algorithms to extract results from the signals received by the complicated Structure may be required to achieve a directional result. The integration of multiple Structures can improve signal-to-noise ratios related to transmission or reception of signals in some examples, as the multiple results can be averaged (in some embodiments, a weighted average) to extract a direction of the orientation of the Smart Device.

Referring now to FIG. 11, method steps are illustrated that may be practiced in some embodiments of the present invention. At step 11, a unique identifier is established for each Node to be included in a self-verifying array. The unique identifier may be an alphanumeric string that is unique to available Nodes, a characteristic variable of a signal (e.g., characteristic frequency or wavelength), a public-key encryption scheme, or any similar unique identifier. At step 12, each Node (Node X) communicates with each other Node (Node X+Y) with which Node X may establish wireless communications. At step 13, sets of values for variables descriptive of respective wireless communications are generated. Variables may include, for example: which Nodes are involved in a wireless communication (which may be determined, for example, via a unique Node identifier); timing values related to a time of transmission of a data packet; timing values related to a time of arrival of a data packet; an angle of arrival of a wireless transmission; an angle of departure of a wireless transmission; a strength of a received wireless communication; a quality of a received wireless communication; or other variables. Each Node may generate a set of values for the variables for each wireless communication.

At step 14, optionally, each Node may record aspects of a wireless communication that may influence accuracy of one or more values for variables descriptive of respective wireless communications between Nodes. Examples of such aspects may include the presence of an obstruction to transmission of wireless communications, a strength of a received transmission (for example, a weak strength of a received transmission may indicate a significant distance between the Nodes in communication), and the like. At step 15, each Node may store sets of values for the variables for respective communications and aspects that may influence accuracy of the sets of values. In some embodiments, this step is optional; a Node may be capable of immediately retransmitting a value for a variable without first storing it. In some embodiments, a Node may perform certain computations relating to the values for the variables, such as taking a weighted average of values received through multiple modalities or Sensors. At step 16, respective Nodes transmit respective sets of values for the variables for respective communications and aspects that may influence accuracy of the sets of values to any other Node within wireless communication range.
In some embodiments, a Node may also transmit the sets of values for the variables for respective communications and aspects that may influence accuracy of the sets of values via hardwire communication. At step17, each Node within communication range receives the transmitted sets of values for the variables and aspects that may influence accuracy of values. By the process of generating sets of values for variables of communications, receiving sets of values of variables for communications, and transmitting the same values, each Node may acquire multiple sets of values relating to itself and to other Nodes, even Nodes that are out of range for direct communication and/or obstructed from direct communication. The multiple sets of values may be used to verify each other. In some embodiments, sets of outlier values may be disregarded. At step18, using a controller with a processor and executable software, a position of a particular Node (X) may be generated based upon a composite of sets of values, or a mathematical algorithm involving multiple sets of values. In addition, aspects that may influence the sets of variables may be given mathematical weight in generating a position of Node (X). At step19, in some embodiments, a presence of an obstruction may be inferred based upon the multiple sets of values for variables in communication. Still further, a position of the perceived obstruction may be generated based upon the same multiple sets of values for variables in communication. At step20, a visual representation of a verified location for each Node included in the array may be generated, and in some embodiments, the visual representation may include a position of a perceived obstruction. Each location is verified by sets of values for variables in communications between multiple Nodes. Using this process, a position of a Node may be made available to a Smart Device or another Node that is not within direct communication range and/or is obstructed from direct transmission. Each Node generates values of variables for communication that may be used to determine a particular Node's position relative to other Nodes and/or a base position. Referring now toFIG.11A, a Structure space1100having a multitude of wireless Nodes1102-1106is illustrated. Nodes1102-1106are shown located within or proximate to Structure space1100. Nodes1102-1106include Transceivers operative to communicate via a wireless modality, such as one or more of: Bluetooth 5.1; BLE 5.1; Wi-Fi RTT; infrared; and ultrasonic communications. In some examples, Nodes1102-1106include components capable of supporting positioning and data transfer functions useful in establishing a self-verifying array of Nodes (i.e., a SVAN). Nodes1102-1106may establish a self-verifying array1117with direct communication paths1110-1115between Nodes, illustrated by the dashed lines between the Nodes1102-1106positioned at disparate locations. Nodes that are within direct communication range are shown forming direct communication connections along the direct communications paths1110-1115. Communications between Nodes include data useful for determining one or both of: a position of the Nodes relative to each other; and a position of a Node relative to a base position1116. Direct communications within the self-verifying array may also provide improved signal-to-noise ratios. In some embodiments, Sensors may be co-located with one or more Nodes and in logical communication with the Nodes, thus allowing transmission of Sensor data across the Nodes.
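As a minimal sketch of how Sensor data may traverse the array when source and destination Nodes are not in direct range, the following Python fragment finds a chain of in-range hops through a known adjacency map. The Node names and the adjacency map are hypothetical; in practice the map would be derived from the values for communication variables described above.

```python
from collections import deque

# Hypothetical sketch: relaying data across a self-verifying array so that a
# Node can reach a destination beyond its direct radio range. The adjacency
# map records which Nodes can hear which, as learned from prior communications.
def relay_path(links: dict[str, set[str]], source: str, dest: str) -> list[str] | None:
    """Breadth-first search for a shortest chain of in-range hops."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == dest:
            return path
        for neighbor in links.get(path[-1], set()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no chain of in-range Nodes connects source to dest

# Example loosely mirroring FIG. 11A: Node 1105 reaching Node 1102 via 1106.
links = {"1105": {"1104", "1106"}, "1106": {"1105", "1102"},
         "1104": {"1105", "1103"}, "1103": {"1104"}, "1102": {"1106"}}
print(relay_path(links, "1105", "1102"))  # ['1105', '1106', '1102']
```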
According to the present invention, the self-verifying array1117enables overall separations of Nodes that are larger than the direct communication range of the individual Nodes1102-1106. In other words, self-verifying array1117may allow a single Node to transmit to locations beyond the Node's own transmission limits using other Nodes in the self-verifying array. For example, Node1102and Node1105may not be within a direct communication range of each other due to the distance D1between Node1102and Node1105exceeding a range supported by a modality of communication used by Node1102and Node1105. However, data generated at Sensor1105A that is co-located with Node1105may be transmitted to Node1104and then to Node1103and then to Node1101; alternatively, and/or in addition, data generated at Sensor1105A may be transmitted to Node1106and then to Node1102, thereby extending the communications range of the modality in use. In addition to Sensor data, values for variables of communications between various Nodes1102-1106may be transmitted amongst each Node1102-1105, where the values may enable a determination of a relative position of respective Nodes1102-1105to each other and/or to a base position1116. In this manner, a position of any two Nodes1102-1105relative to each other and/or to the base position1116may be generated. Verification of Node1102-1105positions is accomplished via generation of a position of a particular Node1102-1105in relation to another Node1102-1105and/or a base position1116using multiple sets of values of variables involved in disparate communications between disparate Nodes1102-1105. In an example, the Structure space1100may be considered a Bluetooth arena which is covered by a collection of Nodes1102-1105operative to communicate with at least the BLE 5.1 standard and thereby form a self-verifying array, such as self-verifying array1117. In the Structure space1100, the self-verifying array1117may establish a base position1116from which positions of the various Nodes1102-1105may be represented. In some examples, the base position1116may be a spatially significant feature such as a corner, door threshold, physically marked space, or the like, which is established in a model sense with Nodes1102-1105including Bluetooth Transceivers that are fixed within the space1100. In other examples, the base position1116may be established at one of the stationary Node1102-1105locations. Referring again toFIG.11A, one exemplary Node (such as Smart Device Node1102) may include an Agent-supported Smart Device. The Smart Device Node1102may be located at a fixed position and may serve as the base position. In some examples, the Smart Device may be a pad or touch screen which is mounted to a wall position, or it may be a Kiosk-type device also located in a fixed position. In other examples, a fixed Node1103may be located within the Structure space1100, such as at a ceiling-mounted position. Here too, this Node1103may be established as the base position for Nodes1102,1104-1105across the network. In other examples, a base position1116may be at a location offset from a physical, spatially significant architectural feature such as a corner of a Structure or a doorway. An Agent supporting a Smart Device1107with a Bluetooth transmitter may enter the Structure space1100containing the self-verifying array1117and act as a Node in the self-verifying array.
The various positioning capabilities of the various Nodes1102-1105in the space1100may activate to provide location-positioning data to the Agent-supported Smart Device1107. In some examples, a base position unit may be reassigned to the Agent-supported Smart Device1107, in which case all positions may be dynamically updated relative to the Agent-supported Smart Device1107. In some examples, multiple (and in some cases temporary) additional coordinate systems may be established in addition to a base definition of a coordinate system, which may have a fixed base unit. Exemplary coordinate designations may include Cartesian Coordinates, Polar Coordinates, Spherical Coordinates, and Cylindrical Coordinates, wherein Bluetooth-type designations of AoD and/or AoA and radius may be represented as coordinates in a Polar, Spherical, or Cylindrical Coordinate system. There may be Nodes1104that are located upon equipment or appliances1104A and may therefore be stationary in most cases. The Node1104co-located with the appliance1104A may be powered by the appliance power supply and also have battery-backup aspects. In the example illustrated, the Node1104on the appliance1104A may be classified as the base unit. However, as illustrated, it may be located at a position remote from a doorway to the space. Thus, the use of a self-verifying array may allow for a remote Smart Device1107to be an active Node in the space1100. There may be Nodes1105that are located on wall buttons or in wall-positioned devices. Here too, such a device may be defined as the base position unit. Such a device may be battery-powered and may require a means of battery replacement or charging. In some examples, the Node1105may have a connection to utility power1109and/or a data conduit. The use of a self-verifying array may allow for a User device (not shown) to be tied into a network that connects to the self-verifying array1117that covers the bulk of the area of the Structure space1100. In some examples, a region1106A of the Structure space1100may be generally devoid of coverage by the self-verifying array1117. In designing the communications environment of the space1100, therefore, a Node1106with a Bluetooth transmitter may be fixedly located to a ceiling, support pole, or other Structure feature in a region1106A that is otherwise devoid of communications coverage. A visual representation of a self-verifying array may include some or all of the Nodes included in the array, and, in some embodiments, it will include a representation of a perceived obstruction based upon the values for communication variables. Some embodiments of a visual representation may have one or both of a spatial grid definition layer and a polar coordinate definition layer. In a base layer, a coordinate system for the Structure space may be established using a fixed device as a base unit. The origin of this first layer's coordinate system may be established as a zero point in numerous coordinate system types such as Cartesian, polar, cylindrical, spherical, or other topographical coordinate models. In some examples, an overlay second layer may include a coordinate system which is spatially similar to the self-verifying array, where, for example, each connection of three devices may create a regional coordinate system, and the Structure space1100is represented as a mosaic of local coordinate systems within self-verifying-array-defined spaces.
In some other examples, an overlay third or further layer may be a dynamic coordinate system where a specific communication Node, which is mobile, is dynamically tracked as the coordinate system origin and the rest of the space is adjusted relative to the moving origin. Various embodiments may include schemes and layers of coordinate system definitions that become defined for a composite of self-verifying array Nodes1102-1105. In some examples, one or more of the coordinate layers may be defined, tracked, and communicated by a single network member defined as a base position unit. In other examples, a SVAN may distribute coordinate definition and communication to Nodes1102-1105dynamically. A routine update of calculated and measured positions and of the coordinate system may be maintained that not only defines a coordinate system but also indicates where some or all self-verifying array connected Nodes1102-1105are located on a grid system. A routine update on a schedule may therefore track Nodes1102-1105that are moving, recalculating their positions over time. In some examples, a Bluetooth-enabled device may not be authorized or may not have the capabilities to enter the self-verifying array as a Node1102-1105, but it may emit signals including identification information and may receive communications from the self-verifying array. In such embodiments, the self-verifying array may identify these non-Node-type communication devices and establish their positions. As will be described in more detail, in some examples, a position determination for a particular non-Node device may be defined in reference to a Node1102-1105that the non-Node device communicates with, along with an estimate of a range in which the non-Node device is capable of communicating. Referring now toFIG.11B, a Smart Device may receive a communication from a Node1102-1105in a self-verifying array, wherein the communication includes multiple positional coordinates for each Node included in the array. The communication may also include positional coordinates of items of interest associated with Nodes1102-1105on the network, such as an item co-located with a Node1102-1105. In some examples, the network may be interrogated by the Smart Device to provide information related to one or more Nodes included in the self-verifying array. The data may be used by the Smart Device to generate a user interface1121with a pictorial/image representation of the various Bluetooth transmitters and network Node devices. The representation may utilize one or more coordinate systems. For example, a Smart Device1120may portray the user interface1121, which may include an image representation of a region of the Structure space that is user selectable on the Smart Device. The image representation in the user interface1121may include an origin1122designation for a particular coordinate layer. In an example where the coordinate layer is one in which the origin is dynamically updated to the position of the Agent, the origin may represent the position at which the Agent is located. In another example, an origin may be congruent with an origin of a coordinate layer for a spatially relevant origin, and the Agent may or may not be represented as an item on the user interface1121. For this example, an Agent may be represented by position1123. A pictorial representation may show the Agent position1123and also present parameters that refer to the Agent position, such as a two-dimensional Cartesian reference1126and/or a polar notation reference1127.
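The dual Cartesian and polar presentation noted above may be computed directly from a single stored position. The following Python sketch assumes a two-dimensional coordinate layer with its origin at the base position; the names are illustrative only.

```python
import math

# Minimal sketch (illustrative names): expressing one position in both the
# two-dimensional Cartesian form and the polar (radius, angle) form that a
# user interface may present, relative to a chosen coordinate-layer origin.
def to_polar(x: float, y: float) -> tuple[float, float]:
    """Return (r, theta_degrees) for a point relative to the origin."""
    return (math.hypot(x, y), math.degrees(math.atan2(y, x)))

def to_cartesian(r: float, theta_deg: float) -> tuple[float, float]:
    theta = math.radians(theta_deg)
    return (r * math.cos(theta), r * math.sin(theta))

# An Agent plotted 3 m east and 4 m north of the origin:
print(to_polar(3.0, 4.0))        # (5.0, 53.13...)
print(to_cartesian(5.0, 53.13))  # approximately (3.0, 4.0)
```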
Other wireless Nodes of relevance1124and1125within the scale of the image may be portrayed as well. In some examples, the self-verifying array may include a feature where some or all of the network connected Nodes have identification information associated with them. Each of the Nodes may have stored (locally or in another network data layer) a multitude of references, such as identification information internal to a transceiver. For example, a Bluetooth transceiver may transmit identification information such as a 48-bit Bluetooth transceiver address, a user-assignable name of the transmitter, or a user-assignable name of an element of which the transmitter is a component. As an example of an assignable name, in a non-limiting sense, an appliance may be a Node in a self-verifying array which may have the name "Downstairs Refrigerator." In some examples, identification information may be related to different levels of security access that a Node may access, store, transmit, and the like. Information useful for generation of a user interface may be transmitted from a Node via IP on a digital communications network, such as the Internet, and a user may be located anywhere that is connected to the communications network. In this manner, a user interface may be presented to a remote user regardless of the user's location. In some examples, a stable base unit of the self-verifying array may act as a standard repository and access point for all information stored or archived for the self-verifying array. In other examples, the data storage may be distributed across the self-verifying array. In an example, a standard portion of the data stored on the self-verifying array, such as, in a non-limiting sense, the identifications, timestamps, positions, characteristics, and security levels of all Nodes, and identifications and positions of all transmitters within the Structure space/self-verifying array extent, may be assembled into a data table/layer. In some examples, a routine transmittal of a data table/layer may be broadcast throughout the self-verifying array. In an example, every self-verifying array Node may have an assigned broadcast order such that, at a standard time indexed to the broadcast order, it will broadcast its current version of the table. All Nodes within range of a transmitting Node will receive the table and update it as the current version. Then, at their prescribed broadcast time, they might transmit the table. There may be rules that overlay such a broadcast to ensure that current data is not updated with previous versions for a Node that does not receive the update before its broadcast time. Such rules may also prevent unauthorized alteration of data through hacking or other network penetration. The Nodes may act as participants in a Blockchain in this manner. One such rule may be that transmission may occur only when the data table has been updated at the Node. Another rule may inhibit transmission for any Node that is dynamic/moving, or alternatively initiate immediate transceiving for a Node that is dynamic/moving. Transmission may include diverse types of data. Periodic transmissions may be timed based upon a time needed for a transmission, energy required for transmission, available energy, receipt of new data, and the like. Therefore, each Node may have a configuration setting that defines conditions for when, how, and for how long it transceives. Such conditions may include, for example, a frequency upon which it listens and upon which it communicates data.
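The rule that current data must not be updated with previous versions may be illustrated with a timestamp-guarded merge. The following Python sketch is a simplified, hypothetical model of such a data table update; a deployed system would add authentication to resist unauthorized alteration.

```python
# Hypothetical sketch of a timestamp-guarded merge of a broadcast data table.
# Each entry maps a Node identifier to (timestamp, payload); an incoming
# broadcast only replaces an entry if it is strictly newer, so a Node that
# missed an update cannot roll the table back to a previous version.
def merge_tables(local: dict, incoming: dict) -> bool:
    """Merge incoming entries into local; return True if anything changed."""
    changed = False
    for node_id, (ts, payload) in incoming.items():
        if node_id not in local or ts > local[node_id][0]:
            local[node_id] = (ts, payload)
            changed = True
    return changed

table = {"Downstairs Refrigerator": (100, {"pos": (2.0, 5.0)})}
update = {"Downstairs Refrigerator": (90, {"pos": (0.0, 0.0)}),  # stale: ignored
          "1105": (120, {"pos": (7.5, 1.0)})}                    # newer: merged
print(merge_tables(table, update), table)
```

Consistent with the transmission rule described above, a Node might rebroadcast only when such a merge reports a change.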
The various definitions of coordinate layers may be transmitted. In some examples, a remote user connected via a digital communications network to a self-verifying array Node, or a Bluetooth device entering into a self-verifying array Node, may request a copy of a standard data table transmission. The data table transmission may include positions of Nodes relative to a fixed origin, to the user position, to particular fixed Nodes of the network, or a collection of some or all of these. Some data layers may be created to store Sensor information that may be obtained at some or all of the Nodes. The data layer may be segregated based on types of Sensor data. For example, all Nodes of a self-verifying array may include a Sensor providing a quantification of one or more of: ambient temperature, humidity, water presence, current draw, vibration, movement, image data, and the like. A timestamped reading of this Sensor quantification may be included into a data layer along with co-located Node identification information. In other examples, a subset of the devices may include an ambient-light Sensor as part of their infrastructure. In this case, another data layer may be created for this type of Sensor data. In some examples, the pictorial image representation1121may include one or more items of the data layer Sensor information. The pictorial image representation1121may represent the Sensor readings in a textual form, or in other manners such as a color indication at a Node position or at regions around a Node position. Referring toFIG.12A, another representation of a SVAN1210is displayed. In this embodiment, space1200may include Structures1211and1212. Structures1211and1212may have a variety of different characteristics that may impact the performance of self-verifying array1210. For example, Structures1211and1212may be physically closed (e.g., walls, solid Structures) or partially closed (e.g., shelves). Structures1211and1212may also comprise solid materials (which may be stored, for example, at a construction site), or equipment such as automobiles, tractors, backhoes, and the like. Accordingly, the presence of these Structures may change the transmission characteristics of a wireless network (e.g., Bluetooth). Some Structures may block signals, impede signals, or partially impede signals. For example, shelves may have physical regions that block and other regions that are fully transmissive. Shelves may provide an example in which the Structures in the space1200may have dynamic characteristics. Such dynamic characteristics may make self-verifying arrays (and corresponding spatial schema) more useful than traditional mapping methods. For example, if a load of metal piping is brought in and placed upon the shelves, a region that was completely transmissive may become impeded to a degree. These characteristics may create different operational characteristics for self-verifying arrays. In another sense, a shelf may hold a combination of both fixed and mobile devices that comprise a self-verifying array in the space at some given time. This may provide more accurate and more dynamic coverage for the schema. For example, the space1200may be interspersed with an assembly of fixed (or roughly fixed) network Nodes that form a grid pattern (as an example) to ensure that a minimal self-verifying array may be established that covers the entire space1200. This minimal network may be supplemented with "migrant" Nodes that are moved into the space1200and become part of the SVAN1210.
From a signal coverage perspective, more participants may improve coverage characteristics. However, more participants may increase information traffic levels, and a control formalism that limits bandwidth differentially to different network participants may be necessary in some examples. InFIG.12A, an example of a space1200with shelving units that make up Structures1211and1212is illustrated. The space may have a "global" reference point1204for positioning. There may be fixed wireless communication Nodes1201,1202, and1203(for this example, all Nodes are at least compliant with Bluetooth 5.1 and transmit at least as Bluetooth radio transmitters; however, this deployment is merely illustrative). The fixed wireless communication Nodes1201-1203may also include other aspects/components, such as an integrated camera system. The integrated camera system may provide a visual perspective of a portion of the space that its corresponding wireless radios may cover. In a self-verifying array, Nodes may be collocated or located relative to a Sensor, such as an image-capture device. Based on a known set position of the Sensor relative to the Node, the Node may transmit information captured by the Sensor to other Nodes. Accordingly, a Node out of both Sensor and radio range of another Node may still receive data from the Sensor through the array. The data from the Sensor reflects a range over which a physical characteristic is quantified or capable of being quantified by the Sensor. For example, a Sensor may be an image-capture device, limited in range by both wavelength of image capture (e.g., limited to infrared) and spatial range (e.g., field of view of the image-capture device). This may be particularly desirable in embodiments in which the self-verifying array is deployed in or adjacent to an environment having a characteristic adverse to a Sensor. For example, the low temperatures found in a commercial freezer may impair operation of certain Sensors. Temperature-resistant Sensors may be collocated with Nodes within the freezer, while temperature-vulnerable Sensors (including Sensors capable of detecting conditions within the freezer) may be collocated outside the freezer. Through the self-verifying array comprised of these Nodes, data from the Sensors may be freely transferred among the Nodes, including through fiber optic communication throughout the freezer. It may be desirable to deploy spectrometers and hydrometers in this fashion. Moreover, redundant Nodes may be able to redirect Sensor readings from one Node to a base Node, especially in scenarios when an optimal Node pathway may be obstructed, such as by shelving. The space1200may also include other fixed Nodes, such as Node1223, that may not have cameras included. Node1223may be important to ensure that, regardless of the makeup of migrant communication Nodes, the fixed wireless communication Nodes are able to form a complete SVAN in the space1200in the absence of items that block radio transmissions. There may also be migrant communication Nodes1220-1222affixed to packages, materials, or other items that may be placed and/or stored upon the shelving units. In some examples, at least a subset of the SVAN-participant Nodes may communicate periodically. The various aspects of data layer communications as have been discussed may occur between the Nodes of the network. At a base level, at least a subset of the Bluetooth transmitters may periodically transmit information such as their unique identifiers, time stamps, known positions, and the like.
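A minimal sketch of the fields such a periodic transmission might carry is shown below; the structure and field names are purely illustrative and are not drawn from any particular protocol.

```python
from dataclasses import dataclass, field
import time

# Illustrative sketch of a periodic SVAN beacon: a unique identifier, a time
# stamp, a last known position, and optional Sensor readings. Field names
# are hypothetical.
@dataclass
class Beacon:
    node_id: str                       # e.g., a 48-bit transceiver address
    timestamp: float = field(default_factory=time.time)
    position: tuple = (0.0, 0.0, 0.0)  # coordinates in the active layer
    sensor_readings: dict = field(default_factory=dict)

b = Beacon(node_id="1201", position=(12.0, 4.5, 3.0),
           sensor_readings={"ambient_temp_c": 21.5})
print(b)
```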
In some embodiments, Nodes may transmit between each other or to a base unit information about variables between the Nodes, such as computed distances or angles between the Nodes. A Node may receive transmissions from other transmitters and may store the transmissions. In some examples, a Node may act as a repeater by receiving a transmission and then retransmitting the received transmission. A Node acting as a repeater may then take various actions related to the data involved. In an example, the Node may effectively just stream the data, in which case no storage of any kind is made. Alternatively, a Node may store the transmission, then retransmit the transmission (immediately or after a delay), and then delete the stored data. In other examples, a repeater Node may store a received transmission and then retransmit the transmission either for a stated number of times or until some kind of signal is received after a transmission. Thereafter the Node may also delete the data. In some examples, a Node may store data from an incoming transmission and take the various retransmission actions as have been defined, but then not delete data until its data store is filled. At that point, it may either delete some or all of the stored data, or it may just overwrite stored data with new incoming data and then clean up any remaining data with a deletion or other process. When a Node acts as a repeater, it may receive data and then merely retransmit the data. Alternatively, a repeater Node may either use the transmission of data or the time during the transmission to acquire and calculate its position and potentially the position of other transmitters in range. During retransmission of the received data, it may also include in the transmission calculations of its own position relative to other transmitters, calculations of other transmitter positions relative to itself, calculations of its own and other transmitter positions relative to an origin, and the like. It may also include other information such as a time stamp for the calculation of positions. The combined elements of a SVAN may be operated in a way to optimize power management. Some of the network Nodes and transmitting elements may operate in connection with power-providing utility connections in the Structure. Other network Nodes may operate on battery power. Each of the Nodes may self-identify its power source, and either at a decision of a centralized controller or by a cooperative decision-making process, optimized decisions may be taken relative to data transmission, low-power operational modes, data storage, and the like. In some examples, where multiple Nodes provide redundant coverage and provide information to a central bridge acting as a repeater, the Nodes may alternate in operation to share the power draw among individual Nodes. For example, if one of these Nodes is connected to a utility power source, that Node may take the full load. The battery-powered elements may have charge-level detectors and may be able to communicate their power-storage level through the network. Accordingly, an optimization may reduce traffic on the lower-battery-capacity Nodes. In some examples of operations, a transmitting Node may transmit a message for a number of redundant cycles to ensure that receivers have a chance to detect the message and receive it. In low-power operating environments, receivers may transmit acknowledgements that messages have been received.
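One of the repeater policies described above, storing a message, retransmitting it up to a stated number of times or until an acknowledgement arrives, and then deleting it, may be sketched as follows. This is a simplified, hypothetical model in which the radio layer is abstracted behind callables.

```python
# Simplified sketch of a store-and-forward repeater: retransmit a stored
# message up to max_tries times, stopping early if an acknowledgement is
# received, then delete the stored copy. send and ack_received stand in
# for the radio layer (hypothetical interfaces).
def repeat_message(message: bytes, send, ack_received, max_tries: int = 3) -> bool:
    store = message  # retain a copy while retransmission is in progress
    acked = False
    for _ in range(max_tries):
        send(store)
        if ack_received():
            acked = True
            break
    store = None  # delete the stored data once retransmission concludes
    return acked

# Example with stand-in callables: the ack arrives on the second attempt.
attempts = iter([False, True])
print(repeat_message(b"sensor:21.5C", send=lambda m: None,
                     ack_received=lambda: next(attempts)))  # True
```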
If a base unit of the network acknowledges receipt of the message, control may be transferred to the base unit to ensure that the message is received by all appropriate network members. Message receivers may make a position determination and broadcast their position if it has changed. A self-verifying array of Bluetooth receivers may provide one of a number of Transceiver network layers where other communication protocols based on different standards or frequencies or modalities of transmission may be employed, such as Wi-Fi, UWB, Cellular bandwidth, ultrasonic, infrared, and the like. A Node that is a member of different network layers may communicate and receive data between the different network layers in addition to communicating through a Bluetooth low-energy self-verifying array. Referring toFIG.12B, an illustration of the view from a camera at a network Node position is presented. A Smart Device1250may interact with the self-verifying array and communicate a desire to receive images or video from a camera. In an example, referring back toFIG.12A, the Node1201may have a camera that produces an image that inFIG.12Bis presented on the smart phone as image1260. Processing either on the Smart Device or on processors connected to the network may collect information about the location of other network Nodes through the various processes as described herein and then determine a correct location on the collected image to display icons over the positions of the Nodes1221and1223. There may be numerous other types of information that may be overlaid onto the imagery, such as Sensor measurements, textual presentations of data values, data related to status and transactions on the network, and the like. In some examples, the cameras may be maintained in a fixed position or positioned on mounts that can allow the plane of view of the camera to be moved. The Smart Device1250may be supported by an agent such that it is oriented in such a manner as to point to a particular view-plane from the perspective of the screen. This may either be from a perspective of looking through the smart screen (i.e., in the field of view of a camera associated with the Smart Device1250) or, in other examples, supporting a screen of a Smart Device flat (i.e., parallel to a ground plane) and pointing in a direction of interest based on a direction of orientation of the Smart Device1250. In related applications, it is disclosed that a direction of interest may be determined based on wireless communications. In some examples, orientation aspects of Transceivers upon the Smart Device1250may be employed to determine Rays of interest of the user (for example, to point the Smart Device1250in a direction of interest to the user). In other examples, other Sensors on the Smart Device, such as accelerometers, magnetometers, and the like, may be able to infer a direction in which the Smart Device is pointed. Once a direction of interest is determined, the camera may be moved to correspond to a plane perpendicular to a Ray associated with the direction of interest (i.e., such that the Ray is a normal vector to the plane). An assessment of items of interest whose coordinates may lie in the field of view of the selected view plane may be made, thus presenting data to the user and allowing the user to filter out or learn more about the items. Referring toFIG.12C, another type of presentation is illustrated where a plan view or map view of the space1270may be presented.
In some examples, a Smart Device may access a virtual model (AVM) or other spatial schema that contains spatial data relating to the space that the user is in. The view may also include a presentation of the Structure, including features such as walls, doors, supports, shelving, equipment, and the like. The location of network Nodes may be illustrated by icons1273at the two-dimensional locations determined by the various position-mapping processes as described herein. The location of the user1271may also be superimposed upon the map with a different icon, and this location may be dynamically updated as the user1271is moving. There may also be an iconic representation of the heading1272of the user, which may be determined by the wireless protocols as discussed herein or may be estimated based on the time-evolution of the position of the user (for example, through dead reckoning). Items of interest may be presented on the map at any location surrounding the user, such as in front, to the side, or behind the user. In some other examples, only items in the view-plane (determined by the heading of the user) may be illustrated on the Smart Device1250. Textual data and other types of information display, such as color gradation, may also be superimposed on the map to represent data values such as Sensor input, network characteristics, and the like. In some examples, a relative height of items of interest relative to the floor or to the Smart Device may be presented on the image as a text presentation. Referring toFIG.12D, an extension of location tracking is illustrated for devices that do not have positional capabilities (such as a GPS) but can respond to transmissions within a certain distance. The range of the device can allow a localization to be within a certain distance from a Node. In some examples, nanowatt Bluetooth Nodes that operate without battery power may be cheaply attached to items for tracking and/or can be affixed with Sensors to provide data acquisition. These devices may typically depend upon energy harvesting for their operation. In some examples, a transmission from a Node of the SVAN may itself carry enough energy to enable an RFID tag or other type of passive tag to respond with a low-energy transmission. Accordingly, a Node may transmit sufficient energy to activate an RFID, such as, for example, an RFID that has an identifier of an item to which it is affixed. The devices may be unable to perform all the wireless functions discussed herein, but they may be capable of transmitting identification data and perhaps Sensor data. In some examples, RFIDs may be employed. Bluetooth self-verifying array Nodes may also have incorporated RFID tag readers that can similarly elicit a unique identifier transmitted in response to a transmission from the self-verifying array Node. InFIG.12D, a Smart Device1250may display a map-form presentation of a space1270(similar to the previous discussion with SVAN Nodes located in a two-dimensional coordinate system). In an exemplary embodiment, ultralow-power Bluetooth Nodes or RFID Nodes may be located on elements such as packages or equipment placed on the illustrated shelves. In response to transmissions from the SVAN Nodes, various low-power tags may respond. In some examples, the localization of the low-power tag may be based on further refinement of measurements, such as measurements of the returned signal strength. Referring again toFIG.12D, a SVAN Node1273may detect two transmitting Nodes (labeled "A"1280and "B"1281inFIG.12D).
Since Node "B"1281may also be detected by a neighboring SVAN Node1274, it may be inferred that the Node may be in a region located between the two SVAN Nodes (i.e., since Node "B" is located in the overlap of the coverage areas of SVAN Nodes1273and1274, it is likely that Node "B" is located somewhere between SVAN Nodes1273and1274). Other Nodes detected by SVAN Node1274, such as Nodes1282and1284, may not be detected by other SVAN Nodes and thus may be located in non-overlapped regions. As a further illustration, Node "D"1283may be detected both by Nodes1274and1275. Node "F"1285may be detected by three different SVAN Nodes1274,1275, and1276. Thus, the position of Node "F" may be determined with a high level of confidence. Node "G"1286may be detected only by SVAN Node1276. Node "H"1287may be detected only by SVAN Node1275. This data may provide localization information for regions around Bluetooth SVAN Nodes. The non-limiting example discussed has included a Structure with obstructions in the form of shelves; however, obstructions may include any item that interferes with, inhibits, or decreases the quality of inter-Node communication within the self-verifying array. Some self-verifying arrays may be established in an outdoor environment, such as a construction site. There may be numerous items, such as equipment, tools, and materials, to which Nodes may be attached. In some examples, at a construction site there may be significant utility in establishing fixed Transceiver positions as the site is initially established. The self-verifying array may track and locate the various equipment and materials through radio-frequency communications (e.g., via RFID). Furthermore, establishment of fixed points across the construction site may allow for a self-verifying array of significant size to be established. As described in reference toFIG.12D, there can be RFIDs or Bluetooth Nodes that may be attached to various materials such as structural beams, wallboard pallets, and the like. In these examples, the transmitting Nodes may not have battery elements, for cost or environmental-condition reasons. The location of various components of construction may be tracked as they are moved across the site. In some embodiments, an AVM may be used to compare expected movements of components to the observed movements. As the Structure is built and studied during the creation of AVMs, the various Bluetooth Nodes may still be able to provide communications as to components that make up the Structure or are embedded within the Structure. Referring toFIG.12E, elements of a self-verifying array in a space1270may have dynamic locations, and their movement may have ramifications. In an example, SVAN Node1276may physically move to another location. The various self-verifying array data layers relating to the location of elements may update for this move, and the updated tables may be communicated to the Nodes of the network as has been described. At the new location, SVAN Node1276may signal to devices in its new region for response. There may be transmitting Nodes and RFIDs that are and have been in the new region that SVAN Node1276has moved to. For example, item "I"1294may be located by SVAN Node1276in its new location. As well, items with transmitting Nodes on them may also move, as illustrated by the detected movement of item "D"1283.
Another type of change may be that, when Node1276occupies its new location, item "H"1287may now be detected in the region of two network Nodes, and therefore its location may be refined to the region in which the two network Nodes overlap in coverage. Referring now toFIG.12F, an illustration of a complex space where regions within the space may block or impede wireless communications is provided. In some examples, parts of a Structure like internal walls, conduits, equipment, structural beams, and elevators/shafts may provide permanent or temporary blockage of wireless transmission. For example, as an elevator passes through a particular floor, it may block transmissions through the elevator shaft that may otherwise occur. Shelves may temporarily have materials or equipment moved to positions on the shelves, as illustrated by regions1297and1298, which may block wireless transmissions. The self-verifying array1200and its Nodes1201-1203and1220-1223may be able to cooperate and provide coverage of the self-verifying array around such blockage. For example, a wireless communication Node1296may be too far from Node1203to communicate directly with it, and communication from other fixed Nodes like1201and1202may be blocked by the blockage as discussed. The SVAN may still communicate1295with the SVAN Node1296by connecting a path1299, shown in thick dashed lines, that essentially communicates along line-of-sight paths around the blockages. Referring toFIG.13A, mobile elements such as UAVs and UGVs with wireless transmitting Nodes attached are illustrated. Mobile elements may function within self-verifying arrays to create dynamic physical extensions of the self-verifying array. The mobile elements1310,1320, and1330are illustrated as UAVs. As the mobile elements move, they may allow other Nodes or wireless access Nodes to make communications. In some examples, there may be at least a first fixed element1300that is part of the SVAN. It may define an origin point in some systems, but in other examples, it may be offset from an origin point1310. As a mobile element1320moves through space, its position may be updated by communication between the fixed element1300and itself1320. The location determination may in some examples be referenced to the origin. In polar notation, it may be located at r2,θ2, for example, where the angular components are taken with respect to an axis having at least a point perpendicular to mobile element1320(e.g., a ground plane). When the mobile elements are able to communicate with a fixed element, a determination of the fixed element's position relative to a local coordinate system may be straightforward since the fixed element can know its position with relatively high accuracy. A moving device that can continually measure its position relative to the fixed element can come close to that accuracy in position determination as well and can improve its determination by taking more measurements. As mentioned previously, elements in an operating space may be either statically or dynamically positioned and may block or impede wireless transmission through them. Mobile communication elements create interesting solutions in such an environment because a team of communication elements can position itself in such a manner as to "look around" such difficult transmission zones. At the same time, the difficult transmission zone may block the ability of a mobile element to communicate directly with a fixed communication Node.
In such cases, a first mobile element may determine its position relative to a second mobile element, where the second mobile element has communication capability with fixed self-verifying array communication Nodes. In some examples, a self-verifying array may consist entirely of mobile elements, and then its practical coordinate system may be a local one that is determined in a moving coordinate system related to one or more of the mobile elements' relative positions. Referring now toFIG.13B, an exemplary embodiment of this non-line-of-sight position detection is shown. In some examples, there may be mobile elements1350,1351with wireless communications capabilities that create at least a portion of a SVAN of wireless communicating devices. In some examples, the wireless communicating devices may include capability for Bluetooth protocol communications. In still further examples, the Bluetooth protocol communications devices may include capabilities for establishing self-verifying arrays as well as capabilities of performing positioning based on AoA measurements such as are defined in the Bluetooth 5.1 standard. A fixed element1300, which has a known offset to position T1, may locate a mobile Node1350(such as a UAV) at position T2in accordance with the orienteering methods described above. In some examples, the mobile Node1350at position T2may have moved into position T2in order to have a line-of-sight with the mobile element1351at position T3. For illustration and discussion, the devices are shown with line-of-sight between T1and T2and between T2and T3. In some examples, the wireless communication modalities described herein may be capable of passing through walls or other blockades; however, a blockade may resist or interfere with such wireless transmission. In some examples, a deployed wireless modality may simply be unable to penetrate a given wall or other obstruction. Accordingly, the second reference Transceiver T2of the mobile Node1350, due to its movement, may be deployed within the line of sight of both T1and T3to assist with determining an accurate location of T3notwithstanding the lack of line-of-sight between T1and T3. Although this Figure shows a lack of line-of-sight between fixed device1300and the mobile element1351as caused by blockade B1370, line-of-sight may also be defeated by, for example, an excessive distance between T1and T3(i.e., r31365). For example, Bluetooth 5.1 has a range of approximately 1.4 km at certain frequencies. Thus, where r3>>1.4 km, the present method may be desirable for Transceivers that use Bluetooth 5.1. Using the methods described above, the fixed element1300referenced to T1may determine the location of the mobile Node1350at T2by line-of-sight communication. For example, the location may be determined based on the angle of arrival of signals, such as angle θ11361from T2, and the distance r11360between T1and T2as measured by timing signals. For ease of calculations and discussion, the local coordinate system of mobile Node1350at T2may be oriented to a reference direction1352pointed to location T1from T2. The mobile Node1350at T2may in turn detect the location of the mobile element1351at T3, using (in some embodiments) the methods described herein. If T2uses the methods described herein to determine the location of T3, it may determine that the mobile element1351at T3is located a distance r21362from it and, relative to its reference direction1352, at an angle θ21363.
The mobile Node1350may aid the system of SVAN elements to determine the positions of each of the elements relative to each other by relaying the relative location of the mobile element1351at T3, as detected, to the fixed element at1300, which is referenced to the point T1. One of the components of the SVAN, which may even include connected servers that are connected to one of the self-verifying array Nodes, may then perform algorithmic calculations to trigonometrically compute several useful values, such as: the effective distance between T1and T3, notwithstanding blockade B1370, i.e., r31365; the effective angle of arrival of a signal from T3, i.e., θ31366; the angle between T3and an axis formed by T1, i.e., θ1-31364; and the like. Referring briefly toFIG.15, an exemplary method of computing the distance between two nodes not having line-of-sight communications between each other is shown. In this example, it will be assumed that the nodes and the vectors between them can accurately be projected into a two-dimensional, coplanar space, as shown inFIG.15. This may also be appropriate in situations in which, for example, three linearly independent axes can be determined (e.g., x, y, and z), but one of those axes is not of interest. For example, if a flight path is to be determined in a warehouse, one might treat all blockades as having a height equal to that of the entire warehouse, and then seek to avoid blockades on the x and y axes. In exemplary embodiments, the axes may be tangible guides, such as a floor or a wall. For the purposes of this discussion, let the distances between Nodes T1and T2, T2and T3, and T3and T1be r1, r2, and r3, respectively. Let the angles between r3and r1, r1and r2, and r2and r3be θ1, θ2, and θ3, respectively. As described above, the magnitudes of r1and r2may be known using the methods disclosed herein. The present invention also allows the position of T3to be communicated to T1using T2as a relay in a variety of ways; one exemplary way is as follows. A straightforward way of computing the magnitude of r3is to use the law of cosines. Doing so requires knowledge of at least θ1, θ2, or θ3. θ2can be determined in multiple ways, depending on the specifics of the deployment of T1, T2, and T3, as well as the specific Bluetooth 5.1 implementations of each. For example, in some embodiments, θ2may merely be any of the angle of arrival at T2or the angle of departure at T2. In embodiments in which a central controller effectively creates a map of the Nodes and translates them into a coordinate system, θ2may be determined using a dot product or other norm between the vectors represented by r1and r2. In other embodiments, θ2may be determined geometrically as discussed in further detail below. In still other embodiments (particularly those employing a central controller), the vector represented by r2may be translated to the origin (shown inFIG.15as T1) or otherwise measured to determine its magnitudes in each axis of the chosen coordinate system. This may then be used to determine the magnitude of r3since, in the embodiment shown inFIG.15, r3is the vector sum of r1and r2. Assuming r1, r2, and θ2are known with accuracy, then the law of cosines provides that r3is simply equal to the positive square root of r1² + r2² − 2·r1·r2·cos(θ2). (This computation may also be applicable in three-dimensional models.) In practice, however, some or all of these quantities may be subject to uncertainty.
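A minimal Python sketch of the law-of-cosines computation just described, assuming r1, r2, and θ2 are known with sufficient accuracy, follows; the derived angles use the law of sines, as elaborated below.

```python
import math

# Sketch of solving the FIG. 15 triangle: given line-of-sight distances r1
# (T1 to T2) and r2 (T2 to T3) and the included angle theta2 at T2, recover
# the non-line-of-sight distance r3 (T3 to T1) and the remaining angles.
# Note: asin returns acute angles, so the law-of-sines step is valid here
# when theta2 is the largest angle of the triangle.
def solve_triangle(r1: float, r2: float, theta2_deg: float):
    t2 = math.radians(theta2_deg)
    # Law of cosines: r3 is the positive square root below.
    r3 = math.sqrt(r1**2 + r2**2 - 2 * r1 * r2 * math.cos(t2))
    # Law of sines for the remaining angles.
    theta3 = math.degrees(math.asin((r1 / r3) * math.sin(t2)))
    theta1 = math.degrees(math.asin((r2 / r3) * math.sin(t2)))
    return r3, theta1, theta3

# Example: r1 = 100 m, r2 = 80 m, included angle 120 degrees.
print(solve_triangle(100.0, 80.0, 120.0))  # (~156.2, ~26.3, ~33.7)
```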
Accordingly, in some embodiments, several methods of computing r3(some of which are discussed below) may be used, and a weighted average of these computations may be taken to more accurately determine r3. Moreover, the methods discussed below may produce additional quantities that may be desirable in some embodiments, such as a virtual angle of arrival of a signal from node T3to node T1. In some embodiments, θ2may not be cleanly determinable as simply an angle of arrival/departure of a signal at T2. However, in some embodiments, the angles of arrival/departure at T2may be determined with reference to an axis drawn parallel to the x axis, as shown in dashed lines in the figure. Let these angles be φ1and φ2. If φ1and φ2are determined with accuracy, then θ2is 180°−φ1−φ2, and the computation of r3can proceed as discussed above. Given r3, other useful quantities may be computed. For example, although the figure shows θ3, it may not be immediately quantifiable as an angle of arrival/departure because θ3represents the angle between r2(i.e., the vector connecting T2to T3, the magnitude of which is known a priori in some embodiments due to the line-of-sight tracking described herein) and r3(which is a virtual vector that has unknown characteristics a priori due to the lack of a line of sight between T3and T1). But once r3and θ2have been determined, then θ3is the arcsine of (r1/r3)·sin θ2. Similarly, θ1is the arcsine of (r2/r3)·sin θ2. Referring back toFIG.13B, analysis techniques, such as artificial-intelligence techniques, may also use a difference between a position calculated trigonometrically and one calculated via delayed line-of-sight to calculate an interference factor of a particular wall, material, etc. (such as blockade B). This may be used in subsequent transmissions that may pass through the particular wall, material, etc. to more accurately estimate the impact of the wall, material, etc. on the transmission. While the blockade B1370is stationary and static, it may be possible to determine a calibration factor for signal changes caused by blockade B1370that may allow attenuated signals that come from self-verifying array Nodes that are behind the blockade to nonetheless be directly estimated for their relative position. In addition, a known delay can be used to determine attributes of an obstruction, such as material type, thickness, proximity, etc. This may be particularly true when the blockade is uniform in its characteristics. Moreover, the trigonometric techniques discussed herein may assist in determining a lack of an obstruction between T1and T2at a given wavelength by comparing the expected trigonometric result with an empirically determined line-of-sight result. It may be useful in controlling a particular space, such as a construction site, to utilize a team of mobile devices to survey and surveil the space. In addition to the ability to surveil a region that has regions of blocked/impeded transmission, the mobile network can establish routine (but transitory) bridge links in a self-verifying array to communication Nodes that are remote, as has been described. In addition to these abilities, a mobile element that has an RFID reader capability may also pass over a space and "inventory" RFID tags attached to items for security, location, and condition tracking. As mentioned previously, low-energy Bluetooth-based Nodes may also be interrogated by mobile elements, where these Nodes may also provide sensing capabilities.
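Where several methods yield disparate estimates of a quantity such as r3, the weighted-average reconciliation noted in reference toFIG.15may be sketched as follows; the outlier threshold and the weights are assumed values for illustration only.

```python
import statistics

# Illustrative sketch: reconciling several independent estimates of the same
# quantity (e.g., r3) by discarding outliers far from the median and then
# taking a weighted average. The 20% outlier threshold and the weights are
# assumed values, not prescribed ones.
def reconcile(estimates: list[float], weights: list[float]) -> float:
    med = statistics.median(estimates)
    kept = [(e, w) for e, w in zip(estimates, weights)
            if abs(e - med) <= 0.2 * med]  # set aside outlier values
    assert kept, "all estimates flagged as outliers"
    total_w = sum(w for _, w in kept)
    return sum(e * w for e, w in kept) / total_w

# Three methods agree near 156 m; a fourth, low-weight outlier is dropped.
print(reconcile([156.2, 155.0, 157.1, 210.0], [0.4, 0.3, 0.3, 0.1]))
```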
As a non-limiting example, a construction site may be modelled in an early version of an AVM for the Structure, and it may track the location of components that will be assembled into the Structure, as well as tools that may be used to construct the Structure, as they arrive and, in some cases, leave a job site. In some embodiments, a mobile Node is moved about to multiple locations within a physical area, such as during variations occurring at a construction job site. As the Node is moved, a height and two-dimensional coordinates of the mobile Node may be varied such that it becomes possible for the mobile Node to communicate with other Nodes in or proximate to the physical area. In some embodiments, the mobile Node may additionally communicate with other transceivers, such as a Bluetooth Node transmitter, an RFID transceiver, an ultrasonic transceiver, an infrared transceiver, and the like. In some embodiments, the mobile Node may additionally transmit wireless energy to a receiving Node, RFID, or transmitter Node specifically to energize the receiving Node, RFID, or transmitter Node, and enable transceiving by the energy-receiving Node, RFID, or transmitter Node.
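As described in reference toFIG.12D, a low-power tag heard by more than one Node may be localized to the overlap of the detecting Nodes' coverage regions. The following Python sketch models each coverage region as a disc of an assumed radius and is a crude heuristic rather than a prescribed method.

```python
# Sketch of coarse tag localization by coverage overlap (cf. FIG. 12D):
# a tag detected by several Nodes is placed near the centroid of the
# detecting Nodes, and its reported uncertainty shrinks as more Nodes
# hear it. Coverage is modeled as a disc of assumed radius per Node.
def localize_tag(detections: list[tuple[float, float]], radius: float):
    """detections: (x, y) positions of Nodes that heard the tag."""
    n = len(detections)
    cx = sum(x for x, _ in detections) / n
    cy = sum(y for _, y in detections) / n
    # With one detector the tag lies anywhere in that disc; each additional
    # detector constrains the tag to the mutual overlap, so the effective
    # uncertainty reported here is reduced accordingly (a crude heuristic).
    return (cx, cy), radius / n

# Tag "F" heard by three Nodes (high confidence); tag "G" by only one.
print(localize_tag([(0.0, 0.0), (8.0, 0.0), (4.0, 6.0)], radius=10.0))
print(localize_tag([(20.0, 5.0)], radius=10.0))
```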
Proceeding to Step1406, the network may optionally be configured by a user to direct a movement of one of its mobile wireless access points to a location where it can simultaneously be connected to the self-verifying array while also establishing communications interchange with a device capable of wireless communications where the device may otherwise not be in range of the self-verifying array.
Commercial Implementations of Self-Verifying Array of Nodes
Self-verifying arrays of Nodes are applicable in many diverse commercial implementations. The following paragraphs describe several diverse implementations utilizing a SVAN. Referring now toFIG.16, method steps are illustrated for deploying a SVAN to quantify conditions in a parking area. The parking area may include, for example, a garage or parking lot. Specific embodiments may include one or more of: a rental car parking area; a commercial parking area; a residential parking area; a municipal parking area; and the like. At step1601, a first Node may be fixedly attached to, placed inside of, or otherwise co-located with a vehicle. The Node will move with the vehicle as the vehicle moves. At step1602, a unique identifier associated with the first Node may also be associated with the vehicle with which the Node is co-located. For example, a database may store an association of the unique identifier of the first Node with a Vehicle Identification Number (VIN) of the vehicle. At step1603, reference position Nodes other than the first Node may be located at strategic placements within or proximate to the parking area. In some embodiments, the strategic placements selected for reference point Nodes may be based upon one or more of: a shape of the parking area; a wireless modality distance capability; a presence of obstacles within an area occupied by a SVAN; placement at ends of rows defined for parking vehicles; placement at some or all defined parking spots for parking vehicles; and placement at one or more points of interest in a parking lot, such as a point of entry or egress, an office, a walkway, connecting transportation (railway or bus), an elevator or stairwell, and the like. At step1604, one or more Nodes included in a SVAN may be designated as a Base Node. Base Nodes may be operative to perform functions not necessarily performed by Nodes that are not Base Nodes. For example, Base Nodes may aggregate data over time, perform controller functions, transmit data via more than one wireless modality, be powered by utility-based alternating current, or communicate via a hardwired medium (e.g., via ethernet cabling). At step1605, one or more of the Nodes may communicate with other Nodes. Preferably, each Node will communicate with each other Node within range of a communication Modality. In some embodiments, a pattern of Node communication may be followed. At step1606, in some embodiments, a pattern of communication may stagger a time of wireless communication in order to avoid interference of one communication by another communication. A pattern of communication may therefore include a "cascade" or hierarchical tree of wireless communication transmission and receipt. For example, a Base Node may communicate first, followed by a first generation of Nodes that receive a communication from the Base Node, followed by communication from the first generation of Nodes to a second generation of Nodes (e.g., Nodes that are out of range or obstructed from communicating with the Base Node), then to third-generation Nodes, etc.
At step1607, one or more Nodes within the SVAN may be designated to communicate with a network access device extraneous to the SVAN. For example, a designated Node may aggregate data, such as an aggregation of values for communication variables or sensor-generated data, and communicate the aggregated data to a destination outside of the SVAN (such as via a cellular transmission or an IP Protocol transmission).

At step1608, in some embodiments, a SVAN may be defined based upon an ability of SVAN participant Nodes to communicate with each other via a primary communication Modality. For example, a primary communication modality may include a Bluetooth modality, Wi-Fi, Wi-Fi RTT, sub-GHz radio transmission and the like. A secondary communication modality may include IP transmission, a cellular transmission, sub-GHz communication and the like.

At step1609, some Nodes may be excluded based upon inclusion or exclusion criteria. For example, in some embodiments, only Nodes with unique IDs associated with sedans may be included in a SVAN, or only Nodes with unique IDs associated with vehicles prepped for deployment (e.g., immediate rental) may be included in a SVAN. Alternatively, Nodes with IDs associated with vehicles recently returned and/or in need of service may be excluded from a SVAN.

At step1610, communication variable values may be aggregated. For example, one or more Nodes or a controller may aggregate and store data that is based upon, or quantifies, what transpires during a wireless communication. Examples of data that quantifies, or is based upon, what transpires during a wireless communication, may include, by way of non-limiting example, one or more of: a time of transmission, a time of receipt of a transmission, a phase angle of receipt of a transmission at a single antenna, a respective phase angle of receipt of the same transmission by multiple antennas (which may include multiple antennas in one or more arrays of antennas). Other variables may include an amplitude of a received transmission, and a noise factor of a received transmission.

At step1611, a respective location of some or all of the Nodes in the SVAN may be generated, based upon the values for communication variables that are descriptive of communications with the respective Nodes.

At step1612, in some embodiments, an algorithm (such as those discussed herein) may be provided with values from the aggregated communication variable values to determine a location of a Node. Multiple sets of values and/or multiple algorithms may be used to disparately determine a set of locations for a particular Node. The set of locations for the particular Node may in turn be mathematically reconciled to determine a best location for the Node. For example, outlier sets of values may be set aside. Included sets of values and/or the set of locations for the particular Node may be used to generate an average, a median, a weighted average, or other combined value.

At step1613, a location of some or all Nodes in a SVAN may be plotted in a graphical representation. The location for a Node may be the locations determined as described herein. In some embodiments, the unique IDs for plotted Nodes may be included in the graphical representation. Alternatively, or in addition to, the unique IDs, an annotation associated with a particular Node may be included in the graphical representation. A graphical representation may include one or both of two-dimensional and three-dimensional models of space occupied by the SVAN.
In some embodiments, these spatial models may be augmented with a time variable (e.g., by displaying a change in an area covered by a SVAN over time).

At step1614, in some embodiments, a position of an Agent-supported Smart Device may be determined relative to one or more of the Nodes in a SVAN. The Agent-supported Smart Device may be a smart phone carried by a person or a smart device attached to a UAV or UGV. In some embodiments, the smart device will be programmed to communicate with a Base Node when the smart device determines that it is within communication range with the Base Node using a predetermined communication modality. For example, a GPS position calculated by a smart phone may indicate that the smart phone is within Bluetooth 5.1 range of a particular Base Node. The smart phone, acting as a Node, may then initiate Bluetooth 5.1 communication with the particular Base Node.

At step1615, using Orienteering methods, the SVAN may guide an Agent supporting a Smart Device to a particular vehicle. For example, a customer who has rented a car may be guided to that car via a graphical user interface on a smart phone. A controller may receive position information of the rented car and the customer's smart phone and modify the graphical user interface on the customer's smart phone to provide directions to the rented car. The customer's smart phone may begin the process by communicating with a first set of Nodes (that are within communication range of the customer's smart phone), and as the customer traverses a parking area (or areas proximate to the parking area), the customer's smart phone may transition to communicating with additional Nodes as those additional Nodes come within range of the smart phone. A graphical user interface will be modified as the customer traverses the parking area to reflect in real time a relative location of the customer and the rented car (or other programmed destination, such as a rental car office or elevator).

At step1616, in some embodiments, an angle of a viewing screen of the customer's smart phone relative to a ground plane may be determined as the customer communicates with the SVAN. The angle of a viewing screen may help determine if an image captured via operation of a smart phone onboard CCD image generator or other Image Capture Device is suitable for inclusion in a graphical user interface. For example, most smart device-onboard CCD Image Capture Devices have a field of view that is generally perpendicular to a viewing screen of a smart phone. Consequently, a customer may hold up the customer's smart phone at an angle generally perpendicular to the ground plane and capture a view of an area towards which the customer is walking.

At step1617, a graphical user interface may be overlaid on top of an image captured by the CCD Image Capture Device in a position perpendicular to the ground plane. Positions of Nodes within the field of view of the CCD device may be indicated in combination with the image data captured by the CCD device, based upon the verified position of the CCD device, an angle at which the CCD device is being supported and a direction of interest determined via automated Orienteering apparatus and methods. Embodiments may include the positions of the Nodes within the field of view of an Image Capture Device associated with the smart phone being indicated, along with the vehicles with which they are associated and information related to those vehicles.
Information may include, for example, an indication of which vehicle is being rented by a particular customer associated with the smart device; which vehicles need service; a vehicle type (compact, midsize, truck, specialty); which vehicles are recently returned; which vehicles are ready to be rented, etc.

At step1618, the graphical user interface may also include annotations or other details as they relate to the Nodes and/or the associated vehicles and/or aspects included in the field of view, such as a parking row number, an exit, an office, or other detail.

At step1619, in another aspect, some embodiments may include an overlay of image data captured in a field of view with information descriptive of or related to a Node with a position within the Field of View. Node information may include, for example, the unique ID associated with the Node, a Node model, battery charge remaining, signal strength, time of last communication, details of data stored on the Node, amount of storage left in the Node, etc. In some embodiments, Nodes included in a GUI may be limited to those Nodes associated with vehicles and not display Nodes deployed as reference position Nodes or associated with other items.

At step1620, in still another aspect, in some embodiments, a Node fixed to or within a vehicle may continue to communicate after it exits a parking area. For example, if a Node is able to communicate with another Node, one or both of the Nodes external to the parking area may note a GPS location and store the GPS location in a manner associated with the Node-to-Node communication. If a Node is in a vehicle that is in motion, the Node may also note aspects of the travel of the vehicle in which the Node is located, such as one or more of: speed, acceleration, or vehicle diagnostics. Similarly, the Node may note a speed, acceleration and location of a Node with which it is communicating. All or some communication data generated as a result of the Node-to-Node communication may be transmitted via a modality other than a modality used for the Node-to-Node communication. For example, if Node-to-Node communication is accomplished via a Bluetooth modality or a sub-GHz modality, the information resulting from the Node-to-Node communication may be retransmitted via a cellular or IP modality to an off-SVAN destination. Off-SVAN destinations may include, for example, a server, a controller, or a smart device in logical communication with the Internet or a cellular connection.

Referring now toFIG.17, method steps are illustrated for deploying a SVAN to manage activities, materials and people on a construction site. The construction site may include contract workers, tradesmen, management, guests, equipment, an emerging building structure, undeployed materials and materials included in the structure, and the like.

At step1701, a unique Node ID is associated with one or more of: an onsite Agent, material, equipment, a structural aspect, or a reference point (e.g., a pole placed onsite specifically to provide a positional reference).

At step1702, a first Node may be fixedly attached to, placed inside of, or otherwise co-located with one or more of: an onsite Agent, material, equipment, a structural aspect, or a reference point.

At step1703, reference point Nodes are located at strategic points within or proximate to the construction site.
In some embodiments, the strategic placements selected for reference point Nodes may be based upon one or more of: a shape of the construction area; a wireless modality distance capability; a presence of obstacles within an area occupied by a SVAN; at ends of constructed elements on a construction site, and the like.

At step1704, one or more Nodes included in a SVAN may be designated as a Base Node. Base Nodes may be operative to perform functions not necessarily performed by Nodes that are not Base Nodes. For example, Base Nodes may aggregate data over time, perform controller functions, transmit data via more than one wireless Modality, be powered by utility-based alternating current, or communicate via a hardwired medium (e.g., ethernet cabling).

At step1705, one or more of the Nodes may communicate with other Nodes. Preferably, each Node will communicate with each other Node within range of a communication modality. In some embodiments, a pattern of Node communication may be followed.

At step1706, in some embodiments, a pattern of communication may stagger a time of wireless communication in order to avoid interference of one communication by another communication. A pattern of communication may therefore include a “cascade” or hierarchical tree of wireless communication transmission and receipt. For example, a Base Node may communicate first, followed by a first generation of Nodes that receive a communication from the Base Node, followed by communication from the first generation of Nodes with a second generation of Nodes (e.g., Nodes that are out of range or obstructed from communicating with the Base Node), then to third generation Nodes, etc.

At step1707, one or more Nodes within the SVAN may be designated to communicate with a network access device extraneous to the SVAN. For example, a designated Node may aggregate data, such as an aggregation of values for communication variables or sensor-generated data, and communicate the aggregated data to a destination outside of the SVAN (such as via a cellular transmission or an IP transmission).

At step1708, in some embodiments, a SVAN may be defined based upon an ability of SVAN participant Nodes to communicate with each other via a primary communication modality. For example, a primary communication modality may include a Bluetooth modality, Wi-Fi, Wi-Fi RTT, sub-GHz radio transmission and the like, and a secondary communication modality may include IP Protocol transmission, a cellular transmission, sub-GHz communication and the like.

At step1709, some Nodes may be excluded based upon inclusion or exclusion criteria. For example, in some embodiments, only Nodes with unique IDs associated with a particular type of equipment may be included in a SVAN, or only Nodes with unique IDs associated with materials prepped for deployment (e.g., immediate assembly into a structure) may be included in a SVAN. Alternatively, Nodes with IDs associated with construction equipment recently returned or in need of service may be excluded from a SVAN.

At step1710, communication variable values may be aggregated. For example, one or more Nodes or a controller may aggregate and store data that is based upon, or quantifies, what transpires during a wireless communication.
Examples of data that quantifies, or is based upon, what transpires during a wireless communication, may include, by way of non-limiting example, one or more of: a time of transmission, a time of receipt of a transmission, a phase angle of receipt of a transmission at a single antenna, a respective phase angle of receipt of the same transmission by multiple antennas (which may include multiple antennas in one or more arrays of antennas). Other variables may include an amplitude of a received transmission, and a noise factor of a received transmission.

At step1711, a respective location of some or all of the Nodes in the SVAN may be generated, based upon the values for communication variables that are descriptive of communications with the respective Nodes. Methods and variables involved in determining a location for a Node are discussed extensively herein.

At step1712, in some embodiments, an algorithm (such as those discussed herein) may be provided with values from the aggregated communication variable values to determine a location of a Node. Multiple sets of values and/or multiple algorithms may be used to disparately determine a set of locations for a particular Node. The set of locations for the particular Node may in turn be mathematically reconciled to determine a best location for the Node. For example, outlier sets of values may be set aside. Included sets of values and/or the set of locations for the particular Node may be used to generate an average, weighted average, or other combined value.

At step1713, a location of some or all Nodes in a SVAN may be plotted in a graphical representation. The location for a Node may be the locations determined as described herein. In some embodiments, the unique IDs for plotted Nodes may be included in the graphical representation. Alternatively, or in addition to, the unique IDs, an annotation associated with a particular Node may be included in the graphical representation. A graphical representation may include one or both of two-dimensional and three-dimensional models of space occupied by the SVAN.

At step1714, in some embodiments, a position of an Agent-supported Smart Device may be determined relative to one or more of the Nodes in a SVAN. The Agent-supported Smart Device may be a smart phone carried by a person or a smart device attached to a UAV or UGV. In some embodiments, the Smart Device will be programmed to communicate with a Base Node when the Smart Device determines that it is within communication range with the Base Node using a predetermined communication modality. For example, a GPS position calculated by a smart phone may indicate that the smart phone is within Bluetooth 5.1 range of a particular Base Node. The smart phone, acting as a Node, may then initiate Bluetooth 5.1 communication with the particular Base Node.

At step1715, using Orienteering methods, the SVAN may guide an Agent supporting a Smart Device to a particular piece of equipment, a set of materials, a staging area, a drop off area, an office, or the like. For example, a worker who has placed a piece of equipment on a construction lot may be guided to that equipment via a graphical user interface on a smart phone. A controller may receive position information of the equipment and the worker's smart phone and modify the graphical user interface on the worker's smart phone to provide directions to the equipment.
An Agent's Smart Device may begin the process by communicating with a first set of Nodes (that are within communication range of the Agent's smart phone), and as the Agent traverses a construction site (or areas proximate to the construction site), the Agent's smart phone may transition to communicating with additional Nodes as those additional Nodes come within range of the smart phone. A graphical user interface may be modified as the Agent traverses the construction site to reflect in real time a relative location of the Agent and the equipment. In general, a user interface may be displayed upon a Smart Device, touch screen or other human ascertainable mechanism. The interface may display positions of Nodes and/or associated Sensors, associated Structure aspects, communications paths between Nodes, communications interrupted by perceived obstructions, locations of items of interest, locations of Agents, locations of non-Agent persons and the like.

At step1716, in some embodiments, an angle of a viewing screen of the Agent's smart phone relative to a ground plane may be determined as the Agent communicates with the SVAN. The angle of a viewing screen may help determine if an image captured via operation of a smart phone onboard CCD image generator or other Image Capture Device is suitable for inclusion in a graphical user interface. For example, most smart device-onboard CCD Image Capture Devices have a field of view that is generally perpendicular to a viewing screen of a smart phone. Consequently, an Agent may hold up the Agent's smart phone at an angle generally perpendicular to the ground plane and capture a view of an area towards which the Agent is walking.

At step1717, a graphical user interface may be overlaid on top of an image captured by the CCD image capture device in a position perpendicular to the ground plane. Positions of Nodes within the field of view of the CCD device may be indicated in combination with the image data captured by the CCD device, based upon the verified position of the CCD device, an angle at which the CCD device is being supported and a direction of interest determined via automated Orienteering apparatus and methods.

At step1718, the graphical user interface may also include annotations or other details as they relate to the Nodes and/or the associated equipment, material, structural aspects, agents and/or aspects included in the Field of View, such as site topographic drawing references or other detail.

At step1719, in another aspect, some embodiments may include an overlay of image data captured in a field of view with information descriptive of or related to a Node with a position within the Field of View. Node information may include, for example, the unique ID associated with the Node, a Node model, battery charge remaining, signal strength, time of last communication, details of data stored on the Node, amount of storage left in the Node, etc. In some embodiments, Nodes included in a GUI may be limited to those Nodes associated with equipment, materials, agents, and the like. The GUI may not display Nodes deployed as reference position Nodes or associated with other items.

At step1720, in some embodiments, Node information may be integrated into an Augmented Virtual Model (AVM), as well as data from any Sensor co-located with Nodes.

Referring now toFIG.18, method steps are illustrated for deploying a SVAN to quantify conditions in defined occupancy areas within a structure.
At step1801, a unique ID number is associated with a Node.

At step1802, respective Nodes are placed within, or proximate to, multiple respective defined occupancy areas. The occupancy areas may include, by way of non-limiting example, a hotel room and a healthcare provider room.

At step1803, a Sensor and/or Sensor assembly, such as a multi-sensor module, is placed in logical communication with at least one Node that is within or proximate to each disparate defined occupancy space. In some embodiments, the strategic placement of Nodes may be based upon one or more of: a shape of the construction area; a wireless modality distance capability; a presence of obstacles within an area occupied by a SVAN; at ends of constructed elements on a construction site, and the like.

At step1804, one or more Nodes included in a SVAN may be designated as a Base Node. Base Nodes may be operative to perform functions not necessarily performed by Nodes that are not Base Nodes. For example, Base Nodes may aggregate data over time, perform Controller functions, transmit data via more than one wireless Modality, be powered by utility-based alternating current, and/or communicate via a hardwired medium (e.g., ethernet cabling).

At step1805, one or more of the Nodes may communicate with other Nodes. Preferably, each Node will communicate with each other Node within range of a communication modality. In some embodiments, a pattern of Node communication may be followed (e.g., through the cascading process described above).

At step1806, in some embodiments, a pattern of communication may stagger a time of wireless communication in order to avoid interference of one communication by another communication. A pattern of communication may therefore include a “cascade” or hierarchical tree of wireless communication transmission and receipt. For example, a Base Node may communicate first, followed by a first generation of Nodes that receive a communication from the Base Node, followed by communication by the first generation of Nodes with a second generation of Nodes (e.g., Nodes that are out of range or obstructed from communicating with the Base Node), then to third generation Nodes, etc.

At step1807, one or more Nodes within the SVAN may be designated to communicate with a network access device extraneous to the SVAN. For example, a designated Node may aggregate data, such as an aggregation of values for communication variables or Sensor-generated data, and communicate the aggregated data to a destination outside of the SVAN (such as via a cellular transmission or an IP transmission).

At step1808, in some embodiments, a SVAN may be defined based upon an ability of SVAN participant Nodes to communicate with each other via a primary communication modality. For example, a primary communication modality may include a Bluetooth modality, Wi-Fi, Wi-Fi RTT, sub-GHz radio transmission and the like, and a secondary communication modality may include IP transmission, a cellular transmission, sub-GHz communication and the like.

At step1809, some Nodes may be excluded based upon inclusion or exclusion criteria. For example, in some embodiments, only Nodes with unique IDs associated with a particular occupant, or only Nodes with unique IDs associated with occupancy areas that Sensor readings indicate are vacant, may be included in a SVAN.
Similarly, Nodes with IDs associated with a group of persons or an item of equipment, as well as reference point position Nodes, may be included in inclusion or exclusion criteria.

At step1810, communication variable values may be aggregated. For example, one or more Nodes or a controller may aggregate and store data that is based upon, or quantifies, what transpires during a wireless communication. Examples of data that quantifies, or is based upon, what transpires during a wireless communication, may include, by way of non-limiting example, one or more of: a time of transmission, a time of receipt of a transmission, a phase angle of receipt of a transmission at a single antenna, a respective phase angle of receipt of the same transmission by multiple antennas (which may include multiple antennas in one or more arrays of antennas). Other variables may include an amplitude of a received transmission, and a noise factor of a received transmission. Data generated by Sensors associated with the respective Nodes may also be aggregated.

At step1811, a respective location of some, or all, of the Nodes in the SVAN may be generated, based upon the values for communication variables that are descriptive of communications with the respective Nodes. Methods and variables involved in determining a location for a Node are discussed extensively herein.

At step1812, in some embodiments, an algorithm (such as those discussed herein) may be provided with values from the aggregated communication variable values to determine a location of a Node. Multiple sets of values and/or multiple algorithms may be used to disparately determine a set of locations for a particular Node. The set of locations for the particular Node may in turn be mathematically reconciled to determine a best location for the Node. For example, outlier sets of values may be set aside. Included sets of values and/or the set of locations for the particular Node may be used to generate an average, a mean, or other combined value.

At step1813, a location of some, or all, Nodes in a SVAN may be plotted in a graphical representation. The location for a Node may be the locations determined as described herein. In some embodiments, the unique IDs for plotted Nodes may be included in the graphical representation. Alternatively, or in addition to, the unique IDs, an annotation associated with a particular Node may be included in the graphical representation. A graphical representation may include one or both of two-dimensional and three-dimensional models of space occupied by the SVAN.

At step1814, in some embodiments, a position of an Agent-supported Smart Device may be determined relative to one or more of the Nodes in a SVAN. The Agent-supported Smart Device may be a smart phone carried by a person, or a Smart Device attached to a UAV or UGV. In some embodiments, the Smart Device will be programmed to communicate with a Base Node when the Smart Device determines that it is within communication range with the Base Node using a predetermined communication modality. For example, a GPS position calculated by a smart phone may indicate that the smart phone is within Bluetooth 5.1 range of a particular Base Node. The smart phone, acting as a Node, may then initiate Bluetooth 5.1 communication with the particular Base Node.
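As a minimal sketch of the mathematical reconciliation described at step1812, and assuming each disparately determined location is a simple (x, y, z) tuple, outliers may be set aside by per-axis distance from the median before the retained estimates are combined. The function name, default threshold, and fallback behavior are illustrative assumptions, not the disclosed algorithm itself.

```python
import statistics

def reconcile_locations(estimates, max_deviation=2.0):
    """Reconcile disparately determined locations for a single Node.

    estimates: list of (x, y, z) positions produced by different
               algorithms and/or different sets of communication
               variable values.
    max_deviation: estimates farther than this many units from the
                   per-axis median are set aside as outliers.
    Returns a single 'best' (x, y, z) location, here the mean of the
    retained estimates.
    """
    # Per-axis medians across all estimates.
    medians = [statistics.median(axis) for axis in zip(*estimates)]
    retained = [
        p for p in estimates
        if all(abs(c - m) <= max_deviation for c, m in zip(p, medians))
    ]
    if not retained:          # every estimate was an outlier; fall back
        retained = estimates  # to the full set rather than fail
    return tuple(statistics.fmean(axis) for axis in zip(*retained))
```

A weighted average or median combination could be substituted for the mean in the final line, depending on how much trust is placed in each contributing algorithm.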
At step1815, using Orienteering methods, the SVAN may guide an Agent supporting a smart device to a particular occupancy area, such as an occupancy area that Sensor data indicates is vacant or an area that the Sensor data indicates is occupied. In some embodiments, a controller may receive position information of the occupancy area and the Agent's smart phone and modify the graphical user interface on the Agent's smart phone to provide directions to a selected occupancy area. The Agent's smart phone may begin by being guided via processing of values for variables of communications with a first set of Nodes (that are within communication range of the Agent's smart phone), and as the Agent traverses a structure containing the occupancy areas (or areas proximate to the occupancy area), the Agent's smart phone may transition to communicating with additional Nodes as those additional Nodes come within range of the smart phone. A graphical user interface may be modified as the Agent traverses the structure containing the occupancy areas to reflect in real time a relative location of the Agent and an occupancy area of interest.

At step1816, in some embodiments, an angle of a viewing screen of the Agent's smart phone relative to a ground plane may be determined as the Agent communicates with the SVAN. The angle of a viewing screen may help determine if an image captured via operation of a smart phone onboard CCD image generator (e.g., a charge-coupled device camera) is suitable for inclusion in a graphical user interface. For example, most smart device onboard CCD image capture devices have a field of view that is generally perpendicular to a viewing screen of a smart phone. Consequently, an Agent may hold up the Agent's smart phone at an angle generally perpendicular to the ground plane and capture a view of an area towards which the Agent is walking.

At step1817, a graphical user interface may be overlaid on top of an image captured by the CCD Image Capture Device in a position perpendicular to the ground plane, and positions of Nodes within the field of view of the CCD device may be indicated in combination with the image data captured by the CCD device, based upon the verified position of the CCD device, an angle at which the CCD device is being supported and a direction of interest determined via automated Orienteering apparatus and methods.

At step1818, the graphical user interface may also include annotations or other details as they relate to the Nodes and/or the associated occupancy areas and/or aspects included in the field of view, such as site topographic drawing references or other detail.

At step1819, in another aspect, some embodiments may include an overlay of image data captured in a field of view with information descriptive of, or related to, a Node with a position within the field of view. Node information may include, for example, the unique ID associated with the Node, a Node model, battery charge remaining, signal strength, time of last communication, details of data stored on the Node, amount of storage left in the Node, etc. In some embodiments, Nodes included in a GUI may be limited to those Nodes associated with a particular occupancy area, or group of occupancy areas. The GUI may or may not, at the discretion of a User or system manager, display Nodes deployed as reference position Nodes or associated with other items.
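The overlay steps above imply a test for which Node positions fall inside the Image Capture Device's field of view. Purely as an illustrative sketch, and not the disclosed Orienteering method, the following assumes verified two-dimensional Node positions and a direction of interest expressed as a heading in degrees; all names are hypothetical.

```python
import math

def nodes_in_field_of_view(camera_xy, heading_deg, fov_deg, node_positions):
    """Select Nodes whose verified positions fall within the camera's
    horizontal field of view, as candidates for GUI annotation.

    camera_xy: (x, y) of the smart device (vertical axis ignored here).
    heading_deg: direction of interest, e.g., from Orienteering methods.
    fov_deg: horizontal field of view of the Image Capture Device.
    node_positions: dict of Node ID -> (x, y).
    """
    visible = {}
    for node_id, (nx, ny) in node_positions.items():
        # Bearing from the camera to the Node.
        bearing = math.degrees(math.atan2(ny - camera_xy[1],
                                          nx - camera_xy[0]))
        # Smallest signed angle between the bearing and the heading.
        delta = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        if abs(delta) <= fov_deg / 2.0:
            visible[node_id] = (nx, ny)
    return visible
```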
At step1820, in some embodiments, Node information and occupancy areas may be integrated into an Augmented Virtual Model (AVM), as well as data from any Sensor co-located with Nodes.

Referring now toFIG.19, method steps are illustrated for deploying a SVAN and displaying or communicating geolocated information.

At step1901, the method may include associating a respective unique identifier with each of at least a first Node, a second Node and a third Node included in an array of Nodes, wherein each of the first Node, second Node and third Node comprises: a processor, a digital storage, a communication module and an antenna.

At step1902, the method may include designating a base position in relation to the first Node.

At step1903, the method may include wirelessly communicating between multiple Nodes comprising at least the first Node, the second Node, the third Node and a fourth Node, wherein the fourth Node includes an Agent supporting a smart device with a wireless communication capability who enters a structure space comprising at least the first Node, the second Node and the third Node, and wherein the fourth Node comprises an antenna array.

At step1904, the method may include generating values for the first Node, the second Node and the third Node, for communication variables based upon the wirelessly communicating between the first Node, the second Node and the third Node, wherein the communication variables may include: one or more of a start time of a respective wireless communication transmission (T1), a receipt time of the respective wireless communication (T2), or a calculated transmission time. The communication variables may also include one or more of a phase difference of the respective wireless communication transmission between a respective first antenna and a respective second antenna, or a calculated angle of arrival based upon the phase difference.

At step1905, the method may include calculating relative position coordinates for the first Node, the second Node and the third Node based on the communication variables of step1904.

At step1906, the method may include generating values, for the fourth Node, for communication variables based upon the wirelessly communicating between the first Node, the second Node and the third Node, wherein the communication variables may include one or more of a start time of a respective wireless communication transmission (T1), a receipt time of the respective wireless communication (T2), or a calculated transmission time. The communication variables may also include one or more of a phase difference of the respective wireless communication transmission between at least a respective first antenna and a respective second antenna within the antenna array of the smart device.

At step1907, the method may include calculating a relative position and a relative orientation of the fourth Node (the smart device) based on the communication variables of step1906, wherein the relative orientation determines a direction of interest of the user in the structure space.

At step1908, the method may include communicating information stored within the self-verifying array of Nodes to the smart device, wherein a selection of data to transmit as the information utilizes the relative orientation and relative position calculated for the fourth Node.

CONCLUSION

Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims.
In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the claimed invention.

The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures.

The phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together. The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted the terms “comprising”, “including”, and “having” can be used interchangeably.

Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in combination in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. Similarly, while method steps may be depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in a sequential order, or that all illustrated operations be performed, to achieve desirable results. | 243,909 |
11861270 | DETAILED DESCRIPTION

By way of an introductory example, a turbine engine manufacturing method includes: identifying, with a controller, a plurality of virtual combinations of a plurality of scanned virtual vane components, each scanned virtual vane component comprising a respective first data set that indicates a three-dimensional virtual representation of a different one of a plurality of physical vane components manufactured and three-dimensionally scanned to generate the scanned virtual vane components; determining, with the controller, a plurality of virtual throat areas for the plurality of virtual combinations, wherein determining the plurality of virtual throat areas comprises, for each combination: virtually aligning, with the controller, mating surfaces of scanned virtual vane components of a respective virtual combination; in response to the virtually aligning, determining, with the controller, an aligned virtual vane corresponding to the respective virtual combination, the aligned virtual vane comprising a second data set indicating a three-dimensional virtual representation of a different one of a plurality of physical combinations of the physical vane components; and calculating, with the controller, a respective virtual throat area of the aligned virtual vane; selecting a physical combination of physical vane components of the plurality of physical vane components, the selection dependent on an optimal virtual combination having an optimal virtual throat area from among the plurality of virtual throat areas, wherein the physical combination corresponds to the optimal virtual combination; and assembling together physical vane components of the selected physical combination to form a vane of a gas turbine engine having an optimal physical throat area.

An interesting feature of the systems and methods described below may be that the various ways that various physical vane components interact or combine with each other can be predicted before certain combinations are selected to form inseparable, physical vanes. Another interesting feature of the systems and methods described below may be that vane assembly processes are improved since they will produce, at least on average, physical vanes with actual throat areas closer to a nominal throat area. Another interesting feature of the systems and methods described below is that engine performance will be improved since, on average, the engines will have vanes with throat areas closer to a nominal throat area. Another interesting feature of the systems and methods described below is that vane assembly processes will have reduced numbers of vanes needing to be discarded, such as ones manufactured out of specification due to arbitrary selection of vane components for assembly. Another interesting feature is that combinations of physical vane components can be intelligently, rather than arbitrarily, selected, leading to, on average, improved vanes. Another interesting feature of the systems and methods described below may be that among numerous possible combinations of physical vane components available to a manufacturer, only those best combinations can be identified and selected to form the inseparable, physical vanes. Another interesting feature of the systems and methods described below is that an initial virtual analysis can be used to filter out certain combinations of vane components for a subsequent virtual analysis that identifies physical combinations of vane components to use to form vanes.
Another interesting feature of the systems and methods described below is that an initial virtual analysis of an initial set of virtual combinations of vane components can be used to identify better virtual combinations by mix-and-matching virtual components from different combinations in the initial set. Another interesting feature of the systems and methods described below is that electronically stored data, such as scan mesh data, that three-dimensionally virtually represents actual manufactured individual vane components can be analyzed in combination with each other, rather than individually, to assess the behavior and interaction of their physical counterparts in combination before those physical counterparts are inseparably combined together to form the vanes. Another interesting feature of the systems and methods described below is that computers can be incorporated into a manufacturing process to perform virtual throat area calculations of virtual combinations of vane components before their physical counterparts are inseparably combined, calculations that cannot be performed on the physical counterparts themselves before the physical counterparts are inseparably combined. Another interesting feature of the systems and methods described below is that computers are incorporated into a manufacturing process to analyze virtual representations of actually manufactured physical structures that are three-dimensionally complex, whose combined behavior and interconnection, due to imperfections of the manufacturing processes and three-dimensional complexities of the parts, cannot be accurately predicted, and therefore can only be accurately assessed after the physical structures are inseparably combined together to form a single vane structure. These and other interesting features are described in further detail with reference to various embodiments described below and the attached drawings.

FIG.1is a cross-sectional view of a gas turbine engine100. In some examples, the gas turbine engine100may supply power to and/or provide propulsion of an aircraft. Examples of the aircraft may include a helicopter, an airplane, an unmanned space vehicle, a fixed wing vehicle, a variable wing vehicle, a rotary wing vehicle, an unmanned combat aerial vehicle, a tailless aircraft, a hover craft, and any other airborne and/or extraterrestrial (spacecraft) vehicle. Alternatively or in addition, the gas turbine engine100may be utilized in a configuration unrelated to an aircraft such as, for example, an industrial application, an energy application, a power plant, a pumping set, a marine application (for example, for naval propulsion), a weapon system, a security system, a perimeter defense or security system.

The gas turbine engine100may take a variety of forms in various embodiments. Though depicted as an axial flow engine, in some forms the gas turbine engine100may have multiple spools and/or may be a centrifugal or mixed centrifugal/axial flow engine. In some forms, the gas turbine engine100may be a turboprop, a turbofan, or a turboshaft engine. Furthermore, the gas turbine engine100may be an adaptive cycle and/or variable cycle engine. Other variations are also contemplated.

The gas turbine engine100may include an intake section120, a compressor section160, a combustion section130, a turbine section110, and an exhaust section150.
During operation of the gas turbine engine100, a main fluid received from the intake section120, such as air, travels through a main fluid flow path in a main fluid flow path direction D1through blades121in the intake section120. As shown inFIG.1, the main fluid flow path direction D1is generally parallel with a centerline X of the engine100. The fluid may be compressed within the compressor section160. The compressed fluid may then be mixed with fuel and the mixture may be burned in the combustion section130. The combustion section130may include any suitable fuel injection and combustion mechanisms. The hot, high pressure fluid may then pass through the turbine section110to extract energy from the fluid and cause a turbine shaft of a turbine114in the turbine section110to rotate, which in turn drives the compressor section160. Discharge fluid may exit the exhaust section150.

As noted previously, the hot, high pressure fluid passes through the turbine section110during operation of the gas turbine engine100. As the fluid flows through the turbine section110, the fluid passes between adjacent turbine blades112of the turbine114causing the turbine114to rotate. The rotating turbine114may turn a shaft140in a rotational direction D2, for example. The turbine blades112may rotate around an axis of rotation, which may correspond to a centerline X of the turbine114in some examples.

The turbine blades112may be distributed in an array of blades122circumferentially spaced around a hub124(or core or turbine spool) of the turbine114. Circumferentially surrounding the array of blades122is a blade track system126. The blade track system126is designed to track outer edges or tips of turbine blades112included in the array of blades122as the blades112radially expand and contract, due to, for example, rotation of the hub124causing centrifugal force, and/or changes in temperature causing materials to expand and contract.

The turbine section110may also include one or more vane stages128. A vane stage (also called a vane assembly)128is a component that directs the flow of fluid through at least a portion of the turbine section110. Example types of a vane stage128include an inlet vane stage and an exhaust vane stage, although other types of vane stages may be possible. Also, as described in further detail below, a vane stage128includes a plurality of vanes (also called vane segments) connected together. Like a vane stage as a whole, each vane segment, or a combination of two or more vane segments, is configured to direct the flow of fluid through at least a portion of the turbine section110.

Surrounding the blade track system126and the vane stage(s)128is a turbine casing180. The blade track system126and the vane stage(s)128are positionable axially with the centerline X within the turbine casing180and radially outward of the turbine blades112. The blade tracks may be dynamically radially moved outwardly and inwardly by the blade track system126in response to dynamic operation of the gas turbine engine100to avoid/control a rub of the tip of the turbine blades112on the segments of the blade tracks.

FIG.2is an exploded perspective view of an example vane (or vane segment)200. In various embodiments, including that shown, the vane200is a nozzle guide vane (NGV), such that a plurality of NGVs coupled together within the engine100form an NGV stage or an NGV assembly, although other types of vanes, and/or names to refer to a vane, may be possible.
The vane200includes four separately manufactured (e.g., machined) components that are then combined (i.e., fixedly coupled or “locked”) together to form the vane200. Each component of the vane200is of one of four vane component types, including an inner endwall type, an outer endwall type, a leading airfoil type, and a trailing airfoil type. A vane component of a given type may be referred to by the name of the type. Accordingly, the components of a vane200include an inner endwall202, an outer endwall204, a leading airfoil206, and a trailing airfoil208.

As illustrated inFIG.2, the inner endwall202and the outer endwall204are each generally curved structures, such as shells, plates, or panels, that have curved contours, or radii of curvature, that generally track each other. When a predetermined number of vanes200are connected together, their combination forms a vane stage or assembly128, which is a generally circular or cylindrical structure having the inner endwalls202connected together and the outer endwalls204connected together, such that the combination of inner endwalls202is concentric with the combination of outer endwalls204.

Additionally, when the vane200is positioned in the engine100, the inner endwall202is the radially inward-most component of the vane200, and accordingly positioned or disposed radially inward (e.g., closer to the centerline X of the engine100and/or a center of the vane stage128) from the outer endwall204. Also, when the vane200is positioned in the engine100, the outer endwall204is the radially outward-most component of the vane200, and accordingly positioned or disposed radially outward (e.g., farther from the centerline X of the engine100and/or a center of the vane stage128) from the inner endwall202.

The airfoils206,208are radially extending structures that radially space apart the outer endwall204from the inner endwall202. When combined, portions of surfaces of each of the inner endwall202, the outer endwall204, the leading airfoil206and the trailing airfoil208, in combination, define, enclose, or otherwise provide a boundary for a three-dimensional volume or space, disposed between the inner and outer endwalls202,204and between the leading and trailing airfoils206,208, through which fluid in the engine flows.

As shown inFIGS.3and4, certain predetermined points, called throat window points, along the surfaces of the four components202-208define a perimeter or boundary of a portion, called a throat window, of the space, and the area of the throat window, called a throat area and defined by the throat window points, determines the flow rate of the air flow through the space of the vane200. Accordingly, different amounts of the throat area can cause different flow rates.

Additionally, each of the vane components202-208includes a pair of mating surfaces. Each mating surface of a given vane component of a given vane is shaped to mate with, or contact, a corresponding mating surface of another vane component, in order for the vane components202-208to combine to form the vane200. Specifically, each of the leading airfoil206and the trailing airfoil208includes a respective inner end having an inner mating surface210,212, and a respective outer end having an outer mating surface214,216. The inner mating surfaces210,212are each shaped to mate with a respective one of a pair of mating surfaces218,220of the inner endwall202, and the outer mating surfaces214,216are each shaped to mate with a respective one of a pair of mating surfaces222,224of the outer endwall204.
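The relationship between the throat window points and the throat area can be illustrated numerically. The sketch below is an assumed formulation only: it treats the ordered throat window points as vertices of a nearly planar polygon in three-dimensional space and applies the standard cross-product formula for the area of a planar polygon; the function name and array layout are hypothetical.

```python
import numpy as np

def throat_area(window_points):
    """Approximate the throat area enclosed by ordered throat window
    points sampled from the surfaces of the four vane components.

    window_points: (N, 3) array of x/y/z points tracing the throat
                   window perimeter in order around the boundary.
    Returns the area of the (nearly) planar polygon they enclose.
    """
    pts = np.asarray(window_points, dtype=float)
    centered = pts - pts.mean(axis=0)        # improves numerical behavior
    rolled = np.roll(centered, -1, axis=0)   # vertex i+1, wrapping around
    # For a planar polygon, area = 0.5 * |sum of v_i x v_{i+1}|.
    cross_sum = np.cross(centered, rolled).sum(axis=0)
    return 0.5 * np.linalg.norm(cross_sum)
```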
As shown inFIGS.2and5, the mating surfaces218-224are generally recessed relative to radially inward and outward facing curved surfaces of the inner or outer endwalls202,204. The inner and outer mating surfaces210-216of the airfoils206,208may then project or extend into the recesses in order to mate with, or contact, their respective mating surfaces218-224of the endwalls202,204. For some embodiments, a given mating surface of a given vane component includes a conical surface. In addition or alternatively, for some embodiments, a given mating surface of a given vane component includes only a portion of a total surface that contacts a surface of another vane component when those vane components are combined together. For example, a mating surface of a given vane component may include only a conical surface portion of a surface that contacts a surface of another vane component.

Two vane components of the same vane component type may be generally manufactured according to the same manufacturing process according to the same drawing and/or specifications, and upon being manufactured, generally have the same physical structure, features, dimensions, and/or shape, except for imperfections due to non-idealities of the manufacturing process by which the components are manufactured. An imperfection may be a physical or geometrical feature or portion of the vane component that deviates from a perfect or nominal form of that vane component feature. An imperfection may be measurable and/or quantifiable, such that the imperfection has an associated quantified deviation that can be indicated in terms of units (such as thousandths of an inch or microns as non-limiting examples) to indicate an amount of distance, area, or volume by which the imperfect feature deviates from the nominal feature, or in terms of a percentage that indicates a percent difference from the nominal value or that indicates a percentage of the nominal value (a percentage by which the imperfect feature is greater than or less than the nominal value).

Due to non-idealities of the manufacturing process, in the vast majority of situations, most vane components are imperfect (i.e., they have at least one imperfect feature), and rarely ever are two imperfect parts of the same vane component type the same (i.e., they have at least one imperfect feature that differs from each other, in terms of where the imperfection is located, how much it deviates from the nominal value, size, shape, etc.). Accordingly, in the vast majority of situations, no two imperfect vane components of the same vane component type are exactly the same. This non-matching between vane components of the same vane component type is exacerbated by the fact that each of the surfaces of the vane components are generally complex three-dimensional surfaces, having various combinations of differently shaped features, different curvatures, and extending differently in various directions, including three-dimensional (x-y-z) directions, and/or radial and axial directions, in turn augmenting the various ways in which two imperfect vane components of the same vane component type, made by the same manufacturing process according to the same drawings and/or specifications, can nonetheless dimensionally differ from each other.

In an ideal situation, each of the four vane components202-208that combine to form the vane200is manufactured perfectly.
Such perfectly manufactured vane components202-208, in turn, have perfect mating surfaces210-224, such that when they mate or contact each other, their mating is perfect, meaning that for each pair of mating surfaces that mate together, 100% of a given mating surface is in contact with the other mating surface with which it mates. When perfect or nominal vane components are combined together to form a perfect or nominal vane, such a nominal combination provides or determines a nominal throat area.

In reality, at least one, and most likely all four, of the vane components202-208that combine to form a vane are manufactured imperfectly. When such is the case, the resulting imperfect vane may, and most likely will, determine an actual throat area that is different from the nominal throat area. Where an imperfect feature is located on a mating surface, the mating of the various mating surfaces210-224may contribute, at least in part, to the imperfect (or non-ideal) actual throat area. Different mating surfaces with different imperfections may affect the amount by which the actual throat area deviates from the nominal throat area amount. For example, a vane component, due to its imperfection, may cause an increased throat area from the nominal value, or may cause a decreased throat area from the nominal value. Also, the amounts by which a throat area is caused to increase or decrease from the nominal value may differ for different vane components with different imperfections. For example, one vane component may cause a throat area to increase or decrease more than another vane component causes a throat area to increase or decrease. Correspondingly, even with imperfect mating surfaces, certain combinations of vane components may yield a better throat area (one closer to its nominal value) than other combinations. For example, a vane component causing a throat area to increase and a vane component causing a throat area to decrease may, at least partially, offset their deviating effects, yielding an actual throat area that is closer to the nominal value, compared to two vane components that both cause a throat area to increase or both cause a throat area to decrease.

When manufacturing one vane or several vanes for a single engine, or for multiple engines, a manufacturer may have available several individual (e.g., separately machined) vane components, including several vane components for each vane component type. Such vane components may be individually stored or kept in any of various storage locations. To assemble a vane, a manufacturer may arbitrarily select one vane component from each of the four vane component types and combine them together to form the vane. The manufacturer may repeat this process of arbitrarily selecting vane components to make several vanes.

At the outset of a manufacturing process to manufacture one or more vanes, where several individual vane components are available to form the vanes, there exist many different possible combinations of those vane components to form the one or more vanes. As an example, in various embodiments, a given vane stage128includes twenty vanes, each including four vane components. Statistically, those eighty vane components provide 624,000 different possible combinations of four vane components (one inner endwall, one outer endwall, one leading airfoil, and one trailing airfoil) to form the twenty vanes.
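As a hedged sketch of how such a space of candidate combinations might be enumerated before any parts are locked together, the following pairs one scanned virtual vane component of each of the four types; the function and argument names are hypothetical, and the component lists stand in for scan-mesh identifiers.

```python
from itertools import product

def virtual_combinations(inner_endwalls, outer_endwalls,
                         leading_airfoils, trailing_airfoils):
    """Enumerate every virtual combination of one scanned virtual vane
    component per component type.

    Each argument is a list of scanned virtual vane components (e.g.,
    scan-mesh identifiers). The enumeration yields one candidate
    combination per element of the Cartesian product of the four lists.
    """
    return product(inner_endwalls, outer_endwalls,
                   leading_airfoils, trailing_airfoils)
```

Each yielded tuple could then be passed to a virtual alignment and throat area calculation such as that described with reference toFIG.6below.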
Due to the various ways that different vane components can have different imperfections, the various possible combinations yield different possible throat area amounts. That is, among the various possible combinations, some combinations would yield better throat areas (closer to the nominal throat area) than others; one or more combinations would yield an M-number of optimal combinations having optimal throat areas (i.e., an M-number of throat areas closest to the nominal value) from among the various possible combinations; some possible combinations may yield non-optimal throat areas that are still within an acceptable range of throat areas (i.e., satisfy the specification); and some possible combinations may yield throat areas that are outside of an acceptable range of throat areas (i.e., outside the specification). In the latter case, if such a possible combination were actually manufactured, the resulting vane may be deemed unusable and discarded. Due to the arbitrary selection process, and the extremely high number of possible combinations, selecting optimal combinations of vane components to form one or more vanes through arbitrary selection is highly unlikely. Additionally, when the vane components are combined, they are fixedly coupled together to form a vane. Thereafter, a throat area of the vane is measured. If the throat area is determined to be less than optimal or outside an acceptable range, it is impractical, if not impossible, to disassemble the vane and form a new, reassembled vane with one or more new, replacement components, and determine if the reassembled vane provides a better throat area. The result, then, is that during manufacturing, vanes are assembled that cannot be used, and/or vane stages are assembled that have less than optimal throat areas, in turn leading to engines that perform less than optimally due to the less than optimal air flow through the vane stages. The present description describes improved vane manufacturing methods and/or gas turbine engine manufacturing methods that incorporate computing devices that perform virtual analysis of virtual three-dimensional representations of actual, physical vane components and determine which combinations of the physical vane components yield optimal throat areas. The manufacturing process can then intelligently select those combinations of physical vane components having optimal throat areas to form vanes included in a gas turbine engine. The result of the improved manufacturing methods is physical vanes and vane stages with better throat areas compared to if those vanes and vane stages were formed through arbitrary selection of the vane components. Moreover, the computer portion of the improved manufacturing processes is not merely to utilize a computer to perform functions and calculations faster than a person would otherwise do in real life. Rather, the computing devices are used to analyze data sets indicating virtual combinations of virtual representations of the actual, physical components, and calculate virtual throat areas of those virtual combinations, before the actual, physical components are combined together. As mentioned, a manufacturer may measure the throat area of a physical vane only after the vane components are fixedly combined together. That is, the manufacturer does not combine the vane components in some releasable manner, measure the throat area, and then solidify the combination only if the throat area is acceptable.
In contrast, the computing device performs virtual analysis on virtual combinations of virtual components to provide accurate predictions of what actual throat areas would be for their physical counterparts before the corresponding physical vane components are assembled. FIG.6is a flow chart of an example manufacturing method600for manufacturing a vane that includes an inner endwall, an outer endwall, a leading airfoil, and a trailing airfoil. A non-limiting example of such a vane is the vane200shown inFIGS.2-5. In addition, as described in further detail below, certain actions or functions, including those functions performed on electronic data, are performed using a controller, which is generally an electronic device or circuit, implemented in hardware or a combination of hardware and software, an example of which is described in further detail below with reference toFIG.9. At block602, a controller may identify a plurality of virtual combinations of a plurality of scanned virtual vane components. As used herein, a virtual vane component is a data set, configured to be stored in a memory, that three-dimensionally and virtually represents a physical vane component. An example type of data set is a three-dimensional scan mesh. A virtual vane component may be a virtual inner endwall, a virtual outer endwall, a virtual leading airfoil, or a virtual trailing airfoil. Also, a data set of a virtual vane component may include a plurality of data points, where each data point indicates a portion of a virtual surface of the virtual representation of the physical component. The controller may be configured to determine a virtual three-dimensional space (e.g., a virtual workspace) in which a virtual component can be virtually positioned. The controller can virtually move the virtual vane component around within the virtual workspace, such as in accordance with the six degrees of freedom associated with movement of an object. When a given virtual component is in a given virtual position in the virtual workspace, each of the data points of a data set of the given virtual component indicates a relative virtual position in the virtual workspace corresponding to the given virtual position. When the controller moves the virtual component to a different virtual position, the controller adjusts the relative virtual positions of each of the data points of the virtual component. Also, a scanned virtual vane component is a virtual vane component that is generated as a result of a three-dimensional scanning process performed on an actual, physical vane component that was manufactured, such as by machining. An example three-dimensional scanning process may be a blue light three-dimensional scanning process, although other types of scanning processes, including other types of optical scanning processes, may be possible. Accordingly, after a physical vane component is manufactured, such as by machining, the physical vane component is three-dimensionally scanned to generate a virtual vane component that corresponds to the physical vane component. The virtual vane component that is generated may be stored in a memory that is part of the controller, or to which the controller otherwise has access. In this sense, a scanned virtual vane component uniquely corresponds to a particular physical vane component that is manufactured.
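One way to picture such a data set is as a cloud of three-dimensional data points that the controller can rigidly reposition within the virtual workspace. The following is a minimal sketch under assumed names and structure, not the implementation described above: a scan mesh reduced to its vertex points, with a rigid move (one rotation plus one translation, standing in for the full six degrees of freedom) applied to every data point.

```python
import numpy as np

class ScannedVirtualComponent:
    """Hypothetical stand-in for a scanned virtual vane component:
    a set of 3-D data points, each indicating a portion of the
    virtual surface, positioned in a virtual workspace."""

    def __init__(self, component_id: str, points: np.ndarray):
        self.component_id = component_id               # unique ID (e.g., serial number)
        self.points = np.asarray(points, dtype=float)  # shape (N, 3) scan-mesh vertices

    def move(self, rotation: np.ndarray, translation: np.ndarray) -> None:
        # A rigid move adjusts the relative virtual position of every
        # data point; rotation is a 3x3 matrix, translation a 3-vector.
        self.points = self.points @ rotation.T + translation

# Example: rotate a 1,000-point scan 5 degrees about the z-axis, then
# translate it, mimicking movement within the virtual workspace.
theta = np.radians(5.0)
rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
scan = ScannedVirtualComponent("IE-0001", np.random.default_rng(0).random((1000, 3)))
scan.move(rz, np.array([0.0, 0.1, 0.0]))
```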
In addition, in various embodiments of a manufacturing process, a physical vane component that is manufactured may be assigned a unique identifier (ID), such as a serial number, that uniquely identifies the physical vane component. The ID may be formatted in any of various ways in order to uniquely identify the physical vane component from other physical vane components, including from physical vane components of the same vane component type. A virtual vane component that corresponds to a physical vane component may similarly be assigned the same unique ID as, or a unique ID corresponding to, the ID of the corresponding physical vane component. The controller may use the IDs of virtual vane components in order to select and identify certain vane components, and import them into the virtual workspace. Also, the controller may associate different data sets with a given virtual vane component ID. For example, the controller may be configured to identify several virtual positions of one virtual vane component within the virtual workspace. In turn, the controller may generate and store different data sets, each corresponding to a different position of the virtual vane component. The controller may associate these different data sets with one virtual vane component ID, in order to store and recognize the different positions of the virtual vane component. Another type of virtual vane component is a nominal virtual vane component. A nominal virtual vane component is a data set that three-dimensionally virtually represents a perfect vane component of a given vane component type. Accordingly, in various embodiments, the controller may be configured to recognize a nominal virtual inner endwall, a nominal virtual outer endwall, a nominal virtual leading airfoil, and a nominal virtual trailing airfoil. Unlike a scanned virtual vane component, a nominal virtual vane component corresponds to a model according to which physical vane components are manufactured. The model may be virtually generated using computing technology. For example, the model may be a computer-aided design (CAD) drawing generated using CAD software. Additionally, each nominal virtual vane component may have an associated predetermined virtual position within the virtual workspace. Accordingly, when the controller imports a given nominal virtual vane component into the virtual workspace, the controller positions the given nominal virtual vane component at its associated predetermined virtual position. Each scanned virtual vane component corresponds to, or has an association with, a nominal virtual vane component of the same type. That is, scanned virtual inner endwalls correspond to, or are associated with, the nominal virtual inner endwall; scanned virtual outer endwalls correspond to, or are associated with, the nominal virtual outer endwall; scanned virtual leading airfoils correspond to, or are associated with, the nominal virtual leading airfoil; and scanned virtual trailing airfoils correspond to, or are associated with, the nominal virtual trailing airfoil. For a given physical vane component that is perfectly manufactured, the corresponding virtual vane component matches the corresponding nominal virtual vane component. Likewise, for a given physical vane component that is imperfectly manufactured, the corresponding virtual vane component differs from the corresponding nominal virtual vane component depending on the imperfections of the given physical vane component. In addition to virtual vane components, the controller may identify virtual vanes.
A virtual vane is a data set, configured to be stored in memory, that three-dimensionally virtually represents a physical vane. Similar to its physical counterpart, a virtual vane includes, or is a combination of, a virtual inner endwall, a virtual outer endwall, a virtual leading airfoil, and a virtual trailing airfoil. Like the individual virtual vane components, the controller may be configured to select a virtual vane or import a virtual vane into the virtual workspace. In the virtual workspace, the controller may virtually move the virtual vane around in any of various three-dimensional virtual positions. When a virtual vane is in a given virtual position within the virtual workspace at a given point in time, a data set of the virtual vane may indicate the given virtual position. Correspondingly, when the controller moves the virtual vane to a different virtual position in the virtual workspace, the controller modifies the data set to indicate the different virtual position. In various embodiments, the controller may store different data sets associated with a single virtual vane in order to indicate different virtual positions of the single virtual vane. Also, a virtual vane may be of one of two types, including a scanned virtual vane and a nominal virtual vane. A scanned virtual vane is a combination of scanned virtual vane components, including a combination of a scanned virtual inner endwall, a scanned virtual outer endwall, a scanned virtual leading airfoil, and a scanned virtual trailing airfoil. Two scanned virtual vanes may differ from each other by having at least one scanned virtual vane component of the same vane component type that differs between them, such as by having different virtual vane component IDs. For example, two scanned virtual vanes are different from each other by respectively having different scanned virtual inner endwalls. As mentioned, the controller may identify two scanned virtual vane components of the same vane component type as being different from each other by identifying different IDs for the two scanned virtual vane components. Additionally, a nominal virtual vane is a combination of nominal virtual vane components, including a combination of a nominal virtual inner endwall, a nominal virtual outer endwall, a nominal virtual leading airfoil, and a nominal virtual trailing airfoil. In various embodiments, a nominal virtual vane has a predetermined virtual position within the virtual workspace. Accordingly, where the controller imports the nominal virtual vane into the virtual workspace, the controller virtually positions the nominal virtual vane at its predetermined virtual position in the virtual workspace. As mentioned, at block602, the controller identifies a plurality of virtual combinations of a plurality of scanned virtual vane components. One virtual combination of scanned virtual vane components is one scanned virtual vane. In various embodiments, at block602, the plurality of scanned virtual vane components may be stored in memory, and the controller may access the memory to identify multiple, different virtual combinations of scanned virtual vane components that the plurality of scanned virtual vane components can form. In various embodiments, the number of virtual combinations may be all of the possible combinations that can be formed, or may be less than all of the possible combinations.
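The enumeration of virtual combinations at block602can be pictured as a product over the stored component IDs, optionally capped when fewer than all possible combinations are to be considered. The sketch below is hypothetical; the IDs and the capping strategy are assumptions, not the described implementation.

```python
from itertools import islice, product

# Hypothetical component IDs held in memory, one list per vane
# component type.
inner_endwalls    = ["IE-0001", "IE-0002", "IE-0003"]
outer_endwalls    = ["OE-0001", "OE-0002", "OE-0003"]
leading_airfoils  = ["LA-0001", "LA-0002", "LA-0003"]
trailing_airfoils = ["TA-0001", "TA-0002", "TA-0003"]

def identify_virtual_combinations(limit=None):
    """Yield (inner, outer, leading, trailing) ID tuples; each tuple
    names one candidate scanned virtual vane. 'limit' caps the count
    when fewer than all possible combinations are to be considered."""
    combos = product(inner_endwalls, outer_endwalls,
                     leading_airfoils, trailing_airfoils)
    return islice(combos, limit) if limit is not None else combos

for combo in identify_virtual_combinations(limit=5):
    print(combo)
```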
At block604, the controller calculates a plurality of virtual throat areas, each for a respective one of the plurality of virtual combinations identified at block602. Similar to how an actual throat area for a physical vane can be measured, the controller may be configured to identify those data points of a given virtual vane, formed from a given virtual combination, in a given virtual position in the virtual workspace, that form a set of virtual throat window points. Upon identifying the virtual throat window points of the data set, the controller may calculate a virtual throat area for the combination. In general, calculating a throat area using throat window points, actual or virtual, is known and outside the scope of the present description. As described in further detail below, at block604, for each virtual combination, the controller may virtually align mating surfaces of the scanned virtual vane components of a respective combination in order to determine an aligned virtual vane corresponding to the virtual combination. Upon determining the aligned virtual vane, the controller may calculate a virtual throat area for the aligned virtual vane and the corresponding virtual combination. At block606, the manufacturing method600may transition to a physical selection process that selects a physical combination of physical vane components. The physical combination may include a physical inner endwall, a physical outer endwall, a physical leading airfoil, and a physical trailing airfoil. The physical combination is a subset of a plurality of physical vane components that were three-dimensionally scanned to generate the plurality of scanned virtual vane components for which the virtual combinations are determined at block602. The selection performed at block606is dependent on the virtual throat areas calculated at block604. In particular, the selection is dependent on an optimal virtual combination that has an optimal virtual throat area from among the plurality of virtual throat areas calculated at block604. In various embodiments, an optimal virtual throat area may be one of an M-number (M being one or more) of best virtual throat areas among the plurality of virtual throat areas calculated at block604. In general, the closer a calculated virtual throat area is to a predetermined nominal throat area value, the better the virtual throat area is. Accordingly, a best virtual throat area is one of an M-number of virtual throat areas, from among the plurality of virtual throat areas, closest to the nominal throat area value. The selection performed at block606may be performed manually or through computer automation. For example, in various embodiments, the controller may be coupled to an electronic display that displays at least an M-number of optimal virtual combinations having the M optimal virtual throat areas. The information that is displayed may indicate the physical vane components to select to form the physical combination—i.e., the physical vane. A person (or several persons) manually performing the selection process may analyze the display to determine the physical vane components to select. In other embodiments, an automated process may utilize a machine that is configured to identify, select, and grasp physical vane components from storage.
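The M-best selection described above reduces to ranking the calculated virtual throat areas by their distance from the nominal value. A minimal sketch follows, with placeholder areas and an assumed nominal value.

```python
# Illustrative ranking of virtual combinations by how close their
# calculated virtual throat areas fall to a nominal value. The areas,
# IDs, and nominal value are hypothetical placeholders.
NOMINAL_THROAT_AREA = 1.000

virtual_throat_areas = {
    ("IE-0001", "OE-0002", "LA-0001", "TA-0003"): 1.012,
    ("IE-0002", "OE-0001", "LA-0003", "TA-0001"): 0.996,
    ("IE-0003", "OE-0003", "LA-0002", "TA-0002"): 0.981,
}

def m_best_combinations(areas, m):
    """Return the M combinations whose virtual throat areas deviate
    least from the nominal throat area value."""
    ranked = sorted(areas.items(),
                    key=lambda item: abs(item[1] - NOMINAL_THROAT_AREA))
    return ranked[:m]

for combo, area in m_best_combinations(virtual_throat_areas, m=2):
    print(combo, area)
```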
The machine may electronically receive information identifying at least the M-number of optimal virtual combinations from the controller, and in response, select the physical vane components of the physical combination corresponding to one of the M-number of optimal virtual combinations. At block608, the physical vane components of the selected physical combination are assembled together to form a physical vane. As a result of the assembly at block608, the physical vane has a physical throat area that is better, or at least statistically much more likely to be better, than if blocks602and604were never performed and the physical vane components were instead arbitrarily selected at random. At block608, the physical vane components may be assembled or combined together in any of various ways. In general, the selected physical vane components may be moved to a tooling that loads the mating surfaces of the physical components against each other to bring the physical vane components into contact with each other. Subsequently, a casting process may be performed that joins and locks together the physical vane components so that they become one single physical vane. Also, for at least some example embodiments, the actual throat area of the physical vane may be measured as part of an inspection process. Because the physical vane components were intelligently selected based on the virtual processes at blocks602and604, the actual throat area measured at block608is much more likely to yield an optimal, or at least acceptable, value compared to if the physical vane components were just arbitrarily selected. Accordingly, through performance of the virtual processes at blocks602and604, the likelihood that the physical vane produced at block608will have to be discarded due to having an unacceptable throat area is reduced, in turn leading to greater efficiency and cost savings. As previously described, at block604, in order for the controller to calculate the plurality of virtual throat areas, the controller may virtually align mating surfaces of the scanned virtual vane components of respective virtual combinations before calculating the virtual throat areas.FIG.7is a flow chart of an example virtual throat area calculation method700, involving virtually aligning mating surfaces of scanned virtual vane components, that the controller may perform for block604in order to calculate the plurality of virtual throat areas. At block702, the controller may determine whether there are any virtual combinations for which to calculate a corresponding virtual throat area. As previously described, the controller may identify a plurality of virtual combinations of scanned virtual vane components, such as at block602inFIG.6. In various embodiments, the controller may use the plurality of virtual combinations it identified at block602to make the determination at block702. If the controller determines it has calculated virtual throat areas for all of the identified virtual combinations, then the controller determines that there are no further virtual throat area calculations to make, and the method700may end. Alternatively, at block702, if the controller identifies a virtual combination for which to calculate a corresponding virtual throat area, then the method700may proceed to block704.
As previously described, a given virtual combination that the controller identifies may have an associated scanned virtual inner endwall, a scanned virtual outer endwall, a scanned virtual leading airfoil, and a scanned virtual trailing airfoil. Accordingly, by identifying a given virtual combination, the controller is able to identify those scanned virtual vane components associated with the given virtual combination. At block704, the controller may determine an aligned virtual position in the virtual workspace for a scanned virtual inner endwall of the identified virtual combination. To do so, the controller may import the scanned virtual inner endwall into the virtual workspace, and perform an initial alignment to determine an initial virtual position for the scanned virtual inner endwall. Additionally, for at least some embodiments, after determining the initial virtual position, the controller then performs a secondary alignment that fine-tunes the initial virtual position to arrive at the aligned virtual position for the scanned virtual inner endwall. In various embodiments, the controller may align the scanned virtual inner endwall to the nominal virtual inner endwall to determine an initial virtual position. As previously described, the nominal virtual inner endwall may have an associated predetermined virtual position in the virtual workspace. Accordingly, when performing the initial alignment, the controller aligns the scanned virtual inner endwall to the nominal virtual inner endwall, with the nominal virtual inner endwall positioned at its predetermined virtual position. Also, in various embodiments, the controller may execute a local best fit algorithm in order to align the scanned virtual inner endwall to the nominal virtual inner endwall. For at least some embodiments, the local best fit algorithm may be executed on all of the data points of the data sets of the scanned and nominal virtual inner endwalls. In various embodiments, the secondary alignment is a six-point relative point system (RPS) alignment. When performing the RPS alignment, the controller identifies six points or datums of the nominal virtual inner endwall, and aligns the scanned virtual inner endwall to the six datums. The six datums may include three primary datums, two secondary datums, and one tertiary datum. The controller may identify one or more sequences associated with the datums in order to perform the RPS alignment. A given sequence identifies an order of primary, secondary, and tertiary datums to which the controller aligns the scanned virtual inner endwall. A first sequence indicates to the controller to align the scanned virtual inner endwall to the primary datums first, to the secondary datums second, and to the tertiary datum last. A second sequence indicates to the controller to align the scanned virtual inner endwall to the secondary datums first, to the tertiary datum second, and to the primary datums last. Other sequences are possible, and in general, the controller may position the primary, secondary, and tertiary datums in any order for a given sequence. Accordingly, the controller may perform the RPS alignment in one or more iterations, with each iteration being performed according to a different sequence of datums. At the end of an iteration, the controller may determine an updated virtual position, updated from the initial virtual position, for the scanned virtual inner endwall.
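The iterative datum-sequence alignment just described might be skeletonized as follows. The geometry routines are abstracted into passed-in callables, and the sequences, tolerance, and stopping rule (one possibility, elaborated in the next paragraph) are assumptions rather than the described implementation.

```python
# Skeleton of the iterative datum-sequence (RPS) alignment. 'align_to'
# and 'residual' stand in for the geometric alignment and error
# computations, which are abstracted away here.
SEQUENCES = [
    ("primary", "secondary", "tertiary"),   # first sequence
    ("secondary", "tertiary", "primary"),   # second sequence
]

def rps_align(initial_position, datums, align_to, residual, tolerance=1e-4):
    """Try datum sequences until one aligns the scanned component to
    the six datums closely enough; return the best position found."""
    best_position, best_error = initial_position, float("inf")
    for sequence in SEQUENCES:
        position = initial_position
        for group in sequence:                  # e.g., primary datums first
            position = align_to(position, datums[group])
        error = residual(position, datums)      # how far from the datums?
        if error < best_error:
            best_position, best_error = position, error
        if error <= tolerance:                  # sufficiently aligned: stop
            break
    return best_position
```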
For at least some embodiments, the controller may perform a current iteration, and then condition whether to perform a next iteration on whether the current iteration has sufficiently aligned the scanned virtual inner endwall to the datums. If it has, then the controller may determine to end the RPS alignment. Alternatively, if the controller determines that the current iteration has not sufficiently aligned the scanned virtual inner endwall to the datums, then the controller may determine to perform a next iteration according to a different sequence of datums. In other embodiments, the controller may perform a predetermined number of iterations, each according to a different one of a plurality of sequences of datums. At the end of the last iteration, the controller may determine which iteration provided the best alignment. For any of the embodiments, at the end of the last iteration, the controller determines a final virtual position of the scanned virtual inner endwall, which is one of the one or more updated virtual positions determined from the one or more iterations. For at least some embodiments, the controller fixes or locks the virtual position of the scanned virtual inner endwall for a remaining portion of the alignment process. That is, the controller may align the remaining scanned virtual components, namely the scanned virtual leading airfoil, the scanned virtual trailing airfoil, and the scanned virtual outer endwall, to the scanned virtual inner endwall by keeping the virtual position of the scanned virtual inner endwall fixed, and virtually moving the other scanned virtual components in order to properly align them to each other. In further detail, at block706, the controller may determine an aligned virtual position in the virtual workspace for a first scanned virtual airfoil of the identified virtual combination. The first scanned virtual airfoil may be the scanned virtual leading airfoil or the scanned virtual trailing airfoil. That is, as described further in connection with block708, the controller aligns both the scanned virtual leading airfoil and the scanned virtual trailing airfoil to the scanned virtual inner endwall. In various embodiments, the controller may perform the alignment for the scanned virtual leading airfoil first and the scanned virtual trailing airfoil second. In other embodiments, the controller may perform the alignment for the scanned virtual trailing airfoil first and the scanned virtual leading airfoil second. To determine an aligned virtual position for the first scanned virtual airfoil, the controller may import the first scanned virtual airfoil into the virtual workspace, and perform an initial alignment to determine an initial virtual position for the first scanned virtual airfoil. Additionally, for at least some embodiments, after determining the initial virtual position, the controller then performs a secondary alignment that fine-tunes the initial virtual position to arrive at the aligned virtual position for the first scanned virtual airfoil. To perform the initial alignment, the controller may align the first scanned virtual airfoil to a first nominal virtual airfoil to determine an initial virtual position. (If the first scanned virtual airfoil is the scanned virtual leading airfoil, then the first nominal virtual airfoil is the nominal virtual leading airfoil; and if the first scanned virtual airfoil is the scanned virtual trailing airfoil, then the first nominal virtual airfoil is the nominal virtual trailing airfoil.)
As previously described, the first nominal virtual airfoil may have an associated predetermined virtual position in the virtual workspace. Accordingly, when performing the initial alignment, the controller aligns the first scanned virtual airfoil to the first nominal virtual airfoil, with the first nominal virtual airfoil positioned at its predetermined virtual position. Also, in various embodiments, the controller may execute a local best fit algorithm in order to align the first scanned virtual airfoil to the first nominal virtual airfoil. For at least some embodiments, the local best fit algorithm may be executed on all of the data points of the data sets of the first scanned and nominal virtual airfoils. In the secondary alignment process, the controller aligns the inner mating surface of the first scanned virtual airfoil to a first mating surface of the scanned virtual inner endwall. To do so, the controller keeps the scanned virtual inner endwall fixed in its aligned virtual position, and adjusts the initial virtual position of the first scanned virtual airfoil to identify a local best fit between the two virtual mating surfaces. In further detail, the controller highlights the inner mating surface of the first scanned virtual airfoil and the first mating surface of the scanned virtual inner endwall for execution of a local best fit algorithm. For example, the controller identifies the data points of the data set of the first scanned virtual airfoil that represent or comprise the inner mating surface of the first scanned airfoil, and similarly, the controller identifies the data points of the data set of the scanned virtual inner endwall that represent or comprise the first mating surface of the scanned virtual inner endwall. For at least some example embodiments, the controller may be configured to execute a projection function in order to identify the data points comprising the virtual mating surfaces. In the projection function, the controller identifies the data points that represent the inner mating surface of the first nominal virtual airfoil, and then projects those data points onto the first scanned virtual airfoil in order to determine those data points representing or comprising the inner mating surface of the first scanned virtual airfoil. Similarly, the controller identifies the data points that represent the first mating surface of the nominal virtual inner endwall, and projects those data points onto the scanned virtual inner endwall in order to determine those data points representing or comprising the first mating surface of the scanned virtual inner endwall. Additionally, for at least some example embodiments, the projection may result in determining a number of data points for a mating surface of a given scanned virtual vane component that is less than all of the data points of the mating surface, depending on any imperfections of the physical mating surface of the corresponding physical vane component. Accordingly, the data points that the controller determines when highlighting a given mating surface may be less than 100% of the data points representing or comprising the given mating surface, and that percentage may vary depending on the imperfections of the corresponding physical mating surface. Upon highlighting the two virtual mating surfaces, the controller may align the inner mating surface of the first scanned virtual airfoil to the first mating surface of the scanned virtual inner endwall.
To do so, the controller may execute a local best fit algorithm to determine a local best fit of the two virtual mating surfaces. By first highlighting the two virtual mating surfaces before performing the local best fit, the local best fit algorithm is executed on the two virtual mating surfaces only, without the remaining portions of the first scanned virtual airfoil and the scanned virtual inner endwall. In general, the controller may execute the local best fit algorithm in one or more iterations. During each iteration, the controller may virtually move the first scanned virtual airfoil a certain amount, such as in one or more of the six degrees of freedom, and calculate the percentage of the inner mating surface of the first scanned virtual airfoil that is virtually contacting the first mating surface of the scanned virtual inner endwall. Also, for at least some example embodiments, the controller executes the local best fit algorithm according to a movement constraint that limits how much the first scanned virtual airfoil can virtually move during each iteration. Without the movement constraint, the local best fit algorithm may determine an aligned virtual position for the first scanned virtual airfoil that corresponds to a physically impossible, or otherwise noncompliant, position for the corresponding physical airfoil relative to the physical inner endwall. For particular embodiments, the movement constraint value is equal to, or corresponds to, the deviation of an upper bound or a lower bound from a nominal value of a manufacturing tolerance range of one or more of the physical mating surfaces when manufacturing the physical vane components. For example, a general tolerance range may be a given nominal value X plus-or-minus Y. For such an example, the movement constraint may be Y. A particular value for Y and/or the movement constraint is 0.002 inches, although other movement constraint values may be possible. Upon executing the local best fit algorithm for the two highlighted mating surfaces, the controller may determine an aligned virtual position of the first scanned virtual airfoil. In particular embodiments, the controller may store the aligned virtual position in memory. At block708, the controller may determine an aligned virtual position for a second scanned virtual airfoil, where the second scanned virtual airfoil is the other of the scanned virtual leading airfoil and the scanned virtual trailing airfoil, the one not virtually aligned at block706. In general, the controller may determine the aligned virtual position for the second scanned virtual airfoil in the same way it determines the aligned virtual position for the first scanned virtual airfoil at block706. That is, the controller first aligns the second scanned virtual airfoil to a second nominal virtual airfoil in its predetermined virtual position to determine an initial virtual position for the second scanned virtual airfoil. Then, the controller highlights an inner mating surface of the second scanned virtual airfoil and a second mating surface of the scanned virtual inner endwall, and then executes a local best fit algorithm on the virtual mating surfaces, with a movement constraint for the second scanned virtual airfoil. In particular embodiments, the movement constraint for the best fit algorithm executed for the second scanned virtual airfoil is the same as the movement constraint for the best fit algorithm executed for the first scanned virtual airfoil at block706.
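A toy version of the constrained local best fit on two highlighted mating surfaces is sketched below. It is translation-only and assumes the two point arrays are already in one-to-one correspondence; a real local best fit would establish point correspondences and solve over all six degrees of freedom. The 0.002-inch default mirrors the movement constraint example above, while the contact tolerance is an assumption.

```python
import numpy as np

def constrained_local_best_fit(moving_pts, fixed_pts, constraint=0.002,
                               iterations=10, contact_tol=0.001):
    """Toy translation-only local best fit between two highlighted
    mating surfaces (equal-shape point arrays, in inches). Each
    iteration moves the surface toward the mean residual, clamped so
    no single step exceeds the movement constraint."""
    pts = np.asarray(moving_pts, dtype=float).copy()
    fixed = np.asarray(fixed_pts, dtype=float)
    contact = 0.0
    for _ in range(iterations):
        step = (fixed - pts).mean(axis=0)            # mean residual vector
        norm = np.linalg.norm(step)
        if norm > constraint:                        # movement constraint
            step *= constraint / norm
        pts += step
        gaps = np.linalg.norm(fixed - pts, axis=1)
        contact = float(np.mean(gaps <= contact_tol))  # fraction "in contact"
        if contact >= 0.99:
            break
    return pts, contact
```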
Upon executing the local best fit algorithm for the two highlighted mating surfaces, the controller may determine an aligned virtual position of the second scanned virtual airfoil. In particular embodiments, the controller may store the aligned virtual position in memory. At block710, the controller may determine an aligned virtual position in the virtual workspace for the scanned virtual outer endwall of the identified virtual combination. To do so, the controller may import the scanned virtual outer endwall into the virtual workspace. The controller may then perform an initial alignment to determine an initial virtual position for the scanned virtual outer endwall, and subsequently perform a secondary alignment that fine-tunes the initial virtual position to arrive at the aligned virtual position for the scanned virtual outer endwall. To perform the initial alignment, the controller may align the scanned virtual outer endwall to a nominal virtual outer endwall to determine an initial virtual position. As previously described, the nominal virtual outer endwall may have an associated predetermined virtual position in the virtual workspace. Accordingly, when performing the initial alignment, the controller aligns the scanned virtual outer endwall to the nominal virtual outer endwall, with the nominal virtual outer endwall positioned at its predetermined virtual position. Also, similar to the other initial alignments, the controller may execute a local best fit algorithm on the data set comprising the scanned virtual outer endwall and the data set comprising the nominal virtual outer endwall. After determining the initial virtual position for the scanned virtual outer endwall, the controller then performs the secondary alignment to determine the aligned virtual position of the scanned virtual outer endwall, and, for at least some embodiments, to determine updated aligned virtual positions of the first scanned virtual airfoil and/or the second scanned virtual airfoil. To do so, the controller may virtually align the first and second mating surfaces of the scanned virtual outer endwall with the outer mating surfaces of the first and second scanned virtual airfoils, respectively. Accordingly, at block710, the secondary alignment may include two virtual alignments on two pairs of virtual mating surfaces, including a first pair that includes the outer mating surface of the first scanned virtual airfoil and the first mating surface of the scanned virtual outer endwall, and a second pair that includes the outer mating surface of the second scanned virtual airfoil and the second mating surface of the scanned virtual outer endwall. Similar to the secondary alignments performed at blocks706and708, the controller may highlight the two pairs of virtual mating surfaces, and execute a pair of local best fit algorithms on the two pairs, including a first local best fit algorithm of the highlighted data points of the outer mating surface of the first scanned virtual airfoil and the first mating surface of the scanned virtual outer endwall, and a second local best fit algorithm of the highlighted data points of the outer mating surface of the second scanned virtual airfoil and the second mating surface of the scanned virtual outer endwall. The controller may be configured to perform the first and second local best fit algorithms in various ways to obtain the local best fits.
For at least some embodiments, the controller may virtually move the scanned virtual outer endwall from its initial virtual position in at least one iteration of the first local best fit algorithm and/or at least one iteration of the second local best fit algorithm. In addition or alternatively, the controller may virtually move the first scanned virtual airfoil during at least one iteration of the first local best fit algorithm, and/or virtually move the second scanned virtual airfoil during at least one iteration of the second local best fit algorithm. In other words, during the first local best fit algorithm, the controller may virtually move the scanned virtual outer endwall, may virtually move the first scanned virtual airfoil, or a combination thereof; and during the second local best fit algorithm, the controller may virtually move the scanned virtual outer endwall, may virtually move the second scanned virtual airfoil, or a combination thereof. For embodiments where the controller virtually moves the scanned virtual outer endwall during the first local best fit algorithm and/or the second local best fit algorithm, then upon completing the first and second local best fit algorithms, the controller may determine an aligned virtual position for the scanned virtual outer endwall, which may be the same as, or different from, the initial virtual position of the scanned virtual outer endwall. Also, the aligned virtual positions of the first and second scanned virtual airfoils determined at blocks706and708may be starting virtual positions for the scanned virtual airfoils when the controller performs the first and second local best fit algorithms, respectively. For embodiments where the controller virtually moves the first scanned virtual airfoil during the first local best fit algorithm, then upon completing the first local best fit algorithm, the controller may determine an updated aligned virtual position for the first scanned virtual airfoil, which may be the same as, or different from, the aligned virtual position for the first scanned virtual airfoil determined at block706. Similarly, for embodiments where the controller virtually moves the second scanned virtual airfoil during the second local best fit algorithm, then upon completing the second local best fit algorithm, the controller may determine an updated aligned virtual position for the second scanned virtual airfoil, which may be the same as, or different from, the aligned virtual position for the second scanned virtual airfoil determined at block708. Also, for at least some embodiments of block710, the controller performs the first and second local best fit algorithms sequentially. That is, the controller performs the entirety of the first local best fit algorithm first, and then performs the entirety of the second local best fit algorithm; or performs the entirety of the second local best fit algorithm first, and then performs the entirety of the first local best fit algorithm. In other embodiments, the controller interleaves the iterations of the first and second local best fit algorithms together. For example, the controller may perform one or more iterations of one of the local best fit algorithms, then perform one or more iterations of the other local best fit algorithm, then move back to the initial local best fit algorithm, and so on.
For any of the various embodiments, if the controller moves the scanned virtual outer endwall to a new virtual position during a given iteration of one of the local best fit algorithms, the controller may use that new virtual position when performing an initial or next iteration of the other local best fit algorithm. In addition, for at least some example embodiments of block710, in addition to virtually aligning the first pair of mating surfaces together, and the second pair of mating surfaces together, the controller may also perform additional virtual alignments on the inner mating surfaces of the first and second scanned virtual airfoils with the first and second mating surfaces of the scanned virtual inner endwall. Accordingly, the controller may highlight a third pair of mating surfaces including the inner mating surface of the first scanned virtual airfoil and the first mating surface of the scanned virtual inner endwall, and perform a third local best fit algorithm on the third pair of mating surfaces; and/or may highlight a fourth pair of mating surfaces including the inner mating surface of the second scanned virtual airfoil and the second mating surface of the scanned virtual inner endwall, and perform a fourth local best fit algorithm on the fourth pair of mating surfaces. In various embodiments, the controller may perform the first, second, third, and/or fourth local best fit algorithms sequentially. In other embodiments, the controller may interleave one or more iterations of the various local best fit algorithms. Also, in various embodiments, similar to the local best fit algorithms performed at blocks706and708, the controller may perform the first, second, third, and/or fourth local best fit algorithms at block710according to one or more movement constraints that limit the virtual movement of the first scanned virtual airfoil, the second scanned virtual airfoil, and/or the scanned virtual outer endwall to within a certain amount in any of the six degrees of freedom. In particular embodiments, the movement constraint used at block710is larger than the movement constraint used at block706and/or block708. In some of these embodiments, the movement constraint used at block710is in a range of about 1.5 to 2.5 times larger than the movement constraint used at block706and/or at block708. In particular embodiments, the movement constraint is two times the movement constraint used at block706and/or at block708. In addition or alternatively, the movement constraint used at block710depends on a tolerance range associated with manufacturing the physical mating surfaces corresponding to the virtual mating surfaces. For example, the movement constraint used at block710is equal to the difference between the upper bound and the lower bound of the tolerance range (or a size of the tolerance window defined by the upper and lower bounds), where such difference (or window size) is greater than the difference between the upper bound and the nominal value, or between the lower bound and the nominal value. In one particular non-limiting example, the movement constraint used at blocks706and708is 0.002 inches, equal to the absolute value of the difference between the upper bound of the tolerance range and the nominal value or between the lower bound of the tolerance range and the nominal value; and the movement constraint used at block710is 0.004 inches, equal to the difference between the upper and lower bounds.
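The relationship between the tolerance range and the two movement constraints in the preceding non-limiting example can be stated in a few lines; the values below simply restate that example.

```python
# Tolerance range of nominal X plus-or-minus Y, per the non-limiting
# example above (Y = 0.002 inches).
Y = 0.002  # inches: |tolerance bound - nominal value|

airfoil_constraint = Y            # blocks 706 and 708: half the window
outer_endwall_constraint = 2 * Y  # block 710: full window (upper - lower)
print(airfoil_constraint, outer_endwall_constraint)  # 0.002 0.004
```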
Upon completing the first and second local best fit algorithms at block710, with or without having performed the third and/or fourth local best fit algorithms, the controller may determine an aligned virtual position of the scanned virtual outer endwall, and for at least some embodiments, may also determine an updated aligned virtual position of the first scanned virtual airfoil and/or an updated aligned virtual position of the second scanned virtual airfoil. Upon completing the virtual alignment at block710, at block712, the controller may identify an aligned virtual vane for the identified virtual combination, which is a combination of the four virtual vane components virtually positioned at their respective final aligned virtual positions determined at the end of block710. Accordingly, the aligned virtual vane that the controller determines at block712may have an associated aligned virtual position that is a combination of the aligned virtual positions, determined from blocks704-710, of the four virtual vane components forming the aligned virtual vane. For at least some example embodiments, after identifying the aligned virtual vane at block712, at block714, the controller may perform a fine-tuning alignment on the aligned virtual vane to determine an updated aligned virtual vane. In particular, the controller may align the aligned virtual vane to the nominal virtual vane. Similar to block704, the controller may perform an RPS alignment on the aligned virtual vane relative to the nominal virtual vane in order to determine the updated aligned virtual vane. The controller may perform the RPS alignment in one or more iterations, and with each iteration, the controller may perform the RPS alignment according to a different datum sequence of primary, secondary, and tertiary datums. In other example embodiments, the controller may skip the fine-tuning at block714, and proceed directly to block716. At block716, the controller may identify which data points of the data set of the aligned virtual vane (updated aligned virtual vane if block714is performed) are throat window points for the (updated) aligned virtual vane. At block718, the controller calculates a virtual throat area from the throat window points for the virtual combination. As mentioned, the virtual throat area may be calculated in terms of units (e.g., thousandths of an inch) or in terms of a percentage, such as a percentage of, or a percent deviation from, a nominal value. The particular ways a controller determines the throat window points of a data set of a three-dimensional virtual vane at block716, and calculates a virtual throat area at block718, are known and outside the scope of the present description. For at least some embodiments, practically, there is not enough available time for the controller to calculate virtual throat areas for all of the various possible combinations that might be available to the controller at a given point in time of a manufacturing method. For example, as mentioned, a vane stage including twenty vanes with a total of eighty vane components may yield approximately 624,000 different possible virtual combinations of vanes that the controller can identify, and for which it can perform virtual alignments and calculate virtual throat areas. Even with present computing capabilities, performing virtual alignments and calculating virtual throat areas for such a large number of possible combinations could take an impracticable amount of time, such as on the order of months or even years.
As mentioned, in various embodiments of block602ofFIG.6, the number of virtual combinations that the controller identifies may be less than all of the possible combinations in order to reduce the process time that the controller takes to calculate the virtual throat areas at block604. In various embodiments, the controller may calculate a first set of virtual throat areas for an initial set of virtual combinations of a plurality of virtual vane components, and then select a subset or a reduced number of virtual vane components from the plurality of virtual vane components based on the first set of virtual throat areas. Then, the controller may identify virtual combinations of the subset, and calculate virtual throat areas of those virtual combinations, such as according to the method700ofFIG.7. Doing so may reduce the number of virtual combinations for which the controller calculates virtual throat areas from hundreds of thousands to a reduced number in the low thousands, or to even a number under 1,000 or under 100. FIG.8is a flow chart of another example manufacturing method800for manufacturing a vane that includes an inner endwall, an outer endwall, a leading airfoil, and a trailing airfoil. At block802, the controller may identify a first set of virtual combinations of a plurality of scanned virtual vane components. For block802, the scanned virtual vane components of a given virtual combination of the first set are all different from the scanned virtual vane components of the other virtual combinations of the first set. Otherwise stated, if the controller uses a given scanned virtual vane component for one virtual combination of the first set, it does not use that given scanned virtual vane component for any other virtual combination of the first set. Upon identifying the first set of virtual combinations, the controller may then, at block804, calculate a first set of virtual throat areas for each of the virtual combinations of the first set. For example, the controller may perform the method700on the first set of virtual combinations. At block806, the controller may determine a second set of virtual combinations of the plurality of virtual vane components based on the first set of calculated virtual throat areas. For at least some embodiments, the controller may determine an M-number of optimal virtual throat areas of an M-number of virtual combinations of the first set, and add those M-number of virtual combinations to the second set of virtual combinations. In addition or alternatively, the controller may form at least one virtual combination for the second set based on a combination of two or more virtual combinations of the first set. The controller may select two or more virtual combinations from the first set, for a virtual combination of the second set, based on their associated virtual throat areas, with the expectation that "mix-and-matching" the virtual vane components of the multiple virtual combinations of the first set will yield one or more different virtual combinations with better virtual throat areas. For example, the controller may select two virtual combinations from the first set with associated virtual throat areas having opposite polarities of deviation from the nominal virtual throat area, with a first virtual throat area being higher than the nominal amount, and the second virtual throat area being lower than the nominal amount.
The controller may then select one or more virtual vane components from the first virtual combination and one or more virtual vane components from the second virtual combination, to form a virtual combination for the second set. The expectation is that since one virtual throat area is higher than the nominal amount and the other virtual throat area is lower than the nominal amount, then forming one or more new virtual combinations from the original two virtual combinations would offset their individual higher and lower deviations, resulting in at least one new virtual combination for the second set that has a virtual throat area closer to the nominal value than the virtual throat areas of the original two virtual combinations. In addition to selecting virtual combinations of the first set based on whether their calculated virtual throat areas are higher or lower than the nominal amount, the controller may also select virtual combinations from the first set based on the degrees to which, or how much, their virtual throat areas are higher or lower. That is, the controller may select, for mix-and-matching, virtual combinations from the first set having virtual throat areas with opposite polarities but the same or similar magnitude deviations. To illustrate, when calculating virtual throat areas for the first set, suppose hypothetically that the controller calculates a first virtual throat area of a first virtual combination that is 5% greater than the nominal value, a second virtual throat area of a second virtual combination that is 1% less than the nominal value, and a third virtual throat area of a third virtual combination that is 4.5% less than the nominal value. In turn, the controller may determine to form a virtual combination of the second set based on the first virtual combination and the third virtual combination of the first set since they have virtual throat areas with opposite polarities of deviation and sufficiently close magnitudes of deviation. In addition, the controller may determine not to form a virtual combination for the second set based on the first and second virtual combinations of the first set since their magnitudes of deviations are not sufficiently close despite having opposite polarities of deviation. In addition, the controller may determine not to form a virtual combination for the second set based on the second and third virtual combinations since they have the same polarity of deviation. Upon identifying the second set of virtual combinations, at block808the controller may calculate a second set of virtual throat areas for the second set of virtual combinations. Similar to block804, the controller may perform the method700on the second set of virtual combinations in order to calculate the second set of virtual throat areas. At block810, the manufacturing method800may transition to a physical selection process that selects a physical combination of physical vane components, as in block606ofFIG.6. Like block606, the selection of the physical combination is dependent on the virtual throat areas calculated for the second set of virtual combinations at block808. At block812, the physical vane components of the selected physical combination are assembled together to form a physical vane.
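The opposite-polarity pairing heuristic used at block806, illustrated above with the hypothetical 5%, -1%, and -4.5% deviations, can be sketched as follows. The closeness threshold and values are assumptions chosen to reproduce that example, not parameters from the described method.

```python
# Sketch of the opposite-polarity pairing heuristic: pair first-set
# combinations whose virtual throat areas deviate from nominal in
# opposite directions and by similar magnitudes.
NOMINAL = 1.000
first_set = {
    "combo_1": 1.050,   # +5.0% deviation
    "combo_2": 0.990,   # -1.0% deviation
    "combo_3": 0.955,   # -4.5% deviation
}

def pair_for_second_set(areas, nominal=NOMINAL, closeness=0.25):
    """Yield pairs with opposite deviation polarity whose deviation
    magnitudes differ by no more than 'closeness' (fractional)."""
    items = list(areas.items())
    for i, (name_a, area_a) in enumerate(items):
        for name_b, area_b in items[i + 1:]:
            dev_a, dev_b = area_a - nominal, area_b - nominal
            if dev_a * dev_b < 0:  # opposite polarities only
                mags = sorted((abs(dev_a), abs(dev_b)))
                if mags[0] >= (1 - closeness) * mags[1]:
                    yield name_a, name_b

print(list(pair_for_second_set(first_set)))  # [('combo_1', 'combo_3')]
```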
As with block608, the physical vane assembled at block812has a physical throat area that is better, or at least has a much higher statistical likelihood of being better, than if the physical components are arbitrarily selected without performance of the virtual alignment and throat area calculations performed by the controller. Also, by performing virtual alignment and throat area calculations in two stages for two sets of virtual combinations, the controller not only reduces the number of virtual combinations from all of the possible combinations, but intelligently does so such that the virtual combinations of the second set that are ultimately used to determine the physical combination are those virtual combinations having the best virtual throat areas. FIG.9is a block diagram of an example controller900configured to carry out the actions or functions performed by the controller in the various embodiments of the methods600,700,800inFIGS.6-8. In general, the controller900is an electronic device, such as an electronic circuit, or a system or network of electronic devices or electronic circuits, implemented in hardware or a combination of hardware and software. In the block diagram, the controller900includes a processor902and a memory904. In general, the processor (or processor circuitry)902is a component of the controller900, implemented in hardware alone, or as a combination of hardware and software, that is configured to perform the electronic functions described herein. In various embodiments where the controller900uses software to perform or carry out a given function, the function may have associated computer code or a set of computer instructions, stored in at least a portion of the memory904. The processor902, such as a microprocessor, a central processing unit (CPU), or the like, is configured to access the memory904and execute the computer code/instructions in order to carry out the function. Also, in various embodiments the controller900may use hardware only, such as in the form of digital logic circuitry or the like, to perform a given function. Accordingly, in any of various embodiments, to perform the functions described herein, the processor902may use hardware circuitry only to perform functions, execute computer software code/instructions stored in the memory904to perform functions, or a combination thereof. In various embodiments, the controller900may be or include an integrated circuit (IC), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a circuit, a digital logic circuit, an analog circuit, a combination of discrete circuits, gates, or any other type of hardware or combination thereof. In addition, the memory904may be implemented according to any of various types of memory configured to store electronic data, including volatile memory, non-volatile memory, combinations thereof, or any other types of memory configured to store data, such as in the form of digital bits of information. As mentioned, the memory904may store computer code or instructions that the processor902is configured to execute in order to carry out one or more of the functions described herein. For example, in various embodiments, the memory904may store computer-implemented algorithms, such as the alignment and local best fit algorithms described herein. As another example, the memory904stores software that the processor902executes to establish a virtual workspace and virtually move data, such as scan mesh data, in the virtual workspace.
In addition or alternatively, the memory904is configured to store data on which the controller900performs the functions, including the data sets of the scanned and nominal virtual vanes and virtual vane components, and virtual positions of the virtual vane and vane components. For example, when machined physical vane components are three-dimensionally scanned, their scanned virtual vane component counterparts are stored in the memory904, which the processor902then accesses to perform the various functions described herein. Also, in various embodiments, the controller900may be, or may be a component of, an electronic device operable by a user, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a server, or a network of such devices interconnected using any of various forms of wired and/or wireless connections, as non-limiting examples. Accordingly, in various embodiments, the processor902may be configured locally with the memory904. In other embodiments, the memory904may be configured remotely from the processor902, such as part of a remote server for example, and the processor902may be configured to communicate with the memory904over a network, such as the Internet or WiFi for example, in order to access data stored in the memory904. Also, in various embodiments, the controller900may be electronically coupled to, or in some embodiments include, an electronic display906configured to display any of various electronic information; non-limiting examples of the display906include a liquid crystal display (LCD), a light emitting diode (LED) display, a touchscreen display on a mobile device, or any other of various types of electronic displays. Through the display906, the controller900may be configured to display, to a user, the three-dimensional virtual representations of the scanned and/or nominal virtual vanes in any of various virtual positions in a virtual workspace, alone or combined together in any of various virtual combinations. Additionally, through the display906, the controller900may graphically illustrate how one or more of the virtual components virtually move, such as in accordance with any of various six degrees of freedom. In addition or alternatively, through the display906, the controller900may be configured to display calculated virtual throat areas of any of various virtual combinations, or statuses of any of the various stages of the methods described herein, such as when a given alignment is starting or has been completed, as non-limiting examples. To clarify the use of and to hereby provide notice to the public, the phrases “at least one of <A>, <B>, . . . and <N>” or “at least one of <A>, <B>, . . . <N>, or combinations thereof” or “<A>, <B>, . . . and/or <N>” are defined by the Applicant in the broadest sense, superseding any other implied definitions hereinbefore or hereinafter unless expressly asserted by the Applicant to the contrary, to mean one or more elements selected from the group comprising A, B, . . . and N. In other words, the phrases mean any combination of one or more of the elements A, B, . . . or N including any one element alone or the one element in combination with one or more of the other elements which may also include, in combination, additional elements not listed. While various embodiments have been described, it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible.
Accordingly, the embodiments described herein are examples, not the only possible embodiments and implementations. The subject-matter of the disclosure may also relate, among others, to the following aspects:
1. A gas turbine engine manufacturing method comprising: identifying, with a controller, a plurality of virtual combinations of a plurality of scanned virtual vane components, each scanned virtual vane component comprising a respective first data set that indicates a three-dimensional virtual representation of a different one of a plurality of physical vane components manufactured and three-dimensionally scanned to generate the scanned virtual vane components; determining, with the controller, a plurality of virtual throat areas for the plurality of virtual combinations, wherein determining the plurality of virtual throat areas comprises, for each combination: virtually aligning, with the controller, mating surfaces of scanned virtual vane components of a respective virtual combination; in response to the virtually aligning, determining, with the controller, an aligned virtual vane corresponding to the respective virtual combination, the aligned virtual vane comprising a second data set indicating a three-dimensional virtual representation of a different one of a plurality of physical combinations of the physical vane components; and calculating, with the controller, a respective virtual throat area of the aligned virtual vane; selecting a physical combination of physical vane components of the plurality of physical vane components, the selection dependent on an optimal virtual combination having an optimal virtual throat area from among the plurality of virtual throat areas, wherein the physical combination corresponds to the optimal virtual combination; and assembling together physical vane components of the selected physical combination to form a vane of a gas turbine engine having an optimal physical throat area.
2. The method of aspect 1, wherein virtually aligning the mating surfaces of the scanned virtual vane components of the respective virtual combination comprises: virtually aligning inner mating surfaces of a pair of scanned virtual airfoils to a pair of mating surfaces of a scanned virtual inner endwall.
3. The method of aspect 2, wherein virtually aligning the inner mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual inner endwall comprises executing, with the controller, a pair of local best fit algorithms according to a movement constraint.
4. The method of aspect 3, wherein virtually aligning the inner mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual inner endwall comprises highlighting data points of the inner mating surfaces and the pair of mating surfaces in order to execute the pair of local best fit algorithms on the inner mating surfaces and the pair of mating surfaces only, without remaining portions of the scanned virtual inner endwall and the pair of scanned virtual airfoils.
5. The method of any of aspects 2 to 4, further comprising: before virtually aligning the inner mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual inner endwall, virtually aligning, with the controller, the scanned virtual inner endwall to a nominal virtual inner endwall to determine an aligned virtual position of the scanned virtual inner endwall.
6. The method of aspect 5, wherein virtually aligning the scanned virtual inner endwall to the nominal virtual inner endwall comprises executing, with the controller, a local best fit algorithm, the method further comprising: after executing the local best fit algorithm, performing, with the controller, a relative point system alignment between the scanned virtual inner endwall and datums of the nominal virtual inner endwall.
7. The method of aspect 6, wherein performing the relative point system alignment comprises performing the relative point system alignment over a plurality of iterations, each iteration according to a different one of a plurality of datum sequences.
8. The method of any of aspects 5 to 7, wherein virtually aligning the inner mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual inner endwall comprises virtually aligning the inner mating surfaces to the pair of mating surfaces with the scanned virtual inner endwall fixed in the aligned virtual position.
9. The method of any of aspects 2 to 8, further comprising: before virtually aligning the inner mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual inner endwall, virtually aligning, with the controller, the pair of scanned virtual airfoils to respective nominal virtual airfoils.
10. The method of any of aspects 2 to 9, further comprising: after virtually aligning the inner mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual inner endwall, virtually aligning, with the controller, outer mating surfaces of the pair of scanned virtual airfoils to a pair of mating surfaces of a scanned virtual outer endwall.
11. The method of aspect 10, wherein virtually aligning the outer mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual outer endwall comprises: executing, with the controller, a pair of local best fit algorithms according to a movement constraint.
12. The method of aspect 11, wherein the movement constraint comprises a first movement constraint that is larger than a second movement constraint according to which a second pair of local best fit algorithms are executed to virtually align the inner mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual inner endwall.
13. The method of aspect 12, wherein the first movement constraint is in a range of about 1.5 to about 2.5 times larger than the second movement constraint.
14. The method of any of aspects 10 to 13, further comprising: determining, with the controller, aligned virtual positions of the pair of scanned virtual airfoils in response to virtually aligning the inner mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual inner endwall; and updating, with the controller, the aligned virtual positions of the pair of scanned virtual airfoils in response to virtually aligning the outer mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual outer endwall.
15. The method of any of aspects 10 to 14, wherein determining the aligned virtual vane corresponding to the respective virtual combination comprises determining, with the controller, the aligned virtual vane in response to completing virtually aligning the outer mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual outer endwall.
16. The method of any of aspects 10 to 14, further comprising: determining, with the controller, an initially aligned virtual vane corresponding to the respective virtual combination in response to completing virtually aligning the outer mating surfaces of the pair of scanned virtual airfoils to the pair of mating surfaces of the scanned virtual outer endwall; and performing, with the controller, a relative point system alignment between the initially aligned virtual vane and datums of a nominal virtual vane to determine the aligned virtual vane.
17. The method of any of aspects 1 to 16, wherein the plurality of virtual combinations comprises less than all possible virtual combinations of the plurality of scanned virtual vane components.
18. The method of any of aspects 1 to 17, wherein the plurality of virtual combinations comprises a second set of virtual combinations of the plurality of scanned virtual vane components, and the plurality of virtual throat areas comprises a second set of virtual throat areas, the method further comprising: identifying, with the controller, a first set of virtual combinations of the plurality of scanned virtual vane components; determining, with the controller, a first set of virtual throat areas for the first set of virtual combinations; and identifying, with the controller, the second set of virtual combinations based on the first set of virtual throat areas.
19. The method of aspect 18, wherein identifying the second set of virtual combinations comprises: identifying, with the controller, an M-number of virtual combinations comprising an M-number of best virtual throat areas in the first set of virtual throat areas.
20. The method of aspect 18 or 19, wherein identifying the second set of virtual combinations comprises: selecting, with the controller, a first virtual combination and a second virtual combination from the first set, the selecting dependent on the first virtual combination and the second virtual combination comprising respective virtual throat areas having opposite polarities of deviation from a nominal throat area; and forming, with the controller, a virtual combination of the second set by combining one or more virtual vane components of the first virtual combination with one or more virtual vane components of the second virtual combination. | 92,021
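Aspect 19 above recites identifying an M-number of virtual combinations having the M-number of best virtual throat areas from the first set. A minimal sketch of one such downselect follows, assuming "best" means smallest absolute deviation from the nominal throat area (an interpretation for illustration, not a definition from the disclosure):

```python
def best_m_combinations(first_set_areas: dict, nominal_area: float, m: int) -> list:
    """Return the M combination IDs whose virtual throat areas deviate least
    from nominal (one hypothetical reading of an 'M-number of best' areas)."""
    return sorted(first_set_areas,
                  key=lambda cid: abs(first_set_areas[cid] - nominal_area))[:m]

# Hypothetical first-set results: combination ID -> calculated virtual throat area.
first_set_areas = {"A": 1.050, "B": 0.990, "C": 0.955, "D": 1.002}
second_set_seed = best_m_combinations(first_set_areas, nominal_area=1.0, m=2)
print(second_set_seed)  # -> ['D', 'B']: the two areas closest to nominal
```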
11861271 | DESCRIPTION OF THE EMBODIMENTS The following description of embodiments of the invention is not intended to limit the invention to these embodiments but rather to enable a person skilled in the art to make and use this invention. Variations, configurations, implementations, example implementations, and examples described herein are optional and are not exclusive to the variations, configurations, implementations, example implementations, and examples they describe. The invention described herein can include any and all permutations of these variations, configurations, implementations, example implementations, and examples. 1. Method As shown inFIGS.1and2, a method includes: accessing a part model comprising a three-dimensional representation of a part in Block S110; accessing a material profile of a material selected for the part, the material profile relating exposure energy and three-dimensional polymerization geometry of the material in Block S120; segmenting the part model into a set of model layers in Block S130; detecting a first upward-facing surface in the part model in Block S140; defining a first model volume in a first model layer in the set of model layers, adjacent the first upward-facing surface, and fully contained within the part model in Block S150; based on the material profile, calculating a first exposure energy predicted to yield a first three-dimensional polymerization geometry approximating a first contour of the first upward-facing surface when this exposure energy is projected onto the material during a build in Block S152; populating a first print image with the first exposure energy in a first image area corresponding to the first model volume in the first model layer in Block S154; and storing the first print image in a print file for the part in Block S160. 1.1 Variation: Steep Surface Contour As shown inFIGS.1and4, one variation of the method S100includes: accessing a part model comprising a three-dimensional representation of a part in Block S110; accessing a material profile of a material selected for the part, the material profile relating exposure energy and three-dimensional polymerization geometry of the material in Block S120; segmenting the part model into a set of model layers in Block S130; segmenting a first model layer, in the set of model layers, into a first set of model volumes in Block S132; detecting a first superficial (or “surface-level”) model volume, in the first set of model volumes of the first model layer, intersecting a first upward-facing surface of the part model in Block S140; selecting a first interior model volume, in the first set of model volumes of the first model layer, fully contained within the part model in Block S150; based on the material profile, calculating a first exposure energy predicted to yield a first three-dimensional polymerization geometry approximating a first contour of the first upward-facing surface contained within the first superficial model volume when this exposure energy is projected onto the material during a build in Block S152; populating a first print image with the first exposure energy in a first image area corresponding to the first interior model volume in the first model layer in Block S154; and storing the first print image in a print file for the part in Block S160. 
1.2 Variation: Shallow Surface Contour As shown inFIGS.1,2, and4, another variation of the method S100includes: accessing a part model including a three-dimensional representation of a part in Block S110; accessing a material profile of a material selected for the part, the material profile relating exposure energy and three-dimensional polymerization geometry of the material in Block S120; segmenting the part model into a set of model layers in Block S130; segmenting a first model layer, in the set of model layers, into a first set of model volumes in Block S132; segmenting a second model layer, in the set of model layers and above the first model layer, into a second set of model volumes in Block S132; detecting a first superficial model volume, in the second set of model volumes of the second model layer, intersecting a first upward-facing surface of the part model in Block S140; selecting a first interior model volume, in the first set of model volumes of the first model layer, below the first superficial model volume and fully contained within the part model in Block S150; based on the material profile, calculating a first exposure energy predicted to yield a first three-dimensional polymerization geometry approximating a first contour of the first upward-facing surface contained within the first superficial model volume when this exposure energy is projected onto the material during a build in Block S152; populating a first print image with the first exposure energy in a first image area corresponding to the first interior model volume in the first model layer in Block S154; and storing the first print image in a print file for the part in Block S160. 2. Applications As shown inFIGS.1and4, the method S100can be executed by a computer system in conjunction with an additive manufacturing system (shown inFIG.5) to achieve sub-voxel control of polymerization of material within upward-facing surfaces of a part during a print cycle, such as to increase dimensional accuracy of superficial (i.e., surface-level and surface-adjacent) features on the part and/or achieve or improve textural consistency and/or surface quality across the part. In particular, the computer system can: access a part model defining a virtual volume representing a part; access a material profile that predicts scope of one- or three-dimensional polymerization of a material selected for the part based on irradiation intensity and/or irradiation duration; project a three-dimensional grid array of voxels representing resolution of a projection system within an additive manufacturing system (e.g., a “3D SLA printer”) and a target layer thickness for the part; identify surface-level voxels containing upward-facing surfaces (or “overhangs,” surfaces facing a build platform of the additive manufacturing system) within the virtual volume of the part model; select interior voxels near these surface-level voxels; and implement the material profile to set exposure energies (e.g., irradiation intensities and durations) for the interior voxels such that the predicted scope of polymerization around these interior voxels extends up to and approximates the surfaces defined in the nearby surface-level voxels.
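The block structure recited above lends itself to a compact sketch. The following Python outline, with invented data structures and an invented energy rule, is meant only to show how Blocks S132, S152, S154, and S160 chain together into a print file:

```python
from dataclasses import dataclass, field

@dataclass
class PrintImage:
    # sparse map of pixel (x, y) -> exposure energy; hypothetical representation
    pixels: dict = field(default_factory=dict)

def compile_print_file(model_layers, material_profile):
    """One print image per model layer (Blocks S132-S160): surface-level
    volumes get null exposure; interior volumes get nominal energy, or an
    elevated energy when they stand in for a nearby upward-facing contour."""
    print_file = []
    for layer in model_layers:
        image = PrintImage()
        for volume in layer["volumes"]:
            if volume["kind"] != "interior":          # surface-level -> null exposure
                continue
            contour = volume.get("nearby_contour")
            if contour is None:
                energy = material_profile["nominal_energy"]
            else:                                     # Block S152: over-expose so the
                energy = material_profile["energy_for"](contour)  # cure front reaches the contour
            image.pixels[volume["pixel"]] = energy    # Block S154
        print_file.append(image)                      # Block S160: append to the print file
    return print_file

# Toy input: one layer with a surface-level voxel and the interior voxel beside it.
profile = {"nominal_energy": 1.0, "energy_for": lambda c: 1.0 + 0.5 * c["slope"]}
layers = [{"volumes": [
    {"kind": "surface", "pixel": (4, 7)},
    {"kind": "interior", "pixel": (4, 8), "nearby_contour": {"slope": 0.6}},
]}]
print(compile_print_file(layers, profile)[0].pixels)  # {(4, 8): 1.3}
```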
For example, a computer system can execute these Blocks of the method S100to: calculate an exposure energy—for an interior volume of a 3D-printed part—that will yield a three-dimensional volume of polymerized material (or “resin”) that extends beyond the interior volume of the part to approximate a three-dimensional surface contour defined in a three-dimensional part model; write this exposure energy to a set of pixels in a print image corresponding to the interior volume; compile this print image and others into a print file; and serve this print file to an additive manufacturing system. The additive manufacturing system then selectively exposes sequential layers of the material to energy (e.g., ultraviolet light) according to these print images to polymerize select regions of these layers and thus form the part, which may thus exhibit surface characteristics and dimensions that closely approximate the three-dimensional part model. 2.1 Steep v. Shallow Surface Contour For example, a material (or “resin”) selected for a part may polymerize in a semi-ellipsoidal three-dimensional geometry. Depth of polymerization of the material along an axis of energy exposure is less than a radius of polymerization of the material perpendicular to the axis of energy exposure. Accordingly, a semi-ellipsoidal polymerized volume of the material may exhibit: a steepest slope near a base of the polymerized volume; and a shallowest slope near an apex of the polymerized volume. A material profile can thus store these polymerization characteristics of the material. Upon ingest of a part model, the computer system can characterize slopes of surfaces within the part model, such as contained within discrete voxel volumes projected onto the part model. Then, for a first surface characterized by a steep slope (e.g., greater than 45° from a horizontal x-y plane) and contained in a first voxel in a first model layer of the part model, the computer system can: select a second voxel in the same model layer and inset from the first voxel; and calculate a second energy exposure predicted to yield both polymerization (e.g., to a minimum green strength) within a volume of the part corresponding to the second voxel and polymerization that extends laterally beyond this volume of the part to approximate the first, steep surface defined in the part model. Accordingly, the computer system can: write a null energy exposure to a first pixel corresponding to the first voxel in a first print image for the part; and write the second energy exposure to a second pixel corresponding to the second voxel in the first print image. Conversely, for a second surface characterized by a shallow slope (e.g., less than 45° from the horizontal x-y plane) and contained in a third voxel in the first layer of the part model, the computer system can: select a fourth voxel in a second, lower layer (and therefore printed after the first layer) and located below the third voxel; and calculate a fourth energy exposure predicted to yield both polymerization within a volume of the part corresponding to the fourth voxel and polymerization that extends vertically beyond this volume to approximate the second, shallow surface defined in the part model. Accordingly, the computer system can: write a null energy exposure to a third pixel corresponding to the third voxel in the first print image; and write the fourth energy exposure to a fourth pixel corresponding to the fourth voxel in a second print image succeeding the first print image in a print file for the part.
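A minimal sketch of the steep-versus-shallow routing described above follows. The 45° threshold comes from the text; the fixed lateral inset direction and the tuple-based voxel addressing are simplifying assumptions (a real implementation would inset along the inward surface normal):

```python
def donor_voxel(surface_voxel: tuple, slope_deg: float) -> tuple:
    """Route the over-exposure to a neighboring interior voxel (hypothetical
    rule mirroring the text): steep contours (> 45 deg) borrow a same-layer
    voxel inset laterally; shallow contours borrow the voxel one layer below."""
    x, y, layer = surface_voxel
    if slope_deg > 45.0:
        return (x - 1, y, layer)      # same layer, inset toward the part interior
    return (x, y, layer - 1)          # lower layer, directly below

print(donor_voxel((10, 3, 5), slope_deg=60.0))  # (9, 3, 5): lateral inset
print(donor_voxel((10, 3, 5), slope_deg=30.0))  # (10, 3, 4): one layer down
```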
2.2 Part Green Strength v. Tolerances Generally, the interior volume of a newly-printed part may predominantly contribute to strength of the part. Accordingly, the computer system can prioritize dimensional accuracy and surface profile (e.g., texture) over green strength across the surface of the part. In particular, the computer system can execute the method S100to set exposure energies (e.g., irradiation intensities and durations) of interior voxels within the part—based on a material profile of the selected part material—to achieve polymerization that extends beyond these interior voxels to reach and approximate contours of nearby exterior surfaces prescribed in the part model. In particular, when a region of a volume of resin—arranged over a build window in the additive manufacturing system—is exposed to radiation (e.g., “light”), monomers and/or oligomers in the exposed region of resin cross-link to form polymers and thus form a solid layer of a part (i.e., a “print layer”). However, a polymerized region of the volume of resin may differ from the exposed region of the volume of resin. For example, over-exposure of the resin due to greater exposure energy may cause polymerization of the resin to: extend laterally past the exposed region such that the polymerized region of the resin layer is larger than the exposed region; and/or extend upwardly to a preceding print layer of the part, thereby polymerizing uncured resin present on the surface of the preceding print layer. The computer system can thus leverage these characteristics of a resin (hereinafter the “material”) to selectively over-expose interior voxels in order to achieve polymerization of the material in neighboring surface-level voxels that approximates the surface profiles prescribed in these surface-level voxels by the part model. 2.3 Computer System Generally, the method S100is described as executed by a computer system before and in preparation for printing a part at the additive manufacturing system. Additionally or alternatively, Blocks of the method S100can be executed in part or in whole by the additive manufacturing system, such as in real-time while printing the part. Furthermore, the method S100is described herein as executed by a computer system to define exposure energies for surface-level and surface-adjacent voxels of a part. However, the method S100can be similarly executed by a computer system to define exposure profiles (e.g., irradiation intensities within a fixed exposure duration, or irradiation intensity profiles over a period of time) for surface-level and surface-adjacent voxels of a part. 2.4 Disclaimers Furthermore, the method S100is described below as executed by the computer system to: segment a part model into a sequence of model layers of common thickness; define a two-dimensional array of voxels in each model layer, wherein each voxel corresponds to the field of view of a pixel—in a projection system of an additive manufacturing system—at a build window of the additive manufacturing system; and selectively adjust exposure energies assigned to interior voxels adjacent surface-level voxels in the part model.
However, the computer system can additionally or alternatively implement these methods and techniques to adjust exposure energies for: larger fixed clusters of interior voxels (e.g., 2-by-2 voxel clusters) within a layer of the part model; clusters of voxels of variable size; or discrete sub-volumes within the part model not bounded or defined by fields of view of pixels in the projection system. The method S100is further described below as executed by the computer system to selectively increase exposure energies—from a nominal exposure intensity—of select interior volumes adjacent upward-facing surfaces of a part in order to improve dimensional and geometric tolerances of the part. However, the computer system can implement similar methods and techniques to increase exposure energies of select interior volumes adjacent downward-facing surfaces of the part. Additionally or alternatively, the computer system can implement similar methods and techniques to decrease exposure energies—from a nominal exposure intensity—of select volumes containing surfaces of the part in order to improve dimensional and geometric tolerances of the part. 3. Terms A “voxel” is referred to herein: as a volume of a layer of a part falling within the field of view of a pixel of a projection system of an additive manufacturing system when this layer of the part is fabricated across a build window of the additive manufacturing system; and as a corresponding model volume in a model layer of a part model. A “surface-level voxel” (or “surface-level model volume”) is referred to herein as a voxel (or model volume) that intersects a surface defined in the part model. An “interior voxel” (or “interior model volume”) is referred to herein as a voxel (or model volume) that is fully contained within a scope of the part model and that does not intersect a surface defined in the part model. 4. Additive Manufacturing System In one implementation shown inFIG.5, the computer system interfaces with an additive manufacturing system as described in U.S. patent application Ser. No. 16/672,415. The additive manufacturing system is configured to print a part according to a print file—containing a sequence of print images—generated by the computer system according to the method S100. Therefore, the method S100can be executed by a computer system in conjunction with an additive manufacturing system to modulate intensity and duration of light energy directed to voxels defining upward-facing surfaces of a part in order to: a) achieve a target green strength; b) prevent resin from polymerizing on upward-facing surfaces of the part beyond the target geometry of the part while the part remains immersed in liquid resin; and thus c) achieve improved surface quality and dimensional accuracy on upward-facing surfaces of the part. Furthermore, polymerization effects resulting from light exposure within the additive manufacturing system may be material dependent, such as based on opacity of a resin material in unexposed, exposed, and polymerized states and based on pigments, photoinhibitors, or other components added to the resin. Therefore, the method S100can further be executed by a computer system in conjunction with an additive manufacturing system to enable the additive manufacturing system to generate parts of improved dimensional tolerance with more challenging resin materials. The method S100is described herein as executed in conjunction with and/or by an additive manufacturing system containing a DLP projection system.
However, the method S100can be executed in conjunction with and/or by an additive manufacturing system containing a laser, NIR, and/or other type of projection system. 5. Part Model Block S110of the method S100recites accessing a part model comprising a three-dimensional representation of a part. In one implementation, the computer system accesses (or “ingests”) a 3D part model, such as uploaded by a user or retrieved from a local or remote database. For example, the part model can include: a solid model defining a volume of a 3D part between its interior or exterior surfaces; or a mesh defining target interior and exterior surfaces of the 3D part. 6. Model Annotation and Modification The computer system can then: render the 3D part model within a user interface; detect interior and/or exterior surfaces within the 3D part model; and highlight these surfaces within the user interface or otherwise prompt the user to annotate these surfaces with dimensional, surface finish (or “texture”), and/or other target characteristics. 6.1 Surface Segmentation In one example, the computer system detects a constellation of vertices and edges in the 3D part model. Then, based on these vertices and edges, the computer system defines a set of abutting (i.e., non-overlapping) surface sectors: that cooperate to span all interior and exterior surfaces of the 3D part model; and that individually define the shortest perimeter lengths between vertices and along edges in the 3D part model. In another example, the computer system: defines a grid density; projects a mesh—at the grid density—onto the interior and exterior surfaces of the 3D part model; snaps gridlines in the mesh onto vertices and edges detected in the 3D part model; and then defines a surface sector within each grid unit in the mesh. In this example, the computer system can set the grid density such that each surface sector approximates a target surface area (e.g., one square centimeter), such as preset or selected by the user via a slider rendered within the user interface. Alternatively, the computer system can set the grid density such that each surface sector approximates a target proportion (e.g., 5%) of the total surface area of the 3D part model, such as preset or selected by the user via a slider rendered within the user interface. Therefore, in the preceding examples, the computer system can apply a uniform grid density across an entire 3D part model. Alternatively, the computer system can enable the user to isolate subregions of the 3D part model, such as: by selecting subvolumes (e.g., bosses, extrusions) and/or subsurfaces (e.g., cavities) delineated by edges or faces in the 3D part model; or by manually drawing, selecting, or delineating subvolumes or subsurfaces within the 3D part model. The computer system can then project a mesh—and therefore define a surface sector area—onto each subregion in the 3D part model according to its designated grid density. However, the computer system can implement any other method or technique to delineate surface sectors on interior and/or exterior surfaces of the 3D part model. The computer system can then interface with the user to assign characteristics to individual surface sectors or groups of surface sectors within the part model.
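As a small illustration of the two grid-density rules described above (a target sector area versus a target proportion of total surface area), assuming square grid units:

```python
import math

def grid_pitch_from_area(target_sector_area_cm2: float) -> float:
    """Grid pitch (cm) so each square grid unit approximates the target area."""
    return math.sqrt(target_sector_area_cm2)

def grid_pitch_from_proportion(total_surface_area_cm2: float, proportion: float) -> float:
    """Grid pitch (cm) so each sector approximates a proportion of total area
    (e.g., 0.05 for the 5% example in the text)."""
    return math.sqrt(total_surface_area_cm2 * proportion)

print(round(grid_pitch_from_area(1.0), 3))                # 1.0 cm pitch
print(round(grid_pitch_from_proportion(120.0, 0.05), 3))  # ~2.449 cm pitch
```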
6.2 Dimensional Accuracy The computer system can then populate surface sectors within the 3D part model with quantitative or qualitative dimensional tolerances, including: distances between features; lengths of features; flatness callouts; parallelism callouts; cylindricity callouts; and/or over-under tolerance callouts. In one implementation, the 3D part model: defines a nominal part geometry; and includes dimensions and/or dimensional tolerances on various faces, edges, and vertices. Therefore, in this implementation, the computer system can: extract geometric dimensional tolerances from the 3D part model; interpret thickness, straightness, flatness, and/or cylindricity tolerances, etc. for subvolumes and/or subsurfaces from dimensional tolerances contained in the 3D part model; and project these tolerances onto corresponding surface sectors across the interior and exterior surfaces of the 3D part model. In a similar implementation, the computer system: accesses an engineering drawing associated with the 3D part model; extracts dimensional tolerances from the engineering drawing; and projects these tolerances onto corresponding surface sectors across the interior and exterior surfaces of the 3D part model. In another implementation, the computer system interfaces with the user to manually annotate surface sectors (or subvolumes or subsurfaces more generally) within the 3D part model with dimensional tolerances. Alternatively, in the foregoing implementations, the computer system can abstract dimensional tolerances to a range of dimensional abstractions, including: “loose” dimensional control (e.g., up to +/−0.010″ from a nominal dimension); “moderate” dimensional control (e.g., up to +/−0.005″ from a nominal dimension); and “tight” dimensional control (e.g., up to +/−0.0010″ from a nominal dimension). The computer system can then write these dimensional abstractions to individual surface sectors within the 3D part model. Therefore, the computer system can assign absolute (e.g., quantitative) or abstract (e.g., qualitative) tolerances to individual surface sectors within the 3D part model. 6.3 Surface Finish The computer system can similarly populate surface sectors within the 3D part model with target surface finishes (or “textures”). For example, a target surface finish can specify a surface finish type, such as: surface roughness (e.g., “Ra”); lay pattern (e.g., vertical, horizontal, radial, cross-hatched, circular, isotropic, concave dimple, convex dimple); or waviness. The target surface finish can additionally or alternatively include a specification, such as: material removal not allowed or required; tolerance direction (e.g., upper or lower); filter (e.g., noise or waviness); and/or dimple (e.g., direction, depth, width). In one implementation, the 3D part model (or an associated engineering drawing) contains surface finish callouts. Accordingly, the computer system can project these surface finish callouts onto corresponding subvolumes and/or subsurfaces—and therefore onto surface sectors—within the 3D part model. Alternatively, the computer system can interface with the user to manually annotate surface sectors (or subvolumes or subsurfaces more generally) within the 3D part model with surface finish callouts. Additionally or alternatively, in the foregoing implementations, the computer system can abstract qualitative surface finishes to a range of surface finish abstractions, including: “mirror,” “smooth,” and “rough”; or gloss, semi-gloss, satin, matte, and flat.
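The loose/moderate/tight abstraction can be sketched directly from the example bands in the text; the mapping rule itself (band edges inclusive at the upper bound) is an assumption of this sketch:

```python
def abstract_tolerance(tol_inches: float) -> str:
    """Map a numeric tolerance to the qualitative bands named in the text
    (+/-0.001" tight, +/-0.005" moderate, +/-0.010" loose)."""
    if tol_inches <= 0.001:
        return "tight"
    if tol_inches <= 0.005:
        return "moderate"
    return "loose"

for tol in (0.0005, 0.004, 0.010):
    print(tol, "->", abstract_tolerance(tol))
# 0.0005 -> tight, 0.004 -> moderate, 0.01 -> loose
```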
The computer system can then write these surface finish abstractions to individual surface sectors within the 3D part model. Therefore, the computer system can assign qualitative surface finish callouts to individual surface sectors within the 3D part model. 6.4 Surface Finish and Texture Projection Furthermore, in one variation, the computer system projects a three-dimensional representation of a specified texture onto each surface sector of the part model. In one implementation, the computer system accesses a parametric or non-parametric texture model that converts surface finish callouts into textural 3D surface map(s), which the computer system can then project onto the surface of the part model. For example, the texture model can link: a) lay patterns (e.g., horizontal, radial, cross-hatched, circular, isotropic, concave dimple, and/or convex dimple lay patterns) defined in surface finish callouts; to b) 3D texture profiles (e.g., 2D sawtooth, sinusoidal, or square-wave profiles). The texture model can also: convert profile depth defined in a surface finish callout into 2D texture profile amplitude; and convert outset proportion, inset proportion, or “on” layer edge specification in a surface finish callout into an offset between the neutral plane of a 3D texture profile and the surface of the part model. Accordingly, for a first surface sector in the 3D part model, the computer system can implement the texture model to transform a surface finish callout in the first surface sector into a 3D texture profile, a profile amplitude, and/or a neutral plane offset, etc. The computer system can then project the 3D texture profile—at the profile amplitude—onto the first surface sector in the part model. The computer system can then repeat this process for each other surface sector in the part model. 7. Material Selection and Material Profile Block S120of the method S100recites accessing a material profile of a material selected for the part, the material profile relating exposure energy and three-dimensional polymerization geometry of the material. Generally, in Block S120, the computer system can: receive a material selection for the 3D part model, such as from a menu of resin formulas currently available for processing in the additive manufacturing system or by extracting a material specification from the 3D part model; and then retrieve a material profile for the selected material. In particular, the computer system can retrieve a material profile that defines correlations (e.g., relationships) between: print parameters (e.g., print layer thickness, light-shell thickness and step size, light-shell exposure intensity, light-shell exposure duration); print outcomes (e.g., intra- and inter-layer dimensional tolerance, polymerization bleed between print layers, surface finish, surface texture, green strength); and part characteristics (e.g., cross-sectional areas of features, vertical and horizontal aspect ratios of features). For example, the material profile can be generated empirically based on results of (many) parts of different geometries and characteristics printed on the same or other additive manufacturing system(s), such as described in U.S. patent application Ser. No. 17/173,174.
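One way to picture the texture model is as a mapping from a lay-pattern name to a one-dimensional profile with an amplitude and a neutral-plane offset. The sketch below is a hypothetical reduction of that idea; the profile shapes follow the sawtooth, sinusoidal, and square-wave examples in the text:

```python
import math

def texture_height(lay: str, amplitude: float, neutral_offset: float,
                   u: float, wavelength: float) -> float:
    """Height of a 1D texture profile at position u along a surface sector,
    relative to the nominal surface (hypothetical mapping of lay pattern to
    profile shape; names follow the callout examples in the text)."""
    phase = (u / wavelength) % 1.0
    if lay == "sinusoidal":
        wave = math.sin(2 * math.pi * phase)
    elif lay == "sawtooth":
        wave = 2 * phase - 1
    elif lay == "square":
        wave = 1.0 if phase < 0.5 else -1.0
    else:
        raise ValueError(f"unknown lay pattern: {lay}")
    return neutral_offset + amplitude * wave

# 0.05 mm amplitude sine texture with its neutral plane on the nominal surface.
print(round(texture_height("sinusoidal", 0.05, 0.0, u=0.25, wavelength=1.0), 4))  # 0.05
```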
In one implementation, the material profile includes a working curve that defines a relationship between irradiation energy (or energy intensity) and polymerization depth (e.g., in one dimension parallel to an irradiation direction or in three dimensions relative to the irradiation direction) for the material. In one example, the working curve includes a logarithmic function (or “curve”) that relates: critical energy (i.e., to initiate polymerization); depth of polymerization penetration (e.g., cured film thickness through the depth of a part layer) from a surface exposed to the critical energy; and width of polymerization from the boundary of the surface exposed to the critical energy. In a similar implementation, the material profile for the material includes a) a first working curve that predicts horizontal polymerization (i.e., lateral and longitudinal polymerization in an x-y plane through a layer of the material) and b) a second working curve that predicts vertical polymerization (i.e., depth of polymerization along a z-axis perpendicular to the layer of the material), both as a function of: exposure area (e.g., the width and length of the field of view of a pixel in the projection system at the build window of the additive manufacturing system); exposure intensity within the exposure area; exposure duration; wavelength of light; and/or environmental conditions (humidity, temperature, presence of ambient gas) within the additive manufacturing system. For example, the material profile can predict: horizontal polymerization of the material at a first rate proportional to exposure energy; and vertical polymerization of the material at a second rate—less than the first rate—proportional to exposure energy. Thus, in this example, the material profile can predict a three-dimensional polymerized volume of the material that approximates: a hemisphere at low exposure energies; and a flat, wide semi-ellipsoid at high exposure energies. In another implementation, the material profile can define a polymerization radius—from a center of an exposure area—as a function of exposure energy (e.g., a combination or integral of exposure intensity over exposure duration). For example, the material profile can predict a hemispherical polymerization volume—characterized by a spherical radius proportional to exposure energy—for the material. 8. 3D Part Model Segmentation Blocks S130and S132recite: segmenting the part model into a set of model layers; segmenting a first model layer, in the set of model layers, into a first set of model volumes; and/or segmenting a second model layer, in the set of model layers, into a second set of model volumes. Generally and as shown inFIGS.1,2, and4, in Blocks S130and S132, the computer system can segment the three-dimensional part model into a three-dimensional array of voxels, wherein each voxel defines: a depth corresponding to a thickness of the layer of the part model containing the voxel; and an area bounded by a field of view of a corresponding pixel—in an x-y plane at the build window—in the projection system of the additive manufacturing system. In particular, in this variation, the computer system can segment the part model into discrete voxels of common three-dimensional geometry, including: surface-level voxels that intersect surfaces of the part model; and interior voxels that are fully contained within the part model.
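The logarithmic working curve described above is commonly written in the Jacobs form, cure depth Cd = Dp·ln(E/Ec), where Ec is the critical energy to initiate polymerization and Dp a penetration depth. The sketch below uses that form with illustrative constants; the disclosure does not commit to specific values or to this exact parameterization:

```python
import math

DP_MM = 0.20   # penetration depth (mm), assumed
EC = 10.0      # critical exposure energy (arbitrary units), assumed

def cure_depth(energy: float) -> float:
    """Predicted polymerization depth for a given exposure energy."""
    return DP_MM * math.log(energy / EC) if energy > EC else 0.0

def energy_for_depth(target_depth_mm: float) -> float:
    """Invert the working curve: exposure energy needed to cure to a depth."""
    return EC * math.exp(target_depth_mm / DP_MM)

# Energy to over-cure 0.15 mm past an interior voxel toward an upward-facing surface.
print(round(energy_for_depth(0.15), 2))   # ~21.17
print(round(cure_depth(21.17), 4))        # ~0.15 mm (round trip)
```

Note that the horizontal asymptote behavior mentioned later in the text (a maximum depth of penetration) corresponds, under this form, to the practical upper bound on usable exposure energy.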
In one implementation, the computer system: sets a common width and length of voxels according to a resolution of the projection system; and sets a common height of voxels according to a vertical part resolution (versus print speed) selected by the user, according to a dimensional tolerance specified in the part model, or approaching a resolution limit of the additive manufacturing system, etc. For example, the computer system can prompt the user to set a slider between maximum print speed and tightest dimensional tolerance. In particular, if the user elects faster print speed (i.e., shorter print duration), the computer system can increase layer and voxel thickness for the part model. Similarly, if the user elects tighter dimensional tolerance, the computer system can decrease layer and voxel thickness, such as down to a minimum step height (e.g., a maximum vertical resolution) of the build tray of the additive manufacturing system. The computer system can then generate a three-dimensional grid array of voxels according to these voxel dimensions and project this three-dimensional grid array of voxels onto the part model. In particular, each voxel defines a discrete subvolume of the part and includes a bottom surface that fully spans the field of view of one pixel in the projection system of the additive manufacturing system such that the projection system can selectively irradiate this voxel during a build in order to selectively polymerize material within and just behind the voxel based on controlled exposure energy. For example, in Blocks S130and S132, the computer system can: segment the part model into a set of model layers of a common thickness, such as proportional to a part resolution or tolerance selected by the user; project fields of view of an array of pixels—in the projection system of the additive manufacturing system—onto the first model layer; define a first set of model volumes (e.g., “voxels”) bounded by the common thickness of the first model layer and fields of view of the array of pixels projected onto the first model layer; and repeat this process for each other model layer of the part model to segment the part model into a three-dimensional array of discrete volumes. 9. Upward-facing Surface-level Voxel Block S140of the method S100recites detecting a first superficial model volume, in the first set of model volumes of the first model layer, intersecting a first upward-facing surface of the part model. Generally, in Block S140, the computer system scans the part model for a set of surface-level voxels that contain upward-facing surfaces, as shown inFIG.4. For example, the computer system can: set an orientation of the part model within a virtual three-dimensional coordinate system; detect overhangs in the part model; detect a first set of upward-facing exterior surfaces—on and around overhangs of the part model—that are parallel to a horizontal (e.g., an x-y) plane of the coordinate system; detect a second set of exterior surfaces of the part model that taper upwardly to face opposite the bottom plane; and detect a third set of exterior surfaces of the part model that taper downwardly to face the bottom plane. The computer system can then derive: a first frequency of surface-level voxels that contain (e.g., intersect) the first set of exterior surfaces; a second frequency of surface-level voxels that contain the second set of exterior surfaces; and a third frequency of surface-level voxels that contain the third set of exterior surfaces.
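The three frequencies derived above can be illustrated with a toy classifier keyed to the z-component of each contained surface's outward normal. The classification thresholds are assumptions for this sketch:

```python
from collections import Counter

def classify_surface_voxel(normal_z: float) -> str:
    """Classify a surface-level voxel by its contained surface orientation
    (hypothetical rule): 'flat_up' for near-horizontal upward-facing surfaces,
    'tapered_up' for surfaces tapering upward, 'tapered_down' otherwise."""
    if normal_z > 0.99:
        return "flat_up"
    return "tapered_up" if normal_z > 0.0 else "tapered_down"

def surface_frequencies(surface_voxel_normals: list) -> Counter:
    """Derive the first/second/third frequencies described in the text from
    per-voxel surface normals (+z points away from the bottom plane)."""
    return Counter(classify_surface_voxel(nz) for nz in surface_voxel_normals)

print(surface_frequencies([1.0, 0.7, 0.3, -0.5, -0.9]))
# Counter({'tapered_up': 2, 'tapered_down': 2, 'flat_up': 1})
```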
The computer system can then iteratively adjust the position and/or orientation of the part model in the virtual three-dimensional coordinate system to: minimize the first frequency; maximize angles of the second set of exterior surfaces; and maximize the third frequency. The computer system can then isolate the remaining surface-level voxels that contain tapered surfaces that face opposite the bottom plane of the coordinate system—that is, “upward-facing surface-level voxels” (hereinafter “surface-level voxels”). For example, the computer system can identify—as a surface-level voxel—a voxel: partially but not fully contained in the virtual volume of the part model; and for which the cross-sectional area of the virtual volume of the part at the bottom of the voxel is greater than the cross-sectional area of the virtual volume of the part at the top of the voxel. 10. Hemispherical Polymerization Model In one implementation shown inFIGS.1and3Aand described above, the computer system retrieves a material profile that predicts a hemispherical polymerization volume—characterized by a spherical radius proportional to exposure energy—for the material. In particular, the material profile can predict polymerization of the material that radiates outwardly from the center of the bottom of a voxel when the voxel is irradiated by a corresponding pixel in the projection system, wherein the radius of hemispherical polymerization is predicted by the working curve of the material and the irradiation energy incident on the voxel. The computer system can then implement this hemispherical polymerization model to: set exposure energies for a second set of (i.e., one or more) interior voxels such that the surface of the predicted polymerization hemisphere—radiating from these interior voxels—approximates the surface contained in a surface-level voxel; and then repeat this process for each other surface-level voxel in the part model. Furthermore, the computer system can: calculate or retrieve a maximum depth of penetration for the selected material, such as a depth corresponding to a horizontal asymptote of the working curve described above; and then calculate a maximum voxel offset distance equal to the maximum depth of penetration of the material. In particular, the maximum voxel offset distance approximates the maximum distance between a surface-level voxel and an interior voxel for which irradiation of the interior voxel is predicted to initiate polymerization of material within the surface-level voxel. 10.1 Hemispherical Polymerization Model: Ray to Interior Voxel Selection In one implementation, the computer system then selects a first interior voxel on the same layer and inset from the surface-level voxel by a distance proportional to the angle of the surface in the surface-level voxel and that falls within the maximum voxel offset distance. In one example, the computer system: calculates a centroid of the surface contained in the surface-level voxel; calculates a ray extending from the centroid of this surface within the surface-level voxel and facing into the virtual volume of the part; calculates a best-fit spherical radius of the surface contained in the surface-level voxel; sets a length of the ray to the best-fit spherical radius; and selects a first interior voxel fully contained within the part and for which its bottom center is nearest the inner end of the ray.
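The ray construction in the example just described can be sketched as follows; snapping the ray's inner end to a voxel grid stands in for choosing the interior voxel whose bottom center is nearest the ray end, which is a simplification of this sketch:

```python
import numpy as np

def select_interior_voxel(centroid: np.ndarray, inward_normal: np.ndarray,
                          best_fit_radius: float, voxel_pitch: float) -> tuple:
    """Cast a ray from the contour centroid along the inward surface normal,
    with length equal to the best-fit spherical radius, then snap the ray's
    inner end to the voxel grid (the snapping rule is an assumption)."""
    n = inward_normal / np.linalg.norm(inward_normal)
    ray_end = centroid + best_fit_radius * n      # nominal origin of the cure sphere
    idx = np.round(ray_end / voxel_pitch).astype(int)
    return tuple(int(v) for v in idx)

centroid = np.array([1.00, 2.00, 0.50])
normal = np.array([-1.0, 0.0, -0.3])              # points into the part
print(select_interior_voxel(centroid, normal, best_fit_radius=0.4, voxel_pitch=0.1))
# -> (6, 20, 4) with these inputs
```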
The computer system can then: calculate a first minimum distance between the bottom center of the first interior voxel and the surface contained within the surface-level voxel; and calculate a first exposure energy for the first interior voxel based on the working curve and the first minimum distance. In the foregoing example, the computer system can select a first interior voxel that is further from the surface-level voxel and contained in the same layer of the part as the surface-level voxel if the surface-level voxel contains a steeper, convex surface such that a lower segment of a polymerization hemisphere—representing a volume of material polymerized by irradiation of the first interior voxel with the first exposure energy—intersects the surface of the part contained within the surface-level voxel with minimal offset (or “error”). Similarly, the computer system can select a first interior voxel that is nearer the surface-level voxel and contained in the same layer of the part as the surface-level voxel if the surface-level voxel contains a convex surface near 45° (with a small effective radius) such that a mid-latitude segment of a polymerization hemisphere intersects the surface contained within the surface-level voxel with minimal offset (or “error”). Furthermore, the computer system can select a first interior voxel that is below the layer of the surface-level voxel if the surface-level voxel contains a shallow surface angle such that a top segment of a polymerization hemisphere intersects the surface contained within the surface-level voxel with minimal offset (or “error”). The computer system can then: assign the first exposure energy to the first interior voxel; repeat this process for all other surface-level voxels in all layers of the part; and write the resulting exposure energies of these interior voxels to pixels—representing these interior voxels—in a sequence of print images (e.g., tables, matrices) for the part. In a similar example, the computer system can: detect a first superficial model volume (or “surface-level voxel”)—in a first model layer of the part model—that intersects a first upward-facing surface defined in the part model; calculate a best-fit radius (e.g., spherical radius) of a first contour of the first upward-facing surface contained in the first surface-level model volume; and calculate a centroid of the first contour. In this example, the computer system can then calculate a ray: normal to the first contour proximal the centroid; and defining a nominal origin offset from the first contour by the best-fit radius. Accordingly, the computer system can select an interior model volume (e.g., an “interior voxel”) fully contained within the first model layer and containing the nominal origin. Then, based on the material profile and a nominal (e.g., fixed) exposure duration per layer implemented by the additive manufacturing system, the computer system can: calculate a first exposure intensity predicted to yield a first three-dimensional polymerization geometry that approximates the first contour when the corresponding exposure energy is projected onto the material for the nominal layer exposure duration by the additive manufacturing system during the build.
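The first step of this passage (minimum distance into the working curve) and the closing example (fixed per-layer duration, solve for intensity) can be tied together in a short sketch; the Jacobs-form inversion and all constants are assumptions carried over from the earlier working-curve sketch:

```python
import math

# Working-curve constants reused from the earlier sketch (illustrative values).
DP_MM, EC = 0.20, 10.0

def exposure_energy_for_distance(min_distance_mm: float) -> float:
    """Energy at which the cure front is predicted to just reach a surface
    located min_distance_mm from the irradiated voxel's bottom center."""
    return EC * math.exp(min_distance_mm / DP_MM)

def exposure_intensity(energy: float, nominal_layer_duration_s: float) -> float:
    """Intensity for a fixed per-layer exposure duration, as in the closing
    example above: energy = intensity x duration, so intensity = energy / duration."""
    return energy / nominal_layer_duration_s

e = exposure_energy_for_distance(0.12)      # interior voxel 0.12 mm from the contour
print(round(e, 2), round(exposure_intensity(e, nominal_layer_duration_s=2.0), 2))
# -> 18.22 9.11
```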
10.2 Spherical Error to Interior Voxel Selection In another implementation, for a surface-level voxel, the computer system identifies a subset of interior voxels: that are fully contained within the virtual volume defined by the part model; that fall on the same or lower layer of the voxel grid projected onto the part model; and that define centroids within the maximum voxel offset distance of the centroid of the surface-level voxel (or within the maximum voxel offset distance of a centroid of the surface of the part contained within the voxel, or within the maximum voxel offset distance of the entire surface of the part contained within the voxel). Then, for each interior voxel in this subset of interior voxels, the computer system calculates a radius of a polymerization hemisphere—radiating outwardly from the center of the bottom of the interior voxel—that falls fully within the virtual volume of the part and exhibits a minimum offset (or “error”) from the surface contained within the surface-level voxel. The computer system then identifies a particular interior voxel associated with a polymerization hemisphere exhibiting minimal error in this subset of interior voxels. For example, the computer system can set an error threshold: inversely proportional to a tolerance selected by the user; inversely proportional to a nominal tolerance associated with the material; and/or proportional to a print speed selected by the user. If this error calculated for the particular interior voxel is less than the threshold error, the computer system: sets an exposure energy of the surface-level voxel to null; calculates a target exposure energy (or radiation intensities over a period of time) for the particular interior voxel based on the working curve to achieve the radius of the corresponding polymerization hemisphere; and stores the target exposure energy (or radiation intensities over a period of time) in a pixel representing the particular voxel in a print image of the corresponding layer of the part. Alternatively, in this implementation, if the error calculated for the particular interior voxel is greater than the threshold error, then the computer system can iteratively: select a combination of (e.g., two, three) interior voxels in the subset of interior voxels; combine the polymerization hemispheres—calculated for these interior voxels as described above—to derive a composite surface; and calculate a total error between the surface within the surface-level voxel and this composite surface. Furthermore, the computer system can iteratively repeat this process for increasing quantities of interior voxels until the total error between the surface within the surface-level voxel and the composite surface calculated for a combination of interior voxels—according to the hemispherical polymerization model—is less than the threshold error. The computer system can then: set an exposure energy of the surface-level voxel to null; calculate a target exposure energy for each interior voxel based on the working curve of the material and the radius of their corresponding polymerization hemispheres; and store these target exposure energies in pixels representing these interior voxels in a sequence of print images of the part.
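The escalation from single interior voxels to growing combinations can be sketched as a search loop. Exhaustive enumeration per combination size is an assumption here (the disclosure does not specify a search strategy), and the error function is a stand-in for the hemisphere-versus-surface geometry:

```python
from itertools import combinations

def composite_error(voxel_subset, error_fn) -> float:
    """Error between the target surface and the union of cure hemispheres
    seeded at the given interior voxels (error_fn stands in for the
    geometric computation described in the text)."""
    return error_fn(frozenset(voxel_subset))

def find_voxel_combination(candidates, error_fn, threshold):
    """Try single voxels first, then growing combinations, until the
    composite error drops below the threshold (mirrors the iterative
    search described above)."""
    for k in range(1, len(candidates) + 1):
        best = min(combinations(candidates, k),
                   key=lambda subset: composite_error(subset, error_fn))
        if composite_error(best, error_fn) < threshold:
            return best
    return None

# Toy error table: pairing voxels A and B is what meets the threshold.
errors = {frozenset({"A"}): 0.9, frozenset({"B"}): 0.8, frozenset({"C"}): 0.7,
          frozenset({"A", "B"}): 0.1, frozenset({"A", "C"}): 0.5,
          frozenset({"B", "C"}): 0.6, frozenset({"A", "B", "C"}): 0.2}
print(find_voxel_combination(["A", "B", "C"], errors.get, threshold=0.3))  # ('A', 'B')
```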
For example, to avoid over-exposure of the surface-level voxel within the part during a later build cycle, the computer system can: store a target exposure energy for one interior voxel in this set in each image—in a sequence of images—for each layer of the part around the surface-level voxel; and set a delay time between exposing the build window in the system to these images such that the surface-level voxel cools to a nominal operating temperature before exposure to a next image in this sequence in order to control depth of cure of material at the surface-level voxel. The computer system can similarly store one target exposure energy for each interior voxel in this set in each image—in a sequence of images—for the next layer of the part. 10.3 Sub-Surfaces In another implementation, for a surface-level voxel, the computer system can set a surface-level sub-voxel area: inversely proportional to a tolerance selected by the user; and proportional to a print speed selected by the user. The computer system can then segment the surface contained within the surface-level voxel into a set of sub-surfaces: of area approximately (and less than) the surface-level sub-voxel area; and characterized by shortest total edge length. The computer system can then execute the foregoing methods and techniques to select a set of (i.e., one or more) interior voxels and to calculate corresponding exposure energies for each sub-surface in the surface-level voxel. 10.4 Other Surface-Level Voxel Intersection Checks In one variation, the computer system further verifies that an energy exposure thus calculated for an interior voxel—in order to achieve a polymerization boundary that approximates an upward-facing surface in a nearby surface-level voxel—will not also polymerize material substantially beyond the surface of the part outside of this nearby surface-level voxel. For example, the computer system can: estimate a polymerization volume resulting from irradiation of an interior voxel according to the target exposure energy calculated as described above; project the polymerization volume onto the virtual volume of the part defined in the part model; and characterize a disjoint volume of the polymerization volume located outside of the virtual volume of the part. Then, if the maximum thickness and/or total area of this disjoint volume exceed(s) a threshold(s), the computer system can reduce the target exposure energy assigned to the interior voxel or repeat the foregoing processes to select an alternate set of interior voxels and target exposure energies predicted to produce polymerization near the surface contained in the surface-level voxel with less polymerization beyond the surface of the part outside of this surface-level voxel. 10.5 Concurrent Interior Voxel Irradiation In one variation, the computer system implements similar methods and techniques to select multiple (e.g., two) interior voxels in the same layer of the part for modified irradiation—such as in parallel in one image or in series over multiple images—in order to polymerize a volume of material that extends into the surface-level voxel to approximate the profile of the surface within the surface-level voxel. In particular, the computer system can retrieve a polymerization model that predicts polymerization depth (in three dimensions) resulting from irradiation of multiple discrete surfaces of the material (e.g., the bottoms of two or more interior voxels).
The computer system can then implement the polymerization model and methods described above to select and refine two or more interior voxels predicted to yield a composite polymerization volume that extends into the surface-level voxel and approximates the profile of the surface contained in the surface-level voxel when these interior voxels are concurrently irradiated with particular exposure energies. The computer system can then write these exposure energies to representations of these interior voxels in one image corresponding to this layer of the part. Additionally or alternatively, the computer system can then implement methods described above to select and refine two or more interior voxels—in the same or different layers of the part—predicted to yield a combined polymerization volume that extends into the surface-level voxel and approximates the profile of the surface contained in the surface-level voxel when these interior voxels are irradiated—in series and with sufficient delay time therebetween—according to particular exposure energies. The computer system can then write these exposure energies to representations of these interior voxels in a series of images corresponding to this layer (or these layers) of the part. 10.6 Error Modes In one variation, the computer system characterizes both textural error and dimensional error between the surface defined in the surface-level voxel and the polymerization volume of the first set of interior voxels at their prescribed exposure energies. For example, the computer system can characterize textural error as a deviation in slope and profile between the surface defined in the surface-level voxel and the polymerization volume resulting from irradiation of the first set of interior voxels at their prescribed exposure energies. In particular, if the slope (e.g., steepness) and profile (e.g., curvature) of the polymerization volume deviates from the surface defined in the surface-level voxel, the resulting texture at the surface of the final part in the location of this surface-level voxel may differ from the surrounding surface of the part, and the part may therefore exhibit inconsistent texture. Therefore, the computer system can iteratively repeat the foregoing processes to select one or more interior voxels and to set exposure energies for these interior voxels in order to reduce differences between the slope and profile of the surface defined in the surface-level voxel and the slope and profile of the polymerization volume prescribed by these interior voxels in order to achieve more consistent, predictable texture on the surface of a final part. Similarly, the computer system can characterize dimensional error as a total (e.g., absolute) volume between the surface defined in the surface-level voxel and the polymerization volume resulting from irradiation of the first set of interior voxels at their prescribed exposure energies. In particular, a large total volume between the surface of the polymerization volume and the surface defined in the surface-level voxel may predict low dimensional accuracy of the final part in the region of this surface-level voxel. Therefore, the computer system can iteratively repeat the foregoing processes to select one or more interior voxels and to set exposure energies for these interior voxels in order to reduce the total volume between the surface defined in the surface-level voxel and the polymerization volume prescribed by these interior voxels in order to achieve greater dimensional accuracy of a final part.
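The two error modes of section 10.6 can be expressed as simple metrics. The sketch below is illustrative only; the surface objects are assumed to expose hypothetical slope, curvature, and enclosed_volume accessors, which are not part of the described method.

    def textural_error(target_surface, cured_surface):
        # Compare local slope and curvature of the cured boundary against
        # the target surface; larger deviations predict visible texture
        # differences on the final part.
        return (abs(target_surface.slope() - cured_surface.slope())
                + abs(target_surface.curvature() - cured_surface.curvature()))

    def dimensional_error(target_surface, cured_surface):
        # Total (absolute) volume enclosed between the two surfaces; a
        # larger enclosed volume predicts lower dimensional accuracy.
        return target_surface.enclosed_volume(cured_surface)

A tight-tolerance surface sector would weight dimensional_error more heavily, and a texture-annotated sector would weight textural_error, consistent with the preference described next.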
Thus, if a surface sector of the part model containing a surface-level voxel is annotated with a tight tolerance, the computer system can implement the foregoing process to iteratively refine selection of a set of interior voxels and their exposure energies in order to reduce the total volume between the surface defined in the surface-level voxel and the polymerization volume prescribed by these interior voxels. Similarly, if a surface sector of the part model containing a surface-level voxel is annotated with a looser tolerance and a prescribed texture, the computer system can implement the foregoing process to iteratively refine selection of a set of interior voxels and their exposure energies in order to preferentially reduce slope and profile differences between the surface defined in the surface-level voxel and the surface of the polymerization volume prescribed by these interior voxels. 11. Interior Model Volume Selection by Concave/Convex Surface+Slope In one implementation described above and shown in FIG. 4, the computer system: segments the part model into a three-dimensional array of model volumes (e.g., voxels) in Block S130; selects a surface-level model volume in the array; and characterizes a slope of a section of a surface of the part model contained in the surface-level model volume in Block S140. For example, the computer system can calculate: a minimum angle between a horizontal x-y plane through the surface-level model volume and a tangent ray located on the surface section; an average angle between the bottom plane of the surface-level model volume and the surface section at the intersection of the bottom plane of the surface-level model volume and the surface section; or an angle between the horizontal x-y plane through the surface-level model volume and a tangent ray located at a centroid of the surface section; etc. The computer system can then store this angle as the “slope” of the section of the surface of the part model contained in the surface-level model volume. In this implementation, the computer system can also characterize the surface section as either (predominantly) convex or concave. For example, the computer system can: implement methods and techniques described above to calculate a best-fit (spherical) radius of the surface section; and calculate a position of a sphere—of radius equal to the best-fit radius—that minimizes error between the surface of the sphere and the surface section. The computer system can then: characterize the surface section as convex if the origin of the sphere falls inside of the part model; and characterize the surface section as concave if the origin of the sphere falls outside of the part model. The computer system can then select a single interior model volume (or a small quantity of interior model volumes) on the same layer as the surface-level model volume for increased energy exposure—and therefore extended polymerization to approximate the surface section—if the surface section exhibits a steep slope (e.g., greater than 45°) and is convex. Alternatively, the computer system can select multiple interior model volumes (or a greater quantity of interior model volumes) on the same layer as the surface-level model volume for increased energy exposure if the surface section exhibits a steep slope and is concave.
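The slope and convexity characterization just described can be sketched as follows; the remaining shallow-slope cases continue below. This sketch is illustrative, not the claimed method: the unit normal of the surface section and the best-fit sphere center are assumed to be precomputed, and part_contains is a hypothetical inside/outside predicate over the part model.

    import math

    def classify_surface_section(normal, sphere_center, part_contains):
        # The slope of the surface section (angle between a tangent ray and
        # the horizontal x-y plane) equals the angle between the section's
        # unit normal and the vertical axis.
        nx, ny, nz = normal
        slope_deg = math.degrees(math.acos(min(1.0, abs(nz))))
        # Convexity test from above: the section is convex if the origin of
        # its best-fit sphere falls inside the part model.
        shape = "convex" if part_contains(sphere_center) else "concave"
        return slope_deg, shape

For a horizontal surface section (normal (0, 0, 1)), classify_surface_section returns a slope of 0°; for a vertical section, 90°, matching the 45° steepness breakpoint used above.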
Yet alternatively, the computer system can select a single interior model volume (or a small quantity of interior model volumes) on the succeeding model layer for increased energy exposure if the surface section exhibits a shallow slope (e.g., less than 45°) and is convex. Yet alternatively, the computer system can select multiple interior model volumes (or a greater quantity of interior model volumes) on the succeeding model layer for increased energy exposure if the surface section exhibits a shallow slope and is concave. Therefore, the computer system can select interior model volumes—including a single interior model volume or groups of interior model volumes on the same or subsequent layer as the surface-level model volume—based on the slope and contour of the surface section contained in the surface-level model volume. 11.1 Steep Convex Surface Contour In one implementation shown in FIG. 4, the computer system implements methods and techniques described above to characterize an upward-facing surface—contained in a surface-level model volume—as convex with an effective tangent inclined from a horizontal plane by greater than a threshold angle, such as: a threshold angle of 45° for a material model that predicts hemispherical polymerization; or a threshold angle of 60° for a material model that predicts semi-ellipsoidal polymerization with a slower rate of vertical polymerization than horizontal polymerization. Accordingly, the computer system selects a set of interior model volumes—in the same model layer as the surface-level model volume—proximal the surface-level model volume, such as: a first group of three interior model volumes nearest and immediately adjacent the surface-level model volume; and five interior model volumes offset from the surface-level model volume by the first group of interior model volumes. Then, for each interior model volume in this set of interior model volumes, the computer system: calculates an exposure energy predicted—by the material profile—to yield a three-dimensional polymerization geometry intersecting the upward-facing surface contained within the surface-level model volume when this energy is projected onto a corresponding volume of the material during a build; and calculates an error between this three-dimensional polymerization geometry and the upward-facing surface contained within the surface-level model volume. The computer system then selects a particular interior model volume—from the set of interior model volumes—associated with a lowest error in this set. Alternatively, in this implementation, the computer system can define combinations of interior model volumes in this set of interior model volumes in the same model layer as the surface-level volume, such as combinations of one, two, three, and four interior model volumes in this set of interior model volumes. Then, for each combination of interior model volumes, the computer system: defines a composite interior model volume containing the interior model volumes in the combination; and calculates a single exposure energy predicted to yield a three-dimensional polymerization geometry that approximates (e.g., exhibits a minimum error or offset from) the upward-facing surface contained within the surface-level model volume when this exposure energy is projected onto a volume of the material corresponding to this composite interior model volume during the build.
The computer system then selects a particular combination of interior model volumes—from this set of combinations—that predicts a three-dimensional polymerization geometry characterized by a lowest error (e.g., an offset) from the upward-facing surface contained in the surface-level model volume. The computer system then writes the exposure energy calculated for the particular combination to a set of pixels—corresponding to the interior model volumes in this combination—in the print image for this model layer. 11.2 Steep Concave Surface Contour In another implementation shown in FIG. 4, the computer system implements methods and techniques described above: to characterize an upward-facing surface—contained in a surface-level model volume—as concave with an effective tangent inclined from a horizontal plane by greater than the threshold angle; and to calculate a best-fit radius of the upward-facing surface. Accordingly, the computer system selects a set of interior model volumes—in the same model layer as the surface-level model volume—proximal the surface-level model volume, such as: a single interior model volume nearest the surface-level model volume if the best-fit radius of the upward-facing surface is very large; two interior model volumes nearest the surface-level model volume if the best-fit radius of the upward-facing surface is large; three interior model volumes nearest the surface-level model volume if the best-fit radius of the upward-facing surface is moderate; and four interior model volumes nearest the surface-level model volume if the best-fit radius of the upward-facing surface is small. For each interior model volume in this set, the computer system then calculates an exposure energy predicted to yield a first three-dimensional polymerization geometry that intersects the upward-facing surface contained within the surface-level model volume when this exposure energy is projected onto a volume of the material corresponding to the interior model volume during a build. For example, if the best-fit radius of the upward-facing surface contained in the surface-level model volume is large, the computer system can select two interior model volumes nearest the surface-level model volume. For the first interior model volume in this set, the computer system can: calculate a first exposure energy predicted to yield a first three-dimensional polymerization geometry intersecting the upward-facing surface contained within the surface-level model volume when this exposure energy is projected onto a first volume of the material corresponding to this first interior model volume during the build; and write this first exposure energy to a corresponding pixel in a print image for this model layer. Similarly, for the second interior model volume in this set, the computer system can: calculate a second exposure energy predicted to yield a second three-dimensional polymerization geometry intersecting the upward-facing surface contained within the surface-level model volume when this exposure energy is projected onto a second volume of the material corresponding to this second interior model volume during the build; and write this second exposure energy to a corresponding pixel in the print image for this model layer.
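The radius-based selection just described can be summarized in a small mapping; the composite-volume alternative continues after this sketch. The radius breakpoints below (expressed in voxel widths) are illustrative assumptions only, since the description characterizes the radii only qualitatively as very large, large, moderate, or small.

    def concave_voxel_count(best_fit_radius, thresholds=(50.0, 20.0, 8.0)):
        # Map the best-fit radius of the upward-facing surface to a number
        # of nearest interior model volumes: flatter (larger-radius)
        # surfaces need fewer augmented volumes than tighter curves.
        very_large, large, moderate = thresholds
        if best_fit_radius >= very_large:
            return 1
        if best_fit_radius >= large:
            return 2
        if best_fit_radius >= moderate:
            return 3
        return 4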
Alternatively, as shown in FIG. 4, the computer system can: define a composite interior model volume containing the first and second interior model volumes; calculate a single exposure energy predicted to yield a three-dimensional polymerization geometry that approximates (e.g., exhibits a minimum error or offset from) the upward-facing surface contained within the surface-level model volume when this exposure energy is projected onto a volume of the material corresponding to this composite interior model volume during the build; and write this exposure energy to pixels corresponding to the first and second interior model volumes in the print image for this model layer. In a similar implementation, the computer system can define combinations of interior model volumes in the same model layer as the surface-level volume, such as combinations of one, two, three, and four interior model volumes within one, two, and three voxel widths of the surface-level volume. Then, for each combination of interior model volumes, the computer system: defines a composite interior model volume containing the interior model volumes in the combination; and calculates a single exposure energy predicted to yield a three-dimensional polymerization geometry that approximates (e.g., exhibits a minimum error or offset from) the upward-facing surface contained within the surface-level model volume when this exposure energy is projected onto a volume of the material corresponding to this composite interior model volume during the build. The computer system then selects a particular combination of interior model volumes—from this set of combinations—that predicts a three-dimensional polymerization geometry characterized by a lowest error (e.g., an offset) from the upward-facing surface contained in the surface-level model volume. The computer system then writes the exposure energy calculated for the particular combination to a set of pixels—corresponding to the interior model volumes in this combination—in the print image for this model layer. 11.3 Shallow Convex Surface Contour In another implementation shown in FIG. 4, the computer system implements methods and techniques described above to characterize an upward-facing surface—contained in a surface-level model volume—as convex with an effective tangent inclined from a horizontal plane by less than the threshold angle. Accordingly, the computer system selects a set of interior model volumes—in a second, subsequent model layer below the surface-level model volume—proximal the surface-level model volume, such as: a first interior model volume immediately below the surface-level model volume; and eight interior model volumes in the second layer and encompassing the first interior model volume. Then, for each interior model volume in this set of interior model volumes, the computer system calculates: an exposure energy predicted—by the material profile—to yield a three-dimensional polymerization geometry intersecting the upward-facing surface contained within the surface-level model volume when this energy is projected onto a corresponding volume of the material during a build; and an error between this three-dimensional polymerization geometry and the upward-facing surface contained within the surface-level model volume. The computer system then selects a particular interior model volume—from the set of interior model volumes—associated with a lowest error in this set.
Alternatively, as shown in FIG. 4, the computer system can define combinations of interior model volumes in this set of interior model volumes in the second layer below the surface-level model volume, such as combinations of one, two, three, and four interior model volumes in this set of interior model volumes. Then, for each combination of interior model volumes, the computer system: defines a composite interior model volume containing the interior model volumes in the combination; and calculates a single exposure energy predicted to yield a three-dimensional polymerization geometry that approximates (e.g., exhibits a minimum error or offset from) the upward-facing surface contained within the surface-level model volume when this exposure energy is projected onto a volume of the material corresponding to this composite interior model volume during the build. The computer system then selects a particular combination of interior model volumes—from this set of combinations—that predicts a three-dimensional polymerization geometry characterized by a lowest error (e.g., an offset) from the upward-facing surface contained in the surface-level model volume. The computer system then writes the exposure energy calculated for the particular combination to a set of pixels—corresponding to the interior model volumes in this combination—in the print image for this model layer. In the foregoing implementations, the computer system can select interior model volumes contained only in the second model layer immediately below the first model layer of the surface-level model volume and implement the foregoing methods and techniques to calculate new exposure energies for a subset of these interior model volumes predicted to yield a three-dimensional polymerization volume that approximates the upward-facing surface contained in the surface-level model volume. Alternatively, the computer system can select interior model volumes in multiple layers below the surface-level model volume and then implement the foregoing methods and techniques to calculate new exposure energies for a subset of these interior model volumes. For example, the computer system can: characterize a slope of the upward-facing surface contained in the surface-level model volume; select a second model layer located below the surface-level model volume by a distance inversely proportional to the slope; select a set of interior model volumes from the second model layer; and then implement the foregoing methods and techniques to calculate new exposure energies for a subset of these interior model volumes in this second layer. Therefore, in this example, the computer system can: selectively increase a first exposure energy for a first interior model volume on the same layer as and adjacent a surface-level model volume if the upward-facing surface contained in the surface-level model volume is steep; selectively increase a second exposure energy for a second interior model volume on a second layer immediately below the surface-level model volume if the upward-facing surface contained in the surface-level model volume is moderately steep; and selectively increase a third exposure energy for a third interior model volume on a third layer two layers below the surface-level model volume if the upward-facing surface contained in the surface-level model volume is shallow.
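This slope-to-layer mapping can be sketched directly. The slope breakpoints below are illustrative assumptions (only the 45° steep/shallow threshold appears in the description), and layers are assumed to be indexed from the bottom of the part, so "below" means a smaller index.

    def select_source_layer(slope_deg, surface_layer_index):
        # Steeper upward-facing surfaces are approximated from interior
        # volumes on the same layer; shallower surfaces from volumes one or
        # two layers below, i.e., an offset inversely related to slope.
        if slope_deg > 45.0:
            return surface_layer_index          # steep: same layer
        if slope_deg > 20.0:
            return surface_layer_index - 1      # moderately steep: one layer below
        return surface_layer_index - 2          # shallow: two layers below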
11.4 Shallow Concave Surface Contour In another implementation shown in FIG. 4, the computer system implements methods and techniques described above to characterize an upward-facing surface—contained in a surface-level model volume—as concave with an effective tangent inclined from a horizontal plane by less than the threshold angle. Accordingly, the computer system selects a set of interior model volumes—in a second, subsequent model layer below the surface-level model volume—proximal the surface-level model volume, such as: a first interior model volume immediately below the surface-level model volume; and eight interior model volumes in the second layer and encompassing the first interior model volume. Then, for each interior model volume in this set of interior model volumes, the computer system: calculates an exposure energy predicted—by the material profile—to yield a three-dimensional polymerization geometry intersecting the upward-facing surface contained within the surface-level model volume when this energy is projected onto a corresponding volume of the material during a build; and calculates an error between this three-dimensional polymerization geometry and the upward-facing surface contained within the surface-level model volume. The computer system then selects a particular interior model volume—from the set of interior model volumes—associated with a lowest error in this set. Alternatively, as shown in FIG. 4, the computer system can define combinations of interior model volumes in this set of interior model volumes in the second layer below the surface-level model volume, such as combinations of one, two, three, and four interior model volumes in this set of interior model volumes. Then, for each combination of interior model volumes, the computer system: defines a composite interior model volume containing the interior model volumes in the combination; and calculates a single exposure energy predicted to yield a three-dimensional polymerization geometry that approximates (e.g., exhibits a minimum error or offset from) the upward-facing surface contained within the surface-level model volume when this exposure energy is projected onto a volume of the material corresponding to this composite interior model volume during the build. The computer system then selects a particular combination of interior model volumes—from this set of combinations—that predicts a three-dimensional polymerization geometry characterized by a lowest error (e.g., an offset) from the upward-facing surface contained in the surface-level model volume. The computer system then writes the exposure energy calculated for the particular combination to a set of pixels—corresponding to the interior model volumes in this combination—in the print image for this model layer. In another implementation, if a best-fit radius of the upward-facing surface contained in the surface-level model volume is large, the computer system can: select a single interior model volume below the surface-level model volume and nearest a centroid of the upward-facing surface contained in the surface-level model volume; and implement the foregoing methods and techniques to calculate an exposure energy predicted—by the material profile—to yield a three-dimensional polymerization geometry that intersects the upward-facing surface contained within the surface-level model volume when this energy is projected onto a corresponding volume of the material during a build.
Alternatively, if the best-fit radius of the upward-facing surface contained in the surface-level model volume is moderate, the computer system can select two interior model volumes below the surface-level model volume and nearest the centroid of the upward-facing surface contained in the surface-level model volume. For the first interior model volume in this set, the computer system can: calculate a first exposure energy predicted to yield a first three-dimensional polymerization geometry intersecting the upward-facing surface contained within the surface-level model volume when this exposure energy is projected onto a first volume of the material corresponding to this first interior model volume during the build; and write this first exposure energy to a corresponding pixel in a print image for this model layer. Similarly, for the second interior model volume in this set, the computer system can: calculate a second exposure energy predicted to yield a second three-dimensional polymerization geometry intersecting the upward-facing surface contained within the surface-level model volume when this exposure energy is projected onto a second volume of the material corresponding to this second interior model volume during the build; and write this second exposure energy to a corresponding pixel in the print image for this model layer. Yet alternatively, if the best-fit radius of the upward-facing surface contained in the surface-level model volume is small, the computer system can: select three interior model volumes below the surface-level model volume and nearest the centroid of the upward-facing surface contained in the surface-level model volume; and implement the foregoing methods and techniques to calculate exposure energies for these interior model volumes. 12. Non-Spherical Polymerization Model In the foregoing implementations, the computer system can implement the material model to predict hemispherical polymerization: that extends upwardly and outwardly from a center of a volume of the material when irradiated with a target exposure energy by the additive manufacturing system; and that is characterized by a spherical radius proportional to this target exposure energy. Alternatively, the computer system can implement methods and techniques similar to those described above based on a material model that predicts a three-dimensional squircle polymerization geometry, such as a smoothed three-dimensional composition (e.g., union) of many polymerization hemispheres, each centered on a distinct subarea on the bottom of a voxel and characterized by a spherical radius based on the working curve for the material, a total exposure energy of the voxel, and the proportion of the total exposure energy of the voxel that intersects the distinct subarea when irradiated. (Alternatively, the computer system can derive this material model based on coaxial and perpendicular working curves for the material and a known geometry of a voxel.) For example, the computer system can implement the material model that predicts hemispherical polymerization or three-dimensional squircle polymerization geometry based on selection of a material with high internal reflection and/or high polymerization sensitivity where irradiation of the bottom of a voxel of the material in a vertical direction results in both vertical polymerization through the height of the voxel and lateral polymerization outwardly from the voxel at similar rates.
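The working-curve relationship that underlies the hemispherical model can be sketched as follows, assuming the classic stereolithography working curve in which cure depth grows logarithmically with exposure energy. Treating the cure depth as the radius of the polymerization hemisphere is the modeling assumption named above, not a prescribed formula; Dp (penetration depth) and Ec (critical exposure energy) are material constants.

    import math

    def cure_depth(E, Dp, Ec):
        # Working curve: cure depth = Dp * ln(E / Ec) for E above the
        # critical energy; no polymerization below it.
        return Dp * math.log(E / Ec) if E > Ec else 0.0

    def exposure_for_radius(radius, Dp, Ec):
        # Invert the working curve: the exposure energy needed for a
        # polymerization hemisphere of the requested radius.
        return Ec * math.exp(radius / Dp)

Under this sketch, calculating a target exposure energy for an interior voxel reduces to calling exposure_for_radius with the hemisphere radius selected by the error-minimization steps above.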
12.1 Non-Spherical Polymerization Model: Pyramidal Polymerization Volume However, another material with lower internal reflection, exhibiting less attenuation of light passing therethrough, and/or exhibiting lower polymerization sensitivity may exhibit a greater rate of polymerization parallel to the direction of irradiation (i.e., normal to layers of a part printed with the material in the system, parallel to light rays incident on the material from the projection system) than polymerization moving laterally outward from an irradiated area of the material. For example, when a voxel of this material—in a layer of a part printed in the system—is irradiated, a volume of the material may polymerize around this voxel according to a geometry: defining a base that approximates the geometry of the irradiated bottom of the voxel; tapering upwardly and narrowing from the base at an angle inversely proportional to the internal reflection of the material; and scaled according to the exposure energy. Therefore, a material exhibiting less internal reflection may yield a taller, steeper polymerization volume than a material with greater internal reflection given similar exposure energy over a similar voxel area. Accordingly, the computer system can retrieve a material model that predicts a pyramidal polymerization geometry, wherein the height of a polymerization pyramid is a function of exposure energy and wherein steepness of the polymerization pyramid is a function of internal reflection and/or polymerization sensitivity of the material. (Alternatively, the computer system can derive this material model based on coaxial and perpendicular working curves for the material and a known geometry of a voxel.) 12.2 Non-Spherical Polymerization Model: Conical Polymerization Volume In another example, the computer system can implement a material model that predicts a polymerization volume prescribed by a smoothed three-dimensional composition (e.g., union, a sum forming a convex hull) of many polymerization cones, each: centered on a distinct subarea on the bottom of a voxel; characterized by a radius at the bottom of the voxel based on a perpendicular working curve for the material (or lateral polymerization width as a function of axial exposure energy), a total exposure energy of the voxel, and the proportion of the total exposure energy of the voxel that intersects the distinct subarea when irradiated; and characterized by a total height above the bottom of the voxel based on a coaxial working curve for the material (or vertical polymerization depth as a function of axial exposure energy), a total exposure energy of the voxel, and the proportion of the total exposure energy of the voxel that intersects the distinct subarea when irradiated. (Alternatively, the computer system can derive this material model based on coaxial and perpendicular working curves for the material and a known geometry of a voxel.) 12.3 Non-Spherical Polymerization Model: Semi-Ellipsoidal Polymerization Volume Yet alternatively and as shown in FIG. 3B, the computer system can implement methods and techniques similar to those described above based on a material model that predicts a three-dimensional semi-ellipsoidal polymerization geometry with a base of the semi-ellipsoidal polymerization geometry coincident with a bottom horizontal plane of a layer of a part.
For example, the material profile can define: horizontal polymerization at a first rate proportional to exposure energy within a volume of the material; and vertical polymerization at a second rate—less than the first rate—proportional to exposure energy within the volume of the material. 12.4 Non-Spherical Polymerization Model: Interior Voxel Selection and Exposure Energy In these variations, the computer system can then implement methods and techniques described above to: isolate a surface-level voxel that intersects a surface of the virtual volume defined in the part model; and identify a set of interior voxels—in the same or lower layer of the part—for which a target exposure energy is predicted to a) (fully) polymerize material within these interior voxels and b) polymerize material up to an edge that approximates the surface contained in the surface-level voxel based on the material model (or based on coaxial and perpendicular working curves for the material and a known geometry of the surface-level voxel). 13. Print Images+Print File Blocks S154 and S160 of the method S100 recite: populating a first print image with the first exposure energy in a first image area corresponding to the first model volume in the first model layer in Block S154; and storing the first print image in a print file for the part in Block S160. Generally, in Blocks S154 and S160, the computer system can: generate a sequence of print images containing exposure energies (e.g., irradiation intensity and exposure timing data) for interior voxels and select surface-level voxels; and aggregate these print images into a print file for the part, as shown in FIGS. 1, 2, and 4. In one implementation, upon ingest of the part model, the computer system: segments the part model into model layers and discrete volumes in Blocks S130 and S132; and isolates a subset of interior model volumes—across all model layers of the part model—that are fully contained within the part model. Based on the material profile, the computer system calculates a nominal exposure energy predicted to polymerize a volume of the material contained in an interior model volume, to achieve a minimal green strength of material within the interior model volume, and to fully (or sufficiently) bond material within the interior model volume to polymerized material in adjacent interior model volumes on the same and preceding layers of the part. Then, for each model layer in the part model, the computer system: implements methods and techniques described above to update energy exposures—greater than the nominal energy exposure—for a target subset of interior model volumes adjacent a target subset of surface-level model volumes containing upward-facing surfaces in the part model; initializes a print image for the model layer; populates image areas (e.g., pixels) corresponding to the target subset of interior model volumes with updated exposure energies of these interior model volumes; populates the print image with null exposure energies in image areas corresponding to the target subset of surface-level model volumes containing upward-facing surfaces; and writes the nominal energy exposure to image areas corresponding to each other interior model volume in the model layer. The computer system then aggregates this print image into a print file executable by the additive manufacturing system to manufacture an instance of the part.
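The per-layer print-image population described in section 13 can be sketched as a simple array fill. This is a minimal sketch, not the claimed method: pixel coordinates and the per-pixel energy values are assumed to come from the selection steps above, and the NumPy array stands in for an image of exposure energies.

    import numpy as np

    def build_print_image(layer_shape, interior_pixels, surface_pixels,
                          updated_energies, nominal_energy):
        # Nominal energy for ordinary interior volumes, updated (higher)
        # energies for the target interior volumes near upward-facing
        # surfaces, and null (zero) energy for the surface-level volumes.
        image = np.zeros(layer_shape, dtype=float)
        for px in interior_pixels:
            image[px] = nominal_energy
        for px, energy in updated_energies.items():
            image[px] = energy
        for px in surface_pixels:
            image[px] = 0.0
        return image

The print file can then be represented as the ordered sequence of these per-layer images, consistent with the aggregation step above.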
14. Variable Model Volumes In one variation shown in FIG. 2, the computer system implements the foregoing methods and techniques to calculate an energy exposure for a model volume—adjacent or intersecting an upward-facing surface in the part model—corresponding to a cluster of pixels in a print image rather than for a voxel corresponding to an individual pixel in this print image. More specifically, in this variation, the computer system can implement the foregoing methods and techniques to calculate energy exposures for larger model volumes adjacent or intersecting upward-facing surfaces within the part model. In one example, the computer system: projects fields of view of an array of pixels, in the projection system of the additive manufacturing system, onto a first model layer of the part model in Block S132; selects a superficial model volume containing fields of view of a first cluster of (e.g., two, ten) pixels—in the array of pixels of the projection system projected onto the first model layer—intersecting an upward-facing surface of the part model in Block S140; selects an interior model volume bounded by fields of view of a second cluster of (e.g., two, twenty) pixels adjacent the superficial model volume in Block S150; and implements methods and techniques described above to calculate an exposure energy that, when cooperatively projected onto a layer of the material by the second cluster of pixels of the projection system during a build, polymerizes a volume of the material that approximates the upward-facing surface. The computer system can then write this energy exposure to pixels—corresponding to the second cluster of pixels of the projection system—in a print image for this model layer of the part model. 15. Print Process The computer system can then serve the print file to the additive manufacturing system for execution of the build to form the part. In particular, the computer system can upload the print file to the additive manufacturing system, which can then implement methods and techniques described in U.S. patent application Ser. No. 16/672,410 to selectively irradiate sequential layers of the material—arranged across a build window—according to print images contained in the print file. For example, the additive manufacturing system can implement methods and techniques described in U.S. patent application Ser. No. 16/672,410 to: load a first layer of the material into a first interstitial volume over a build window (e.g., between the build window and a previous layer of the part or between the build window and a build platform of the additive manufacturing system); load a first print image corresponding to a first model layer of the part model on the projection system; and project the first print image onto the first volume of the material. The additive manufacturing system can thus: selectively expose each subvolume of the material—corresponding to an interior region of the first model layer in the part model—to a nominal energy predicted to achieve a minimum green strength within the subvolume and minimum cross linking to adjacent polymerized subvolumes in the first layer of the material; and selectively expose perimeter subvolumes of the material—corresponding to interior model volumes adjacent upward-facing surfaces in the first layer of the part model—to increased energies predicted to yield three-dimensional polymerization geometries that approximate contours of these upward-facing surfaces. 16.
Temperature-Based Print Process In one variation, the additive manufacturing system includes a thermal sensor (e.g., an infrared camera) configured to track temperatures across the build window during a build cycle. In this variation, the computer system can access a material profile that links material temperature or temperature gradient during and/or after irradiation to polymerization or polymerization gradient. Accordingly, in this variation, the computer system can implement methods and techniques similar to those described above: to set target temperatures within surface-level and interior voxels—rather than target radiation intensities of these voxels—predicted to yield partial polymerization within surface-level voxels that approximates the surfaces defined in these surface-level voxels; and to store these target temperatures in target temperature images of each layer of the part model. The additive manufacturing system then: tracks temperatures across layers of a part during a build cycle via the thermal sensor; and implements closed-loop controls to selectively irradiate each layer of the part in order to minimize differences between actual and target temperatures across these layers of the part during the build cycle. The additive manufacturing systems and methods described herein can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with the application, applet, host, server, network, website, communication service, communication interface, hardware/firmware/software elements of a user computer or mobile device, wristband, smartphone, or any suitable combination thereof. Other systems and methods of the embodiment can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with apparatuses and networks of the type described above. The computer-readable medium can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component can be a processor, but any suitable dedicated hardware device can (alternatively or additionally) execute the instructions. As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the embodiments of the invention without departing from the scope of this invention as defined in the following claims.
While the disclosure is described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the disclosure is not limited to embodiments or drawings described. It should be understood that the drawings and detailed description hereto are not intended to limit the disclosure to the particular form disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. Any headings used herein are for organizational purposes only and are not meant to limit the scope of the description or the claims. As used herein, the word “may” is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to. SUMMARY Described herein are systems, methods, mechanisms and techniques for implementing Comprehensive Contention-Based Thread Allocation and Placement. Comprehensive Contention-Based Thread Allocation and Placement, as described herein, may include scheduling multi-threaded workloads in which a configurable number of threads run concurrently. Comprehensive Contention-Based Thread Allocation and Placement may be used, in some embodiments, to optimize performance of a given workload, such as, for instance, identifying whether multiple sockets should be used, and whether a given workload may benefit from using multiple threads per core. In addition, the techniques described herein may be used to identify opportunities for reducing resource consumption where additional resources are not matched by additional performance, such as, for instance, limiting a workload to a small number of cores when its scaling is poor, according to various embodiments. A system configured to implement Comprehensive Contention-Based Thread Allocation and Placement may predict the performance of parallel workloads on shared-memory multi-core/multi-socket machines and may select how many threads to use when running a given workload. Additionally, systems described herein may determine where to place these threads within the machine (e.g., on which CPUs and/or sockets to execute threads). For example, some workloads may benefit from having their threads kept within a single CPU in a multi-core/multi-socket machine whereas other workloads may benefit from being spread widely across multiple sockets. Additionally, some workloads may benefit from using multiple hardware threads in each core, whereas others may not. Thus, a system implementing Comprehensive Contention-Based Thread Allocation and Placement may determine where to place threads for a given workload on a given system, according to various example embodiments. A system configured to implement Comprehensive Contention-Based Thread Allocation and Placement may generate a description of a workload from multiple profiling runs and may then combine this workload description with a description of the machine's hardware to model the workload's performance over thousands of alternative thread placements, according to one embodiment. The systems, methods, mechanisms and/or techniques described herein may include modeling the complete machine as (i.e., in terms of) resources with different bandwidths between them. For instance, the links between caches and memory may have bandwidths reflecting possible transfer rates.
Similarly, CPU cores may have pipelines modeled by rates at which instructions can be fetched into the processor and the rates at which different kinds of instructions can be completed, according to various embodiments. The systems, methods, mechanisms and/or techniques described herein may also include characterizing a workload's performance, such as by utilizing one or more profiling techniques, according to various embodiments. For example, a system configured to implement Comprehensive Contention-Based Thread Allocation and Placement may determine the use of different functional units by observing behavior of test programs co-scheduled on the same cores, and/or quantifying the impact of inter-socket and inter-core communication, according to some embodiments. A system configured to implement Comprehensive Contention-Based Thread Allocation and Placement may generate a machine description based on executing stress applications and on machine performance counters that monitor various performance indicators during execution of a synthetic workload. Such a system may also generate a workload description based on profiling sessions and the performance counters. Additionally, behavior of a workload with a proposed thread placement may be modeled based on the machine description and workload description, and a prediction of the workload's resource demands and/or performance may be generated. DETAILED DESCRIPTION OF EMBODIMENTS Comprehensive Contention-Based Thread Allocation and Placement is described herein mainly using examples from in-memory graph analytics and in-memory parallel join operations (database kernels), along with traditional parallel computing workloads such as those using the OpenMP runtime system or other parallel runtime systems. For brevity, these may be referred to herein as “analytics workloads”. In some embodiments, performance of analytics workloads may be determined primarily by the processor(s) and the memory system. For instance, data may remain in memory and there may be little use of I/O devices or system calls into the operating system. Furthermore, the number of active threads may be selected to optimize performance (in contrast, say, to a general-purpose workload where a program may create distinct threads for distinct tasks). For example, manual configuration may frequently be used, such as to select the number of threads to create and/or to determine how to place them in the machine. For instance, in one example embodiment using OpenMP, there are settings to select whether threads are placed on nearby cores or distributed across many cores. Similarly, Solaris provides “thread affinity groups” to allow a program to indicate these kinds of preference with a high degree of flexibility. These features may allow a workload to control its scheduling, but may not automate the selection of a good scheduling policy. Additionally, workloads may be selected which will perform either well or poorly when sharing these specific resources within a machine. These techniques may be useful when deciding when to co-locate workloads; however, the techniques described herein may apply to the threads within a single workload (such as threads handling part of the same parallel query within a server system), according to some embodiments.
As noted above, a system configured to implement Comprehensive Contention-Based Thread Allocation and Placement, which may be referred to herein as “the system”, may generate a description of a workload from multiple (e.g., 6) profiling runs, and may combine this workload description with a description of the machine's hardware to model the workload's performance over thousands of alternative thread placements, according to one embodiment. The techniques described herein may be considered “comprehensive” in that they may account for contention of multiple different resources within the machine, such as both processor functional units and memory channels. For example, the point of contention for a single workload may shift between resources as the degree of parallelism and thread placement changes. The techniques described herein may account for these changes and may provide a close correspondence between predicted performance and actual performance. INTRODUCTION The techniques described herein may, in some embodiments, implement a system for modeling the performance characteristics and resource demands of parallel in-memory workloads. For instance, based on multiple (e.g., 6 in some embodiments) profiling runs, a workload may be modeled and the workload's performance may be quantitatively predicted across different numbers of threads and different placements of those threads within a machine. The results of the modelling and predicting may be used to predict (and/or determine) the best thread allocation for a given workload and/or resources needed for a workload to meet a specified performance target, according to some embodiments. FIG. 1 is a logical block diagram illustrating a system configured to implement Comprehensive Contention-Based Thread Allocation and Placement, as described herein. The illustrated computer system may model the performance of in-memory parallel workloads with differing thread counts and placements, according to one embodiment. As illustrated in FIG. 1, a computer system 100, which may in some embodiments be a multi-core system or a multi-socket system, may include an in-memory parallel workload performance modeler 110 configured to model the performance of in-memory parallel workloads with differing thread counts and placements. The in-memory parallel workload performance modeler 110 may include, and/or utilize, various components when modeling workload performance, such as machine description generator 120, workload description generator 130, and/or performance predictor 140, according to one embodiment. While illustrated in FIG. 1 as being a part of, or included within, in-memory parallel workload performance modeler 110, in some embodiments, machine description generator 120, workload description generator 130, and/or performance predictor 140 may be separate from, distinct from, and/or external to in-memory parallel workload performance modeler 110. In some embodiments, computer system 100 may be configured to implement a method for Comprehensive Contention-Based Thread Allocation and Placement including, for example, generating a machine description for a system (e.g., a multi-core or multi-socket system) based on executing one or more stress applications on the system and values of one or more machine performance counters configured to monitor one or more performance indicators during execution of a synthetic workload on the system.
Such a method may also include generating a workload description for the system based on results from executing multiple profiling sessions on the system and the values of the one or more machine performance counters. Additionally, computer system 100 may be configured to model behavior of a workload with a proposed thread placement based on the machine description and the workload description. Based on results of modeling the workload behavior, computer system 100 may generate a prediction of the workload's resource demands and performance for the system. FIG. 2 illustrates, according to one embodiment, an example set of results collected on a molecular dynamics (MD) simulation running on a 2-socket Intel Haswell system. FIG. 2 illustrates example predicted performance compared with measured performance for MD. The x-axis shows different thread placements with placements using the same number of threads being grouped together. The y-axis shows the performance of different thread allocations, exploring different numbers of threads and different placements of those threads on the h/w contexts in the machine. The dashed points show the measured performance of the workload when testing all of these allocations. The dotted points show the performance predicted from 6 profiling runs, according to this example embodiment. As noted above, computer system 100 may generate a machine description, generate a description of a workload, and from these model the performance of a given thread placement. FIG. 3 is a flowchart illustrating one embodiment of a method for implementing Comprehensive Contention-Based Thread Allocation and Placement as described herein. In some embodiments, computer system 100 and/or machine description generator 120 may include and/or rely on one or more stress applications, such as one or more test applications that stress the use of different resources in the machines. For instance, as illustrated by block 310, machine description generator 120 may generate a machine description based, at least in part, on results of executing one or more stress applications and/or the values of one or more performance counters. For example, in some embodiments machine description generator 120 may be, or may be considered, a collection of stress applications (e.g., one or more test applications that stress the use of different resources in the machines), that in conjunction with one or more machine performance counters may determine the structure and properties of the machine they are running on. The results of these queries may be combined into a machine description which may be used by other components. Additionally, workload description generator 130 may generate a workload description based, at least in part, on results from executing multiple profiling sessions and on the performance counter values, as shown in block 320. For instance, workload description generator 130 may execute a workload in multiple (e.g., 6 in some embodiments) profiling experiments during which performance counters may be monitored and the workload may be collocated with stress applications. Thereafter, a workload description modelling the particular workload may be generated. Furthermore, performance predictor 140 may model behavior of a workload with a given thread placement based, at least in part, on the machine description, the workload description, and a proposed thread placement, as in block 330.
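Taken together, blocks 310 through 340 suggest a simple selection loop over candidate placements. The Python sketch below is illustrative only and is not the claimed method; `predict` stands in for the modeling and prediction steps of blocks 330 and 340, and the assumption that larger predicted values indicate better performance is ours.

    def select_placement(machine_desc, workload_desc, candidate_placements, predict):
        # Model each candidate thread placement (block 330) and keep the
        # placement with the best predicted performance (block 340).
        return max(candidate_placements,
                   key=lambda placement: predict(machine_desc, workload_desc, placement))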
For example, performance predictor 140 may utilize the previously generated machine description, workload description, and a proposed thread placement and model the behavior of the workload with the given thread placement. The performance predictor 140 may then proceed to generate a prediction of the workload's performance and resource demands, as in block 340. In some embodiments, an initial assumption (possibly a naïve assumption) for a given thread placement may be used in which each thread in the parallel workload will impose the same load on the machine as a single thread from a profiling run. This load may, in some embodiments, then be iteratively refined by (i) identifying any bottleneck resources for each thread, scaling back the thread's performance accordingly, and (ii) predicting any overheads incurred by communication and synchronization, scaling back the thread's execution rate to account for these costs. Note that in some embodiments, there may be complex interactions between the demands for different resources. The load may be iteratively refined until a fixed point is reached, according to some embodiments. The techniques described herein may also be utilized to predict performance of parallel workloads on shared-memory multi-core and/or multi-socket machines. A system configured to implement the techniques described herein may identify a set of hardware and software assumptions which permit performance predictions to be made without requiring detailed models of how the cache is implemented, or how threads will compete within it, as opposed to generating miss-rate curves, according to some embodiments. In addition, the techniques described herein may include techniques for predicting performance based on iteratively predicting the slowdowns that a workload will experience due to contention-based and/or synchronization-based overheads, according to some embodiments. Overview FIG. 4 is a logical block diagram illustrating various components that may be included as part of a system configured to implement Comprehensive Contention-Based Thread Allocation and Placement as well as data flow and/or dependencies that link them, according to one embodiment. For instance, machine description generator 120 may be, or include, a collection of stress applications 440. Machine description generator 120 may utilize the stress applications in conjunction with machine performance counters to determine structure and properties of a machine on which they are executing. The results of these queries may be combined into a machine description 400, which may be used by other components. Machine descriptions may be created once for each machine, and may be independent of the workload being scheduled, according to various embodiments. Thus, a machine description 400 may be constructed through a combination of information obtained from the operating system (OS), and measurements taken from the performance of synthetic applications, such as stress applications 440. Stress applications 440 may be configured (e.g., either individually or collectively) to stress particular resources of the system. In some embodiments, the OS may provide various types of information, such as the number of processor sockets, cores per socket and hardware threads per core as well as the structure of links between each level in the cache hierarchy, the topology of the interconnect between the sockets, and/or locations of links to main memory.
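A machine description of this kind might be assembled as a simple record combining OS-reported topology with the measured bandwidths and core rates. The field names below are illustrative assumptions, not a prescribed schema; a real probe would read sockets, cores per socket, and hardware threads per core from the OS (e.g., /proc/cpuinfo or a NUMA library) and take the bandwidth figures from the stress applications.

    def build_machine_description(os_topology, measured):
        # Combine OS-reported structure with measurements from the stress
        # applications into a single description usable by the predictor.
        return {
            "sockets": os_topology["sockets"],
            "cores_per_socket": os_topology["cores_per_socket"],
            "threads_per_core": os_topology["threads_per_core"],
            # Bandwidths of cache, memory, and interconnect links.
            "link_bandwidths": measured["link_bandwidths"],
            # Instruction fetch and per-instruction-class completion rates.
            "core_rates": measured["core_rates"],
        }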
The OS-provided information may then be combined with performance measurements from the stress applications, such as measurements describing the bandwidth of different links in the memory system as well as the performance of the cores, to generate the machine description 400, according to some embodiments. Additionally, in some embodiments, rather than being measured (e.g., by the one or more stress applications), the performance measurements may be provided by the OS (e.g., along with the topology of the machine) or they may be configured manually for the machine. Workload description generator 130 may execute a workload in multiple (e.g., 6 in some embodiments) profiling experiments. During these profiling experiments, performance counters may be monitored and the workload may be collocated with stress applications. A workload description 410 modelling the particular workload may then be generated. In some embodiments, the information required to model a given workload may be collected and encoded in workload description 410. Workload description 410 may be specific to a given workload on a given machine and thus may be regenerated when moving a workload to different hardware. Performance predictor 140 may take machine description 400, workload description 410, and a proposed thread placement 420 and may use them to model the behavior of the workload with the given thread placement, and provide a performance prediction 430 of the workload's resource demands and performance. In some embodiments, performance prediction 430 may be constructed from a combination of an anticipated speed-up assuming perfect scaling of parallel sections of the workload and a slowdown reflecting the impact of resource contention and synchronization. Performance predictor 140 may proceed iteratively until reaching a stable prediction. In some embodiments, the aspects of a modeled workload may be split into multiple groups, including, but not limited to, resource demands, parallel fraction, inter-socket latency, load balancing factors, core burstiness, and/or thread utilization, as summarized below. Resource Demands. The hardware resources expected to be consumed by threads may be modeled. These resources may include bandwidths at each level of the memory hierarchy as well as the compute resources within a core. Parallel Fraction. How the workload is expected to scale with a thread count of n in the absence of other constraints may also be modeled. This scaling may, in some embodiments, be provided by Amdahl's law, based on the notion that some fraction p of the execution is parallel and the rest is serial:

Speedup = 1 / ((1 - p) + p/n)

Inter-Socket Latency. Bandwidth consumed by inter-socket communication may be recorded as part of the resource demands, but the latency introduced by inter-socket communication may also be modeled. When threads access shared memory locations, the performance may depend on the relative locations of the threads, on whether atomic operations are used, and on whether there is any sharing and, if so, what kind (such as false or genuine sharing). This variation may make it unfeasible, in some embodiments, to model the detailed interactions at the hardware level. Instead, these effects may be measured in aggregate for a workload by determining the workload's sensitivity to having its threads placed on different sockets. This may capture both how sensitive the workload is to such costs and the actual latencies introduced by the hardware, according to some embodiments.
For instance, if threads communicate rarely, the measured values may remain low even if the hardware introduces high latencies between sockets, according to some embodiments. Load Balancing Factor. In cases where threads are not placed symmetrically, it may be important to determine the effect on the overall speed of a workload if some threads slow down more than others. For example, some workloads may use static work distribution, in which case a slow thread may become a straggler and delay completion of the entire workload. In such a case each thread may perform an equal amount of work, but the time individual threads spend performing the work may differ. Other workloads may use work stealing to distribute work dynamically between threads, possibly allowing any slowness by one thread to be compensated for by other threads picking up its work. In this case the overall performance of the workload may be determined by the aggregate throughput of all of its threads, as the threads may be active for the same amount of time but some threads may be making progress more quickly than others. In practice, workloads may be somewhere between these points. In some embodiments, this may be expressed by a load balancing factor indicating where a workload lies between these extremes. Core Burstiness. Core burstiness may quantify the extent to which the workload's demands for resources in a core are spread over time (e.g., from being spread evenly over time to occurring in synchronized bursts), according to some embodiments. Synchronized bursts may coincide when threads are collocated, possibly producing additional resource contention. Thus, it may be misleading to rely on simple average demands captured over time, as a low average may reflect a steady demand which may be accommodated from multiple threads without penalty, or it could reflect coinciding peaks and troughs. Thread Utilization. If applications fail to scale perfectly (such as due to sequential sections, or waiting on slower threads and communication), then the resource demands for threads may be reduced accordingly, according to some embodiments. Likewise, if a thread is waiting on resources (or waiting on other threads), some latency may be hidden in the time lost waiting. Thus, in some embodiments, a thread utilization factor f usable to scale the requirements may be introduced to accommodate this. A thread utilization factor may be calculated for each thread and may be recomputed at each step during the model's execution, according to some embodiments.
Machine Description
According to various embodiments, various pieces of information may be collected and encoded into a machine description modelling the structure and/or performance of a machine. In some embodiments, machine descriptions may be created once for each machine and may be independent of the particular workload being scheduled. A machine description, such as machine description 400, may be constructed through a combination of information obtained from the operating system (OS) as well as measurements taken from the performance of synthetic applications, such as stress applications 440, which may stress particular resources. For example, in one embodiment the OS may provide various types of information such as: the number of processor sockets, the number of cores per socket, and/or the number of hardware threads per core.
In some embodiments, a machine description may include information describing the structure of links between levels in the cache hierarchy, the topology of the interconnect between the sockets, and/or the locations of the links to main memory. For example, a machine description may include, and/or be represented as, a graph of the hardware components of the machine and the relationships between them. Additionally, this information may be combined with performance measurements describing bandwidth between and/or among the components. For instance, in some embodiments, a machine description may include information regarding the performance of the different links in a memory system and/or the performance of the cores. Thus, a machine description may include information identifying (and/or describing) multiple system components, information identifying (and/or describing) relationships between (and/or among) those components, and/or information identifying (and/or describing) performance (absolute or relative) along communication links and/or interconnects between the components. FIG. 5 illustrates an example machine description 400 for a system comprising two dual-core processors and no caches, according to one embodiment. While FIG. 5 illustrates a machine description graphically, the information illustrated in FIG. 5 may be represented in any of various manners, according to different embodiments. A machine description 400 may indicate the bandwidth 510 on the memory links (e.g., 100), the bandwidth 520 on an interconnect between sockets 530, along with the maximum instruction throughput 500 per core, according to some embodiments. For instance, according to the example embodiment illustrated in FIG. 5, machine description 400 indicates a memory link bandwidth of 100, an interconnect bandwidth of 50, and a maximum instruction throughput per core of 10. Note that for brevity units are omitted. In general, since consistent units may be used when modeling a machine and workload, the exact scale may not be relevant, according to some embodiments.
Measuring Link Bandwidth
Starting from a machine topology, one or more stress applications 440 may be executed, such as to determine the maximum bandwidth achieved on links in the memory system (e.g., between levels in the cache hierarchy or on interconnects between sockets). In some embodiments, results obtained from workloads running on the machine itself may be used for some or all of these measurements, rather than numbers obtained from other sources, such as from data sheets. This empirical approach may allow the same measurement techniques to be used for profiling the machine and for profiling the performance of a workload, according to some embodiments. In some embodiments, the implementation of the stress applications may be optimized to increase the consumption of the resource being stressed. For example, in one embodiment stress applications 440 may allocate an array of a parameterizable size, accessed linearly with a single value read and/or written per cache line, with the accesses in an unwound loop with constant arguments to allow for effective prefetching and branch prediction. When multiple threads are used, each thread may have a unique set of cache lines that it will access. In some embodiments, the size of the array may be chosen to almost fill the component at the far end of the link, without spilling into the next level. For example, according to one embodiment an array allocated in main memory may be at least 100 times the size of the last-level cache.
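A minimal sketch of such a bandwidth stress kernel is shown below. It is illustrative only: a production stress application would be written much closer to the hardware (e.g., in C with explicit unrolling, where loop overhead does not dominate as it does in Python), and the array size and cache-line stride here are assumptions rather than values taken from this disclosure.

```python
# Illustrative bandwidth stress kernel (assumed parameters, not from the
# disclosure): touch one value per 64-byte cache line, linearly, over an
# array sized to overflow the target cache level, and report bandwidth.
import array
import time

def stress_bandwidth(array_bytes=64 * 1024 * 1024, line_bytes=64, passes=3):
    stride = line_bytes // 8                      # one 8-byte slot per line
    data = array.array("q", [0]) * (array_bytes // 8)
    total = 0
    start = time.perf_counter()
    for _ in range(passes):
        for i in range(0, len(data), stride):     # one read per cache line
            total += data[i]
    elapsed = time.perf_counter() - start
    touched = passes * (len(data) // stride) * line_bytes
    return touched / elapsed, total               # bytes per second

if __name__ == "__main__":
    bw, _ = stress_bandwidth()
    print(f"approximate bandwidth: {bw / 1e9:.2f} GB/s")
```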
Sizing the array in this way may ensure that most (or almost all) of the accesses miss in the cache, even if the cache is adaptively holding a portion of the array. When placing arrays in main memory, tools such as numactl may be used to ensure the correct placement of memory allocations, according to some embodiments. When measuring the bandwidth of shared caches, it may be important to measure both the maximum bandwidth of each link to the cache and the maximum cumulative bandwidth that the cache can sustain. For example, the L3 cache may not be able to support sufficient bandwidth for the maximum access rate from a core to be sustained to all cores simultaneously. So, on an 18-core chip, each core may achieve a peak bandwidth of 360, but the L3 cache as a whole may only provide 5000, according to some embodiments.
Measuring Core Performance
Maximum core performance may be measured by monitoring counters (e.g., performance counters) while executing a synthetic workload. The workload may perform operations on a dataset sufficiently small that it fits into the L1 cache without incurring cache misses during execution, according to one embodiment. In some embodiments, pipeline stalls may be avoided by providing a large number of independent operations, and branch stalls may be reduced by unwinding the loop and using constant values on the loop guard to allow good prediction. The operations may be integer based to enable peak performance. There may be variation in peak performance based on the type of operation used. Additionally, in some embodiments, two threads may be executed on the core, such as to assess whether the core suffers a loss in peak performance when co-scheduling multiple software threads on the different hardware thread contexts provided by the core. Performance may be measured in instructions executed per unit time.
Workload Model
As noted above, a system configured to implement Comprehensive Contention-Based Thread Allocation and Placement may collect information to model a given workload. This information may be encoded in a workload description 410. A workload description may be specific to a given workload on a given machine and may ideally be regenerated when moving to different hardware. However, predictions may remain useful across similar hardware platforms, according to some embodiments. According to one embodiment, information about the workload's performance may be gathered by making multiple (e.g., 6 in some embodiments) test runs of the workload with different thread placements designed to elucidate particular details, such as, for example, the sensitivity to having threads collocated on the same cores. Note that different factors may be inter-related, according to various embodiments. In some embodiments, the workload model may be built up incrementally in multiple (e.g., 5) steps, with each step providing a successively more detailed description of the workload's behavior. The experimental runs may be organized so that the behavior being considered in a given step depends only on factors already determined in preceding steps. For example, step 2 may depend on step 1, while steps 3-5 may be independent of each other but may depend on steps 1 and 2. In some embodiments, aside from the first step, these dependencies may exist only in the calculation of the model parameters; the actual experimental runs may proceed concurrently if multiple machines are available.
Please note that the various determinations and calculations described herein regarding generating a workload model represent merely one example according to one embodiment. The use of the term "step" is for ease of explanation only and is not meant to convey absolute or necessary divisions between the features described. For example, the following table illustrates 5 example steps that may, in some embodiments, be used to generate a workload model, together with possible values of each property for the example system illustrated in FIG. 5:

Step | Property | Example Value
1 | Single thread resource demands (d): the single-thread execution time t_1 and a vector of resource demands for that one thread | [7, 40]
2 | Parallel fraction (p): the fraction of the workload which runs in parallel | 0.9
3 | Inter-socket overhead (o_s): the latency, relative to t_1, for inter-socket communication when threads are placed on different sockets | 0.1
4 | Load balancing factor (l): the extent to which the workload can be rebalanced dynamically between threads based on their progress | 0.5
5 | Core burstiness (b): the sensitivity to collocation of threads in a core | 0.5

Thread Utilization
If applications fail to scale perfectly, such as due to sequential sections, or waiting on slower threads and communication, then their execution time may increase while the total resources they require may remain constant. This means the rate of resource consumption for threads may be reduced accordingly. Likewise, if a thread is waiting on other threads or on resources, then some latency may be hidden in the time lost waiting. As noted above, a thread utilization factor f usable to scale requirements may be introduced in some embodiments to accommodate this. This thread utilization factor may be calculated for each thread at each step based on the results of the preceding steps. FIG. 6, discussed below, demonstrates this. FIG. 6 illustrates examples of possible loadings when calculating thread utilization, according to some embodiments. In the first graph 1 thread is executing; in the second graph, 2 threads are executing with ideal scaling; and in the third graph, 2 threads are executing with non-ideal scaling. In each graph the grey boxes, all of which have the same area, represent the actual resources used, and the dashed box in the third graph represents the resources available. The utilization factor is the ratio between these two. In the calculations described herein, according to some embodiments, the thread utilization factor is recomputed at each step. The utilization factor f may be annotated as f_x to identify its value at the start of step x. Thread utilization may be necessary, in some embodiments, to remove the scaling from values when generating the workload description as well as to add it back when performing performance predictions. FIG. 7 is a logical block diagram illustrating 6 example workload test runs used, such as by workload description generator 130, to generate a description of an example workload in one embodiment. In the illustrated test runs, arrows represent threads and crosses represent stress applications. The details of the example test runs will be described in more detail subsequently, with regard to the properties and calculations performed during the test runs. FIG. 8 is a flowchart illustrating one embodiment of a method for generating a workload description.
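Before walking through FIG. 8, the five properties in the table above can be collected into a simple record. The sketch below is a hypothetical encoding, using the example values from the table; the field names are illustrative and not defined by this disclosure.

```python
# Hypothetical encoding of the workload description built in steps 1-5.
# Field names are illustrative; values are the examples from the table.
from dataclasses import dataclass
from typing import List

@dataclass
class WorkloadDescription:
    demands: List[float]          # d: per-thread resource demand vector
    parallel_fraction: float      # p: fraction of work that runs in parallel
    inter_socket_overhead: float  # o_s: latency (relative to t_1) per cross-socket link
    load_balancing: float         # l: 0 = lock-step, 1 = fully work-stealing
    burstiness: float             # b: sensitivity to collocation within a core

example = WorkloadDescription(
    demands=[7, 40],
    parallel_fraction=0.9,
    inter_socket_overhead=0.1,
    load_balancing=0.5,
    burstiness=0.5,
)
```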
As illustrated in block 810 of FIG. 8, workload description generator 130 may determine the execution time and/or resource demands for a single thread. For example, the workload may be run with a single thread to obtain (e.g., calculate) an instruction execution rate and the bandwidth requirements to each level of the cache hierarchy as well as to main memory, as will be described in more detail subsequently. Workload description generator 130 may also determine the fraction of the workload that may be executed in parallel, as in block 820. For instance, in one embodiment workload description generator 130 may perform an additional workload run with threads placed to avoid contention and the number of threads set sufficiently low to avoid over-subscribing resources. From the timing of this run, the parallel fraction may be calculated, as will be described in more detail subsequently. In some embodiments, over-subscription may be avoided based on the machine description's record of the resources available in the machine and the single-thread resource usage determined at block 810. Threads may be placed on each core in turn such that the total load on the machine remains below resource availability. Workload description generator 130 may further determine the latency (or may determine a value indicating the latency) for inter-socket communication when threads are placed on different sockets, as in block 830. For example, in some embodiments inter-socket latency may be defined as the additional time penalty a given thread incurs for each of the threads on a different socket (i.e., a socket different from the given thread's). To determine inter-socket latency, workload description generator 130 may, in some embodiments, perform another workload run using the same placement as that used for determining the parallel fraction, but moving a portion (e.g., half) of the threads onto the other socket. The inter-socket latency may then be determined based on the results of this additional workload run, as will be described in more detail subsequently. Workload description generator 130 may also determine the extent to which the workload may be re-balanced between threads based on their progress, as in block 840. For example, to determine a load-balancing factor, workload description generator 130 may deliberately slow down threads and observe how the workload's execution changes, as will be described in more detail subsequently. Workload description generator 130 may further determine the sensitivity to collocation of threads within a core, as in block 850. For example, workload description generator 130 may compare the performance of two workload runs that differ only in the collocation of threads on cores. Core burstiness, or the percentage of extra time required due to collocation, may be calculated, as will be described in more detail subsequently. While the various steps illustrated in FIG. 8 are shown in a particular order, they may be performed in different orders according to various embodiments.
Single Thread Time and Resource Demands (Step 1)
First, the workload may be run with a single thread to obtain the time t_1 along with the instruction execution rate and the bandwidth requirements for a single thread between each level of the cache hierarchy and between the last-level cache and main memory. These metrics may provide the basic sequential resource demands of the workload. They may be measured using the same performance counters described above during a single run.
Since there is only one thread, scaling based on the results from the other steps described below may not be required for this step. Run 1 of FIG. 7 shows an example of the results collected according to an example embodiment. In each subsequent step the execution time recorded at step x (t_x) may be normalized relative to this sequential execution time, r_x = t_x / t_1, and this relative time r_x may be the product of the known factors (k_x) already accounted for in previous steps and the unknown factors (u_x) which are not yet determined. In some embodiments k_x may be calculated based on the workload description from the existing steps. u_x may then be calculated by:

u_x = r_x / k_x

Parallel Fraction (Step 2)
The parallel fraction (e.g., the expected workload scaling in the absence of other constraints) may be determined with an extra run, as illustrated in FIG. 7 (Run 2). This thread placement may use only 1 thread per core, and may constrain those threads to a single socket. This may, in some embodiments, avoid dependencies on any other as-yet-uncalculated parts of the workload model. The placement may also be constrained to avoid over-subscription so that information from subsequent modelling steps does not need to be incorporated, thereby ensuring only one valid value of p. In practice, choosing such a placement may not present a problem on any of the hardware considered so far, as the existing constraints may mean it is only necessary to avoid overloading the cumulative L3 cache bandwidth and the main memory bandwidth. In some embodiments, when selecting this placement, the largest number of threads that can be placed on a single socket while still satisfying the above conditions may be used, and the placement may be made to use an even number of threads so that the result may be reused. p may then be derived from Amdahl's law with the equation:

u_2 = (1 - p) + p/n

Inter-Socket Latency (Step 3)
Following the workload assumptions, each thread may be assumed to communicate equally with every other thread, according to some embodiments. Each of the links between threads may incur a latency (o_s) if it crosses an inter-socket boundary. The thread placement chosen to measure this maintains symmetry for all thread communication, as in run 3 of FIG. 7. Adding the n/2 cross-socket links per thread in this placement, each incurring the latency o_s, to the parallel fraction model results in:

r_3 = (1 + (n/2) × o_s × f_comm) × ((1 - p) + p/n)

according to this example embodiment. The o_s terms are scaled by f_comm as described above regarding thread utilization. Removing the known factors this becomes:

u_3 = 1 + (n/2) × o_s × f_comm

from which o_s may be solved, according to this example embodiment.
Load Balancing Factor (Step 4)
The profiling runs for steps 1-3 may use symmetric placements (e.g., in the sense that each thread may experience the same, or similar, contention as the other threads). For instance, in some embodiments using symmetric placements may involve having the same number of threads per core and the same number of threads per socket across all cores and sockets hosting threads. However, in some embodiments, the workload description may be extended to describe cases where threads are not placed symmetrically. In these cases, it may be important to determine the effect on the overall speed of a workload if some threads slow down more than others. For instance, some workloads may use static work distribution, and a slow thread may become a straggler, possibly delaying completion of the workload.
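Before continuing with the load balancing factor, the parameter extraction for steps 2 and 3 can be made concrete. The sketch below simply inverts the two equations above; the measured run times are made-up inputs for illustration, not values from this disclosure.

```python
# Hypothetical extraction of p (step 2) and o_s (step 3) from measured run
# times, by inverting u_2 = (1 - p) + p/n and u_3 = 1 + (n/2) * o_s * f_comm.
# All timing inputs below are illustrative.

def parallel_fraction(t1, t2, n):
    u2 = t2 / t1                     # run 2 has no known factors to divide out
    return (1.0 - u2) / (1.0 - 1.0 / n)

def inter_socket_overhead(t1, t3, n, p, f_comm):
    r3 = t3 / t1
    k3 = (1.0 - p) + p / n           # known factor carried over from step 2
    u3 = r3 / k3
    return (u3 - 1.0) / ((n / 2.0) * f_comm)

# Example with made-up timings: t_1 = 100, run 2 with n = 4 takes 32.5,
# run 3 takes 39. These choices recover the table's example values.
p = parallel_fraction(100.0, 32.5, 4)                       # 0.9
o_s = inter_socket_overhead(100.0, 39.0, 4, p, f_comm=1.0)  # 0.1
print(round(p, 3), round(o_s, 3))
```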
In contrast to such static distribution, other workloads may use work stealing to distribute work dynamically between threads, thereby possibly allowing any slowness by one thread to be compensated for by other threads picking up its work, so that performance may be determined by the aggregate throughput. In some embodiments, this may be expressed using a load balancing factor l ∈ [0, 1] indicating where a workload lies between these extremes. If l = 0 then there is no dynamic load balancing and the threads proceed in lock-step. If l = 1 then they may proceed independently, according to some embodiments. In practice, workloads may be somewhere between these points. The load balancing factor l may be measured by considering how the performance of one thread impacts the performance of the complete workload. To do this, in some embodiments, threads may be deliberately slowed down and how the workload's execution changes may be observed. In a given run, s_i may be considered the slowdown of thread i, and s_min = min_{i=1..n} s_i. If there are n threads and a parallel fraction p, then the relative execution rate in the two extreme cases is:

Lock-step: s_lock = (1 - p) × s_min + p × max_{i=1..n} s_i

Load-balanced: s_bal = (1 - p) × s_min + n × p / (Σ_{i=1..n} 1/s_i)

For a run in between these extremes the relative execution rate (s_l) is:

s_l = (1 - l) × s_lock + l × s_bal

In some embodiments, l may be calculated from multiple (e.g., 3) runs, with all runs possibly using the same thread layout, as in FIG. 7 (Runs 2, 4 and 5). In run 2 the threads may execute as normal, so s_i = 1 for all i. In run 4 all threads may compete against a simple CPU-bound loop which will delay their execution. The ratio between these relative times gives u_4/u_2 = s_stresser > 1. Using this value, values for s_lock and s_bal may be constructed for the case where n - 1 threads have s_i = 1 and one thread has s_i = s_stresser. In run 5 only one thread may be slowed. The slowdown experienced is u_5/u_2 = s_l, allowing the above equation to be solved for l.
Core Burstiness (Step 5)
To account for core burstiness, the performance of two runs may be compared which differ only in the collocation of multiple threads per core, as in FIG. 7 (Runs 2 and 6). The first run may use one thread per core across a single socket, while the second run may use the same number of threads packed into half the number of cores, according to one example embodiment. Taking the unknown factors remaining in these two runs, burstiness may be defined as the percentage of extra time required due to collocation:

b = (1/f_b) × (u_6/u_2 - 1)

In the above burstiness equation, 1/f_b is used since there is no scaling (i.e., u_2 = 1) in this example embodiment. However, in other embodiments scaling may need to be included (e.g., from whichever run replaces u_2), replacing 1/f_b with the scaling factor divided by f_b.
Performance Prediction
Given a machine description, a workload description and a proposed thread placement, the performance for the proposed thread placement may be predicted. The prediction may be constructed from two elements: (i) an anticipated speed-up based on Amdahl's law assuming perfect scaling of the parallel section of the workload, and (ii) a slowdown reflecting the impact of resource contention and synchronization, according to some embodiments. Speedup. As discussed above, a speedup may be calculated (e.g., via Amdahl's law) based on the parallel fraction of the workload (p) and the number of threads in use (n). For example, using the example workload described above (p = 0.9) and the placement in FIG. 9, n = 3, so the speedup is 2.5. Slowdown.
The slowdown may then be predicted by considering the resource contention, communication, and synchronization introduced by the threads. These factors may be considered interdependent. In some embodiments, these different factors may be handled by proceeding iteratively until a stable prediction is reached (in practice only a few iteration steps are needed for the workloads we have studied). FIG. 9 is a flow chart illustrating one embodiment of a method for performance prediction, for three threads U, V, W running the workload from FIG. 7. First, a proposed thread placement may be determined, as in block 910. Then a predicted slowdown may be calculated from resource contention, as in block 920. For instance, in one embodiment a naive set of resource demands based on the per-thread resource usage may be combined with the machine model, based on the locations of the threads, and used to model contention for hardware resources. Additionally, as in block 930, a predicted penalty for inter-socket communication may be calculated. For example, to predict the performance impact of inter-socket communication, the system may consider the locations of the threads being modeled and the amount of work that will be performed by each thread. An overhead value representing additional latency may be determined as the latency incurred by a given thread when communicating with another thread. Additionally, the slowdown incurred by the placement of threads on different sockets, as well as the prevalence of lockstep execution between threads, may both be accounted for. The predicted penalty for poor load balancing may also be calculated, as in block 940. For example, in some embodiments, the workload's load balancing factor may be used to interpolate between the extreme case and the workload's current predicted slowdown. As illustrated by the negative output of decision block 950, if the per-thread predictions have not converged, the results from the communication and synchronization phases (described above regarding blocks 920, 930 and 940) may be fed back into the contention phase. For example, each time around the loop in FIG. 9, new values may be calculated for the contention-based slowdown, which may be used to estimate the costs of communication and synchronization, which in turn may be fed back into the next iteration. Additionally, as in block 960, the resource requirements may be adjusted each time through the loop, such as to allow for slowdowns from the interconnect as one example, as will be described in more detail below regarding iterating. After the per-thread predictions have converged, as indicated by the positive output of decision block 950, the final predicted speedup may be calculated, as in block 970. For example, in some embodiments, the final predicted speedup may be calculated by combining the speedup from Amdahl's law with the average slowdown predicted for the threads. Thus, for each thread, there may be maintained (i) an overall predicted slowdown, and (ii) the thread utilization factor (f_d) used to scale resources to the time the thread is working. Initially, a factor of the Amdahl's law speedup divided by the ideal speedup may be used. Additionally, alternating between (i) modeling the contention for hardware resources occurring as the threads execute, and (ii) adding or removing slowdown attributed to communication and synchronization may also be used, in some embodiments.
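The iteration of blocks 920-960 can be summarized as a fixed-point loop. The following skeleton is a simplified, hypothetical rendering of that structure; the three model_* helpers are placeholders for the contention, communication and load-balancing calculations detailed in the following sections.

```python
# Hypothetical skeleton of the FIG. 9 prediction loop. The model_* helpers
# stand in for the calculations described in the next sections; each takes
# and returns per-thread lists.

def predict_slowdowns(machine, workload, placement, f_initial,
                      model_contention, model_communication, model_balance,
                      tol=1e-3, max_iters=20):
    n = len(placement)
    f = [f_initial] * n                     # per-thread utilization factors
    slowdown = [1.0] * n
    for _ in range(max_iters):
        prev = list(slowdown)
        resource = model_contention(machine, workload, placement, f)
        comm = model_communication(workload, placement, resource)
        slowdown = model_balance(workload,
                                 [r + c for r, c in zip(resource, comm)])
        # Feed penalties back: reset utilization to f_initial scaled by the
        # fraction of each thread's slowdown due to contention alone.
        f = [f_initial * (r / s) for r, s in zip(resource, slowdown)]
        if max(abs(a - b) for a, b in zip(slowdown, prev)) < tol:
            break
    return slowdown
```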
The following table illustrates, according to one example, the start of the first iteration:

Thread                  | U    | V    | W
Resource slowdown +     | 1.00 | 1.00 | 1.00
Communication penalty + | 0.00 | 0.00 | 0.00
Load balance penalty    | 0.00 | 0.00 | 0.00
Overall slowdown        | 1.00 | 1.00 | 1.00
New thread utilization  | 0.83 | 0.83 | 0.83

In some embodiments, the thread utilization factors may be initialized as the Amdahl's law speedup divided by the ideal speedup, i.e., the number of threads. This reflects the fraction of the time that a thread would be busy if the Amdahl's law speedup were achieved. For instance, if n = 3 and the Amdahl's law speedup is 2.5, then the threads will be busy in parallel work for 0.83 of their time. This first estimate may be referred to herein as f_initial. Note that the same value may be used across all threads rather than distinguishing a main thread which executes sequential sections. This may reflect an assumption of generally-parallel workloads in which sequential work is scattered across all threads in critical sections.
Slowdown from Resource Contention
In some embodiments, contention for hardware resources may be modeled by starting from a naïve set of resource demands based on the vector d in the workload description. The values in the vector may represent rates and therefore may be added at each of the locations running a thread from the workload. These values may be scaled by the respective thread utilization factors. Thus, for each resource, the contributions of the individual threads may be summed and the aggregate rate demanded may be shown. For example, while the aggregate required bandwidth to DRAM is 3 × 40 = 120, it is scaled to 0.83 × 120 ≈ 100, as illustrated by the example machine description 1000 in FIG. 10. Based on the resource demands, the overall predicted slowdowns for each thread may be initialized. The vector may be initialized to the maximum factor by which any resource used by the thread is over-subscribed. In the example, this is the interconnect link between the two sockets, which is over-subscribed by a factor of 100/50 = 2. In more complex examples, according to different embodiments, different threads may see different bottlenecks. This basic model of contention may be applied for all of the resources in the machine. In addition, however, the workload model's core burstiness factor (b) may be incorporated in cases where threads share a core. The following table illustrates the example slowdowns updated based on the most over-subscribed resource used by each thread, and to reflect the fact that U and V share a core:

Thread                  | U    | V    | W
Resource slowdown +     | 2.83 | 2.83 | 2.00
Communication penalty + | 0.00 | 0.00 | 0.00
Load balance penalty    | 0.00 | 0.00 | 0.00
Overall slowdown        | 2.83 | 2.83 | 2.00
New thread utilization  | 0.29 | 0.29 | 0.42

Threads U and V are slowed by b′ in this example workload model because they share a core, whereas W is not. b′ is the scaled value of b, and is calculated by:

b′ = 1 + b × f_b, where f_b = 0.83

As described above, this may reflect the fact that some workloads show significant interference between threads on the same core even though the average resource demands for functional units are well within the limits supported by the hardware, according to various embodiments. The thread utilization factors may then be recomputed to reflect these new slowdowns. For instance, while initially calculated as the Amdahl's law speedup divided by the ideal speedup, the expected slowdown may now be included by dividing the Amdahl's law speedup by it. Thus, in this example, (2.5/2.83)/3 = 0.29 and (2.5/2)/3 = 0.42.
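A sketch of this contention step, using the FIG. 5 machine numbers and the example demand vector, might look as follows. It is illustrative only: real code would walk the machine-description graph rather than hard-code a single bottleneck resource, and f_initial is passed as the exact fraction 2.5/3 so the rounded outputs match the table.

```python
# Hypothetical contention model for the running example: per-thread DRAM
# demand 40 (from d = [7, 40]), one interconnect link of capacity 50
# (FIG. 5), three threads, and burstiness applied to a shared core.

def contention_slowdowns(f, burstiness, threads_sharing_core):
    n = 3
    per_thread_dram = 40.0                    # from the demand vector d
    interconnect_capacity = 50.0              # from the FIG. 5 machine
    aggregate = f * n * per_thread_dram       # about 100, as in FIG. 10
    bottleneck = aggregate / interconnect_capacity   # over-subscription: 2.0
    b_prime = 1.0 + burstiness * f            # scaled burstiness, about 1.42
    return [round(bottleneck * (b_prime if t in threads_sharing_core else 1.0), 2)
            for t in ("U", "V", "W")]

print(contention_slowdowns(2.5 / 3, 0.5, {"U", "V"}))  # [2.83, 2.83, 2.0]
```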
Penalties for Off-Socket Communication
The overheads introduced by synchronization between threads may also be accounted for. There may be two factors to consider: first, the slowdown incurred by the placement of threads on different sockets, leading to increased latency in their use of shared-memory data structures for communication; and second, the prevalence of lockstep execution between threads, requiring threads performing at different rates to wait for one another. Quantitatively, the overhead value o_s may represent the additional latency for each pair of threads that is split between different sockets, under the assumption that the work performed is distributed evenly between the threads (as it is in the profiling runs). To predict the performance impact of communication, the system may consider (i) the locations of the threads being modeled, and hence the number of pairs which span sockets, and (ii) the amount of work that will be performed by each thread, and hence how significant or not a given link will be. In some embodiments, o_i,j may be defined to be the latency incurred by thread i for communication between threads i and j; this is equal to o_s if the threads are on different sockets and 0 otherwise. To model the amount of work performed by each thread, the load balancing factor may be considered. For example, if the threads proceed in lockstep then the amount of work they perform may be equal, whereas if they are completely independent then faster threads may perform more of the work. The communication in these two extreme cases may be considered and interpolated linearly between them based on the load balancing factor l. Completely Lock-Step Execution. When execution proceeds without any dynamic load balancing, each of the threads may perform an equal amount of work, so the additional slowdown for communication for thread i is:

lockstep(i) = Σ_{j=1..n} o_i,j

In the example:

lockstep(U) = lockstep(V) = 0.0 + 0.0 + 0.1 = 0.1
lockstep(W) = 0.1 + 0.1 + 0.0 = 0.2

Completely Independent Execution. When execution is completely independent, the amount of work performed by the threads may differ. The busier threads may communicate more, and their links with other threads may be more significant. Given the current predicted slowdowns for each thread s_1 ... s_n, the weight w_i of a thread may be defined as the fraction of the total work that thread i will perform:

work_i = 1/s_i
w_i = work_i / Σ_{j=1..n} work_j

In the example, given slowdowns of approximately 3, 3 and 2 for the three threads, the weights are approximately 2/7, 2/7 and 3/7 respectively. The fastest thread may perform more of the work than the slower threads, and the communication it performs is likely to be more significant. For instance, in a system with caches, it may be stealing cache lines from the other threads more frequently. Given these weights the communication cost is then:

independent(i) = n × Σ_{j=1..n} w_j × o_i,j

In the example:

independent(U) = independent(V) = 0.88 × 0.0 + 0.88 × 0.0 + 1.24 × 0.1 = 0.124
independent(W) = 0.88 × 0.1 + 0.88 × 0.1 + 1.24 × 0.0 = 0.176

Combining the Results. Given the two extreme cases, the model may interpolate linearly between them based on the load balancing factor to obtain an additional slowdown factor:

comm_slowdown(i) = l × independent(i) + (1 - l) × lockstep(i)

In the example:

comm_slowdown(U) = comm_slowdown(V) = 0.5 × 0.1 + 0.5 × 0.124 = 0.112
comm_slowdown(W) = 0.5 × 0.2 + 0.5 × 0.176 = 0.188

Each of these may then be scaled by f_l (0.29, 0.29 and 0.42), such as to allow for the extra time available for communication if the other operations are slowed down by other conflicts.
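This calculation can be sketched directly from the formulas above. The code below reproduces the running example (o_s = 0.1, l = 0.5); the thread-to-socket mapping is a hypothetical simplification of the placement, and small differences from the in-text numbers are rounding artifacts.

```python
# Hypothetical communication-penalty model for the running example:
# thread -> socket placement, o_s = 0.1, load balancing factor l = 0.5.

def comm_slowdowns(sockets, slowdowns, o_s, l):
    names = list(sockets)
    n = len(names)
    work = [1.0 / slowdowns[t] for t in names]          # work_i = 1/s_i
    w = [x / sum(work) for x in work]                   # normalized weights
    out = {}
    for ti in names:
        o = [o_s if sockets[ti] != sockets[tj] else 0.0 for tj in names]
        lockstep = sum(o)
        independent = n * sum(wj * oij for wj, oij in zip(w, o))
        out[ti] = l * independent + (1.0 - l) * lockstep
    return out

penalties = comm_slowdowns(
    sockets={"U": 0, "V": 0, "W": 1},
    slowdowns={"U": 2.87, "V": 2.87, "W": 2.08},
    o_s=0.1, l=0.5)
print({t: round(p, 3) for t, p in penalties.items()})
# roughly {'U': 0.111, 'V': 0.111, 'W': 0.189}; the text rounds to 0.112/0.188
```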
The resulting scaled penalties may then be added to the existing slowdowns for each of the threads.
Penalties for Poor Load Balancing
Additionally, whether or not the workload can dynamically rebalance work between the threads may be accounted for. In one extreme case, if the threads proceed completely in lock-step, then they may have to wait for one another to complete work, and so the overall performance may be governed by the slowest thread. In the example, thread W would be slowed down to match U and V if they operated completely in lockstep, and all three threads would have slowdown 2.87. In some embodiments, the workload's load balancing factor l may be used to interpolate between this extreme case and the workload's current predicted slowdown. The following two tables illustrate this for the example embodiment, where l = 0.5 and W is slowed down to the point 50% of the way between 2.08 and 2.87. The first table illustrates the example slowdowns updated to include the predicted cross-socket communication, where U and V communicate with lower overhead than U and W:

Thread                  | U    | V    | W
Resource slowdown +     | 2.83 | 2.83 | 2.00
Communication penalty + | 0.03 | 0.03 | 0.08
Load balance penalty    | 0.00 | 0.00 | 0.00
Overall slowdown        | 2.87 | 2.87 | 2.08
New thread utilization  | 0.29 | 0.29 | 0.40

The second table illustrates how, after the first iteration, the slowdowns are updated to include the effect of dynamic load balancing between the threads:

Thread                  | U    | V    | W
Resource slowdown +     | 2.83 | 2.83 | 2.00
Communication penalty + | 0.03 | 0.03 | 0.08
Load balance penalty    | 0.00 | 0.00 | 0.40
Overall slowdown        | 2.87 | 2.87 | 2.48
New thread utilization  | 0.29 | 0.29 | 0.34

Iterating
In some embodiments, the system may alternate between updating the slowdown estimates based on resource contention and updating the estimates for the impact of communication and synchronization. Each time around the loop in FIG. 9, new values may be calculated for the contention-based slowdown; these may then be used to estimate the costs of communication and synchronization, which in turn may be fed back into the next iteration. In some embodiments, only a few iteration steps may be needed for the workloads. For example, information may be fed from iteration i to i+1 by updating the thread utilization factors used as the starting point for iteration i+1. For each thread, the system may determine the amount of the overall slowdown in iteration i that was due to the penalties incurred. In some embodiments, this may be the ratio of the thread's slowdown due to resource contention to its overall slowdown. In the ongoing example, threads U and V have 2.83/2.87 = 0.99, and thread W has 2.0/2.48 = 0.81. This difference may reflect the fact that thread W is harmed by poor load balancing. The new iteration (i+1) may be started by resetting the thread utilization factors to f_initial scaled by these penalties. This may be considered as transferring the lessons learned about synchronization behavior in iterations 1 ... i into the starting point for iteration i+1. To feed results from the communication and synchronization phase back into the contention phase, the system may, in some embodiments, calculate new thread utilization factors, such as to reflect any changes to the performance limitations of each thread from synchronization or communication delays.
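This hand-off between iterations amounts to one line of arithmetic per thread. The sketch below illustrates it with the values from the ongoing example; as elsewhere, the function name is a hypothetical stand-in.

```python
# Hypothetical per-iteration utilization update: reset each thread's
# utilization to f_initial scaled by the contention share of its slowdown.

def next_utilization(f_initial, resource_slowdown, overall_slowdown):
    share = resource_slowdown / overall_slowdown
    return round(f_initial * share, 2)

f_initial = 0.83
print(next_utilization(f_initial, 2.83, 2.87))  # U, V: 0.83 * 0.99 -> 0.82
print(next_utilization(f_initial, 2.00, 2.48))  # W:    0.83 * 0.81 -> 0.67
```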
Following the ongoing example, the thread utilizations for threads U and V are updated to f_initial × 0.99 = 0.83 × 0.99 = 0.82, and W is updated to 0.83 × 0.81 = 0.67, as in the following table, which illustrates the state at the start of the second iteration:

Thread                  | U    | V    | W
Resource slowdown +     | 1.00 | 1.00 | 1.00
Communication penalty + | 0.00 | 0.00 | 0.00
Load balance penalty    | 0.00 | 0.00 | 0.00
Overall slowdown        | 1.00 | 1.00 | 1.00
New thread utilization  | 0.82 | 0.82 | 0.67

Thus, in the ongoing example, the new thread utilization factors are 0.82 for U and V, and 0.67 for W. The other parts of the prediction are reset, and the system may continue by computing the new resource demands based on the new thread utilization factors, as illustrated for the example machine model 1100 in FIG. 11. Comparing the resource demands illustrated in FIG. 10 with those in FIG. 11, the load imposed by thread W is reduced (e.g., significantly), but the interconnect remains the bottleneck.
Final Predictions
After the per-thread predictions have converged, the final predicted speedup may be calculated, such as by combining the speedup from Amdahl's law with the average slowdown predicted for the threads using the model:

speedup = Amdahl's law speedup × (Σ_{i=1..n} 1/s_i) / n

In the example, this gives a predicted speedup of 1.005 after 4 iterations. This extremely poor performance may be considered as primarily due to the inter-socket link being almost completely saturated by a single thread.
Evaluation
Comprehensive Contention-Based Thread Allocation and Placement, as described herein, may be implemented for various types of machines, according to various embodiments. For instance, in some embodiments, it may be implemented for cache-coherent shared-memory multi-processor machines. The performance of Comprehensive Contention-Based Thread Allocation and Placement, as described herein, was tested on 22 benchmark workloads. The evaluation described herein was carried out using, according to one example embodiment, 2-socket Intel Haswell systems with 18 cores per socket (72 total hardware threads), in which parallelism is exposed by multiple threads within each core, multiple cores within each chip, and two chips within the complete machine. For each benchmark, the 6 runs required to generate the workload model were performed. When performing the example evaluation described herein, it may be assumed that the hardware is homogeneous, in that each core is identical, each chip is identical, and the interconnect between the sockets is the same from the viewpoint of each chip. However, other systems, such as systems that may be considered typical of machines used in current data centers, both for scale-out workloads using multiple 1-socket and 2-socket systems, and for scale-up workloads using large multiprocessor machines, may be utilized in different embodiments. Comprehensive Contention-Based Thread Allocation and Placement is described herein mainly in terms of stand-alone benchmarks, but the techniques described herein may also be applicable for use within other systems, such as within a server application for coordinating the allocation of resources to different concurrent queries, according to various embodiments.
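Stepping back to the final prediction formula above, it can be stated directly in code. The sketch below is illustrative; the slowdown values are placeholders rather than the converged values of the example (which the text reports only as yielding a speedup of about 1.005).

```python
# Final prediction: combine the Amdahl's-law speedup with the mean of the
# reciprocal converged slowdowns. Slowdown values here are placeholders.

def predicted_speedup(p, slowdowns):
    n = len(slowdowns)
    amdahl = 1.0 / ((1.0 - p) + p / n)
    return amdahl * sum(1.0 / s for s in slowdowns) / n

# With p = 0.9 and three hypothetical converged slowdowns:
print(round(predicted_speedup(0.9, [2.87, 2.87, 2.48]), 3))  # about 0.917
```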
The following assumptions may be made about the workloads used in this evaluation:
- Programs comprise parallel sections executed with a configurable number of threads, with plentiful work to distribute between/among the threads.
- There is homogeneous behavior between the threads in a parallel region, e.g., if executing loop iterations in parallel, there are similar resource demands for each iteration.
- There is a low algorithmic cost of adding extra threads, e.g., introducing additional parallel threads does not significantly extend the sequential work between parallel regions.
- Workload performance is determined primarily by the use of the resources being modeled, such as the rate at which CPU cores execute instructions, the bandwidths achieved on communication within the memory system, and/or bandwidth use on external networks, according to various embodiments.
The properties described above may cover many analytics workloads where the degree of parallelism is configurable and execution proceeds by iterating over shared-memory data structures such as graph vertices or database columns. In the evaluation described herein, a range of in-memory database join operators, a graph analytics workload, and existing parallel computing benchmarks using OpenMP are used, according to various embodiments. To evaluate the accuracy of the predictions made with these descriptions, 72,448 timed runs were performed, covering approximately 20% of the possible placements of each workload, with a performance prediction generated for each placement. For most workloads, the measured and predicted results are visually close. Any error in these predictions may be quantified using two metrics:
- Error: the first metric is the difference between the predicted and measured values as a percentage of the measured value. The absolute value of the difference for each prediction is used to construct the mean and median values.
- Offset Error: in the second metric, the mean difference between the two sets of values is added to the predicted line before measuring the differences in the resulting output. This technique may remove errors introduced when the two datasets are some constant value apart, thereby possibly providing a better measure of how accurate the output is at predicting performance trends, if not exact values.
According to the example evaluation, the median error across the runs is 8% and the median offset error is 4%. To assess portability of the techniques described herein, the experiments were repeated on a two-socket Intel Sandy Bridge machine with 8 cores per socket, providing 32 hardware threads in total, according to one example evaluation embodiment. The smaller number of cores in this example allowed all placements to be tested exhaustively. To test the portability of the workload descriptions between different machines, the Haswell workload descriptions were used with the Sandy Bridge machine description to generate predictions, which were compared against results measured on the Sandy Bridge machine. The resulting errors show that while the relative error increases, the results are still useful, according to the example evaluation embodiments.
Power Management
Modern processors may use sophisticated dynamic power management techniques. These techniques may include features such as the Turbo Boost technology in Intel processors, which allows cores to run faster when only a small number of them are active, and dynamic adaptation between different frequency-voltage operating points (P-states) depending on processor temperature.
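Returning briefly to the two error metrics defined above, they might be computed as in the following sketch. The numeric values in the usage example are illustrative only.

```python
# Hypothetical computation of the two error metrics described above.
from statistics import mean, median

def error_metrics(measured, predicted):
    # Error: absolute difference as a percentage of the measured value.
    errs = [abs(p - m) / m * 100.0 for m, p in zip(measured, predicted)]
    # Offset error: shift predictions by the mean difference first.
    offset = mean(m - p for m, p in zip(measured, predicted))
    off_errs = [abs((p + offset) - m) / m * 100.0
                for m, p in zip(measured, predicted)]
    return {"median_error": median(errs),
            "median_offset_error": median(off_errs)}

# Illustrative values only:
print(error_metrics(measured=[1.0, 1.9, 3.1], predicted=[0.9, 1.7, 2.8]))
```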
With respect to these power management features, it may be considered common to attempt to disable them. However, doing so may be unrealistic for several reasons. First, these features are generally enabled by default and used in production settings. Second, the performance with Turbo Boost disabled may be considered strictly worse than with it enabled; that is, there is a performance cost to disabling this feature even when all threads are active and no boost is naively expected. The approach described herein, according to one embodiment, may leave all hardware power management features enabled, but may remove these effects from measurements by filling any otherwise-idle cores during profiling with a core-local workload.
Extensions
The techniques described herein may be considered to have two principal limitations: multiple thread types and discontinuous scaling. Multiple Thread Types. Many applications may consist of multiple thread types; the most common arrangement is a master thread and n-1 slave threads, but there are other applications with more complex separation of threads. In some embodiments, it may be assumed that all threads have similar behavior. More heterogeneous workloads may be considered in some embodiments, such as by identifying groups of threads. For example, in one embodiment, separate groups of threads may be identified from the data collected from the machine counters using techniques such as Principal Component Analysis (PCA). In other embodiments, thread groupings may be exposed explicitly from the runtime system. The techniques described herein may construct the elements of the job description that differ from thread to thread, thereby allowing the modelling of this more complex environment, according to some embodiments.
CONCLUSION
Described herein are techniques for implementing a tool able to measure hardware and workloads in order to construct, from 6 runs, a model that predicts the performance and resource demands of a workload with different thread placements on the hardware, according to some embodiments. Testing this on a set of 22 workloads across many thousands of placements has shown a high degree of accuracy for most workloads, according to some embodiments. This means that the results may be used to make real decisions about the placements of workloads. As the measurements made by the techniques described herein may be comparable between workloads, they may be extended to the collocation of multiple workloads. The model may be built around measuring the CPU and bandwidth resource demands, coupled with measurements of the interactions between threads, according to some embodiments. The simple bandwidth-based level of detail may be considered effective for the workloads described herein. This may be considered in contrast to much prior work, which has generally focused on more detailed models of how workloads interact through competition for shared caches. The trend appears to be that while individual cache architectures are possibly becoming more complex, the necessity to model them in detail is possibly being diminished. One reason for this difference may be that hardware may now be more effective in avoiding pathologically bad behavior. This kind of technique may make workloads less susceptible to "performance cliffs". In some embodiments, the techniques described herein may be operated at the level of rack-scale clusters.
The number of possible placements of threads on even a single 36-core node with hyper-threading may exceed 1.5×10^18, and even with symmetry taken into account there may still be 18144 possible thread placements. The techniques described herein were discussed in reference to applications running on a single cluster node, such as to allow for the generation of a set of job placements, covering approximately 20% of the possible placements, against which the model could be compared, according to various example embodiments.
Example Computing System
The techniques and methods described herein for Comprehensive Contention-Based Thread Allocation and Placement may be implemented on or by any of a variety of computing systems, in different embodiments. For example, FIG. 12 is a block diagram illustrating one embodiment of a computing system that is configured to implement such techniques and methods, as described herein, according to various embodiments. The computer system 1200 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a consumer device, application server, storage device, a peripheral device such as a switch, modem, router, etc., or in general any type of computing device. Some of the mechanisms for Comprehensive Contention-Based Thread Allocation and Placement, as described herein, may be provided as a computer program product, or software, that may include a non-transitory, computer-readable storage medium having stored thereon instructions which may be used to program a computer system 1200 (or other electronic devices) to perform a process according to various embodiments. A computer-readable storage medium may include any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable storage medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing program instructions. In addition, program instructions may be communicated using optical, acoustical or other forms of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.). In various embodiments, computer system 1200 may include one or more processors 1270, each of which may include multiple cores, any of which may be single- or multi-threaded. For example, multiple processor cores may be included in a single processor chip (e.g., a single processor 1270), and multiple processor chips may be included in computer system 1200. Each of the processors 1270 may include a cache or a hierarchy of caches 1275, in various embodiments. For example, each processor chip 1270 may include multiple L1 caches (e.g., one per processor core) and one or more other caches (which may be shared by the processor cores on a single processor). The computer system 1200 may also include one or more storage devices 1250 (e.g., optical storage, magnetic storage, hard drive, tape drive, solid state memory, etc.) and one or more system memories 1210 (e.g., one or more of cache, SRAM, DRAM, RDRAM, EDO RAM, DDR RAM, SDRAM, Rambus RAM, EEPROM, etc.).
In some embodiments, one or more of the storage device(s) 1250 may be implemented as a module on a memory bus (e.g., on interconnect 1240) that is similar in form and/or function to a single in-line memory module (SIMM) or to a dual in-line memory module (DIMM). Various embodiments may include fewer or additional components not illustrated in FIG. 12 (e.g., video cards, audio cards, additional network interfaces, peripheral devices, a network interface such as an ATM interface, an Ethernet interface, a Frame Relay interface, etc.). The one or more processors 1270, the storage device(s) 1250, and the system memory 1210 may be coupled to the system interconnect 1240. One or more of the system memories 1210 may contain program instructions 1220. Program instructions 1220 may be executable to implement machine description generator 120, workload description generator 130, and/or performance predictor 140. In various embodiments, machine description generator 120 may be the same as, or may represent, workload description generator 130 and/or performance predictor 140. Similarly, workload description generator 130 may be the same as, or may represent, machine description generator 120 and/or performance predictor 140, while performance predictor 140 may be the same as, or may represent, machine description generator 120 and/or workload description generator 130, according to various embodiments. Program instructions 1220 may be encoded in platform native binary, any interpreted language such as Java™ byte-code, or in any other language such as C/C++, the Java™ programming language, etc., or in any combination thereof. In various embodiments, machine description generator 1222, workload description generator 1224, and/or performance predictor 1226 may each be implemented in any of various programming languages or methods. For example, in one embodiment, machine description generator 1222, workload description generator 1224, and/or performance predictor 1226 may be based on the Java programming language, while in other embodiments they may be written using the C or C++ programming languages. Moreover, in some embodiments, machine description generator 1222, workload description generator 1224, and/or performance predictor 1226 may not all be implemented using the same programming language. For example, machine description generator 1222 and/or workload description generator 1224 may be C++ based, while performance predictor 1226 may be developed using C. Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. For example, although many of the embodiments are described in terms of particular types of operations that support synchronization within multi-threaded applications that access particular shared resources, it should be noted that the techniques and mechanisms disclosed herein for accessing and/or operating on shared resources may be applicable in other contexts in which applications access and/or operate on different types of shared resources than those described in the examples herein, and in which different embodiments of the underlying hardware that supports persistent memory transactions described herein are supported or implemented.
It is intended that the following claims be interpreted to embrace all such variations and modifications. | 75,874 |
11861273 | DETAILED DESCRIPTION Some examples of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all examples of the present disclosure are shown. Indeed, the present disclosure may be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these examples are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout. As used herein, the terms "data," "content," "information," and similar terms may be used interchangeably to refer to data capable of being transmitted, received, and/or stored in accordance with examples of the present disclosure. Thus, use of any such terms should not be taken to limit the spirit and scope of the present disclosure.

A method, computing system and computer program product are provided in order to define a guide comprised of one or more defined sets of ordered ply orientations. In this regard, reference to a defined set of ordered ply orientations is, in fact, a reference to a set of composite plies having corresponding ply orientations and ordered in a particular sequence. The guide facilitates the design of a composite structure so as to permit the composite structure to be designed in an efficient manner and to be in compliance with a plurality of stacking sequence rules. The composite structure that is designed and fabricated in reliance upon the guide that is defined in accordance with an example embodiment may be any of a wide variety of composite structures that are used in a variety of different applications, such as aerospace applications, automotive applications, construction or other structural applications, etc.

By way of example of a composite structure, reference is now made toFIG.1which depicts a cross-sectional side view of a composite structure10. As shown, the composite structure includes a plurality of composite plies12, each of which has a ply orientation as designated by the angular value with which each composite ply is labeled. As shown, the composite plies may include any of a plurality of different ply orientations, such as 0 degrees, +45 degrees, −45 degrees or 90 degrees relative to a reference. A composite structure10, such as the composite structure ofFIG.1, may include different regions that have different numbers of composite plies12. As shown inFIG.1, for example, the composite structure includes first, second and third regions14,15,16. The second region includes fewer composite plies than the first region and the third region includes fewer composite plies than the second region. In this regard, several of the composite plies of the first region of the composite structure are dropped or terminated in the transition from the first region to the second region, and similarly several of the composite plies of both the first and second regions of the composite structure are dropped or terminated in the transition from the second region to the third region. By dropping one or more plies in the transition between regions of a composite structure, the resulting structural characteristics of the different regions of the composite structure are different and the size, such as the thickness of the resulting composite structure, is also different. A composite structure10must generally be designed to satisfy a plurality of stacking sequence rules.
The stacking sequence rules define a plurality of different rules governing the sequence in which composite plies may be stacked, such as the sequence of ply orientations of the stacked composite plies, to construct the composite structure. As the different ply orientations have different structural characteristics and the different sequences of ply orientations correspondingly have different structural characteristics, the stacking sequence rules are defined in order to ensure that the resulting composite structure has the desired structural characteristics, such as the desired strength and stiffness.

Another representation of a composite structure formed of a plurality of composite plies having different ply orientations is shown inFIG.2in the context of a stacking sequence/ply shape optimization problem. InFIG.2, the design of a composite structure10comprised of a plurality of panels18is depicted. In this example, the composite structure includes seventy-seven panels arranged in a rectangular shape with seven panels on one side and eleven panels on the other side, that is, 7 panels×11 panels. Each panel of the composite structure is comprised of a plurality of composite plies with the order in which the composite plies are to be stacked depicted vertically inFIG.2, with the vertical gaps between the composite plies representing other composite plies (such as composite plies included in one or more other panels) that have been dropped and therefore eliminated from the respective panel. As shown, each panel of the composite structure may include a different combination of composite plies so as to result in a number of composite plies being dropped in the transition from one panel to an adjacent panel, with each of the panels required to satisfy the stacking sequence rules for the composite structure, thereby evidencing the complexity associated with the design of a composite structure. As a point of reference, it is noted that if the composite plies ofFIG.2were collapsed by removing the vertical gaps and layering the composite plies one on top of another, the resulting structure would be a composite structure10of the type shown inFIG.1. In this regard, a composite structure of the type shown inFIG.1represents a final output product, whileFIG.2represents the inputs that are provided to a layup machine or the like to fabricate a composite structure of the type depicted inFIG.1.

In order to facilitate the design of the composite structure, such as by ensuring compliance with the stacking sequence rules and increasing the efficiency with which the composite structures are designed, the method, computing system and computer program product of an example embodiment define a guide comprised of a plurality of composite plies arranged in one or more defined sets of ordered ply orientations. In this regard, the guide defines an ordered arrangement of a plurality of composite plies having respective ply orientations. The guide of an example embodiment includes at least as many composite plies as the maximum quantity of composite plies to be included within the composite structure and, in some embodiments, includes more composite plies than are to be included in the resulting composite structure. By way of example of a guide,FIG.2depicts a guide20that designates a plurality of composite plies having respective ply orientations and arranged in a predefined order.
As represented by the association of certain composite plies of the guide with composite plies of panel7, panel7and each of the panels of the composite structure are designed so as to be compliant with the guide. In terms of being compliant with the guide and as the composite plies of panel7demonstrate, the composite plies of a panel need not include every composite ply of the guide20, but the composite plies of the panel are included in the guide and are in the same relative order defined by the guide, that is, the composite plies of a panel are in the same order as those same composite plies are arranged in the guide even though some of the intervening composite plies represented by the guide may have been dropped or eliminated from the panel. In other words, certain composite plies of the guide may be dropped, with the remainder of the composite plies of the guide, arranged in the order defined by the guide, forming the respective panel.

FIG.3provides another example of a guide20. The guide ofFIG.3is relatively small in terms of the number of composite plies and is referenced herein by way of an example. In a number of embodiments, however, the guide is comprised of many more composite plies, such as a hundred or more composite plies. As shown inFIG.3, the guide includes a plurality of composite plies in a predefined order. Each composite ply has a respective ply orientation as designated by the angular value associated with each composite ply. The plurality of composite plies that form the guide need not necessarily satisfy the plurality of stacking sequence rules that govern the composite structure that is to be defined. However, subsets of the composite plies of the guide do satisfy the stacking sequence rules and may be utilized in the design and construction of the resulting composite structure.

By way of example of the different combinations of composite plies that are compliant with the guide20and that may be utilized in the design of a composite structure,FIG.3includes a plurality of the different combinations of composite plies in the columns to the right of the guide. In this regard, each column represents a different combination of composite plies from the guide, with those boxes of a column that are shaded indicating the inclusion of the corresponding composite plies of the guide in the combination and those boxes that are unshaded indicating the absence of the corresponding composite plies of the guide from the combination. Although many different combinations of composite plies may be defined, with each combination being compliant with, that is, satisfying, the stacking sequence rules, and with each combination being compliant with the guide in terms of having composite plies with ply orientations in the same sequence as defined by the guide (even though one or more composite plies defined by the guide may have been dropped or eliminated),FIG.3illustrates seven different combinations of composite plies, all of which are compliant with the guide and with the stacking sequence rules. As noted above, a computing system30is provided for defining a guide20comprised of composite plies having one or more defined sets of ordered ply orientations. A computing system may be embodied by any of a wide variety of computers including, for example, a server, a computer workstation, a personal computer, a plurality of network computing devices or the like.
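As context for the guide-compliance test described above, a panel is compliant when its ordered ply orientations form an ordered subsequence of the guide, with intervening guide plies dropped. A minimal sketch follows; the orientation values and names are illustrative assumptions, not taken from the patent.

```python
def is_compliant(panel_plies, guide_plies):
    """True when the panel's ply orientations appear, in order, within
    the guide once intervening guide plies are dropped."""
    it = iter(guide_plies)
    # `ply in it` advances the iterator, enforcing the relative order.
    return all(ply in it for ply in panel_plies)

# Hypothetical example: a panel that drops three intervening guide plies.
guide = [45, 0, -45, 90, 45, 0, -45, 90]
panel = [45, -45, 90, 0, -45]
print(is_compliant(panel, guide))  # True: plies occur in guide order
```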
Regardless of the type of computer that embodies the computing system, the computing system of an example embodiment is depicted inFIG.4and includes processing circuitry32and optionally an associated memory device34. The memory device may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory device may be an electronic storage device (for example, a computer readable storage medium) comprising gates configured to store data (for example, bits) that may be retrievable by a machine (for example, a computing device like the processor). The memory device may be configured to store information, data, content, applications, instructions, or the like for enabling the computing system to carry out various functions in accordance with an example embodiment of the present invention. For example, the memory device could be configured to buffer input data for processing by the processor. Additionally or alternatively, the memory device could be configured to store instructions for execution by the processing circuitry.

The processing circuitry32may be embodied in a number of different ways and may, for example, include one or more processing devices configured to perform independently. Additionally or alternatively, the processing circuitry may include one or more processors configured in tandem via a bus to enable independent execution of software instructions, pipelining, and/or multithreading. The use of the terms "processor" or "processing circuitry" may be understood to include a single core processor, a multi-core processor, multiple processors, remote or "cloud" processors, or any combination thereof. In an example, the processing circuitry32may include one or more dedicated processors, controllers, specially configured field programmable gate arrays (FPGAs), or application specific integrated circuits (ASICs) to perform its corresponding functions. The processing circuitry may additionally or alternatively be implemented using a processor executing software stored in a memory device. In this fashion, the processing circuitry may be implemented using special-purpose components implemented purely via hardware design or may utilize hardware components that execute computer software designed to facilitate performance of the functions of the processing circuitry.

As shown inFIG.4, the processing circuitry32may also include or be associated with the memory device34, and the processing circuitry of this example may be configured to execute software instructions stored in the memory device or otherwise accessible to the processing circuitry. In this example, the memory device may be configured to store information, data, content, applications, software instructions, or the like, for enabling the processing circuitry to carry out various functions in accordance with examples contemplated herein. Alternatively or additionally, the processing circuitry may be configured to execute hard-coded functionality. As such, whether configured by hardware or software methods, or by a combination of hardware with software, the processing circuitry may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an example of the present disclosure while configured accordingly.
Alternatively, as another example, when the processing circuitry is embodied as an executor of software instructions, the software instructions may specifically configure the circuitry to perform the algorithms and/or operations described herein when the software instructions are executed.

Referring now toFIG.5, a flowchart is depicted of the operations performed, such as by the computing system30ofFIG.4, in order to define a guide20comprised of a plurality of composite plies that are arranged to include one or more defined sets of ordered ply orientations. The guide may be defined in order to arrange composite plies having respective ply orientations in various contexts, such as in the context of a stacking sequence/ply shape optimization problem as inFIG.2. As shown in block40ofFIG.5, the computing system, such as the processing circuitry32, is configured to receive a plurality of stacking sequence rules. The stacking sequence rules may be predefined, such as for the particular type of composite structure that is to be designed, and may be stored by the memory device34and provided to the processing circuitry. Alternatively, the plurality of stacking sequence rules may be provided to the processing circuitry by a user, such as a scientist or technician, responsible for designing the resulting composite structure10.

As shown in block42, the computing system30, such as the processing circuitry32, is also configured to define the guide20that is comprised of a plurality of composite plies that are arranged so as to include one or more defined sets of ordered ply orientations in accordance with a constrained, linear integer optimization formulation. As a result of the reliance upon a constrained, linear integer optimization formulation, the computing system and method of an example embodiment ensure that the resulting composite structure10complies with the plurality of stacking sequence rules and determine, in an efficient manner, the plurality of composite plies, their respective ply orientations and the order in which the plurality of composite plies having respective ply orientations are stacked.

In order to define the guide20including the plurality of composite plies that are arranged so as to include one or more defined sets of ordered ply orientations, the computing system30, such as the processing circuitry32, is initially configured to divide the guide into a plurality of blocks, each containing a set of consecutive plies. See block50ofFIG.6. The size of the blocks, i.e., the number of plies contained in a block, and, thus, the number of blocks into which a guide having a predetermined number of composite plies stacked in an ordered sequence is divided, may be predefined. For example, a block may have a predefined size of 10 composite plies such that a guide comprised of an ordered sequence of 80 composite plies is divided into 8 blocks designated b=1, 2, 3, . . . 8. In this example, a guide that includes an ordered sequence of composite plies designated 1, 2, . . . 80 will be divided into 8 blocks with block 1 including composite plies 1-10, block 2 including composite plies 11-20, . . . block 8 including composite plies 71-80. The guide will be formed by stacking the blocks together, such that the ordered sequence of composite plies is determined by the ordered sequences of each of the blocks and the arrangement of the blocks within the guide.
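For illustration only, the fixed-size division just described (an 80-ply guide into eight 10-ply blocks) reduces to simple slicing; the function name below is hypothetical.

```python
def divide_guide(guide, block_size=10):
    """Partition an ordered guide of ply orientations into consecutive
    blocks, e.g., an 80-ply guide into 8 blocks of 10 plies each."""
    return [guide[i:i + block_size] for i in range(0, len(guide), block_size)]

blocks = divide_guide(list(range(1, 81)))  # plies 1-10, 11-20, ..., 71-80
```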
While each of the blocks of the foregoing example has the same number of composite plies as a result of the number of blocks dividing evenly into the number of composite plies of the guide, the blocks of another example may include different numbers of composite plies. The number of blocks into which the guide will be divided need not be predefined, and the composition of the blocks in terms of the size of the blocks and the composite plies included in each block also need not be defined in advance. Instead, the blocks may be defined as described below in order to increase the flexibility in terms of the sublaminate stacks that are compliant with a respective block and that may be utilized to form a region of the composite structure.

In relation to dividing the guide into a plurality of blocks, the computing system30, such as the processing circuitry32, is therefore configured to define a plurality of candidate blocks. The candidate blocks may have different sizes and may include different combinations of composite plies such that one or more combinations of the candidate blocks can be assembled in various manners so as to provide each of the composite plies of the guide. The number of candidate blocks is larger than the total number of blocks required to form the guide. Candidate blocks need not be unique. By way of further illustration,FIG.7is another, more detailed flow diagram of the operations performed by the computing system, such as the processing circuitry. As shown in block80, the computing system, such as the processing circuitry, of this example embodiment includes a library generator function in order to define a plurality of candidate blocks based upon the stacking sequence rules82.

In order to define the guide20, a plurality of sublaminate stacks are initially defined. The sublaminate stacks are stacks of composite plies having a size limited by the size of the blocks, that is, the maximum size of a sublaminate stack is no greater than and, in at least some embodiments, is equal to the size of the blocks. Some of the sublaminate stacks, however, include fewer composite plies than the size of the blocks. Each sublaminate stack includes a different combination of composite plies having respective ply orientations and disposed or stacked in a particular order. The differences between the sublaminate stacks may be represented by different quantities of plies that form the sublaminate stacks, such as a sublaminate stack comprised of 8 composite plies being different from a sublaminate stack of 10 composite plies regardless of the order in which the composite plies are stacked. Additionally, sublaminate stacks having the same quantity of composite plies are different from one another in an instance in which the composite plies are stacked such that one or more of the composite plies in one sublaminate stack has a different orientation than the corresponding composite ply in another sublaminate stack. In this regard, reference herein to composite plies being the same or different is in relation to the ply orientations of the composite plies, such that composite plies that have the same ply orientations are considered to be the same, while composite plies that have different ply orientations are considered to be different.
In any event, the computing system, such as the processing circuitry, is configured to determine a plurality of sublaminate stacks comprised of different combinations of composite plies that satisfy the stacking sequence rules, with the size of the sublaminate stacks limited by the size of the blocks, as shown in block52ofFIG.6and as shown in blocks83and85ofFIG.7, based upon the stacking sequence rules82and the ply counts84per orientation for each region. The computing system30, such as the processing circuitry32, is configured to evaluate the plurality of sublaminate stacks and to eliminate the sublaminate stacks that fail to satisfy the stacking sequence rules. Thus, all of the sublaminate stacks that remain following the elimination of the sublaminate stacks that fail to satisfy the stacking sequence rules do satisfy the stacking sequence rules. The resulting sublaminate stacks that satisfy the stacking sequence rules are designated by an index s such that the first sublaminate stack is designated as s=1, the second sublaminate stack is designated as s=2, etc.

In addition to this initial determination of the plurality of rule-compliant sublaminate stacks s, the computing system30, such as the processing circuitry32, defines a matrix R_kbs that designates the number of composite plies of the sublaminate stack s that is compatible with candidate block b that have orientation angle k. See block54ofFIG.6. Thus, entries in the matrix R_kbs are determined for each candidate block b. For candidate block b, each rule-compliant sublaminate stack s is evaluated to determine if the sublaminate stack s is compatible with the candidate block b. In this regard, the compatibility of a sublaminate stack s with a candidate block b is determined by whether the composite plies as ordered by the sublaminate stack s appear within the candidate block b in the same order. In making this determination, the presence of intervening composite plies in the candidate block b does not render the sublaminate stack s incompatible, as the intervening composite plies of block b that do not appear within the sublaminate stack s can be dropped. However, each of the composite plies of sublaminate stack s having respective ply orientations must appear in the same order in candidate block b once the intervening composite plies of candidate block b have been dropped.

Thus, in a simple example depicted inFIG.8, a sublaminate stack may include a stack of four composite plies, namely, composite ply 1 having an orientation of 0 degrees, composite ply 2 having an orientation of +45 degrees, composite ply 3 having an orientation of −45 degrees and composite ply 4 having an orientation of 90 degrees. The sublaminate stack is shown in the leftmost column ofFIG.8. This sublaminate stack will be determined to be compatible with block b shown in the middle column ofFIG.8that has an ordered sequence of 5 composite plies with composite ply 1 having a 0 degree ply orientation, composite ply 2 having a +45 degree ply orientation, composite ply 3 having a 0 degree ply orientation, composite ply 4 having a −45 degree ply orientation and composite ply 5 having a 90 degree ply orientation, since composite ply 3 of block b can be dropped with the remaining plies matching those of the sublaminate stack s in terms of the number of composite plies and the order of the ply orientations of the remaining composite plies.
However, the same sublaminate stack s would be found not to be compatible with the block shown in the rightmost column ofFIG.8and having five composite plies ordered with composite ply 1 having an orientation of +45 degrees, composite ply 2 having an orientation of 90 degrees, composite ply 3 having an orientation of 0 degrees, composite ply 4 having an orientation of 90 degrees and composite ply 5 having an orientation of −45 degrees since, regardless of which composite ply of the block is dropped, the remaining composite plies do not match those of the sublaminate stack, as the resulting ply orientations will be different.

Of the sublaminate stacks s that are compatible with candidate block b, the computing system30, such as the processing circuitry32, is configured to determine the matrix R_kbs that defines, for each different orientation angle k, the number of plies having that orientation that are included within the sublaminate stack s that has been determined to be compatible with candidate block b. Thus, for a sublaminate stack s that is compatible with candidate block b, the computing system, such as the processing circuitry, is configured to determine the count of plies of sublaminate stack s having a ply orientation of 0 degrees, the count of composite plies of sublaminate stack s having a ply orientation of +45 degrees, the count of composite plies of sublaminate stack s having a ply orientation of −45 degrees and the count of composite plies of sublaminate stack s having a ply orientation of 90 degrees.

In addition to defining the matrix R_kbs, the computing system30, such as the processing circuitry32, of this example embodiment is also configured to define a matrix N_bs that identifies the number of variations of sublaminate stack s that remain compatible with candidate block b and that have the same ply counts R_kbs for the plurality of orientation angles k. See block56ofFIG.6. Thus, for each candidate block b, the computing system, such as the processing circuitry, is configured to evaluate each of the compatible sublaminate stacks s to determine the quantity of other sublaminate stacks that are also compatible with the same candidate block b and that also have the same matrix R_kbs, that is, the same entries in the matrix R_kbs for each of the orientation angles k, thereby indicating that these other sublaminate stacks have the same quantities of plies having each of the different orientation angles k as the sublaminate stack s that is under evaluation. For example, the value of the matrix N_bs for a respective candidate block b and a respective sublaminate stack s that is compatible with the respective block defines the quantity of other sublaminate stacks that are compatible with the same candidate block b and that also have the same number of composite plies with a ply orientation of 0 degrees, the same number of composite plies with a ply orientation of +45 degrees, the same number of composite plies with a ply orientation of −45 degrees and the same number of composite plies with a ply orientation of 90 degrees as the sublaminate stack s that is under evaluation.
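A small sketch of the R_kbs and N_bs bookkeeping just described may help; it assumes stacks and blocks are given as sequences of orientation angles, and all names are illustrative rather than taken from the patent.

```python
from collections import Counter

ORIENTATIONS = (0, 45, -45, 90)  # the orientation angles k in FIG. 8

def is_compatible(stack, block):
    """A stack s is compatible with block b when its plies appear, in
    order, within the block once intervening block plies are dropped."""
    it = iter(block)
    return all(ply in it for ply in stack)

def ply_counts(stack):
    """The R_kbs entries for one stack: its ply count per angle k."""
    c = Counter(stack)
    return tuple(c[k] for k in ORIENTATIONS)

def variation_counts(stacks, block):
    """N_bs sketch: for each stack s compatible with block b, the number
    of compatible stacks sharing the same per-orientation ply counts.
    (The patent text counts *other* such stacks; subtract 1 from each
    value to exclude the stack itself.)"""
    compatible = [s for s in stacks if is_compatible(s, block)]
    tally = Counter(ply_counts(s) for s in compatible)
    return {tuple(s): tally[ply_counts(s)] for s in compatible}
```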
As shown in block86ofFIG.7by way of further example, the computing system30, such as the processing circuitry32, of an example embodiment is configured to generate, for each candidate block b, a compatibility map in the form of a matrix R_kbs that designates the number of composite plies of the sublaminate stack s that is compatible with candidate block b that have orientation angle k, and a matrix N_bs that identifies the number of variations of sublaminate stack s that remain compatible with candidate block b and that have the same ply counts R_kbs for the plurality of orientation angles k. With reference now toFIG.9by way of example, there are four different ways that a sublaminate stack defined by an ordered sequence of composite plies having ply orientations of [45/−45/90/0] is compatible with a candidate block b having ply orientations of [45/0/−45/90/45/0/−45/90/45/0/−45/90]. In this regard,FIG.9depicts the candidate block b in the leftmost column and the four different ways that the sublaminate stack is compatible with the candidate block b in the four columns to the right. The designation * inFIG.9denotes a composite ply of block b that has been dropped and is not included in the respective sublaminate stack.

As shown in block58ofFIG.6, the computing system30, such as the processing circuitry32, is configured to determine, separately for each of the plurality of candidate blocks, in accordance with the constrained, linear integer optimization formulation, a sublaminate stack from among the plurality of sublaminate stacks that are compatible with a respective candidate block. In relation to determining the sublaminate stack in accordance with the constrained, linear integer optimization formulation, the computing system, such as the processing circuitry, is configured to select the sublaminate stack for a respective candidate block based upon the number of other sublaminate stacks that have the same ply counts R_kbs and are compatible with the respective candidate block. More particularly, the computing system, such as the processing circuitry, of an example embodiment depicted in block60ofFIG.6is configured to select the sublaminate stack for the respective candidate block by selecting, separately for each of the plurality of candidate blocks, the sublaminate stack for the respective candidate block that maximizes the quantity of sublaminate stacks that have the same ply counts R_kbs and are compatible with the respective candidate block. In other words, the computing system, such as the processing circuitry, is configured to evaluate each of the rule-compliant sublaminate stacks s that have been determined to be compliant with candidate block b and to determine which one of these sublaminate stacks has the largest number of variations, as defined by the matrix N_bs, that remain compatible with the candidate block b and that have the same count of plies R_kbs for the plurality of ply orientation angles k.

FIG.7depicts another example of the constrained, linear integer optimization formulation. In this example embodiment, the computing system30, such as the processing circuitry32, is configured to initially assign a plurality of variables and constraints and to identify the objective function that defines the constrained, linear integer optimization formulation, as well as the matrices described herein that define the optimization formulation. See blocks88and89. The variables and constraints are described individually below.
The computing system, such as the processing circuitry, is then configured to implement the constrained, linear integer optimization formulation as shown in block90in order to produce the solution in terms of a selection of candidate blocks to assemble to form the guide20. See blocks92and94ofFIG.7. In an example embodiment, the constrained, linear integer optimization formulation maximizes the quantity of choices of sublaminate stacks while ensuring that the desired ply counts for each region j of the composite structure10may be satisfied. In this regard, maximizing the quantity of sublaminate choices increases the number of design variables that are available during a subsequent step in the design process in which the ply shapes are defined, e.g., optimized, thereby increasing the likelihood that better ply shapes will be available for selection. In this regard, a variable G_bsj may be defined by the computing system30, such as the processing circuitry32, as follows:

G_bsj ∈ {0,1}^(N_j × n_B × n_s) = 1 if block b is selected for sublaminate s on element j, and 0 otherwise,

wherein j designates a composite structure or a region of a composite structure, such as a panel, and N_j is the number of elements j. The variable G_bsj specifies that for each block of the guide20only one candidate block b can be selected, with one candidate block being selected in an instance in which G_bsj is 1 and no candidate block being selected in an instance in which G_bsj is 0. As such, the computing system, such as the processing circuitry, of this example embodiment is configured to maximize the quantity of sublaminate stacks that are compatible with the respective blocks by maximizing the following equation:

Σ_{j,b,s} N_bs · G_bsj

As shown in block108ofFIG.7, the linear integer optimization formulation that is evaluated by the computing system30, such as the processing circuitry32, in accordance with an example embodiment is subject to a plurality of constraints. In order to define the constraints, a plurality of other variables are initially defined. In this regard, the computing system, such as the processing circuitry, is configured to determine the variables H_bj and K_b as follows:

H_bj ∈ {0,1}^(n_FE × n_B) = 1 if block b is NOT used in element j, and 0 otherwise

K_b ∈ {0,1}^(n_B) = 0 if block b is NOT used in any element, and 1 otherwise

Thus, H_bj defines whether a candidate block b is included or not in a region of a composite structure designated j, while K_b defines whether a candidate block is included or not in the entire composite structure. Based upon these variables, the computing system30, such as the processing circuitry32, of an example embodiment is configured to maximize a quantity of sublaminate stacks in accordance with the linear integer optimization formulation subject to a first constraint of:

Σ_s G_bsj + H_bj = 1, ∀ b, j

This constraint requires that only one compatible sublaminate stack is allowed to be chosen per candidate block b, or else the candidate block b must not be included in the respective element j. The computing system30, such as the processing circuitry32, of this example embodiment is also configured to ensure that the guide that is formed by the blocks that are selected with the linear integer optimization can accommodate the ply counts for every element j by imposing a second constraint of:

Σ_{b,s} R_kbs · G_bsj = N_kj, ∀ j, k

This second constraint requires that, for each region j, candidate blocks are identified that can produce a combination of compatible sublaminate stacks that match the number of plies of each orientation.
In other words, the computing system, such as the processing circuitry, is configured to determine the quantity of sublaminate stacks in accordance with the linear integer optimization formulation subject to this second constraint that a total number of plies across the sublaminate stacks determined for the plurality of blocks of the guide20equals a predefined ply count, such as the required counts of composite plies of at least one region of the composite structure. The computing system30, such as the processing circuitry32, of this example embodiment is further configured to maximize the quantity of sublaminate stacks in accordance with the linear integer optimization formulation subject to a third constraint of:

Σ_b K_b < B_max

This third constraint limits the quantity of blocks that are utilized to B_max. B_max may be a predefined number of blocks. Alternatively, the sum of K_b could be included as a penalty in the objective function, that is, in the constrained, linear integer optimization formulation, in order to encourage selection of a smaller number of blocks, eliminating the need to determine B_max. The computing system30, such as the processing circuitry32, of this example embodiment may be further configured to maximize the quantity of sublaminate stacks in accordance with the linear integer optimization formulation subject to additional constraints in the form of:

(1 − K_b) · N_j − Σ_j H_bj ≤ 0

K_b + Σ_j H_bj ≥ 0

These additional constraints define the quantity of blocks K_b based on the matrix indicating whether respective candidate blocks b are included or not in respective elements j, with candidate blocks b that are not included in any element j satisfying the following equation:

Σ_j H_bj = N_j

wherein N_j is the total number of regions.

By reliance upon the constrained, linear integer optimization formulation and subject to the foregoing constraints, the compatible sublaminate stacks s may be evaluated for each of the plurality of candidate blocks b, and the sublaminate stack s that maximizes the quantity of sublaminate stacks that have the same ply counts R_kbs and are compatible with the respective block may be selected for each respective candidate block b. After having evaluated each of the plurality of candidate blocks b in accordance with the constrained, linear integer optimization formulation, the computing system30, such as the processing circuitry32, is configured to define the guide20to include the candidate blocks that are identified by the variable G_bsj to be included in the guide. Based upon the resulting guide20, the sublaminate stacks s that have been selected for each of the blocks b that have been included in the guide may be assembled in the same order as the respective blocks b to form the different regions of the composite structure. The guide therefore includes the plurality of sublaminate stacks s that have been identified in accordance with the constrained, linear integer optimization formulation along with, in some embodiments, one or more intervening composite plies that are not included in any of the sublaminate stacks s that have been identified.
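Where a concrete solver interface helps, the block-selection formulation above can be expressed directly in an off-the-shelf integer programming library. The following is a minimal, illustrative sketch using the PuLP library; the input containers (block list B, stack list S, region list J, the counts N[b][s] and R[k][b][s], and the required ply counts N_counts[k][j]) are hypothetical names rather than anything defined in the patent, and the second linking constraint is omitted for brevity.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def build_guide_ilp(B, S, J, angles, N, R, N_counts, B_max):
    """Sketch of the constrained, linear integer optimization above:
    choose one compatible sublaminate stack per used block per region,
    match per-angle ply counts, and cap the number of blocks used."""
    prob = LpProblem("guide_selection", LpMaximize)
    G = LpVariable.dicts("G", (B, S, J), cat=LpBinary)
    H = LpVariable.dicts("H", (B, J), cat=LpBinary)  # 1 if b unused in j
    K = LpVariable.dicts("K", B, cat=LpBinary)       # 1 if b used anywhere

    # Objective: maximize the number of interchangeable stack choices.
    prob += lpSum(N[b][s] * G[b][s][j] for b in B for s in S for j in J)

    # First constraint: one stack per block and region, or block unused.
    for b in B:
        for j in J:
            prob += lpSum(G[b][s][j] for s in S) + H[b][j] == 1

    # Second constraint: selected stacks supply the required ply counts.
    for j in J:
        for k in angles:
            prob += lpSum(R[k][b][s] * G[b][s][j]
                          for b in B for s in S) == N_counts[k][j]

    # Third constraint: cap blocks used (strict bound relaxed to <=).
    prob += lpSum(K[b] for b in B) <= B_max

    # Linking constraint: K_b = 0 forces H_bj = 1 for every region j.
    for b in B:
        prob += (1 - K[b]) * len(J) - lpSum(H[b][j] for j in J) <= 0
    return prob
```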
By identifying the plurality of sublaminate stacks s that comprise the blocks of the guide20based upon a maximization, for each block, of the quantity of sublaminate stacks that have the same ply counts R_kbs and are compatible with the respective block, the flexibility with which the sublaminate stacks s of the blocks of the guide may be utilized in order to construct the resulting composite structure, or different regions of the composite structure, is similarly maximized. This is because the presence of a respective sublaminate stack in the guide permits the respective sublaminate stack, or any of the other sublaminate stacks that have the same ply counts R_kbs for all ply orientation angles k, to be utilized. Thus, the likelihood that the resulting composite structure or various regions of the resulting composite structure will include one or more of the sublaminate stacks is maximized, and the quantity of sublaminate stacks included in the resulting composite structure or different regions of the resulting composite structure is correspondingly maximized, thereby increasing the efficiency with which the resulting composite structure is designed, since each of the sublaminate stacks s has previously been determined to satisfy the stacking sequence rules and, as a result, need not be further evaluated in terms of the stacking sequence rules during the design and construction of the composite structure itself.

Moreover, by utilizing a constrained, linear integer optimization formulation, the plurality of sublaminate stacks s and the particular sublaminate stacks s that are included in the guide20are also determined in an efficient manner, particularly as the sublaminate stacks s that are rule compliant may be determined in advance along with a number of other variables, thereby enhancing the efficiency with which the evaluation of the constrained, linear integer optimization formulation is performed. Indeed, the stacking sequence rules need not serve as constraints during evaluation of the constrained, linear integer optimization formulation since the sublaminate stacks that are evaluated have previously been determined to comply with the stacking sequence rules. By performing the constrained, linear integer optimization formulation in an efficient manner without consideration of the stacking sequence rules as constraints as described above, both processing resources and processing time are conserved, thereby providing numerous technical advantages.

FIGS.5-7illustrate flowcharts describing the operation of computing systems30, methods, and computer program products according to examples of the disclosure. It will be understood that each block of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by various means, embodied as hardware, firmware, circuitry, and/or other devices associated with execution of software including one or more software instructions. For example, one or more of the operations described above may be embodied by software instructions. In this regard, the software instructions which embody the procedures described above may be stored by the memory34of the computing system employing an example of the present disclosure and executed by processing circuitry32. As will be appreciated, any such software instructions may be loaded onto a computer or other programmable apparatus (e.g., hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the functions specified in the flowchart blocks.
These software instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the software instructions stored in the computer-readable memory produce an article of manufacture, the execution of which implements the functions specified in the flowchart blocks. The software instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the software instructions executed on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart blocks. The flowchart blocks support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, can be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and software instructions. In some examples, some of the operations above may be modified or further amplified. Furthermore, in some examples, additional optional operations may be included. Modifications, amplifications, or additions to the operations above may be performed in any order and in any combination. Many modifications and other examples of the present disclosure set forth herein will come to mind to one skilled in the art to which the present disclosure pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the present disclosure is not to be limited to the specific examples disclosed and that modifications and other examples are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe examples in the context of certain combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative examples without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purpose of limitation. | 43,236 |
11861274 | DETAILED DESCRIPTION OF THE INVENTION The present invention preferably uses an explicit based Finite Element Analysis (FEA) method in LS-Dyna® software to run impacts of a simulated golf ball into a simulated driver. LS-Dyna® results are preferably derived for full launch performance metrics, while managing durability, conformance, and sound. While existing techniques to run LS-Dyna® have improved, each iteration of the design output was previously guided by the stress of the solution. Simple two- or three-parameter experimental designs examined the correlation coefficients between variables and ball speed and CT, which required a great deal of analyst skill and time. The method of the present invention increases the usefulness of FEA by using more of its data and making design exploration less analyst-dependent, thereby achieving optimized results within a design space. It also provides a solution that considers any number of design parameters and is easier to use.

A method100for optimizing the structure of a golf club head for ball speed is shown inFIG.1. At block101, a sampling method is used to create a range of design parameters to create multiple designs for finite element analysis. At block102, a finite element analysis is run to supply the results needed to create a neural net model of a design. At block103, a neural net model is used in an optimization routine to predict an optimal point for the design based on an objective. At block104, the optimal design is run through finite element analysis and compared against the prediction. At block105, the steps are repeated with a new sampling until a converged design is achieved, wherein the converged design is selected from the group consisting of a golf club head component and a golf ball. This method differs from guided machine learning routines in that the training set comprises designs from the sampling method and the hold-out is the optimal result. This process is continued until an accurate model is provided or the results do not change significantly from prior iterations.

The method of the present invention was utilized to design the variable thickness face design200shown inFIG.2and the face design300shown inFIG.3. In this instance, the solution comprised selecting preferably from ten to one hundred design parameters, more preferably from twenty to seventy design parameters, and most preferably thirty-four design parameters, and running over one thousand design variations on these design parameters to achieve an optimized solution based on the following series of constraints: (1) a 200 Ksi stress constraint based on a Titanium 6-4 sheet material, (2) a 248 μs characteristic time (CT) constraint (for conformance purposes), and (3) a 196 gram head mass. The resulting design maximizes the coefficient of restitution of the face. The sampling was performed with a space filling algorithm, modeled with a radial basis function, and optimized with a hybrid Adaptive Simulated Annealing algorithm to find the global optimum. LFOP was used to find the optimal result in the identified globally optimal region.

Once the variable thickness pattern shown inFIGS.2and3was determined using the method of the present invention, forging and machining processes were used to manufacture the optimized face insert. A variable thickness face insert designed using the method of the present invention may be incorporated into a standard golf club head, or may be combined with a body having other structural, mass-properties enhancing features.
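A minimal sketch of the loop of blocks101-105follows, assuming a user-supplied run_fea(x) function that returns a scalar objective (for example, a penalized negative coefficient of restitution) for a design parameter vector x. The names are hypothetical, SciPy's differential evolution stands in for the hybrid Adaptive Simulated Annealing/LFOP combination named above, a radial basis surrogate stands in for the neural net model of block103, and constraint handling (stress, CT, mass) is assumed to be folded into the objective as penalty terms.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def optimize_design(run_fea, bounds, n_samples=64, n_rounds=10, tol=1e-4):
    """Sample designs, fit a radial-basis surrogate to FEA results,
    optimize the surrogate, verify the optimum with FEA, and repeat."""
    lo, hi = np.array(bounds).T
    best_x, best_y = None, np.inf
    for _ in range(n_rounds):
        # Space-filling sample of the design parameters (block 101).
        X = qmc.scale(qmc.LatinHypercube(d=len(bounds)).random(n_samples),
                      lo, hi)
        if best_x is not None:
            X = np.vstack([X, best_x])         # keep the incumbent design
        y = np.array([run_fea(x) for x in X])  # FEA objective (block 102)
        model = RBFInterpolator(X, y)          # surrogate model (block 103)
        res = differential_evolution(lambda x: model(x[None])[0], bounds)
        y_true = run_fea(res.x)                # verify prediction (block 104)
        if y_true < best_y - tol:
            best_x, best_y = res.x, y_true     # improved; resample (block 105)
        else:
            break                              # converged design
    return best_x, best_y
```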
For example, the insert can be placed into a body with face stress-reducing features, such as those disclosed in U.S. Pat. Nos. 9,486,677, 9,597,558, 9,597,561, 9,687,701, 9,687,702, 9,694,257, 9,757,629, 9,776,058, 9,814,947, 9,821,199, 9,855,476, and 9,889,349, the disclosure of each of which is hereby incorporated by reference in its entirety herein. The insert may, alternatively, be combined with a body comprising one or more slots, channels, or grooves, such as those disclosed in U.S. Pat. Nos. 8,403,771, 8,529,368, 8,858,360, 8,956,242, 9,468,819, and 9,776,057, the disclosure of each of which is hereby incorporated by reference in its entirety herein. The insert may also be combined with a body having one or more stationary or movable weight members, such as those disclosed in U.S. Pat. Nos. 8,257,195, 8,328,661, 8,414,420, 8,425,346, 8,900,070, 8,926,448, 9,211,451, 9,586,105, 9,782,642, 8,894,506, 9,084,921, 8,696,491, 9,387,376, 9,675,856, 9,211,453, 9,289,660, 9,364,728, 8,790,195, 8,968,116, 9,623,294, 9,694,261, 9,636,553, 9,682,296, 9,694,256, 8,690,708, 9,022,881, 9,101,811, 8,834,294, 8,956,244, 9,067,110, 9,072,951, 9,180,349, 9,216,332, and 9,308,423, the disclosure of each of which is hereby incorporated by reference in its entirety herein.

When designing a golf ball using this method, accurate material models are required to achieve the level of detail needed for the results. This advanced accuracy requires a combination of lab-generated data from cyclic compression tests, drop tests, and Split-Hopkinson bar tests on the material, in addition to matching simulation results to PTM COR data on ball cores. This data is used to tune the material models by nine parameters. It uses the same techniques that are used to design the face. The only difference is that, instead of being constrained by stress, CT, and mass, the simulation objective is to minimize the difference between the test results and simulation data. The model fits were verified using data from multilayer core tests. The result of 0.0008 COR point delta on the dual core is within two times the measurement error of the test, so combining materials in the simulation can be as accurate as the physical test results. Results are provided in Table 1 below.

TABLE 1

Material | Diameter | Tested COR | FEA COR | COR Delta
−10 Comp | 0.938 | 0.760001 | 0.759925 | 7.6E−05
−10 Comp | 1.615 | 0.76947 | 0.769612 | −1.40E−04
90 Comp | 0.938 | 0.768 | 0.767927 | 7.26E−05
90 Comp | 1.615 | 0.783 | 0.782968 | 3.17E−05
Dual Core 90 comp outer − 10 comp inner | 1.542 | 0.784 | 0.783196 | 8.04E−04

The method of the present invention optimizes golf balls and clubs for use with each other, while keeping these products in conformance with their respective rules. Simultaneous design gives a larger design space for exploration.

In alternative embodiments shown inFIGS.4and5, the thicknesses of putter face inserts400,500are optimized to minimize ball speed variation across the face on a nine point hit map, while keeping overall putter head mass between 340 and 360 grams, and more preferably between 347 and 351 grams. The face inserts400,500each have at least four sides402,404,406,408,502,504,506,508, and may be composed of any metal alloy material, but preferably are selected from the group consisting of Aluminum 6061, Titanium 6-4, and 304 SS. They may be formed, forged, metal injection molded, printed by a three-dimensional printer, cast, and/or milled.
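The material-model tuning described earlier in this section amounts to a least-squares fit of the nine parameters against the measured COR values. The sketch below is illustrative only: simulate_cor is a hypothetical stand-in for a wrapper around the LS-Dyna® runs (which, being expensive, might in practice be replaced by a surrogate as in the design loop above), and none of the names come from the patent.

```python
from scipy.optimize import least_squares

def tune_material(simulate_cor, p0, tests):
    """Fit the nine material-model parameters so that simulated COR
    matches the tested COR for each core build (material, diameter)."""
    def residuals(p):
        return [simulate_cor(p, mat, dia) - cor for mat, dia, cor in tests]
    return least_squares(residuals, p0).x

# Hypothetical usage with the single-core Table 1 test points:
tests = [("-10 Comp", 0.938, 0.760001), ("-10 Comp", 1.615, 0.76947),
         ("90 Comp", 0.938, 0.768), ("90 Comp", 1.615, 0.783)]
# p_fit = tune_material(lsdyna_cor_wrapper, [1.0] * 9, tests)
```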
As shown in Table 2 below, the ball speed robustness of the optimized face insert400shown inFIG.4is improved when compared with existing Toulon San Diego and Microhinge Star putters, both sold by Callaway Golf Company.

TABLE 2

Insert | Material | Head | Head MOI | Ball Speed Robustness | Improvement Over San Diego | Improvement Over MH Star
Face insert 400 | Ti 6-4 | BL-1 | 5200 | 0.053 | 69% | 51%
Face insert 400 | 6061 Aluminum | BL-1 | 5200 | 0.064 | 64% | 41%
Face insert 400 | 304 SS | BL-1 | 5200 | 0.085 | 51% | 22%
Microhinge Star | DSM 550 | BL-1 | 5200 | 0.108 | 38% | N/A
None | 304 SS | San Diego | 4437 | 0.175 | N/A | −61%

FIG.6is an iron-type face insert with a variable thickness pattern derived from the method shown inFIG.1.

From the foregoing, it is believed that those skilled in the pertinent art will recognize the meritorious advancement of this invention and will readily understand that, while the present invention has been described in association with a preferred embodiment thereof, and other embodiments illustrated in the accompanying drawings, numerous changes, modifications and substitutions of equivalents may be made therein without departing from the spirit and scope of this invention, which is intended to be unlimited by the foregoing except as may appear in the following appended claims. Therefore, the embodiments of the invention in which an exclusive property or privilege is claimed are defined in the following appended claims.
11861275 | DETAILED DESCRIPTION The RHPSO algorithm begins with an initial best candidate solution, which is selected a priori, based on a demonstration of suitability, and then placed within a collection of randomized solutions. A parameterized routine is called to iteratively move the particles through the problem hyperspace by way of a multi objective cost function that is used to update the velocity vectors of each particle. Convergence is determined when the value of an iteration counter, which resets with each new update of a global best solution, reaches a given threshold, T. This threshold is a parameter that controls how much branching the algorithm performs. After convergence has been reached, the best solution is stored and recursively passed to the same parameterized routine to be used as a seed for another optimization run, in which that particle solution is combined with a fresh round of randomized solutions. This process repeats recursively until an invocation of the routine is encountered where no new best particle is found. This process describes a single branch of a hierarchical search tree.

In order to continue branching at each node, a parameter can define the maximum number of branches supported at each level in the tree. Furthermore, by adjusting the value of T, which can be implemented as a percentage of the total iterations allowed for convergence, it is possible for the forking of each branch to be controlled so as to create a search tree with denser or leaner branch structures. Furthermore, exhaustive searches of a given region within the solution space can be prevented by performing a linear search through past best particle solutions and prematurely terminating a branch if the branch's best solution is too similar to others. Typical PSO solutions discourage premature convergence; however, the RHPSO approach actually leverages this property of PSO to continuously fork the search operations around local optimums and accomplish exploration through the continuous insertion of random particles at each branch point. At the end of each optimization process for a branch, the best particle score is compared with that of the current system global best and set as the system global best if its score exceeds it. This approach results in a continuous process of iterative exploration of a complex solution space in a way that improves on the time needed to converge on a global optimum when compared to other implementations of PSO.

The PSO algorithm has a tendency to quickly converge to a local optimum. While RHPSO leverages this property to recursively search the solution space through the injection of randomized particle solutions, this continuous infusion of randomized solutions may not be sufficient to encourage exploration of the solution space in the presence of steep gradients around a local optimum. In such cases, further exploration is encouraged over exploitation by including a similarity measure, which is minimized, in the cost function. At the root of each branch, the particle of the current repetition is stored in a set comprised of the particles across all repetitions of the current branch; see, e.g.,FIGS.1and2. This set could be comprised of the parents and children of the branch; however, that would incur a significant cost in computation with diminishing gains. The cost function used in all child branches includes a similarity term derived from the comparison of the current solution to all solutions within the set associated with the parent branch, see, e.g.,FIG.3.
The similarity term is minimized so as to favor new solutions that are dissimilar to the solutions obtained in all the repetitions in the parent branch. Due to the stochastic nature of PSO, a significant amount of computation can be wasted on low performing local solutions that serve as the root for a sub-hierarchy that could be comprised of several levels of branching before the algorithm determines the entire root branch to be sub-optimal. To mitigate this waste in computation and time, an implementation of RHPSO could elect to generate a set number, k, of candidate particle solutions for a branch. After the kth solution is generated, the highest performing solution is selected to be the root particle for a new branch. The value of k can be statically or dynamically set. In the dynamic case, k could be dependent on the branch level since variability in solution particle fitness diminishes with the depth of the solution hierarchy.

Process:

1. Generate an initial best solution (particle).
2. Pass the best solution (as seed particle) into the parameterized function.
   a. Embed the seed particle (for the current invocation of the function) as one particle in a collection of randomly initialized particles.
      i. Randomization of initial particles could be based on proximity to the seed particle.
   b. Set the cost function to be used at the current branch level (could remain constant across all branches or set to weight one or more terms as a function of the branch level).
      i. Store all best particle solutions at this branch level within a set.
      ii. Calculate the similarity of the current solution to all those stored within the set.
      iii. Optimize based on minimizing similarity to all solutions within the set.
   c. Begin particle swarm optimization with a set maximum number of iterations, M.
      i. Reset the iteration count whenever a new optimum is found.
      ii. Exit the optimization loop when n% of M iterations has occurred (accounting for the fact that the iteration counter resets on each new optimum found).
   d. Repeat steps 2a-2c, K times.
   e. Select the best particle from the set of K particles (K may be a function of hierarchy depth).
   f. If the best solution particle is different from the initial best solution and the solution particle is not too close to one or more of the previously generated best solution particles across all branches:
      i. Store the current best solution particle.
      ii. Store the current solution score as the best branch solution if the score is better than previous scores at this branch level.
      iii. Recurse the function with the current best solution particle as a new seed particle, then repeat step 2 recursively.
   g. If no improvement on the best solution particle was found, or the best solution found duplicates a previous one:
      i. Repeat step 2e recursively with the initial seed particle provided at the current invocation (repeat k times).

The invention has been implemented in the form of a software product that produces an end result, consisting of a set of optimized polynomial functions or "alphabets" which are useful in the spiral polynomial division multiplexing as described, for example, in U.S. Pat. No. 10,069,664, the disclosure of which is incorporated herein in its entirety.
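A compact sketch of the branch routine above is shown below. It is illustrative only: it assumes a minimization cost function and Euclidean distance as the similarity measure, and it omits the K-candidate repetition of steps 2d-2e, the per-level branch-count parameter, and the similarity term in the cost function for brevity. All names are hypothetical.

```python
import numpy as np

def pso(cost, seed, bounds, n_particles=30, max_iter=200, t_frac=0.25,
        w=0.7, c1=1.5, c2=1.5, rng=None):
    """One branch: PSO with the seed embedded among random particles;
    the stall counter resets on each new global best (threshold T)."""
    rng = rng or np.random.default_rng()
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, (n_particles, len(bounds)))
    X[0] = seed                                  # embed the seed particle
    V = np.zeros_like(X)
    P, P_cost = X.copy(), np.array([cost(x) for x in X])
    g, g_cost = P[np.argmin(P_cost)].copy(), P_cost.min()
    stall, T = 0, int(t_frac * max_iter)
    while stall < T:
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)
        costs = np.array([cost(x) for x in X])
        better = costs < P_cost
        P[better], P_cost[better] = X[better], costs[better]
        if P_cost.min() < g_cost:
            g, g_cost = P[np.argmin(P_cost)].copy(), P_cost.min()
            stall = 0                            # reset on new optimum
        else:
            stall += 1                           # converged when stall == T
    return g, g_cost

def rhpso(cost, seed, bounds, history=None, min_gain=1e-6, min_dist=1e-3):
    """Recursive branching: reseed PSO with each branch's best until no
    improvement is found or the best is too close to earlier solutions."""
    history = [] if history is None else history
    seed = np.asarray(seed, dtype=float)
    best, best_cost = pso(cost, seed, bounds)
    no_gain = best_cost >= cost(seed) - min_gain
    too_close = any(np.linalg.norm(best - h) < min_dist for h in history)
    if no_gain or too_close:
        return seed, cost(seed)                  # branch terminates
    history.append(best.copy())
    return rhpso(cost, best, bounds, history, min_gain, min_dist)
```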
Specifically, there is provided according to an embodiment of the invention a method for communication including: identifying a set of basis polynomial functions used to generate waveforms, wherein each of the basis polynomial functions in the set of basis polynomial functions is orthogonal to each of the other basis polynomial functions in the set of basis polynomial functions in a coordinate space; combining the set of basis polynomial functions into a message polynomial; convolving the message polynomial with a reference polynomial to produce a transmission polynomial; generating, from the transmission polynomial, a sequence of amplitude values; and transmitting, with a transmitter, a signal based on the sequence of amplitude values.

According to another embodiment, there is provided a method for communicating comprising: identifying a set of basis polynomial functions used to generate waveforms, wherein each of the basis polynomial functions in the set of basis polynomial functions is orthogonal to each of the other basis polynomial functions in the set of basis polynomial functions in a polynomial coefficient space; combining the set of basis polynomial functions into a message polynomial; generating a transmission polynomial comprising the message polynomial; generating, from the transmission polynomial, a sequence of amplitude values; transmitting, with a transmitter, a signal based on the sequence of amplitude values; assigning a power budget to a rising exponential and a falling exponential of a synchronization pulse; and transmitting a synchronization pulse having a length of one transmission time interval, the synchronization pulse comprising the message polynomial having a plurality of sub-channels, the plurality of sub-channels comprising the rising exponential, the falling exponential, and one or more independently-modulated sub-channels; wherein the rising exponential and the falling exponential are transmitted at a maximum allowable power, the maximum allowable power of each of the rising exponential and the falling exponential summing to the power budget.

According to another embodiment, there is provided a method for communicating, comprising: identifying a set of basis polynomial functions used to generate waveforms, wherein each of the basis polynomial functions in the set of basis polynomial functions is orthogonal to each of the other basis polynomial functions in the set of basis polynomial functions in a polynomial coefficient space; combining the set of basis polynomial functions into a message polynomial; performing a dimension reduction step on the message polynomial, the dimension reduction step comprising: identifying a positive integer n_p for which the Taylor term t^(n_p)/n_p! in the message polynomial produces a higher peak value than any other t^n/n! over an evaluation interval; determining, by projecting t^(n_p)/n_p! onto Cairns space, the projection coefficients for the Cairns functions that collectively generate t^(n_p)/n_p!; and rotating the coordinate space to ensure that t^(n_p)/n_p! will not be generated; generating a transmission polynomial comprising the message polynomial; generating, from the transmission polynomial, a sequence of amplitude values; and transmitting, with a transmitter, a signal based on the sequence of amplitude values.

The present invention provides an improved way, according to the aforementioned methods of communicating, to identify the set of basis polynomial functions used to generate the waveforms, comprising the following steps:
1. Generate an initial best solution (particle).
2. Pass the best solution (as a seed particle) into a parameterized function.
  a. Embed the seed particle (for the current invocation of the function) as one particle in a collection of randomly initialized particles.
    i. Randomization of the initial particles could be based on proximity to the seed particle.
  b. Set the cost function to be used at the current branch level (it could remain constant across all branches or be set to weight one or more terms as a function of the branch level).
    i. Store all best particle solutions at this branch level within a set.
    ii. Calculate the similarity of the current solution to all those stored within the set.
    iii. Optimize based on minimizing similarity to all solutions within the set.
  c. Begin particle swarm optimization with a set maximum number of iterations, M.
    i. Reset the iteration count whenever a new optimum is found.
    ii. Exit the optimization loop when n % of M iterations has occurred (accounting for the fact that the iteration counter resets on each new optimum found).
  d. Repeat steps 2a-2c, K times.
  e. Select the best particle from the set of K particles (K may be a function of hierarchy depth).
  f. If the best solution particle is different from the initial best solution and the solution particle is not too close to one or more of the previously generated best solution particles across all branches:
    i. Store the current best solution particle.
    ii. Store the current solution score as the best branch solution if the score is better than previous scores at this branch level.
    iii. Recurse the function with the current best solution particle as a new seed particle, i.e., repeat step 2 recursively.
  g. If no improvement on the best solution particle was found, or it duplicates the best solution found:
    i. Repeat step 2e recursively with the initial seed particle provided at the current invocation (repeat K times).

This type of global optimization, disclosed above in connection with the generation of polynomial alphabets for use in spiral polynomial division multiplexing, can be applied in other fields of use, including 1) distributed/networked systems, finding optimal parameters such as topology, security, and routing; 2) frequency and channel assignments for telecommunication networks; and 3) code-breaking, searching a large solution space of ciphers for the one correct decryption.

For example, the present method of optimization may be used in place of a Monte Carlo simulation, for example, in the case of location measurement acquisition as described in U.S. Pat. No. 8,188,920, the disclosure of which is incorporated herein in its entirety. According to this embodiment of the invention, the Recursive Hierarchical Particle Swarm Optimization method may be used to optimize the number of measurements in order to estimate dilution of precision (DOP) across a service area. Where a Monte Carlo simulation potentially requires significant processing power (depending on the number of iterations in the simulation), the present invention could reduce the overall computational power required by more quickly converging on a suitable output indicative of the probable number of measurements required to produce results at varying levels of quality. Location determination in a wireless network usually starts with a coarse location based on the serving area of the base station. For any given serving area, the location determination element can use information about the surrounding network to determine the most likely number of measurements required.
This uses the geometry of the surrounding base stations (radio transmitters) and the likelihood that each can be successfully measured. The DOP can vary for different points within the serving area of the base station, so the actual expected value can also differ greatly from point to point. To reduce the complexity of this model, the RHPSO simulation may be used, taking randomly distributed points within the serving area. This can be used to produce information on the likely distribution of the DOP within the serving area.

Accordingly, there is provided according to the invention a method of optimizing the number of measurements requested in a service area, comprising: selecting a level of uncertainty; determining a set of radio transmitters that transmit signals capable of being received in the service area; determining a metric across the service area based on at least the geometry of each of the radio transmitters within the set with respect to a plurality of randomly distributed points across the area; and determining the number of measurements required at a location within the area based on at least the metric and the level of uncertainty, wherein the step of determining the number of measurements required at a location within the area based on at least the metric and the level of uncertainty comprises:
1. Selecting an initial number of measurements as a seed particle;
2. Passing said seed particle into a parameterized function;
  a. Embedding said seed particle (for the current invocation of the function) as one particle in a collection of randomly initialized particles;
    i. Where randomization of the initial particles is optionally based on proximity to the seed particle;
  b. Setting a cost function to be used at a current branch level, where the cost function may remain constant across all branches or be set to weight one or more terms as a function of a branch level;
    i. Storing all best particle solutions at each branch level within a set;
    ii. Calculating the similarity of the current solution to all solutions stored within said set;
    iii. Optimizing based on minimizing similarity to all solutions within said set;
  c. Beginning particle swarm optimization with a set maximum number of iterations, M;
    i. Resetting the iteration count whenever a new optimum is found;
    ii. Exiting the optimization loop when n % of M iterations has occurred (accounting for the fact that the iteration counter resets on each new optimum found);
  d. Repeating steps 2a-2c, K times;
  e. Selecting a best particle from a set of K particles, where K may be a function of hierarchy depth;
  f. If the best solution particle is different from the seed particle and the solution particle is not too close to one or more of the previously generated best solution particles across all branches, then:
    i. Storing the current best solution particle;
    ii. Storing a current solution score as a best branch solution if the current solution score is better than previous scores at a same branch level;
    iii. Recursing the function with the current best solution particle as a new seed particle, then repeating step 2 recursively;
  g. If no improvement on the best solution particle was found, or it duplicates the best solution found:
    i. Repeating step 2e recursively with the initial seed particle provided at the current invocation (repeat K times).
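To make the geometric metric above concrete, the following Python sketch computes a standard range-based horizontal dilution of precision (HDOP) at randomly distributed points in a serving area. The transmitter layout, the sampling, and the use of HDOP as the cost driven by RHPSO are illustrative assumptions; the patents cited above do not prescribe this particular formulation.

    import numpy as np

    def hdop(point, transmitters):
        # Line-of-sight unit vectors from the point to each transmitter form
        # the geometry matrix G; HDOP is sqrt(trace((G^T G)^-1)).
        diffs = np.asarray(transmitters, dtype=float) - np.asarray(point, dtype=float)
        G = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
        return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

    # Sample the likely DOP distribution across a 1 km x 1 km serving area
    # (hypothetical transmitter coordinates, in meters).
    rng = np.random.default_rng(0)
    txs = [(0.0, 0.0), (1000.0, 0.0), (500.0, 900.0)]
    points = rng.uniform(0.0, 1000.0, size=(100, 2))
    dop_samples = [hdop(p, txs) for p in points]

The distribution of dop_samples, together with the selected level of uncertainty, is what the seed particle (an initial number of measurements) would then be optimized against.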
The foregoing method may be used in connection with any location determination technology in a wireless network that uses measurements from multiple radio transmitters, including but not limited to OTDOA (UMTS, LTE, WiMAX), U-TDOA (GSM, UMTS, LTE), RTT (UMTS, WiMAX), TA (GSM, LTE), and signal strength (any wireless network).

According to yet another embodiment, the present method of optimization may be used in place of the accelerating particle-swarm algorithms described in U.S. Pat. No. 10,437,948, the disclosure of which is incorporated herein in its entirety. According to this embodiment of the invention, the Recursive Hierarchical Particle Swarm Optimization method may be used to optimize a complex solution space for any given mathematical function, where the mathematical function may be directed to 1) finding optimal parameters such as topology, security, and routing in distributed/networked systems; 2) finding optimal frequency and channel assignments for telecommunication networks; and 3) code-breaking, searching a large solution space of ciphers for the one correct decryption.

According to this embodiment, there is provided according to the invention a method for accelerating a particle swarm optimization for a solution space for a parameterized function, comprising:
1. Generating, via one or more processors, an initial best solution for a parameterized function as a seed particle;
2. Passing, via the one or more processors, said seed particle into said parameterized function;
  a. Embedding, via the one or more processors, said seed particle (for the current invocation of the function) as one particle in a collection of randomly initialized particles;
    i. Where randomization of the initial particles is optionally based on proximity to the seed particle;
  b. Setting, via the one or more processors, a cost function to be used at a current branch level, where the cost function may remain constant across all branches or be set to weight one or more terms as a function of a branch level;
    i. Storing, via the one or more processors, all best particle solutions at each branch level within a set;
    ii. Calculating, via the one or more processors, the similarity of the current solution to all solutions stored within said set;
    iii. Optimizing, via the one or more processors, based on minimizing similarity to all solutions within said set;
  c. Beginning, via the one or more processors, particle swarm optimization with a set maximum number of iterations, M;
    i. Resetting, via the one or more processors, the iteration count whenever a new optimum is found;
    ii. Exiting the optimization loop when n % of M iterations has occurred (accounting for the fact that the iteration counter resets on each new optimum found);
  d. Repeating, via the one or more processors, steps 2a-2c, K times;
  e. Selecting, via the one or more processors, a best particle from a set of K particles, where K may be a function of hierarchy depth;
  f. If the best solution particle is different from the seed particle and the solution particle is not too close to one or more of the previously generated best solution particles across all branches, then:
    i. Storing, via the one or more processors, the current best solution particle;
    ii. Storing, via the one or more processors, a current solution score as a best branch solution if the current solution score is better than previous scores at a same branch level;
    iii. Recursing, via the one or more processors, the function with the current best solution particle as a new seed particle, then repeating step 2 recursively;
  g. Determining, via the one or more processors, whether no improvement on the best solution particle was found or whether it duplicates the best solution found;
    i. If no improvement on the best solution particle was determined, or it duplicates the best solution found, repeating, via the one or more processors, step 2e recursively with the initial seed particle provided at the current invocation (repeat K times).

According to a related embodiment, there is provided an apparatus for accelerating a particle swarm optimization comprising: a processor; and a computer readable storage medium having computer usable program code embodied therewith, the computer usable program code executable by the processor to cause the apparatus to accelerate a particle swarm optimization, the program code including:
1. Program code to generate, via the processor, an initial best solution for a parameterized function as a seed particle;
2. Program code to pass, via the processor, said seed particle into said parameterized function;
  a. Program code to embed, via the processor, said seed particle (for the current invocation of the function) as one particle in a collection of randomly initialized particles;
    i. Where randomization of the initial particles is optionally based on proximity to the seed particle;
  b. Program code to set, via the processor, a cost function to be used at a current branch level, where the cost function may remain constant across all branches or be set to weight one or more terms as a function of a branch level;
    i. Program code to store, via the processor, all best particle solutions at each branch level within a set;
    ii. Program code to calculate, via the processor, the similarity of the current solution to all solutions stored within said set;
    iii. Program code to optimize, via the processor, based on minimizing similarity to all solutions within said set;
  c. Program code to begin, via the processor, particle swarm optimization with a set maximum number of iterations, M;
    i. Program code to reset, via the processor, the iteration count whenever a new optimum is found;
    ii. Program code to exit the optimization loop when n % of M iterations has occurred (accounting for the fact that the iteration counter resets on each new optimum found);
  d. Program code to repeat, via the processor, steps 2a-2c, K times;
  e. Program code to select, via the processor, a best particle from a set of K particles, where K may be a function of hierarchy depth;
  f. Program code to determine, via the processor, whether the best solution particle is different from the seed particle and the solution particle is not too close to one or more of the previously generated best solution particles across all branches; then, if the best solution particle is different from the seed particle and the solution particle is not too close to one or more of the previously generated best solution particles across all branches:
    i. Program code to store, via the processor, the current best solution particle;
    ii. Program code to store, via the processor, a current solution score as a best branch solution if the current solution score is better than previous scores at a same branch level;
    iii. Program code to recurse, via the processor, the function with the current best solution particle as a new seed particle, then repeat step 2 recursively;
  g. Program code to determine, via the processor, whether no improvement on the best solution particle was found or whether it duplicates the best solution found;
    i. If no improvement on the best solution particle was determined, or it duplicates the best solution found, program code to repeat, via the processor, step 2e recursively with the initial seed particle provided at the current invocation (repeat K times).

According to a related embodiment, there is provided a computer program product for accelerating a particle swarm optimization, the computer program product comprising: a computer readable storage medium having computer usable program code embodied therewith, the computer usable program code comprising computer usable program code to:
1. Generate, via at least one processor, an initial best solution for a parameterized function as a seed particle;
2. Pass, via said at least one processor, said seed particle into said parameterized function;
  a. Embed, via said at least one processor, said seed particle (for the current invocation of the function) as one particle in a collection of randomly initialized particles;
    i. Where randomization of the initial particles is optionally based on proximity to the seed particle;
  b. Set, via said at least one processor, a cost function to be used at a current branch level, where the cost function may remain constant across all branches or be set to weight one or more terms as a function of a branch level;
    i. Store, via said at least one processor, all best particle solutions at each branch level within a set;
    ii. Calculate, via said at least one processor, the similarity of the current solution to all solutions stored within said set;
    iii. Optimize, via said at least one processor, based on minimizing similarity to all solutions within said set;
  c. Begin, via said at least one processor, particle swarm optimization with a set maximum number of iterations, M;
    i. Reset, via said at least one processor, the iteration count whenever a new optimum is found;
    ii. Exit the optimization loop when n % of M iterations has occurred (accounting for the fact that the iteration counter resets on each new optimum found);
  d. Repeat, via said at least one processor, steps 2a-2c, K times;
  e. Select, via said at least one processor, a best particle from a set of K particles, where K may be a function of hierarchy depth;
  f. If the best solution particle is different from the seed particle and the solution particle is not too close to one or more of the previously generated best solution particles across all branches, then:
    i. Store, via said at least one processor, the current best solution particle;
    ii. Store, via said at least one processor, a current solution score as a best branch solution if the current solution score is better than previous scores at a same branch level;
    iii. Recurse, via said at least one processor, the function with the current best solution particle as a new seed particle, then repeat step 2 recursively;
  g. Determine, via said at least one processor, whether no improvement on the best solution particle was found or whether it duplicates the best solution found;
    i. If no improvement on the best solution particle was determined, or it duplicates the best solution found, repeat, via said at least one processor, step 2e recursively with the initial seed particle provided at the current invocation (repeat K times).

As will be appreciated by one skilled in the art, aspects of the inventive subject matter may be embodied as a system, method or computer program product.
Accordingly, aspects of the inventive subject matter may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the inventive subject matter may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.

Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for aspects of the inventive subject matter may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the inventive subject matter are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the inventive subject matter. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

FIG.4depicts an example computer system with a particle swarm optimization accelerator. The computer system includes a processor unit602(possibly including multiple processors, multiple cores, multiple nodes, and/or implementing multi-threading, etc.). The computer system includes memory606. The memory606may be system memory (e.g., one or more of cache, SRAM, DRAM, zero capacitor RAM, Twin Transistor RAM, eDRAM, EDO RAM, DDR RAM, EEPROM, NRAM, RRAM, SONOS, PRAM, etc.) or any one or more of the above already described possible realizations of machine-readable media. The computer system also includes a bus604(e.g., PCI, ISA, PCI-Express, HyperTransport® bus, InfiniBand® bus, NuBus bus, etc.), a network interface618(e.g., an ATM interface, an Ethernet interface, a Frame Relay interface, SONET interface, wireless interface, etc.), and a storage device(s)620(e.g., optical storage, magnetic storage, etc.).

The computer system also includes a particle swarm optimization accelerator ("accelerator")608. The accelerator608uses the recursive hierarchical approach described above to accelerate the particle swarm optimization. Any one of these functionalities may be partially (or entirely) implemented in hardware and/or on the processor unit602. For example, the functionality may be implemented with an application specific integrated circuit, in logic implemented in the processor unit602, in a co-processor on a peripheral device or card, etc. Further, realizations may include fewer or additional components not illustrated inFIG.4(e.g., video cards, audio cards, additional network interfaces, peripheral devices, etc.). The processor unit602, the storage device(s)620, and the network interface618are coupled to the bus604.
Although illustrated as being coupled to the bus604, the memory606may be coupled to the processor unit602. While the embodiments are described with reference to various implementations and exploitations, it will be understood that these embodiments are illustrative and that the scope of the inventive subject matter is not limited to them. In general, techniques for accelerating a particle swarm optimization as described herein may be implemented with facilities consistent with any hardware system or hardware systems. Many variations, modifications, additions, and improvements are possible. Plural instances may be provided for components, operations or structures described herein as a single instance. Finally, boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the inventive subject matter. In general, structures and functionality presented as separate components in the example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the inventive subject matter. | 35,815 |
11861276 | DETAILED DESCRIPTION

Various embodiments and aspects of the disclosures will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative of the disclosure and are not to be construed as limiting the disclosure. Numerous specific details are described to provide a thorough understanding of various embodiments of the present disclosure. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present disclosure. Although these embodiments are described in sufficient detail to enable one skilled in the art to practice the disclosed embodiments, it is understood that these examples are not limiting, such that other embodiments may be used and changes may be made without departing from their spirit and scope. For example, the operations of methods shown and described herein are not necessarily performed in the order indicated and may be performed in parallel. It should also be understood that the methods may include more or fewer operations than are indicated. In some embodiments, operations described herein as separate operations may be combined. Conversely, what may be described herein as a single operation may be implemented in multiple operations.

Reference in the specification to "one embodiment" or "an embodiment" or "some embodiments" means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment of the disclosure. The appearances of the phrase "embodiment" in various places in the specification do not necessarily all refer to the same embodiment.

In some embodiments, described is a system (and method) for locating a center point of a lug nut for an automated vehicle wheel removal system. The system may be used in conjunction with a robotic apparatus to aid in the precision and efficiency of the automated removal of a lug nut from a vehicle wheel. To improve the accuracy of the center point, the system may leverage multiple types of image data. For example, the system may perform machine learning inferences using two-dimensional (2D) and three-dimensional (3D) image data to determine a center point of a lug nut. In some embodiments, the system may process a 2D image to infer an initial center point. Although the initial center point inference may provide a high degree of accuracy, the accuracy may be improved upon in certain circumstances by leveraging a 3D image and existing knowledge of the general shape of a lug nut. More particularly, the system may also process a 3D image to infer a location of one or more edges (or edge points) around the perimeter of the lug nut. When detecting an edge, a machine learning model may leverage existing knowledge about the general shape and/or topography of the lug nut. For example, the system may know there is typically a distinct change in depth (e.g., discontinuity) between the face portion and the side portion that creates an edge.

Once a set of edges (or edge points) is located, the system may further refine the initial center point. For example, due to the generally circular shape of a lug nut, determining various edge points around the perimeter of the lug nut may correspond to determining points around the circumference of the lug nut. Accordingly, the geometry of the generally circular shape may be leveraged.
For example, once the set of edges is identified, the system may measure a set of distances between the initial center point and the located set of edges. In other words, the system may determine whether the initial center point is the true center point based on a radius measurement of the lug nut at different angles. Put another way, if the initial center point is at true center, then the radius measurements at the different angles would be equal (or substantially equal). Accordingly, if the initial center point is not true, the system may refine (or adjust) the center point based on the radius measurements. For example, the system may determine an average radius measurement, and refine the center point such that a distance from the refined center point to each of the set of edges corresponds to the determined average. As a result, the system may improve lug nut location accuracy. For example, even slight improvements to a determined center point of a lug nut provide significant increases in accuracy of guidance systems used for automated wheel removal.

System Overview

Referring toFIG.1, an exemplary system100for the automated removal and replacement of a wheel and tire is disclosed. The system100can be a system of one or more computers102,104,106,108,110(generally referred to as102) including software executing a method system on one or more computers102, which is in communication with, or maintains, one or more databases112of information. While the database112is depicted as coupled with one computer110, the database may be distributed, replicated in whole or part, and communicatively coupled to other computers102. For example, portions or subsets of data may be distributed to various computers102to allow for local database access of information stored on database112. The information stored by the system100may include, but is not limited to, the following databases:

Customer Database, including fields such as cust_record_id, customer_name, customer_address, customer_phone_number.

Customer Vehicle Database, including fields such as cust_veh_record_id, vehicle_make, vehicle_model, vehicle_identification_number, vehicle_license_plate, vehicle_year, vehicle_color, desired_tire_pressure, desired_gas_type, wheel_locks.

General Vehicle Database, including fields such as gen_veh_record_id, vehicle_make, vehicle_model, vehicle_year, lifting_point_coordinates, lifting_height, axle_distance, tpms_type, lugnut_configuration.

Inventory Database, including fields such as inv_record_id, tire_quantity, tire_size, tire_brand, manufacturer, speed_rating, pressure_setting, location_stored, location_coordinates.

Scheduling Database, including fields such as sched_record_id, cust_record_id, cust_veh_record_id, schedule_appointment_date_and_time, front_tire_SKU_numbers, rear_tire_SKU_numbers.

The system100may include other tables and database files, and may store images and other sensor data obtained by the system100as described herein. The system100generates tire change jobs based on received customer information and customer vehicle information. The system100may use the received information as control parameters to direct the control or operation of a vehicle lifting device for lifting vehicles and a robotic apparatus for lug nut and wheel removal and replacement as disclosed herein. The system100may receive and store images associated with a customer vehicle in a database112. The system100uses image evaluation processes to perform object detection and/or create a 3-dimensional model of a wheel of a vehicle.
The system100interacts and is communicatively coupled with one or more vehicle lifting devices140,142,144(generally referred to as140), with one or more robotic apparatus150,152,154(generally referred to as150), one or more tire removal/replacement machines160, and one or more tire balancing machines170. The system100may include multiple interfaces122,124,126,128(generally referred to as122) based on the particular functionality to be performed by the system100. For example, the system100may include a customer interface for receiving customer and vehicle information, and an operator interface for control and operation of the vehicle lifting device140, the robotic apparatus150, the tire removal/replacement machines160, and/or the tire balancing machines170. Additionally, other interfaces may be utilized.

In one example, a user interface receives information to schedule the replacement of tires for a vehicle. This information is stored in a data repository of the system. The architecture of the system allows for interaction with multiple remote devices, such as a tablet, cellular phone, laptop, other mobile internet-connected devices, and the like. A software application, program, web page or other processes may be executed on the remote devices. The system100retrieves and stores the information which is obtained from multiple users with respect to scheduling a tire change job. Each of the users provides control parameters for the operation of the robotic apparatus150that will later be used to perform the automated lifting, and wheel removal and replacement, on their vehicle.

The system100may use a computer network120for communication with one or more computers102of the system100. As described herein, the computer network120may include, for example, a local area network (LAN), a virtual LAN (VLAN), a wireless local area network (WLAN), a virtual private network (VPN), cellular network, wireless network, the Internet, or the like, or a combination thereof. Communication among devices may be performed using any suitable communications protocol such as TCP/IP or EtherNet/IP.

Vehicle lifting devices140may be communicatively coupled to the system100via the computer network120. The vehicle lifting devices140may receive instructions, commands and other data from the system100. The vehicle lifting device140may include different types of sensors to obtain sensor data describing a vehicle. The sensor data obtained by the vehicle lifting device140may be transmitted to the system100for analysis and/or storage into a database112. The vehicle lifting device140provides a mechanism to physically lift a vehicle in a vertical manner according to a predetermined height value.

Robotic apparatus150may be communicatively coupled to the system100via the computer network120. The robotic apparatus150may receive instructions, commands and other data from the system100. The robotic apparatus150is further described herein. The robotic apparatus150may include different types of sensors integrated into the robotic apparatus150to obtain sensor data describing the vehicle. The sensor data obtained by the robotic apparatus150may be transmitted to the system100for analysis and/or storage into a database112. The robotic apparatus150provides a mechanism to physically remove a wheel from a vehicle and physically replace the wheel back onto the vehicle.
As further described, the robotic apparatus150may have different configurations of tooling ends that allow for the removal and replacement of wheel fasteners and the removal and replacement of the wheel from a vehicle wheel hub.

One or more tire removal machines160may be communicatively coupled to the system100via the computer network120. The tire removal machine160may receive instructions, commands and other data from the system100. The tire removal machine160may include different types of sensors integrated into the tire removal machine160to obtain sensor data describing a wheel and/or tire. The sensor data obtained by the tire removal machine160may be transmitted to the system100for analysis and/or storage into a database112. The tire removal machine160may receive one or more parameters, such as wheel size, tire size, tire pressure monitoring system (TPMS) location, desired tire inflation PSI value, and/or a value for a type of gas, such as air or nitrogen, to be used for tire inflation.

One or more tire balancing machines170may be communicatively coupled to the system100via the computer network120. The tire balancing machine170may receive instructions, commands and other data from the system100. The tire balancing machine170may include different types of sensors integrated into the tire balancing machine170to obtain sensor data describing a wheel and/or tire. The sensor data obtained by the tire balancing machine170may be transmitted to the system100for analysis and/or storage into a database112.

FIG.2Aillustrates an example of an automated wheel removal and wheel replacement station200. The example illustrates a vehicle210positioned over a vehicle lifting device140(not shown). In one embodiment of the station200, two robotic apparatus250(also referred to as robotic apparatus150) are positioned in a proximate location where the robotic apparatus250can interact with a vehicle210and manipulate the wheel fasteners, remove the wheels, and replace the wheels. Additionally depicted are wheel holding stations256, where the robotic apparatus250may place a removed wheel onto the wheel holding station256, and/or where a wheel may be positioned in advance of the wheel being placed back onto the vehicle210. The wheel holding station256may be positioned in any location convenient for operation of the robotic apparatus250. Additionally, a control station258may be used for control and operation of the robotic apparatus250. The control station may be used for manual and/or automated control of the robotic apparatus250. The control station258may receive instructions, commands and other data from the system (e.g., system100). For example, a user interface of the system may provide for instructions to directly control the robotic apparatus250. The control station258may be communicatively coupled to control the robotic apparatus250. Also depicted are tire balancing machines270that are communicatively coupled to the system (e.g., system100).

FIG.2Billustrates an example of automated wheel removal and wheel replacement stations. This example illustrates a physical structure220with three bays222,224,226. The physical structure220includes multiple robotic apparatus250(also referred to as robotic apparatus150), multiple control stations258, multiple wheel holding stations256, and multiple tire balancing machines270. This example illustrates a configuration where multiple vehicles210may be serviced by the robotic apparatus250for automated wheel removal, tire change and wheel replacement.
Robotic Apparatus

FIG.3shows an example robotic apparatus150for wheel removal (and replacement). The robotic apparatus150is in electronic communication with the system (e.g., one or more components of system100). The robotic apparatus150may receive instructions, commands and data from the system. Likewise, the robotic apparatus may send data and other information to the system. In some embodiments, the robotic apparatus150has control circuitry, processors, and data storage. While the disclosure discusses operable communication with the system, the robotic apparatus150may perform the methods described herein without interaction with the system. For example, the robotic apparatus150may include a computing system having one or more processing units that may perform wheel removal and replacement without interaction with the system. The robotic apparatus150may be programmed and configured to perform operations in a stand-alone manner. A complete or partial copy of data from the database (e.g., database112) may be locally stored in the data storage of the robotic apparatus150.

The robotic apparatus150may include different types of sensors for the inspection of a vehicle's wheel; these may include proximity sensors, video or still image cameras, LIDAR, thermal sensors, lighting, pressure sensors, or any combination thereof. These sensors may be arranged in various configurations. The robotic apparatus150may obtain sensor data describing the wheel of a vehicle. For example, the sensors may obtain image information for a wheel, and the system may analyze the image to determine the orientation of the lug nuts, to determine the physical geometry of the wheel, and to determine other aspects of the wheel. The sensor information obtained by the robotic apparatus150may be stored by the system and may be associated with the particular vehicle and/or tire change job.

In some embodiments, the robotic apparatus150is a 6-axis robot, or articulated robot, that allows articulated and interpolated movement to any point within a working envelope. At axis 1, the robot rotates the base310of the robot. At axis 2, the robot extends the robot's lower arm forward and backward. At axis 3, the robot raises and lowers the robot's upper arm. At axis 4, the robot rolls the wrist of the upper arm. At axis 5, the robot raises and lowers the wrist of the robot's arm. At axis 6, the robot rotates the wrist of the arm. The arm may have a tooling end340with sensors, a torque wrench, and/or other devices attached.

The robotic apparatus150may include proximity sensors to detect objects within a working envelope, or within a threshold distance, of the robotic apparatus150. The working envelope is a physical volume of space of movement and/or operation of the robotic apparatus150. For example, a sensor may detect movement of a person that walks near or into the working envelope of the robotic apparatus150. The system may determine that the detected object is within a certain distance of the robotic apparatus150. If the detected object is determined to be within a threshold distance of the robotic apparatus or the working envelope, then the system may direct the robotic apparatus150to cease movement and/or other operations. The system may generate an error condition and display the error condition on a user interface of the system. In one example, the robotic apparatus150may automatically resume operation once the system determines that the detected object is no longer within the working envelope, or within the threshold distance, of the robotic apparatus150.
In another example, the user interface receives an input to resume operations; in response to the received input, the robotic apparatus150resumes operation. Additionally, proximity sensors may be placed in a working environment, such as a vehicle bay, with the proximity sensors communicatively coupled to the system. Similar to the discussion above, the system may receive sensor data from the proximity sensors and detect an object within a working space; in response to detecting the object, the system may cause one or more robotic apparatus150to cease operations when the object moves into the working environment.

Robotic Apparatus Placement

Any number of robotic apparatus150may be positioned in different locations for operation and access to vehicle wheels. The following illustrates exemplary placement of a robotic apparatus and is not meant to be limiting. For example, one robotic apparatus may be positioned at two locations for access to a left and right side of a vehicle. The robotic apparatus may include a multipurpose tool for tire and lug nut removal. The robotic apparatus150may be affixed to a rail360, thereby allowing linear movement of the robotic apparatus along the rail. In another example, two robotic apparatus150may be attached to a guide of rail360. In this configuration, one of the robotic apparatus is tooled for lug nut removal, and the other for wheel removal. The robotic apparatus may move in a linear fashion to access the front and rear wheel on a particular side of the vehicle. In another example, four robotic apparatus150may be positioned with two robotic apparatus on each side of a vehicle. One robotic apparatus may be configured for lug nut removal and another for wheel removal.

A robotic apparatus150may be located in a position where the robotic apparatus150may be able to perform operations on two vehicles. The ability of a robotic apparatus150to interleave work between two vehicles is discussed further below in the section on tire change job coordination. The system may execute tire change job operations on two vehicles. For example, the system may instruct a particular robotic apparatus150to perform a wheel removal operation for a first vehicle. The robotic apparatus, after taking off the wheel, may hand the wheel off for further processing. After handing off the wheel, the system may direct the robotic apparatus to rotate toward a second vehicle. The system may instruct the robotic apparatus150to perform a wheel replacement operation for the second vehicle. The robotic apparatus150may pick up a wheel that was previously taken off of the second vehicle. The robotic apparatus may then replace the wheel onto the second vehicle. In other words, a particular robotic apparatus may perform operations on one vehicle in one bay, and then turn or rotate to perform operations on a second vehicle in a second bay.

Robotic Tooling Head

The robotic apparatus150may include a multipurpose tool head340that is equipped with a gripping mechanism, torque wrench and/or sensing system to detect or confirm lug nut position and lug nut type. The tool head340is configured to remove the lug nuts, thereby allowing removal of a wheel. The tool head340may also replace lug nuts after the wheel is replaced onto a wheel hub. The tooling end of the robotic apparatus150may be configured to remove lug nuts for a 4-lug nut, 5-lug nut, 6-lug nut or 8-lug nut configuration.
The tooling end may include multiple attachment ends for different socket configurations. In one embodiment, the tooling end includes a single socket that is moved to each determined lug nut position. In another embodiment, the tooling end uses multiple sockets to remove or replace two or more lug nuts simultaneously. In one example, the robotic apparatus150may include two independent tool heads with a sensing system that will either grip the wheel for removal and installation, or remove and install lug nuts. A cleaning system may be added to the robotic apparatus150, or provided as a stand-alone system, to clean the wheel, thereby providing a surface of the wheel and lug nuts with better visibility to a sensor of the robotic apparatus, such as a digital camera. The cleaning system may be controlled via the robotic apparatus or via the system100.

It should be noted that the robotic apparatus150and various other components of the system (e.g., system100) may be used to perform various automated functions such as vehicle wheel and lug nut removal and replacement, socket selection, tire removal and mounting, etc. Further details of some of these functions are described in commonly owned U.S. Pat. No. 10,773,550, filed on Apr. 17, 2019, and titled, "AUTOMATED REMOVAL AND REPLACEMENT OF VEHICLE WHEELS AND TIRES," which is incorporated in its entirety herein for all purposes. Accordingly, the center point detection of a lug nut as further described herein may aid in accurately locating a lug nut to perform such automated functions.

Computer Vision

The system (e.g., system100) may include a computer vision system (or module) that processes obtained images (e.g., 2D or 3D images). As described herein, various components may use computer vision cameras or other sensors to assist in the location determination of physical aspects of the vehicle, such as the physical geometry of physical aspects of the wheels.

FIG.4illustrates a schematic illustration of an image capture system400obtaining one or more images410(e.g., images505and605as referred to herein) via a computer vision camera420and performing one or more inferences using one or more machine learning models (or algorithms). It should be noted that the image410may be obtained in the form of a point cloud. As shown, the system may process the image410using machine learning. For example, the system may use a trained neural network to identify features of a lug nut (e.g., center point, edges, etc.). For example, using machine learning training techniques, the system may be trained with multiple images of a lug nut and corresponding center points and/or edges (or perimeter edges). Using the trained model in a production mode, the system may identify a lug nut center point, edges, or other features from a received image410provided as an input to the trained neural network. Accordingly, machine learning inferences may be performed to identify various features of a lug nut.

As shown, the system may obtain an image410of a vehicle wheel. As described, the image410may be obtained from different devices or computers of the system, for example, one or more digital cameras coupled to the robotic apparatus150, or via a mobile device communicatively coupled to the system. The system may process the obtained image410via the trained neural network as a data input, and an image classifier may then determine lug nut features such as center points, edges, lug nut pattern (e.g., number of bolts), nut type, etc.
Additionally, fiducial markers may be placed on a wheel fastener to indicate a location of a lug nut. As an example, stickers with certain patterns, colors, shapes, or a combination thereof, may be placed on the wheel. This may help a robotic apparatus in determining one or more positions of lug nuts of the vehicle. Fiducial markers may also be wireless devices that may be affixed to the vehicle. The wireless device may be, for example, a Bluetooth-enabled socket that is placed onto the lug nut. The socket size of the Bluetooth-enabled socket may be, for example, one of SAE ¾ inch, ⅞ inch, 13/16 inch, or metric 17 mm, 19 mm, 21 mm. Each of the wireless devices may emit a unique signal or signature that may be recognized by the system. Using multiple fiducial markers on the lug nuts, the system may determine the lug nut configuration of the wheel. The system may detect the position of fiducial markers placed adjacent to one another, placed across from one another, or placed on the second or third lug nut. The system may then determine the center or centroid of two markers (as further described herein) and calculate the distance between the markers. Additionally, the system may determine the angle of two lines, a first line from a first fiducial marker to a second fiducial marker, and a second line from the second fiducial marker to a third fiducial marker, where the markers have been placed on the lug nuts; a minimal sketch of this marker geometry follows below. Moreover, the system may identify the fiducial markers in an image taken by a camera (e.g., camera420), for example, an image taken by a camera of the vehicle lifting device or a camera of the robotic apparatus. The system processes the image to detect objects in the image. Based on the detected object, the system may identify the position of the marker. A fiducial marker may be associated with a particular meaning or action by the system. For example, based on a pattern or color of the marker, the system may identify the marker as a lug nut location, a lifting point location, etc.

Lug Nut Center Point Detection and Refinement

As described, the system may be used in conjunction with a robotic apparatus (e.g., robotic apparatus150) to more accurately identify a center point of a lug nut to aid in the precision and efficiency of the automated removal of a lug nut from a vehicle wheel.

FIG.5is an example of an image from which an initial center point of a lug nut is inferred from two-dimensional information according to one or more embodiments of the disclosure. As shown, the system may obtain a two-dimensional (2D) image505depicting a side elevation view of at least a portion of a vehicle wheel including one or more lug nuts. As shown, the face portion of a first lug nut508may be captured by the image505. As referred to herein, a "lug nut" may refer to various types of lug nuts (e.g., cone seat, bulge cone seat, under hub cap, spline drive, etc.), as well as various types of lug bolts. It should be noted that, for simplicity, this example illustrates the system determining an initial center point510for a particular lug nut, but the system may perform such a determination for multiple or all of the lug nuts on the wheel, sequentially or simultaneously (or substantially simultaneously). The system may determine (e.g., infer) an initial center point (or centroid)510of the first lug nut508from the image505. The system may determine the initial center point510by processing the image505using one or more machine learning models (or algorithms).
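As referenced in the fiducial-marker discussion above, the midpoint, spacing, and angle computations might be sketched in Python as follows. The marker coordinates are hypothetical detection outputs, and the function name is an assumption made for illustration.

    import numpy as np

    def marker_geometry(m1, m2, m3):
        # Midpoint and spacing of two markers, plus the angle formed at the
        # second marker by the lines to the first and third markers.
        m1, m2, m3 = (np.asarray(m, dtype=float) for m in (m1, m2, m3))
        midpoint = (m1 + m2) / 2.0
        spacing = np.linalg.norm(m2 - m1)
        v1, v2 = m1 - m2, m3 - m2
        cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        return midpoint, spacing, angle

For three adjacent markers on an evenly spaced 5-lug wheel, for example, the returned angle would be approximately 108 degrees, which is one way such measurements could indicate the lug nut configuration of the wheel.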
For example, the system may input image505into a machine learning model, which then outputs the inferred initial center point510. In some embodiments, the machine learning model may perform object detection to identify the first lug nut508, and as shown in this example, create a bounding box515for the identified lug nut. Although various methods may be used to determine the initial center point510, in some embodiments, the initial center point510may be determined based on the created bounding box515. For example, the center point510may correspond to the center point (or centroid) of the bounding box515. In some embodiments, the initial center point510may be expressed using 2D coordinates (e.g., Cartesian coordinates) such as (x,y) coordinates. These coordinates may be mapped onto a three-dimensional (3D) image (e.g., point cloud) as further described herein. In some embodiments, the image505may be obtained from a computer vision system that includes one or more cameras and various other sensors or devices (e.g., LIDAR, structured light 3D imaging, or other 3D image capture device) to capture various types of information including 2D and 3D images/information. For example, although 3D information may be captured by the computer vision system, only the 2D information in the form of 2D image505may be provided to a machine learning model to infer the initial center point510. In some embodiments, although the center point510may be inferred with a certain degree of accuracy, the system may refine (or adjust) the location of the initial center point510by processing a corresponding 3D image as further described with reference toFIG.6. FIG.6is an example of an image from which locations of one or more lug nut edges are inferred from three-dimensional information according to one or more embodiments of the disclosure. It should be noted that although a generally 2D image is shown in the example ofFIG.6, the points discussed below may represent points within a 3D space (e.g., point cloud). Image605represents a 3D image depicting a side elevation view of the portion of the vehicle wheel that includes the first lug nut508. As described, an initial center point (e.g., center point510) for the first lug nut508was inferred from a 2D image (e.g., image505). Accordingly, the initial center point may be mapped to the 3D image605. More particularly, as shown, mapped center point610is the point within the 3D image605that corresponds to the initially inferred center point from the 2D image. In some embodiments, the 2D coordinates (e.g., (x,y) coordinates) representing the inferred center point may be mapped to 3D coordinates (e.g., (x, y, z) coordinates) representing mapped center point610. For example, the 2D coordinates of the initial center point may be mapped to a point cloud representing the 3D image. The system may then refine center point610using the 3D image information. More particularly, the system may use computer vision to determine various features of the lug nut from the 3D information. The system may locate one or more edges of a lug nut to generally determine various points around the perimeter of the lug nut viewed from a side elevation of a wheel as shown in this example. As shown, the system may locate a set of edges (or edge points, or points of an edge)620(e.g., edges620A-620D) at various points around a perimeter of the first lug nut508by processing the 3D image.
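A minimal sketch of the 2D-to-3D mapping step described above follows (illustrative only; the depth lookup is an assumed stand-in for however the point cloud associates a depth value with a pixel):

def map_center_to_3d(center_2d, depth_lookup):
    """Map 2D coordinates (x, y) of an inferred center point (e.g.,
    center point 510) to 3D coordinates (x, y, z) of the corresponding
    mapped center point (e.g., mapped center point 610)."""
    x, y = center_2d
    return (x, y, depth_lookup(x, y))

# Example: with a depth of 0 at the center pixel, the 2D point
# (435, 565) maps to the 3D point (435, 565, 0), as in the example
# described herein.
print(map_center_to_3d((435, 565), lambda x, y: 0))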
In some embodiments, the processing may include performing object detection to identify one or more lug nuts including the first lug nut508. In addition, as part of the object detection, or as separate processing, the system may locate the set of edges620generally around (or forming) the perimeter of the first lug nut508. In some embodiments, an edge may correspond to an edge of the nut portion of a lug nut, an edge of the seat portion of the lug nut, or a combination thereof. In some embodiments, an edge of the lug nut may correspond to (or be defined by) the portion of the lug nut in which the face of the lug nut transitions to (or connects with) a side of the nut portion. When detecting an edge, the system may leverage existing knowledge about the general shape and/or topography of the lug nut. For example, the system may know there is typically a distinct change in depth (e.g., discontinuity) between the face portion and the side portion that creates an edge. Accordingly, the system may measure the depth at various points of the lug nut to identify an edge. Due to the generally circular shape of the face of a lug nut (or cylindrical shape of the entire lug nut), determining various edge points around the perimeter of the lug nut may correspond to determining points around the circumference of the lug nut. As further described herein, the geometry of the generally circular shape may be leveraged. For example, once the set of edges620are identified, the system may refine the location of mapped center point610as further described with reference toFIG.7. FIG.7is an example of an image in which an inferred center point of a lug nut may be refined based on a set of located lug nut edges. As shown, after determining a set of edges620(e.g., edges620A-D) of the first lug nut508, the system may determine a set of distances750(e.g.,750A-750D) by determining (or measuring, calculating, etc.) distances between center point610and one or more of the set of edges620(e.g.,620A-620D). More particularly, as shown, the set of distances750may include distances750A,750B,750C, and750D, which are the distances between mapped center point610and points on edges620A,620B,620C, and620D, respectively. Based on the determined set of distances750, the system may determine whether the accuracy of center point610may be improved. More particularly, the system may leverage the generally circular shape of a lug nut to determine whether the distances between center point610and various points around the circumference (or perimeter) of the lug nut are equal (or substantially equal). In other words, the system may determine whether center point610is the true (or accurate) center point based on measuring the radius of the lug nut from center point610at different angles. Put another way, if center point610is at the true center, then the radius measurements at the different angles would be equal (or substantially equal). The system may refine (adjust, update, improve, etc.) center point610in various ways based on the determined distances750. For example, the system may perform a refinement based on one or more of the distances750. In some embodiments, the system may refine center point610based on an average of the distances750. For example, if distances750A,750B,750C, and750D are 0.30, 0.40, 0.40, and 0.30 inches, respectively, the center point610may be positioned (or located) such that the distance from center point610to each of the edges620A-D is 0.35 inches.
In other words, the center point610may be refined such that a radius measurement from the updated position to each of the determined edges (e.g., edges620A-D) is the same. When determining the average distance, the system may use all of the determined distances (e.g.,750A-D as used in the above example), or less than all (e.g.,750A-C). For instance, the system may discard certain distances that appear to be outliers (e.g., based on a threshold variance). As another example, the system may discard the shortest and longest distances and base the average on the remaining distances. As another example, the system may use distances determined to the locations of edges deemed most accurate. For instance, the system may take into account confidence scores or intervals for a distance or associated edge when refining the center point610. FIG.8is an example of an image showing an updated center point according to one or more embodiments of the disclosure. As shown, the system may determine an updated (or refined, true, improved, more accurate, etc.) center point855for the first lug nut508within image605. As described, the distances from center point855to one or more of the previously identified points on edges620A-D would be substantially the same. FIG.9is a process flow diagram illustrating an example method of updating an initial center point of a lug nut according to one or more embodiments of the disclosure. Process900may use processing logic, which may include software, hardware, or a combination thereof. For example, process900may be performed by a system including one or more components described in system100. In901, the system (e.g., computer102) may obtain a two-dimensional (2D) image (e.g., image505) of a vehicle wheel including a set of lug nuts. The system may obtain the 2D image (e.g., still or video/live image) from a digital camera, for example, operatively connected to, or part of, the system. For example, the digital camera may be part of the robotic apparatus (e.g., robotic apparatus150). In902, the system may determine an initial center point (e.g., center point510) of at least a first lug nut (e.g., lug nut508) by processing the two-dimensional (2D) image using a first machine learning (ML) model. For example, the system may determine an initial center point of at least a first lug nut using the 2D image to perform (or initiate) a first machine learning inference. In some embodiments, the first machine learning model (or first set of ML models) may be trained to infer features of a vehicle wheel from 2D information. For example, the first machine learning model may be trained to perform lug nut detection and determine an initial center point (e.g., center point510) for the detected lug nut. In other words, the system may provide (e.g., input) the 2D image to the machine learning model to perform an inference. The machine learning model may then process the 2D image and output (e.g., provide) the initial (e.g., predicted) center point. In some embodiments, the processing may be performed remotely. For example, the system may provide the obtained 2D image to a remote computing device to perform machine learning inferences and the remote computing device may return the inferred data. In903, the system may obtain a three-dimensional (3D) image (e.g., image605) of at least a portion of the vehicle wheel including the first lug nut. The system may obtain the 3D image (e.g., still or video/live image) from a digital camera.
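Returning briefly to the refinement illustrated inFIG.8, a minimal sketch of the averaging-based update (with optional discarding of the shortest and longest radius measurements) might look as follows. Snapping the refined center to the centroid of the edge points is an illustrative simplification that is exact when the edge points are evenly spaced on a circle; it is not the patent's prescribed method, and other fits (e.g., least squares) could be used.

import math

def refine_center(mapped_center, edge_points, drop_extremes=False):
    """Refine a mapped center point (e.g., point 610) using edge points
    located around the lug nut perimeter (e.g., edges 620A-620D)."""
    dists = [math.dist(mapped_center, p) for p in edge_points]
    if drop_extremes and len(dists) > 2:
        # Optionally discard the shortest and longest radius measurements.
        keep = sorted(range(len(dists)), key=lambda i: dists[i])[1:-1]
        edge_points = [edge_points[i] for i in keep]
    n = len(edge_points)
    refined = tuple(sum(p[i] for p in edge_points) / n
                    for i in range(len(mapped_center)))
    radius = sum(math.dist(refined, p) for p in edge_points) / n
    return refined, radius

# Example: four edge points at radius 0.35 around (0.05, 0, 0), measured
# from an initial center at the origin, yield the corrected center.
edges = [(0.40, 0, 0), (-0.30, 0, 0), (0.05, 0.35, 0), (0.05, -0.35, 0)]
print(refine_center((0.0, 0.0, 0.0), edges))  # approx ((0.05, 0.0, 0.0), 0.35)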
In some embodiments, the same digital camera may capture the 2D image and 3D image. In some embodiments, the 3D image may be provided as a point cloud. Accordingly, the topography of the first lug nut may be analyzed with one or more machine learning models to identify one or more edges as further described. In some embodiments, the system may map a set of two-dimensional coordinates representing the initial center point to a set of three-dimensional coordinates representing the point (e.g., mapped center point610) within the three-dimensional image that corresponds to the initial center point. For example, if the inferred initial center point from the 2D image has coordinates (435, 565), the corresponding point within the 3D image may be mapped to coordinates (435, 565, 0). In904, the system may locate (or identify, determine, etc.) a set of edges (e.g., edges620) at various points around a perimeter of the first lug nut by processing the three-dimensional (3D) image using a second machine learning model. For example, the system may locate the set of edges using the 3D image to perform (or initiate) a second machine learning inference. In other embodiments, the system uses 3D point cloud data and analyzes the point cloud data to determine various edge points in order to determine the ‘true’ center of the first lug nut. In some embodiments, the system may identify the first lug nut and the set of edges using a computer vision system. In some embodiments, the computer vision system may use a second machine learning model (or second set of ML models) that is/are trained to infer features of a vehicle wheel from 3D information. For example, the computer vision system may use various machine learning algorithms to perform object detection when identifying one or more lug nuts. In addition, the computer vision system may use one or more machine learning algorithms to perform edge detection to locate one or more edges of the identified lug nut. In some embodiments, the processing may be performed remotely. For example, the system may provide the obtained 3D image to a remote computing device to perform machine learning inferences and the remote computing device may return the inferred data. As described, the system (e.g., computer vision system) may locate the set of edges by performing edge detection. For example, the system may use any suitable edge detection algorithm (or model). For instance, the edge detection algorithm may identify discontinuities in depth when locating the edges of a lug nut. Accordingly, in some embodiments, locating the set of edges may include determining a change of depth between a plurality of points proximate to the various points around the perimeter of the first lug nut, and locating the set of edges at the various points around the perimeter of the first lug nut based on the determined change of depth. In some embodiments, locating the set of edges may include determining a confidence score for each edge of the set of edges, and retaining only those edges satisfying a predetermined confidence score as part of the located set of edges. In some embodiments, the system may leverage existing knowledge of the general shape of the nut portion of a lug nut. For example, the shape of the nut portion may be a hexagon. Accordingly, the system may rely on the nut portion having six relatively flat sides when locating one or more edges. For instance, the system may locate a point on an edge of each of the six sides when locating the set of edges.
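A minimal sketch of the depth-discontinuity edge test described above might look like the following; the depth profile along a ray from the center and the threshold value are illustrative assumptions, not values from the disclosure.

def find_edge_along_ray(depths, depth_jump_threshold=0.1):
    """Scan depth samples taken at increasing distance from the center
    along one ray, and return the index of the first sample where the
    change in depth exceeds the threshold (the face-to-side transition
    that defines an edge), or None if no discontinuity is found."""
    for i in range(1, len(depths)):
        if abs(depths[i] - depths[i - 1]) > depth_jump_threshold:
            return i
    return None

# Example: a flat lug nut face followed by a sharp drop at sample 4.
profile = [0.00, 0.01, 0.01, 0.02, 0.45, 0.46]
print(find_edge_along_ray(profile))  # 4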
In905, the system may determine (or calculate, measure, etc.) a set of distances (e.g., distances750) between a point (e.g., mapped center point610) within the three-dimensional (3D) image that corresponds to the initial center point and the located set of edges. As described, the initial center point inferred from the 2D image (e.g., center point510) may be mapped into the 3D image (e.g., point cloud) to obtain a mapped point (e.g., mapped center point610). The obtained 3D image may be a still image or a set of images (e.g., live or video image). In906, the system may update the initial center point of the first lug nut based on the determined set of distances (e.g., update the initial center point510/610to center point855). In some embodiments, updating the initial center point may include determining an average distance of the determined set of distances (e.g., average of distances750A-D), and updating the initial center point such that a distance from the updated center point (e.g., center point855) to each of the set of edges (e.g., edges620A-D) corresponds to the determined average distance. In some embodiments, determining the average distance may be based on two or more of the determined distances. For example, in some embodiments, the system may discard certain distances, or distances from edges, that have a confidence score that is below a predefined threshold. Conversely, the system may include only those distances, or distances from edges, that have a confidence score above a predefined threshold. For example, if edges620A,620B, and620C have a confidence score above a predefined threshold, the system may only include distances750A,750B, and750C when determining the average distance. In some embodiments, the system may position a robotic apparatus (e.g., robotic apparatus150) based on the updated center point. More particularly, in907, the system may direct a robotic apparatus operatively connected to the system to maneuver to a position based on the updated center point of the first lug nut. In some embodiments, the robotic apparatus may be directed to maneuver to the position as part of a sequence of operations to remove the first lug nut from the vehicle wheel. In some embodiments, the system may determine one or more dimensions of the first lug nut based on the updated center point and the determined set of distances. For example, the system may use a measurement from the updated center point to one or more of the located edges to determine a size of the nut portion of the lug nut. As another example, the system may determine a dimension of the nut portion based on the determined average distance of the set of distances (e.g., average of distances750A-D). Accordingly, the system may determine a size of the lug nut (e.g., hex size) based on the determined one or more dimensions of the lug nut. Accordingly, the system may also direct a robotic apparatus to select a socket corresponding to the determined lug nut size. In some embodiments, the robotic apparatus may select the socket/socket size (e.g., SAE ¾ inch, ⅞ inch, 13/16 inch; Metric 17 mm, 19 mm, 21 mm) as part of a sequence of operations to remove the first lug nut from the vehicle wheel. As noted, operations described above may be performed in parallel. For example, the system may perform inferences on the 2D image and the 3D image at the same time. Example Computing System FIG.10shows a block diagram of an example of a computing system that may be used in conjunction with one or more embodiments of the disclosure.
For example, computing system1100(or system, or server, or computing device, or device) may represent any of the devices or systems (e.g., system100, computer102, robotic apparatus150, etc.) described herein that perform any of the processes, operations, or methods of the disclosure. Note that while the computing system1100illustrates various components, it is not intended to represent any particular architecture or manner of interconnecting the components as such details are not germane to the present disclosure. It will also be appreciated that other types of systems that have fewer or more components than shown may also be used with the present disclosure. As shown, the computing system1100may include a bus1105which may be coupled to a processor1110, ROM (Read Only Memory)1120, RAM (or volatile memory)1125, and storage (or non-volatile memory)1130. The processor(s)1110may retrieve stored instructions from one or more of the memories1120,1125, and1130and execute the instructions to perform processes, operations, or methods described herein. These memories represent examples of a non-transitory computer-readable medium (or machine-readable medium, a computer program product, etc.) containing instructions (or program code) which when executed by a processor (or system, device, etc.), cause the processor to perform operations, processes, or methods described herein. As referred to herein, for example, with reference to the claims, a processor may include one or more processors. Moreover, the one or more processors1110may perform operations in an on-demand or “cloud computing” environment or as a service (e.g., within a “software as a service” (SaaS) implementation). Accordingly, the performance of operations may be distributed among the one or more processors1110, whether residing only within a single machine or deployed across a number of machines. For example, the one or more processors1110may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm), or may be distributed across a number of geographic locations. The RAM1125may be implemented as, for example, dynamic RAM (DRAM), or other types of memory that require power continually in order to refresh or maintain the data in the memory. Storage1130may include, for example, magnetic, semiconductor, tape, optical, removable, non-removable, and other types of storage that maintain data even after power is removed from the system. It should be appreciated that storage1130may be remote from the system (e.g., accessible via a network). A display controller1150may be coupled to the bus1105in order to receive display data to be displayed on a display device1155, which can display any one of the user interface features or embodiments described herein and may be a local or a remote display device. The computing system1100may also include one or more input/output (I/O) components1165including cameras (e.g., camera420), mice, keyboards, touch screen, network interfaces, printers, speakers, and other devices. Typically, the input/output components1165are coupled to the system through an input/output controller1160. In addition, a robotic apparatus (e.g., robotic apparatus150) may be coupled to the system via controller1160. Program code1170may represent any of the instructions, applications, software, libraries, toolkits, modules, components, engines, units, functions, logic, etc. as described herein (e.g., computer102, machine learning models, computer vision models, etc.). 
Program code1170may reside, completely or at least partially, within the memories described herein (e.g., non-transitory computer-readable media), or within a processor during execution thereof by the computing system. Program code1170may include both machine code, such as produced by a compiler, and files containing higher-level or intermediate code that may be executed by a computing system or other data processing apparatus (or machine) using an interpreter. In addition, program code1170can be implemented as software, firmware, or functional circuitry within the computing system, or as combinations thereof. Program code1170may also be downloaded, in whole or in part, through the use of a software development kit or toolkit that enables the creation and implementation of the described embodiments. Moreover, any of the disclosed embodiments may be embodied in various types of hardware, software, firmware, and combinations thereof. For example, some techniques disclosed herein may be implemented, at least in part, by non-transitory computer-readable media that include program instructions, state information, etc., for performing various methods and operations described herein. It should be noted that references to ordinal numbers such as “first,” “second,” “third,” etc., may indicate an adjective for an element (e.g., any noun in the application). The use of ordinal numbers does not necessarily imply or create any particular ordering of the elements nor limit any element to being only a single element unless expressly disclosed, such as by the use of the terms “before,” “after,” “single,” and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements. In addition, the use of the term “or” indicates an inclusive or (e.g., and/or) unless otherwise specified. For example, the phrase “at least one of x, y, or z” means any one of x, y, and z, as well as any combination thereof. In addition, the term “based on” is used to describe one or more factors that affect a determination. These terms do not foreclose additional factors that may affect a determination. For example, the phrase “determining A based on B” includes B being a factor that affects the determination of A, and does not foreclose the determination of A from also being based on C. However, in other instances, A may be determined based solely on B, such as by the use of the terms “only,” “solely,” and other such terminology. In addition, the term “approximately” or “substantially” may be used herein and may be interpreted as “as nearly as practicable,” “within technical limitations,” and the like. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as examples only, with a true scope and spirit of the embodiments being indicated by the claims. | 52,208 |
11861277 | DETAILED DESCRIPTION As discussed above, hardware and software verification tools may need to connect multiple design entities talking simultaneously using a common exchange unit (transaction). Accordingly, embodiments of the present disclosure are directed towards a routing process that defines the common architecture for the routing logic which transfers the transactions between a sender (initiator/ingress) and a receiver (target/egress). Embodiments included herein describe the protocol of transaction exchange and its improvement upon existing technologies, and ensure that both the initiator and target are free to perform other tasks while they are waiting for the other end. In this way, multiple entities may communicate with each other simultaneously and with minimal latency between transfers. As Designs Under Test (“DUTs”) become more and more complicated, the verification scenarios warrant more complex verification components. A verification component may include a hardware and/or software entity which is not part of the design, but is a component used to test and verify the design's features and functionality. Occasionally, a single verification component is not enough and the design requires many of these verification components to communicate with each other, which requires internal routing between multiple instances. Accordingly, embodiments of the routing process included herein define an architecture and a protocol to implement a routing matrix to allow multiple verification components to communicate with each other with very high throughput. The proposed routing matrix utilizes multiple de-centralized memories to make the design easier to partition and has very high availability. For example, if there exists a path from entity A to B, and there is another from entity C to D, then they operate completely independently and unhindered, and thus no arbitration may be required. If there are multiple paths amongst entities P to Q and P to R, then the bandwidth is shared, but the connection is never blocked and the paths may still operate independently, activating for transfer only when there is an available packet to be sent over. The priority and arbitration logic are completely configurable, and the proposed protocol allows for architectures in which both the sending and receiving entities can prioritize, as is discussed in further detail hereinbelow. Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. The present disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the present disclosure to those skilled in the art. In the drawings, the thicknesses of layers and regions may be exaggerated for clarity. Like reference numerals in the drawings may denote like elements. Referring toFIG.1, there is shown a routing process10that may reside on and may be executed by server computer12, which may be connected to network14(e.g., the internet or a local area network). Examples of server computer12may include, but are not limited to: a personal computer, a server computer, a series of server computers, a mini computer, and a mainframe computer.
Server computer12may be a web server (or a series of servers) running a network operating system, examples of which may include but are not limited to: Microsoft Windows XP Server™; Novell Netware™; or Redhat Linux™, for example. Additionally and/or alternatively, routing process10may reside on a client electronic device, such as a personal computer, notebook computer, personal digital assistant, or the like. The instruction sets and subroutines of routing process10, which may be stored on storage device16coupled to server computer12, may be executed by one or more processors (not shown) and one or more memory architectures (not shown) incorporated into server computer12. Storage device16may include but is not limited to: a hard disk drive; a tape drive; an optical drive; a RAID array; a random access memory (RAM); and a read-only memory (ROM). Server computer12may execute a web server application, examples of which may include but are not limited to: Microsoft IIS™, Novell Webserver™, or Apache Webserver™, that allows for HTTP (i.e., HyperText Transfer Protocol) access to server computer12via network14. Network14may be connected to one or more secondary networks (e.g., network18), examples of which may include but are not limited to: a local area network; a wide area network; or an intranet, for example. Server computer12may execute one or more server applications (e.g., server application20), examples of which may include but are not limited to, e.g., Lotus Domino™ Server and Microsoft Exchange™ Server. Server application20may interact with one or more client applications (e.g., client applications22,24,26,28) in order to execute routing process10. Examples of client applications22,24,26,28may include, but are not limited to, design verification tools such as those available from the assignee of the present disclosure. These applications may also be executed by server computer12. In some embodiments, routing process10may be a stand-alone application that interfaces with server application20or may be an applet/application that is executed within server application20. The instruction sets and subroutines of server application20, which may be stored on storage device16coupled to server computer12, may be executed by one or more processors (not shown) and one or more memory architectures (not shown) incorporated into server computer12. As mentioned above, in addition/as an alternative to being a server-based application residing on server computer12, the routing process may be a client-side application (not shown) residing on one or more client electronic devices38,40,42,44(e.g., stored on storage devices30,32,34,36, respectively). As such, the routing process may be a stand-alone application that interfaces with a client application (e.g., client applications22,24,26,28), or may be an applet/application that is executed within a client application. As such, the routing process may be a client-side process, a server-side process, or a hybrid client-side/server-side process, which may be executed, in whole or in part, by server computer12, or one or more of client electronic devices38,40,42,44. The instruction sets and subroutines of client applications22,24,26,28, which may be stored on storage devices30,32,34,36(respectively) coupled to client electronic devices38,40,42,44(respectively), may be executed by one or more processors (not shown) and one or more memory architectures (not shown) incorporated into client electronic devices38,40,42,44(respectively). 
Storage devices30,32,34,36may include but are not limited to: hard disk drives; tape drives; optical drives; RAID arrays; random access memories (RAM); read-only memories (ROM); compact flash (CF) storage devices; secure digital (SD) storage devices; and memory stick storage devices. Examples of client electronic devices38,40,42,44may include, but are not limited to, personal computer38, laptop computer40, personal digital assistant42, notebook computer44, a data-enabled, cellular telephone (not shown), and a dedicated network device (not shown), for example. Using client applications22,24,26,28, users46,48,50,52may utilize formal analysis, testbench simulation, and/or hybrid technology features to verify a particular integrated circuit design. Users46,48,50,52may access server application20directly through the device on which the client application (e.g., client applications22,24,26,28) is executed, namely client electronic devices38,40,42,44, for example. Users46,48,50,52may access server application20directly through network14or through secondary network18. Further, server computer12(e.g., the computer that executes server application20) may be connected to network14through secondary network18, as illustrated with phantom link line54. In some embodiments, routing process10may be a cloud-based process as any or all of the operations described herein may occur, in whole, or in part, in the cloud or as part of a cloud-based system. The various client electronic devices may be directly or indirectly coupled to network14(or network18). For example, personal computer38is shown directly coupled to network14via a hardwired network connection. Further, notebook computer44is shown directly coupled to network18via a hardwired network connection. Laptop computer40is shown wirelessly coupled to network14via wireless communication channel56established between laptop computer40and wireless access point (i.e., WAP)58, which is shown directly coupled to network14. WAP58may be, for example, an IEEE 802.11a, 802.11b, 802.11g, Wi-Fi, and/or Bluetooth device that is capable of establishing wireless communication channel56between laptop computer40and WAP58. Personal digital assistant42is shown wirelessly coupled to network14via wireless communication channel60established between personal digital assistant42and cellular network/bridge62, which is shown directly coupled to network14. As is known in the art, all of the IEEE 802.11x specifications may use Ethernet protocol and carrier sense multiple access with collision avoidance (CSMA/CA) for path sharing. The various 802.11x specifications may use phase-shift keying (PSK) modulation or complementary code keying (CCK) modulation, for example. As is known in the art, Bluetooth is a telecommunications industry specification that allows e.g., mobile phones, computers, and personal digital assistants to be interconnected using a short-range wireless connection. Client electronic devices38,40,42,44may each execute an operating system, examples of which may include but are not limited to Microsoft Windows™, Microsoft Windows CE™, Redhat Linux™, Apple iOS, ANDROID, or a custom operating system. Referring now toFIG.2, a flowchart depicting an embodiment consistent with routing process10is provided.
Embodiments may include enabling202data transmission between a plurality of protocol adapters, each of the protocol adapters including one ingress port and one egress port, wherein the ingress port of each of the plurality of protocol adapters maintains an active connection with a single egress port at one time. Embodiments may further include transmitting204data between the plurality of protocol adapters using a distributed routing matrix that provides an interface between the plurality of protocol adapters. Numerous other operations are also within the scope of the present disclosure. Referring now toFIG.3, a diagram300showing an example USB4 router architecture is provided. While examples included herein discuss USB4, it should be noted that this is provided merely by way of example, as embodiments of routing process10may be used in a variety of different applications. Modern communication interfaces like USB4 are more than a collection of simple one-to-one connections. These devices act more like network routers. When a customer wishes to test their own USB4 host or device on an emulation platform (such as those available from the Assignee of the present disclosure), they would need the corresponding pair device or host component to push traffic on to their DUT. Some verification components may be employed with the primary goal of pushing as much data as allowed on to the DUT running on the emulation platforms from the software side, and simultaneously grabbing as much data as possible from the DUT and providing the user with an application programming interface (“API”) to drive/receive this traffic on the simulator side. The DUT running on emulation platforms would typically operate on the fastest design clock frequency it was compiled to, but with these verification components trying to interact with the software layers, the design clock has to stop to allow the software to catch up with the emulation platforms. The less frequently these interruptions happen, the faster the DUT can operate, leading to lower verification turnaround. Specifically, for USB4, the protocol may tunnel various other protocols, some of which may include, but are not limited to, DisplayPort, PCIe, and USB3, over the USB4 fabric. A single USB4 device may have multiple instances of these tunnelled protocols (called adapters/ports) connected to one USB4 adapter port. In some cases, there may be more than one USB4 adapter in a device, for a hub-like configuration, with one adapter acting as an upstream port, and the other downstream. The main controller of this hierarchy is referred to as a configuration manager (CM), which is at the top of the hierarchy, operating inside a USB4 host. To help achieve this, the USB4 protocol may include a transport layer, in which all the adapters may talk to each other via packets referred to as transport layer packets (TLPs), and there may be up to 64 adapters in one device. Every adapter may include two entities, one referred to herein as an “ingress”, which may buffer all the TLPs coming in on the transport layer, and another referred to herein as an “egress”, which prioritizes all the packets going out of the transport layer. The USB4 protocol may also define the way routes/logical connections between adapters may be established from the CM by programming register sets within a routing table, but is silent on the actual mechanism of the TLP exchange between the adapters, and how the adapters actually transfer packets among each other.
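To ground the terminology used in the remainder of this description, the following is a minimal model of the entities just described (an illustrative sketch; the class and field names are assumptions, not part of the disclosed design):

from dataclasses import dataclass, field
from collections import deque

@dataclass
class TLP:
    """A transport layer packet, routed by its path's HopID."""
    hop_id: int
    payload: bytes = b""

@dataclass
class Adapter:
    """One of up to 64 adapters in a USB4 device; each adapter has an
    ingress entity that buffers incoming TLPs per HopID and an egress
    entity that prioritizes outgoing TLPs."""
    number: int
    ingress_buffers: dict = field(default_factory=dict)  # HopID -> deque

    def buffer_tlp(self, tlp: TLP):
        self.ingress_buffers.setdefault(tlp.hop_id, deque()).append(tlp)

adapters = [Adapter(number=n) for n in range(64)]
adapters[0].buffer_tlp(TLP(hop_id=8, payload=b"tunnelled-frame"))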
Accordingly, embodiments of routing process10provide an architecture and protocol to exchange the TLPs. This approach outlines the specific constraints being used within an emulation platform, the maximum protocol throughput allowed for the DUT, and allows for many verification scenarios to mimic and validate real life designs. Using existing approaches, the routing logic is typically implemented in an emulation setting using some of the approaches discussed below. As shown inFIG.4, one approach involves employing a dedicated CPU running at a very high frequency design clock. This approach has the advantage of being 100% RTL, so no hardware/software exchanges need to happen, the CPU may be programmed using firmware to adjust to different user scenarios, and the byte code may be loaded into the memory after compilation. However, this scenario is disadvantageous in terms of actual gate area, and the CPU design clock needed would be faster than the fastest USB4 clock frequency. This would slow down the entire verification environment, leading to a significantly lower overall throughput. Also, a single CPU would struggle to keep up with 64 requests coming in from all of the active ports simultaneously, which would require more than one CPU, further multiplying the disadvantages. Referring now toFIG.5, an embodiment showing a diagram500depicting another approach that involves delegating the routing logic to the software layer is provided. Since we are talking in terms of hardware/software co-design, there is a possibility to pass all the TLPs to the software layer running on the host in parallel to the DUT and verification components running on the emulator. This approach has the advantage of being least costly in terms of gate area, and there is no need to use a faster clock than the USB4 design clock. However, every time the hardware/software buffers are empty, the hardware will have to stop and wait for the software to catch up. This may have an impact on the overall design throughput. Also, since there are multiple verification components for DisplayPort, PCIe and USB3 protocols, with their respective software layers running in parallel, the host connected to the emulator would get loaded very quickly, making this a solution that is difficult to scale. Referring now toFIG.6, and in contrast to the approaches described above, embodiments of routing process10include a 100% RTL implementation600, including a custom protocol for TLP exchange. Since the complete protocol is RTL-based, there are no hardware/software exchanges required to properly route the packets, and the protocol may operate at the fastest USB4 clock frequency, thus mitigating clock-related slowdown. This implementation is completely non-blocking in nature, and may be distributed amongst the various adapters, so there is no single controlling logic. All adapters may be treated on par, and this architecture allows for simultaneous transfers between all the adapters. This protocol is more scalable and should allow for far more than 64 entities to communicate. Also, the egress and ingress adapters are unblocked to attempt sending/receiving TLPs with other adapters if the adapter they are trying to transact with is busy. Also, this approach is better from a compilation perspective, and leads to a lower step count (critical path). In some embodiments, routing process10may include a routing architecture and associated protocol as are discussed in further detail below.
For the transport layer, the design may be split up into two major parts. The first is the individual protocol adapters, with one ingress and one egress each. The second part is a single module referred to herein as the “routing matrix”, which may be configured to interface with all the adapters. In some embodiments, the architecture may be configured such that the ingress ports may receive the TLPs from the various adapters and store them in local buffers (as per the USB4 protocol in this example). The logical connections, referred to herein as “paths”, may be configured by the CM, and these values may be saved in the routing table. The routing table may include one bit to indicate that the path is active and one unique identifier referred to herein as “HopID”. The routing table may include one or more static parameters for a path, such as the output adapter number, which once set may remain fixed for the complete duration of the path's existence. The routing table may include one or more dynamic parameters that may change on the basis of the present workload. Since there may be a finite number of unique identifiers, the HopIDs may be reused between different paths. To enable this, the unique identifier HopID may be changed by the ingress port, and this value may be stored within the routing table. In some embodiments, the routing table may be stored within the ingress, and this may include one or more of the following parameters:

1. Valid: A one-bit value to indicate if the path is valid.
2. HopID: A logical name for the path.
3. Output HopID: The output HopID for the outgoing packet.
4. Output Adapter Number: The destination adapter number to which the TLPs on this path are to be sent.
5. Priority: The priority of the path. This value can help decide the path priority at the destination egress port.
6. Weight: This value helps avoid a high priority path hogging all the bandwidth. The egress can only schedule a number of TLPs equal to the weight count in one round; after that, paths with lower priority are selected.

In this example, the HopID is the unique identifier; the output adapter number and priority are static parameters; and the weight is a dynamic parameter which may be changed even after the path has been activated. In some embodiments, the actual routing protocol may be split up into multiple distinct phases. Some of these may include, but are not limited to, Request, RequestAck, PktTransferRequest, and PktTransfer. Each of these is discussed in further detail hereinbelow. In some embodiments, the routing matrix and the ingress ports may be connected using the following prominent RTL signals for the Request and RequestAck phases (the same signals may also connect the routing matrix and egress port, but all directions are reversed):

TABLE 1: List of Signals for the Request Phase
(Direction I: Ingress -> Routing-Matrix; Routing-Matrix -> Egress. Direction O: Routing-Matrix -> Ingress; Egress -> Routing-Matrix.)

Signal Name  | Direction | Type     | Description
r.Request    | I         | Enum     | Request Type from Ingress to Egress
r.HopID      | I         | Unsigned | Unique Identifier for the Path
r.Count      | I         | Unsigned | Number of Packets for the Request
r.OutputPort | I         | Unsigned | Destination Port
r.InputPort  | I         | Unsigned | Originating Port
r.Parameters | I         | Unsigned | Path Parameters [Priority, Weight]
r.Ack        | O         | Bool     | Acknowledgement
r.Nack       | O         | Bool     | Not-Acknowledge

The Request Enum may include multiple values; for example, for USB4, these include NOP [No Operation], PathSetup, PathTearDown, and PacketRequest, as discussed below.
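A compact sketch of a routing table entry and the Request-phase signal bundle from Table 1 might look as follows (an illustrative model only; field names mirror the parameters above):

from dataclasses import dataclass
from enum import Enum, auto

class Request(Enum):
    NOP = auto()
    PATH_SETUP = auto()
    PATH_TEAR_DOWN = auto()
    PACKET_REQUEST = auto()

@dataclass
class RoutingTableEntry:
    """One path's entry in an ingress routing table (parameters 1-6 above)."""
    valid: bool          # 1. one bit: is the path active
    hop_id: int          # 2. logical name for the path
    output_hop_id: int   # 3. HopID substituted into outgoing packets
    output_adapter: int  # 4. static: destination adapter number
    priority: int        # 5. static: priority at the destination egress
    weight: int          # 6. dynamic: max TLPs schedulable per round

@dataclass
class RequestSignals:
    """The r.* bundle of Table 1, driven from ingress toward the routing matrix."""
    request: Request = Request.NOP
    hop_id: int = 0
    count: int = 0
    output_port: int = 0
    input_port: int = 0
    parameters: tuple = (0, 0)  # (priority, weight)
    ack: bool = False           # driven back by the routing matrix
    nack: bool = False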
For the PktTransferRequest and PktTransfer phases, an example of a signal list is provided below:

TABLE 2: List of Signals for the Packet Transfer Phase
(Direction I: Egress -> Routing-Matrix; Routing-Matrix -> Ingress. Direction O: Routing-Matrix -> Egress; Ingress -> Routing-Matrix.)

Signal Name   | Direction | Type             | Description
p.Request     | I         | Enum             | Request Type from Egress to Ingress
p.HopID       | I         | Unsigned         | Unique Identifier for the Path
p.OutputPort  | I         | Unsigned         | Destination Port
p.InputPort   | I         | Unsigned         | Originating Port
p.Packet      | O         | Packet Structure | The Actual Transaction
p.PacketValid | O         | Bool             | If the Packet is Valid
p.Ack         | I         | Bool             | Acknowledgement from Egress
p.Nack        | I         | Bool             | Not-Acknowledgement from Egress
p.Done        | O         | Bool             | Terminating signal from Ingress

In some embodiments, and referring also toFIGS.7-8, routing process10may include a path setup and a path teardown protocol, each of which is discussed in further detail hereinbelow. Path setup and teardown processes may only employ the Request and RequestAck phases. Both may follow the same process as listed below. In the Request phase, the ingress port may monitor the routing table; any HopID going from Valid=0 to Valid=1 triggers a PathSetup request, while a transition from Valid=1 to Valid=0 triggers a PathTearDown request. There may be multiple path setup/teardown requests active at one point in time; the ingress adapter monitors all open requests and analyzes them one by one. In some embodiments, for every request, the routing matrix may be informed of the request by changing the i.r.Request from NOP [No Operation] to [PathSetup or PathTearDown] as applicable, with i.r.HopID, i.r.InputPort, and i.r.OutputPort set to valid values. For a PathTearDown request, i.r.Parameters may be unset, but valid values are required for a PathSetup request. In some embodiments, the routing matrix may analyze all the active ports and check the values of the request phase signals facing the ingress ports. For any port, if there is no pending request, the values may be stored in the internal buffers. The ingress adapter may continue to wait until i.r.Ack is set. In some embodiments, the routing matrix also goes through all its internal buffers and evaluates whether there are any pending requests. For all pending requests, it checks whether multiple requests are destined to the same egress port; if so, it performs an internal priority resolution and selects the one with the higher priority, or the lowest port number. Then, it may pass on the values of i.r.Request, i.r.HopID, i.r.InputPort, i.r.OutputPort, and i.r.Parameters to the destined egress port. In some embodiments, routing process10may also include a request acknowledgement phase [RequestAck]. Here, all egress ports may evaluate the requests they receive from the routing matrix and revert back with e.r.Ack=1 or e.r.Nack=1, or they may continue to hold on to the request for a pre-defined timer value. The response is saved in the internal buffers of the routing matrix, and at the next clock edge, this response (e.r.Ack and e.r.Nack) may be sent back to the waiting ingress adapter. If the routing matrix does not see any response after the specified timer elapses, the request may be cancelled, all the r.* signals facing the egress ports may be set to 0, and e.r.Nack=1 may be sent to the ingress port, which then either reattempts the same request or attempts a different one. In some embodiments, and referring also toFIG.9, routing process10may include a protocol for TLP exchange. The protocol to exchange TLPs between different instances may include a multi-part approach.
The first part may include a packet request phase (e.g., Packet Request [Request] shown inFIG.9). Here, the TLPs may be stored in the ingress port “i” against multiple HopIDs. When the ingress adapter is free, the buffers may be scanned for all the paths which have packets waiting. The ingress adapter may internally prioritize the open paths; for example, for most adapters the paths to the USB4 adapter should have a higher priority. This request may then be scheduled on to the routing matrix. i.r.Request may be changed to PacketRequest; i.r.InputPort=i, i.r.OutputPort=e, and i.r.Count=C are set to valid values, and the i.r.Parameters are set to the dynamic parameter values pulled out from the routing table. The routing matrix may then analyze all the active ports, checking whether the particular adapter has an active request. If not, the routing matrix may copy the values over into its local buffer. In some embodiments, the protocol for TLP exchange may include a packet request acknowledgement [RequestAck] phase. Here, the routing matrix may analyze all the pending requests for egress adapters, and once it identifies a pending request in the local buffer, it copies these values over to the destined egress adapter. The egress adapter may acknowledge the request with e.r.Ack=1, or discard it by setting e.r.Nack=1 if it is either not ready or there is no valid path. These values may be stored in the local buffers present in the routing matrix. Then, the routing matrix may push the result of the request from its local buffer back to the ingress adapter. The ingress adapter may read the values of i.r.Ack and i.r.Nack. If the request is acknowledged, then the ingress adapter waits for a request from the egress adapter; otherwise it is free to request again or inform the higher layers of the failure. In some embodiments, the protocol for TLP exchange may include a packet transfer request [PktTransferRequest] phase. Here, the egress adapter is aware of the number of packets pending, the HopIDs, and the static and dynamic path parameters, and the egress adapter can freely prioritize the paths. The egress adapter may select one ingress adapter and the HopID from the list of all open resources after following the prioritization logic; for example, for USB4 it may perform a round robin on the priority logic but may schedule only up to weight count packets per path. The egress adapter sets values for e.p.Request, e.p.HopID, e.p.InputPort, and e.p.OutputPort and passes these on to the routing matrix. The routing matrix copies these over to its internal buffer (this buffer may differ from the one maintained for the requests). On the next clock these values may be copied over to the destined ingress adapter, from which the egress adapter seeks the packets by setting i.p.Request, i.p.HopID, i.p.InputPort and i.p.OutputPort. In some embodiments, the protocol for TLP exchange may include a packet transfer [PktTransfer] phase as shown inFIG.10. Here, the ingress port identifies the request coming in from the routing matrix, and at the next clock edge it may start populating the i.p.Packet and i.p.PacketValid signals, fetching these from its internal buffers. While the link is active, the routing matrix may maintain a connection between the egress and the ingress adapters and may transfer the packets from ingress to egress without any intermediate buffering. This implies that the signals may pass from ingress to egress without any delay, and vice versa.
Every time a packet is transferred, the egress will assert the e.p.Ack signal, which may be connected to the i.p.Ack signal, and e.p.Nack in the event it is unable to sink in any more packets. The ingress may retain the values of the i.p.Packet and i.p.PacketValid signals until it sees either the i.p.Ack or i.p.Nack signal. The transmission may be completed when e.p.Nack is asserted. In the event the egress wants to stretch out the request, it may continue to hold e.p.Ack and e.p.Nack low. In the event that the ingress receives a high priority request, or it wishes to break off the connection for any reason, or if there are no more pending packets available, it may assert the i.p.Done signal. Similarly, the transmission may be terminated if e.p.Nack is asserted. While the transmission is in progress, the routing matrix may continue to maintain a one-to-one connection, with no delay, and it may continue to monitor the i.p.Done and e.p.Nack signals. When either of these is asserted, the link is broken. The link may also be broken off if e.p.Request is de-asserted; however, this is unusual. The ingress will not know whether the packet which was in flight was completed or not, and it may reattempt the same transfer again. Referring now toFIG.11, a flowchart1100showing exemplary operations consistent with embodiments of routing process10is provided. In some embodiments, some or all paths and path attributes/properties may be stored1102at a routing table. The process may determine1104whether or not a particular path is valid and, if so, the path setup process may initiate1106. During path setup, some or all of the paths between adapter pairs may be configured and set up across the routing matrix using a two-phase protocol. If the path is not valid, a path teardown operation1108may be performed where a selected valid path may be torn down using a multi-phase protocol. If the path is deemed valid, any incoming TLPs in the ingress port may be stored1110at an internal buffer. This internal buffer may receive data from an adapter or external stimuli. If any packets are pending, the process may perform packet transfer, which includes a multi-phase packet transfer1112in which multiple packets may be transferred simultaneously between multiple adapter pairs. Embodiments of the routing process described herein provide numerous advantages over existing approaches. Routing process10is a high availability approach, as all ingress ports may maintain active connections with a single egress port at one time, for a total of N connections for N total adapters. The overall packet transmission may be broken into different phases. This implies that both the ingress and egress adapters are free to pursue more than one transfer request. This helps with the prioritization logic on the egress side, which can now receive multiple active requests and decide on the priority. Embodiments of routing process10are highly scalable, and can scale as much as the total number of ports in the router. There is no single bottleneck; the processing is modular and the implementation logic is spread through multiple units, which allows this solution to easily scale for many ports. Embodiments included herein provide many options to extend functionality, as the number of parameters may be varied, including on the actual packet side. If the packets can be fragmented, then even large packets can be supported over the active link with no latency.
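As an illustrative (non-normative) summary of the phases in flowchart1100, the following sketch simulates one packet-request/transfer exchange at the transaction level; signal names follow Tables 1 and 2, and everything else, in particular the simplified in-order handshake and the highest-count path selection, is an assumption of the sketch rather than the disclosed logic.

from collections import deque

def transfer_round(ingress_buffers, weights, egress_ready=True):
    """Transaction-level sketch of one Request -> RequestAck ->
    PktTransferRequest -> PktTransfer round for a single ingress/egress
    pair. ingress_buffers maps HopID -> deque of pending TLPs, and
    weights maps HopID -> max packets schedulable this round."""
    delivered = []
    # Request: ingress advertises (HopID, count) for each path with packets.
    requests = [(hop, len(q)) for hop, q in ingress_buffers.items() if q]
    if not requests or not egress_ready:
        return delivered  # RequestAck: e.r.Nack, ingress retries later
    # PktTransferRequest: egress picks a path (here: highest count first,
    # a stand-in for the configurable priority/round-robin logic).
    hop, count = max(requests, key=lambda r: r[1])
    budget = min(count, weights.get(hop, 1))
    # PktTransfer: packets stream until the weight budget is spent
    # (e.p.Nack) or the ingress runs dry (i.p.Done).
    for _ in range(budget):
        delivered.append(ingress_buffers[hop].popleft())
    return delivered

buffers = {3: deque(["tlp-a", "tlp-b", "tlp-c"]), 7: deque(["tlp-d"])}
print(transfer_round(buffers, weights={3: 2, 7: 1}))  # ['tlp-a', 'tlp-b']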
Embodiments of routing process10are emulation friendly, as there are no large multi-port memories in this solution, which makes it relatively easy to compile, place and route with a good critical path (step count), and low gate area consumption. Since there are no software components which can stop emulator clocks, routing process10may be able to operate at the maximum frequency possible with no interruptions. All of the logic proposed operates at the same clock as the protocol, so there are no faster clocks introduced, and thus no slowdown of the design execution on the emulators. Embodiments of routing process10provide a high throughput solution. There is an initial latency while the first few requests go through; however, once the transmission starts there is zero latency, and the ingress and egress adapters may continue to batch and send multiple packets across the link without any clock delays. Also, since the request and packet transmission phases are pipelined, there is no further latency added. In this way, the routing matrix can keep up with the theoretical maximum bandwidth of the protocol. Embodiments of routing process10provide a customizable solution, and since most of the logic in the ingress and egress adapters is available locally, it is very easy to customize by adding more parameters or options in both the request and packet transmission phases. The individual ingress and egress behavior may be programmed, and the routing matrix architecture does not need to be altered in the event the users wish to add new kinds of requests, as the routing matrix simply copies over the request from ingress to egress ports. Embodiments of routing process10are faster and more de-centralized than existing approaches, allowing for the solution to scale very easily. There is no hardware to software translation requirement and the solution can work at very low abstractions. Also, since the proposed solution uses very small independent memories to store the data, it is much easier to partition the design, which works very well for emulation use-models. The protocol is transactional in nature and allows for both initiator and target to perform other operations if the path is busy or the other entity is busy. The packet exchange protocol allows the target to determine the priority of the incoming paths and then request the packet from the initiator. As the initiator has already given notification of the number of packets available for transmission, the packet exchange phase has very high availability. Embodiments included herein require minimal buffering in the targets, while there is no need for priority management in the initiators. This greatly simplifies the RTL design logic, and makes the whole implementation highly modular and simple to implement. It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present disclosure without departing from the spirit or scope of the invention. Thus, it is intended that embodiments of the present disclosure cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents. | 34,339 |
11861278 | DETAILED DESCRIPTION Some embodiments provide a CAD tool to create digital logic optimized for power, performance (e.g., delay), and area (PPA) for various standard cells and functional blocks (FUBs) using various optimization approaches. In some embodiments, the CAD tool is capable of receiving a number of inputs that describe a given logic circuit. These inputs can be in a hardware description language (HDL) such as Verilog or VHDL, a netlist, a graph of higher-level blocks, Boolean expressions, or truth tables. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, etc. Given these inputs that describe a circuit or logic circuit, the CAD tool separates the circuit into a combinational logic component or circuit and a sequential logic component or circuit. For combinational logic synthesis, the CAD tool breaks the circuit down into different blocks informed by the high-level design and optimizes and synthesizes each block separately, in accordance with some embodiments. Various embodiments use multiple ways to optimize the combinational logic using MIG synthesis and optimization, including mapping portions of the optimized MIG to standard cells. The majority or minority gates here can be ferroelectric capacitor-based majority or minority gates, in accordance with some embodiments. However, the embodiments are not limited to ferroelectric capacitor-based majority or minority gates, and any technology used for making majority or minority gates is applicable here. A majority or minority gate is a universal gate and could be used to build all types of standard cells and building blocks. Depending upon the logic function, majority or minority gate (M-gate) based synthesis may also provide a smaller overall gate count. The CAD tool of some embodiments uses M-gates to optimize PPA in at least two ways. One way is to use M-gates as fundamental gates to replace any type of existing gate with 1:1 mapping if advantageous. Another way is to use these gates to reduce the gate count wherever possible. For M-gate based synthesis, in one technique, the majority gate is the basic building block; inverters are introduced to build minority gates as needed, and buffers and/or inverters are also sometimes introduced to provide higher fan-outs for the circuits. In some embodiments, the scheme unfolds feedback loops in sequential logic, resulting in combinational logic, and applies logic synthesis techniques to produce a few candidate solutions. For example, the CAD tool synthesizes sequential circuits by transforming them into combinational logic via unfolding loops, synthesizing the resultant combinational logic, and recreating the loops afterwards. Among the various synthesized versions of the sequential circuits, the CAD tool goes through each solution (e.g., each synthesized circuit in this context), checks its functionality to avoid any race conditions, and returns the most optimal functional solution. The scheme of various embodiments uses wide-input majority or minority gates (herein referred to as M-gates) in combination with CMOS gates. This leads to fewer gates and a smaller logic depth. In some embodiments, inverter minimization is performed as a post-processing activity following M-gate optimization, which does not change the number of M-gates or the logic depth.
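As an illustration of this post-processing step, the following Python sketch cancels back-to-back inverter pairs in a toy netlist representation (a dict mapping each net to its gate kind and input nets). The representation and helper names are assumptions for illustration; they are not the tool's internal data model.

def minimize_inverters(gates, outputs):
    """gates: dict net -> ('INV' | 'MAJ', [input nets])."""
    def skip_inv_pairs(net):
        # INV(INV(x)) == x, so jump over consecutive inverter pairs.
        while net in gates and gates[net][0] == "INV":
            driver = gates[net][1][0]
            if driver in gates and gates[driver][0] == "INV":
                net = gates[driver][1][0]
            else:
                break
        return net

    rewired = {n: (kind, [skip_inv_pairs(i) for i in ins])
               for n, (kind, ins) in gates.items()}
    # Drop inverters that are no longer referenced (iterate to a fixpoint
    # so whole chains of dead inverters disappear); keep primary outputs.
    while True:
        used = {i for _, ins in rewired.values() for i in ins}
        alive = {n: g for n, g in rewired.items()
                 if g[0] != "INV" or n in used or n in outputs}
        if len(alive) == len(rewired):
            return rewired
        rewired = alive

# y = INV(INV(a)) feeding MAJ(y, b, c) collapses to MAJ(a, b, c).
netlist = {"n1": ("INV", ["a"]), "y": ("INV", ["n1"]),
           "out": ("MAJ", ["y", "b", "c"])}
print(minimize_inverters(netlist, outputs={"out"})["out"])
# ('MAJ', ['a', 'b', 'c'])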
For example, the CAD tool of various embodiments minimizes the number of inverters in the circuit or along a critical timing path. In some embodiments, the CAD tool accounts for design feedback that involves adding extra CMOS buffers or inverters to drive a large fan-out (or load). Together, the various mechanisms of the scheme provide an improved PPA over known schemes of logic synthesis. The CAD tool of various embodiments has the capability to synthesize both combinational and sequential FUBs. In some embodiments, the CAD tool applies a gate pruning algorithm to facilitate both single and multiple fan-in M-gate synthesis. Some embodiments use an extended satisfiability (SAT) formulation to use both majority and minority gates as well as wide-input M-gates. In some embodiments, the CAD tool uses a binary integer linear programming (BIP) framework for logic optimization of a MIG. As such, a specialized framework is established where threshold gate weights are −1, 0, and 1, which allows for the creation of optimal majority or minority inverter graphs. The BIP framework also allows depth optimization to be explicitly captured in the program constraints. In some embodiments, the framework allows the use of either single fan-in or multiple fan-in M-gates for synthesis. The CAD tool of some embodiments provides inverter minimization per block (e.g., standard cell or FUB), and thus provides a meaningful benefit within the block. For example, by reducing the number of inverters, power savings can be realized. In some embodiments, inverter minimization is performed as a post-synthesis step to reduce the total number of inverters in the block or along a critical timing path of the block. In some embodiments, fan-out constraints and requirements per M-gate are enforced as a post-synthesis step by adding inverters and buffers, as needed, to drive a higher fan-out. In some embodiments, hierarchical synthesis is performed to further optimize synthesized circuits by taking advantage of "don't care" input conditions in interior sub-blocks. In some embodiments, the CAD tool uses gate count initialization in optimal synthesis to accelerate the search for an optimal MIG. There are many technical effects of the various embodiments. For example, the CAD tool of various embodiments can take majority or minority gates with a large fan-in (e.g., 3, 5, 7, or more inputs) and a standard cell library and produce optimally synthesized logic circuits. Other technical effects will be evident from the various embodiments and figures. In the following description, numerous details are discussed to provide a more thorough explanation of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art, that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present disclosure. Note that in the corresponding drawings of the embodiments, signals are represented with lines. Some lines may be thicker, to indicate more constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. Such indications are not intended to be limiting. Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or a logical unit.
Any represented signal, as dictated by design needs or preferences, may actually comprise one or more signals that may travel in either direction and may be implemented with any suitable type of signal scheme. FIG. 1 illustrates top-level architecture 100 of a computer-aided design (CAD) tool for logic synthesis of a mix of CMOS gates and majority and minority logic gates of various fan-in and/or fan-out, in accordance with some embodiments. Architecture 100 comprises iterative wrapper 101 with logic synthesis core 101a, which is the nucleus of the CAD tool. In various embodiments, iterative wrapper 101 has access to a variety of cells to perform logic synthesis. These cells include standard CMOS cells 102, such as a CMOS-based inverter, NAND gate, NOR gate, XOR gate, flip-flop (FF), latch, multiplexer, and complex gates including half-adders, multiplexers, etc. These CMOS cells can be part of a standard library for a particular process technology node. In some embodiments, iterative wrapper 101 has access to a standard library of majority gates 103 that have 'x' fan-in and 'y' fan-out, where 'x' is 3 or more and 'y' is 1 or more. In some embodiments, the majority gates comprise ferroelectric capacitors to receive 3 or more inputs, where the ferroelectric capacitors are coupled together at another end. In some embodiments, the majority gates comprise non-ferroelectric input capacitors that receive 3 or more inputs, wherein the non-ferroelectric capacitors are coupled together at another end, which is coupled to a ferroelectric capacitor. In some embodiments, iterative wrapper 101 has access to a standard library of minority gates 104 that have 'x' fan-in and 'y' fan-out, where 'x' is 3 or more and 'y' is 1 or more. Minority gates 104 are essentially majority gates with an output inverter. The majority gate 103 and minority gate 104 libraries can include basic cells like NAND gates, NOR gates, XOR gates, flip-flops (FFs), adders, etc. In some embodiments, iterative wrapper 101 receives inputs 105 representing a logic circuit that is to be synthesized. The inputs can be in a number of formats including hardware description language (HDL) such as Verilog or VHDL, a netlist, a graph of higher-level blocks, Boolean expressions, or truth tables. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, etc. Given these inputs that describe a circuit or logic circuit, the CAD tool separates the circuit into a combinational logic component (or circuit) and a sequential logic component or circuit. The output of iterative wrapper 101 is synthesized circuit 106, which includes a mix of CMOS standard cells and majority and/or minority logic gates of various fan-in and fan-out to provide an optimized circuit design for use in a processor or an integrated circuit (IC). FIG. 2 illustrates flowchart 200 of a method of logic synthesis using a majority or minority inverter graph (MIG) having majority and minority logic gates of various fan-in and/or fan-out together with existing standard cells, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs.
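For reference, a majority gate's behavior (and a minority gate's, as its complement) can be modeled in a few lines of Python. The snippet below is an illustrative behavioral model only; it also shows why such gates are universal, since tying an input to a constant recovers ordinary AND/OR cells:

def maj(*bits):
    # Output is 1 when more than half of the (odd number of) inputs are 1.
    assert len(bits) % 2 == 1, "fan-in must be odd"
    return int(sum(bits) > len(bits) // 2)

def mino(*bits):
    # A minority gate is a majority gate with an output inverter.
    return 1 - maj(*bits)

AND2 = lambda a, b: maj(a, b, 0)    # one input tied to ground
OR2 = lambda a, b: maj(a, b, 1)     # one input tied to supply
NAND2 = lambda a, b: mino(a, b, 0)

assert [AND2(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
assert [OR2(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]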
Flowchart 200 provides a top-level view of the overall logic synthesis flow. Here, the logic synthesis scheme uses a majority and/or minority inverter graph (MIG) and existing standard cells, and works for both sequential and combinational logic. The logic synthesis scheme allows wide (e.g., 3 or a larger odd number) and multiple fan-in inputs (e.g., the optimized MIG can contain M-gates with a varied number of inputs). Block 201 represents the inputs for a logic circuit that is to be synthesized using a mix of CMOS and majority and/or minority gates. The inputs can be in a number of formats including an HDL such as Verilog or VHDL describing either the functionality or the circuit connectivity, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, etc. Given these inputs that describe a circuit or logic circuit, the CAD tool separates the circuit into a combinational logic component (or circuit) and a sequential logic component or circuit. At block 202, the CAD tool identifies the inputs of the logic, state elements (e.g., latches and flip-flops), and outputs of the logic for segmentation from input 201. In some embodiments, the inputs and outputs of the logic function are assumed to require state elements such as flip-flops and latches. In some embodiments, big logic blocks are broken down and pipelining is implemented with intermediate state elements as needed to meet clocking and throughput requirements, as indicated by block 203. This is done while keeping the delay and energy requirements of each component in consideration. In some embodiments, the breakdown with state elements may be performed during a post-processing phase depending upon the synthesized results and the delay and energy constraints of the overall logic function unit. At block 203, the inputs, outputs, and state elements are classified as terminal nodes. The CAD tool then segments the logic circuit into sub-circuits with nodes (e.g., input and output ports). Each sub-circuit is combinational, in accordance with various embodiments. In various embodiments, the CAD tool creates a list of separate combinational circuits and sequential components or circuits, and initializes an empty synthesized list. Once the combinational and sequential logic blocks are identified separately, specific synthesis flows for combinational and sequential logic blocks are used to optimize for the PPA requirements. The list of combinational circuits (or components) and sequential circuits (or components) is saved, as indicated by block 204. At block 205, a determination is made whether the list of combinational circuits (or components) and sequential circuits (or components) is exhausted. This check is made to go through each circuit in the list and classify it as a combinational circuit or a sequential circuit. If the list is not exhausted, the process proceeds to block 206 where the current circuit is assigned as the next circuit in the list, and then that circuit is analyzed at block 207 to determine whether it is combinational. In a circuit without state elements or with only input and output registers, a region between the inputs and outputs comprises combinational circuit(s).
In a pipelined circuit, the region between consecutive pipeline registers comprises combinational circuit(s). By defining inputs, outputs, and state elements as terminal nodes, the circuit can be segmented into sub-circuits (or sub-graphs) with input and output terminals. The sub-circuits are combinational circuits, in accordance with various embodiments. For combinational circuits, combinational circuit synthesis is applied at block 208. For circuits identified as sequential circuits (e.g., because they have a feedback loop), sequential component synthesis is applied at block 209. The synthesized circuits from block 208 and block 209 are added to a list of synthesized circuits, as indicated by block 210. The process then proceeds to block 205, where it is determined whether the list of circuits is exhausted. If not, the process continues iteratively as discussed herein. If the list of circuits is exhausted, the process proceeds to block 211. At block 211, circuits in the list of synthesized circuits are wired using the input and output terminals of the original logic circuit that is read from inputs 201. The resultant output after wiring the circuits in the list of synthesized circuits is the synthesized circuit of the original logic circuit, as indicated by block 212. The processes for combinational circuit synthesis of block 208 and sequential circuit synthesis of block 209 are discussed with reference to subsequent figures herein, in accordance with some embodiments. FIG. 3 illustrates flowchart 300 of a method for combinational logic synthesis (e.g., block 208) using a top-down approach, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs. Flowchart 300 illustrates a method that breaks the logic function down into small sub-circuits, performs logic synthesis on them, and then combines them to produce the final results. Flowchart 300 can be used in isolation (e.g., independently) or as part of flowchart 200 to optimize a combinational circuit. Flowchart 300 begins with inputs for a combinational circuit, as indicated by block 301. The inputs can be in a number of formats including an HDL such as Verilog or VHDL describing either the functionality or the circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, etc. At block 302, the CAD tool (e.g., iterative wrapper 101) iteratively breaks the combinational circuit into non-overlapping smaller blocks. For example, the combinational circuit is segmented into non-overlapping smaller blocks until the blocks are small enough to be synthesized either using only standard cells or using the MIG cells from the MIG synthesis tool. The MIG synthesis tool mixes standard cells and MIG cells. At block 302, the CAD tool also keeps track of the input and output connections of the smaller blocks. In some embodiments, the CAD tool initializes the empty list of synthesized small blocks.
Here, initializing generally refers to creating an empty list that is used to store synthesized circuits. Here, small blocks generally refer to sub-circuits with a maximum of K inputs, where K is 5 or 6. The output of block 302 is a list of small combinational circuit blocks, indicated by block 304. This list of small combinational circuit blocks is then iteratively processed, and the best synthesized sub-circuit or block is then selected based on PPA to be the cell for the small block. This process is indicated by blocks 305, 306, and 307. At block 305, the CAD tool determines whether the list of small combinational circuit blocks is exhausted (e.g., whether all small blocks in the list are processed). As each block is processed in the list, the current block is assigned to the next block in the list so that the next block is processed, as indicated by block 306. This process continues until all blocks in the list 304 are processed. At block 307, the current block is synthesized using standard cells and/or a combination of standard cells and MIG cells using MIG synthesis tools. The standard cell set also comprises circuit representations of bigger building blocks such as adders and multipliers. In some embodiments, the synthesis results are compared with synthesized blocks that implement the same functionality, and the best circuit is chosen based on PPA constraints. In one example, if an off-the-shelf CMOS synthesis tool is available, its synthesis of the small block can be compared to the MIG synthesis tool's result and the better circuit is selected based on PPA constraints, since the M-gates here are compatible with CMOS logic gates. The synthesized block that gives the best PPA (e.g., that meets the PPA objectives as closely as possible) is then selected and added to a list of synthesized small blocks, as indicated by block 308. The process then proceeds to block 305, the next circuit block becomes the current block, the process is repeated, and the list of synthesized small blocks is filled. When the entire list of small combinational circuit blocks (block 304) is processed (or exhausted), the process proceeds to block 309. At block 309, the synthesized small block cells in the list of synthesized small blocks are combined to hierarchically create bigger cells and, finally, the full combinational circuit. For example, the small synthesized block cells are rolled up to represent the full synthesized combinational circuit 310. FIG. 4 illustrates flowchart 400 of a method for combinational logic synthesis (e.g., block 208) using a bottom-up approach, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs. Flowchart 400 illustrates a method that, unlike the top-down approach of FIG. 3, may not separate a combinational circuit into non-overlapping sub-blocks prior to MIG synthesis. Rather, the entire combinational circuit is passed to the MIG synthesis tool, which transforms the circuit into a majority and/or minority inverter graph (MIG) and then optimizes the graph based on the PPA requirement.
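One plausible in-memory representation of such a MIG is sketched below; the class and field names are illustrative assumptions, not the tool's actual data model. Inverters are commonly carried as complement flags on edges rather than as separate nodes, which keeps the graph homogeneous:

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class MigNode:
    name: str
    # Each fan-in edge is (driver net, inverted?) so inverters live on edges.
    fanin: List[Tuple[str, bool]] = field(default_factory=list)

@dataclass
class Mig:
    inputs: List[str]
    outputs: List[str]
    nodes: Dict[str, MigNode] = field(default_factory=dict)

    def add_gate(self, name: str, fanin: List[Tuple[str, bool]]) -> str:
        assert len(fanin) % 2 == 1, "M-gates take an odd number of inputs"
        self.nodes[name] = MigNode(name, list(fanin))
        return name

# out = MAJ(a, ~b, c)
g = Mig(inputs=["a", "b", "c"], outputs=["out"])
g.add_gate("out", [("a", False), ("b", True), ("c", False)])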
After MIG optimization, subgraphs of the optimized MIG are functionally mapped to building block cells with the best PPA, where the building block cells can be based on existing standard cells or a combination of existing standard cells and MIG cells. In flowchart 400, the full logic function is synthesized and pattern matching is used to map sections of the MIG to standard cells. Flowchart 400 can be used in isolation (e.g., independently) or as part of flowchart 200 to optimize a combinational circuit. Flowchart 400 begins with inputs for a combinational circuit, as indicated by block 401. The inputs can be in a number of formats including an HDL such as Verilog or VHDL describing either the functionality or the circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, etc. At block 402, the CAD tool performs MIG synthesis. In some embodiments, the MIG synthesis scheme assumes that the logic circuit can be synthesized using a feed-forward network of M-gates and inverters. A majority gate followed by an inverter is equivalent to a minority gate. Minority gates could be taken as the fundamental building block; a minority gate with one input is an inverter. Such a network of gates is equivalent to a directed acyclic graph (DAG). MIG synthesis relies on the logic initialization, hierarchical synthesis, optimal synthesis, inverter minimization, and post-synthesis algorithms. In various embodiments, MIG synthesis is a flexible algorithm that allows the use of majority and/or minority gates (M-gates), wide-input M-gates, and single and/or multiple fan-in M-gates. The output of MIG synthesis is a MIG, as indicated by block 403. At block 404, the CAD tool applies heuristic pattern matching with a standard cell library. Heuristic pattern matching comprises mapping sections of the MIG to standard cells. The standard cell library comprises gates or higher-level blocks such as n-bit adders, n-bit multipliers, multiplexers, decoders, etc., as indicated by block 405. These standard cell library gates or higher-level blocks are input to the heuristic pattern matching scheme of block 404. The output of the heuristic pattern matching scheme is the synthesized combinational circuit, as indicated by block 406. FIG. 5 illustrates flowchart 500 of a method for heuristic pattern matching (e.g., block 404) with a standard cell library, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs. Flowchart 500 illustrates a method of heuristic pattern matching with a standard cell library. Here, a pattern-matching heuristic is used in the bottom-up approach of FIG. 4 for mapping sections of the MIG to standard cells.
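The matching loop detailed in the next paragraphs can be sketched as the following greedy procedure; `find_matching_subgraph` and `replace_subgraph` are assumed placeholder helpers standing in for the functional-matching machinery, and `size` is whatever size metric the library uses:

def map_to_standard_cells(mig, cell_library,
                          find_matching_subgraph, replace_subgraph):
    # Largest cells first (e.g., by port count), so big blocks such as
    # adders are matched before their smaller constituents.
    for cell in sorted(cell_library, key=lambda c: c.size, reverse=True):
        match = find_matching_subgraph(mig, cell.pattern)      # block 506
        while match is not None:
            replace_subgraph(mig, match, cell)                 # block 508
            match = find_matching_subgraph(mig, cell.pattern)
    return mig  # unmatched sections remain as M-gates and inverters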
Flowchart 500 shows one heuristic for solving the problem: ordering the cells in the cell library in descending order of size (e.g., number of input and output ports), selecting one cell at a time until the library is exhausted, and functionally matching the selected cell to portions of the MIG. Flowchart 500 begins with block 501 (e.g., MIG block 403). Block 501 includes the MIG input and a standard cell library comprising gates and/or higher-level blocks such as n-bit adders, n-bit multipliers, etc. At block 502, the CAD tool orders the standard cells according to size from largest to smallest. A person skilled in the art would appreciate that the order of cells can be flipped. For example, the cells can be ordered from smallest to largest instead. The size may be determined by the total device count and/or total device size per cell. In some embodiments, the size may be determined by the layout footprint of the cell. The ordered list of standard cells is then iteratively processed for a match. This iterative process comprises blocks 503, 504, 505, 506, 507, and 508, which implement a greedy algorithm. At block 503, a determination is made regarding whether the ordered cells of the standard cell library are exhausted. In the beginning of the process, the library is not exhausted, and the process proceeds to block 504, where the current standard cell is assigned the next (e.g., the first) standard cell in the ordered list. One by one, each cell in the list is traversed. At block 505, a current pattern is used as a representation for the current standard cell in the ordered list. The current pattern comprises characteristics of a subgraph of a MIG. The characteristics can be a set of truth tables. From the set of truth tables, the number of inputs, the number of outputs, and the functionality can be easily extracted. The characteristics could also be the Boolean formulas for the outputs, in accordance with some embodiments. At block 506, the CAD tool uses the current pattern to find a matching subgraph in the MIG. A determination regarding the match is made at block 507. If the current pattern matches a subgraph in the MIG, the process proceeds to block 508, where the matching subgraph of the MIG is replaced with the current standard cell. The process then proceeds to block 503 and is repeated until the entire list of ordered cells is exhausted by this matching process. If the current pattern does not match a subgraph in the MIG, the process proceeds to block 503. After the ordered list of cells is exhausted, the final MIG represents the synthesized combinational circuit 509. FIG. 6 illustrates high-level flowchart 600 of sequential logic synthesis (e.g., block 209), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs. Flowchart 600 illustrates a method where the sequential circuit is analyzed and synthesized according to its classification. If a feedback loop is found in a circuit of a logic block to be synthesized, the circuit may be a sequential circuit. Depending on the circuit's response to a clock, the sequential circuit can be edge-triggered, pulse-triggered, or level-triggered. For each type of sequential circuit classification, a particular synthesis process is used.
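In code form, this classification step amounts to a simple dispatch; the sketch below assumes the three per-class synthesis routines of blocks 604, 605, and 607 exist (stubs here) and reduces the clock-response analysis to a pre-computed label:

def synthesize_level_triggered(circuit): return ("level", circuit)   # block 607
def synthesize_pulse_triggered(circuit): return ("pulse", circuit)   # block 605
def synthesize_edge_triggered(circuit): return ("edge", circuit)     # block 604

def synthesize_sequential(circuit, clock_response):
    # clock_response stands in for the analysis of the circuit's
    # response to the clock described above.
    if clock_response == "level":
        return synthesize_level_triggered(circuit)
    if clock_response == "pulse":
        return synthesize_pulse_triggered(circuit)
    return synthesize_edge_triggered(circuit)  # edge-triggered otherwise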
Flowchart 600 can be used in isolation (e.g., independently) or as part of flowchart 200 to optimize a sequential circuit. Flowchart 600 starts with the description of the sequential circuit. The description is provided as inputs 601. The inputs can be in a number of formats including an HDL such as Verilog or VHDL describing either the functionality or the circuit connectivity of the circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, etc. In one example, the inputs for the sequential circuits are specified as a netlist or Boolean expressions. Any tool that can convert an input description (e.g., HDL) into a netlist can be used to produce the input for flowchart 600, in accordance with some embodiments. One reason for using netlists and Boolean expressions is that standard truth tables and graphs of higher-level blocks, respectively, may not capture and reveal the feedback loop in the sequential circuit. At block 602, the CAD tool determines whether the sequential circuit is level-triggered. Examples of level-triggered sequential circuits are latches. The latches can be level-high or level-low latches. If the sequential circuit is level-triggered, the process proceeds to block 607 for level-triggered sequential synthesis. If the circuit is not level-triggered, the process proceeds to block 603 where a determination is made regarding whether the circuit is pulse-triggered. If the sequential circuit is pulse-triggered, the process proceeds to block 605 where pulse-triggered sequential synthesis is performed. Examples of pulse-triggered sequential circuits include back-to-back coupled latches configured as a D flip-flop (D-FF), where each latch is controlled by a different clock (e.g., a clock and an inverse of the clock). If the sequential circuit is not pulse-triggered, it is expected to be edge-triggered. In that case, the CAD tool performs edge-triggered sequential synthesis. Examples of edge-triggered sequential circuits are rising-edge D-FFs and falling-edge D-FFs. The sequential circuits can have scan gadgets for debug or design-for-test (DFT). The output of edge-triggered sequential synthesis 604, pulse-triggered sequential synthesis 605, or level-triggered sequential synthesis 607 is a synthesized sequential circuit 606. FIG. 7 illustrates flowchart 700 of a method of level-triggered sequential logic synthesis (e.g., block 607), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs. Flowchart 700 provides an optimal way of synthesizing level-sensitive sequential components such as latches. Flowchart 700 begins with sequential circuit input 701. The inputs can be in a number of formats including an HDL such as Verilog or VHDL describing either the functionality or the circuit connectivity of the circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function.
The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, etc. In one example, the inputs for the sequential circuits are specified as a netlist or Boolean expressions. Any tool that can convert an input description (e.g., HDL) into a netlist can be used to produce the input for flowchart 700, in accordance with some embodiments. One reason for using netlists and Boolean expressions is that standard truth tables and graphs of higher-level blocks, respectively, may not capture and reveal the feedback loop in the sequential circuit. At block 701a, the CAD tool determines whether the logic defined as part of input 701 is specified as an HDL (e.g., Verilog). If the logic is specified as an HDL, then at block 701b, the CAD tool applies logic synthesis on the HDL to obtain a netlist. Any suitable logic synthesis tool may be used (e.g., commercially available logic synthesis tools may be used). If the logic is not specified as an HDL, then at block 702, the CAD tool determines whether the sequential circuit is described by a netlist. If the sequential circuit is described as a netlist, the process proceeds to block 703. At block 703, for each feedback connection from the output of a cell to the input of a cell, an auxiliary primary input is introduced to represent the previous output state. This makes the circuit a combinational circuit, as indicated by block 705. If the sequential circuit is not given as a netlist, then for each previous state in the Boolean expression, an auxiliary input variable is introduced. One reason for adding the auxiliary input is to convert a directed cyclic graph into a directed acyclic graph (DAG) for MIG synthesis, which assumes the input graph to be a DAG. The auxiliary input is an additional input variable, or loop variable, that represents a previous value of the output. As such, the auxiliary input effectively breaks a loop in the graph, turning it into a combinational circuit as indicated by block 705. At block 706, combinational circuit synthesis is performed on the combinational circuit as described with reference to FIG. 3 and FIG. 4. At block 707, post combinational circuit synthesis is performed. In some embodiments, during post combinational circuit synthesis, the loop variables are replaced by connections from the output(s) to the gates which receive input from the loop variables. For example, feedback wiring, from the corresponding output M-gates to the M-gates receiving input from the auxiliary input variables, is made. The resultant output is a synthesized circuit 708. If the sequential circuit is not described as a netlist, the process proceeds to block 704. At block 704, the CAD tool introduces an auxiliary input variable for each previous state in the Boolean expression. This yields a combinational circuit, as indicated by block 705. FIG. 8 illustrates flowchart 800 of a method of pulse-triggered sequential logic synthesis, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs. Flowchart 800 provides an optimal way of synthesizing pulse-triggered sequential components such as D-FFs.
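The loop-breaking step of blocks 703/704 just described can be illustrated with the following sketch, which assumes a netlist given as a dict of gate output nets to input nets; the auxiliary "prev_" naming is an illustrative convention only:

def break_feedback(netlist, feedback_nets):
    """Replace each feedback net with an auxiliary primary input."""
    loop_vars = {}
    combinational = {}
    for gate, inputs in netlist.items():
        rewired = []
        for net in inputs:
            if net in feedback_nets:
                # Auxiliary input (loop variable) for the previous output state.
                rewired.append(loop_vars.setdefault(net, "prev_" + net))
            else:
                rewired.append(net)
        combinational[gate] = rewired
    # loop_vars records which outputs must be wired back post-synthesis.
    return combinational, loop_vars

# Cross-coupled example: q depends on qbar and vice versa.
print(break_feedback({"q": ["s", "qbar"], "qbar": ["r", "q"]}, {"q", "qbar"}))
# ({'q': ['s', 'prev_qbar'], 'qbar': ['r', 'prev_q']},
#  {'qbar': 'prev_qbar', 'q': 'prev_q'})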
In a pulse-triggered sequential circuit, there are back-to-back latches (e.g., a first latch coupled to a second latch), with data passing into the first latch on a high or low clock level and from the first to the second latch on the corresponding low or high clock level. In a pulse-triggered circuit, the clock signal is inverted for the second latch relative to the first latch. Flowchart 800 begins with sequential circuit input 801. The inputs can be in a number of formats including an HDL such as Verilog or VHDL describing either the functionality or the circuit connectivity of the circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, etc. In one example, the inputs for the sequential circuits are specified as a netlist or Boolean expressions. Any tool that can convert an input description (e.g., HDL) into a netlist can be used to produce the input for flowchart 800, in accordance with some embodiments. One reason for using netlists and Boolean expressions is that standard truth tables and graph(s) of higher-level blocks, respectively, may not capture and reveal the feedback loop in the sequential circuit. For each latch of the back-to-back latches, the CAD tool performs level-triggered sequential synthesis as described with reference to FIG. 7, as indicated by block 802. The output of level-triggered sequential synthesis is a latch, as indicated by block 803. At block 804, the synthesized latch is duplicated (e.g., a copy is made) and connected back-to-back with the synthesized latch (e.g., the first latch) of block 803. Then, an inverted clock, relative to the first latch, is provided to the duplicated latch (e.g., the second latch). The resultant circuit is a synthesized pulse-triggered sequential circuit (e.g., a D-FF), as indicated by block 805. FIG. 9 illustrates flowchart 900 of a method of edge-triggered sequential logic synthesis, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs. Flowchart 900 provides an optimal way of synthesizing edge-triggered sequential components such as rising-edge or falling-edge D flip-flops (D-FFs). In some embodiments, edge-triggering is accounted for by transforming it to level-triggered form. This transformation is done by introducing a new input variable that represents a delayed version of the clock. While some embodiments use an unrolling technique for sequential circuits, it is possible that some of the synthesized results for edge-triggered circuits may contain a race condition, which causes the output of the circuit to be unstable and continue to fluctuate. This happens because of the time dependence in sequential circuits on the previous clock cycle. To handle this problem, some embodiments generate multiple synthesis solutions with given PPAs.
During a post-processing phase, in some embodiments, each of the circuit solutions is simulated for stability and the final result is selected based on correct functionality and according to the best PPA results. Flowchart 900 begins with sequential circuit input 901. The inputs can be in a number of formats including an HDL such as Verilog or VHDL describing either the functionality or the circuit connectivity of the circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, etc. At block 902, the CAD tool adds an auxiliary input variable to represent a delayed clock signal. The delayed clock signal is used to capture the concept of an edge. For example, a clock (clk) and a delayed clock (dclk) are used to capture an edge of the input data. Here, clk and dclk are used to capture the clock edge (a transition from low to high or from high to low). Starting at time t=0, assume clk is low for an interval of T/2, high for an interval of T/2, and then low for another T/2 interval. Assume a delay of τ. Then dclk(t)=clk(t−τ) will be high at t=0 for an interval of τ. It will then be low for an interval of T/2, high for an interval of T/2, and then low for another T/2 interval. Consider the first rising edge, at t=T/2. During the hold time, right after the edge, clk will be high while dclk will be trailing it at a low level. Here, (clk=high, dclk=low) represents a rising edge in the truth table. Consider the next edge, a falling edge, at t=T/2+T/2=T. During the hold time, right after the edge, clk will be low while dclk will be trailing it at a high level. Here, (clk=low, dclk=high) represents a falling edge in the truth table. At block 903, the CAD tool initializes an empty list of synthesized circuits. Here, initializing generally refers to starting an empty list. Synthesized circuits will be added to the empty list later. The circuits correspond to different fan-ins and PPA targets. For example, a circuit can correspond to a fan-in list of [3, 5] and an area or delay requirement. Synthesis is then performed using a maximum fan-in of 3, once with the area requirement and once with the delay requirement. The CAD tool then synthesizes using a maximum fan-in of 5, again with the area requirement and with the delay requirement. This gives at most 4 synthesized circuits. For each of the 4 synthesized circuits, the CAD tool can also keep all the discarded circuits from inverter minimization. This gives a large number of circuits with the same M-gate connections but with majority and minority gates substituted and inverters added or removed. In some embodiments, the M-gate fan-in list is processed until it is exhausted, as indicated by block 904. Each fan-in is selected in turn to be the maximum allowed fan-in for edge-triggered sequential logic synthesis. The list of fan-ins allows the CAD tool to create candidate synthesized circuits, one (or more, if all the discarded candidates of inverter minimization are considered) for each fan-in, since it is not known ahead of time whether the synthesized circuit will be stable. At block 905, the maximum fan-in for synthesis is assigned the value of the current fan-in from the M-gate fan-in list.
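The (clk, dclk) encoding above reduces to a small decoding rule, verified by the following snippet (illustrative only):

def clock_edge(clk, dclk):
    # dclk(t) = clk(t - tau): right after a transition, the delayed copy
    # still carries the old clock value, which identifies the edge.
    if clk == 1 and dclk == 0:
        return "rising"
    if clk == 0 and dclk == 1:
        return "falling"
    return "none"  # clk == dclk: clock is stable high or stable low

assert clock_edge(1, 0) == "rising"
assert clock_edge(0, 1) == "falling"
assert clock_edge(1, 1) == "none"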
At block 906, level-triggered sequential synthesis is applied using the maximum M-gate fan-in. FIG. 7 illustrates a method for level-triggered sequential synthesis. The output of level-triggered sequential synthesis is then processed at block 907, where the delayed clock (e.g., dclk) is wired as a clock via a delay element (e.g., a buffer). The resultant circuit is a synthesized circuit 908. At block 909, the synthesized circuit is added to the list of synthesized circuits, and the process is iteratively performed again with the next fan-in from the M-gate fan-in list, and so on, until the entire list is exhausted, as determined by block 904. Once the list is exhausted, the process proceeds to block 910, where post-processing is done to check for oscillations in each of the M-gates. The post-processing can be done using any suitable circuit simulator such as SPICE or its derivative (e.g., SPICE-like) simulators. One reason for such possible oscillations is that some of the synthesized MIGs for edge-triggered circuits may contain a race condition, which causes the output of the circuit to be unstable and continue to fluctuate. This happens because of the time dependence in sequential circuits on the previous clock cycle. During block 910, the edge-triggered circuit obtained at block 908 is checked for stability, and the final edge-triggered circuit is selected based on correct functionality and according to the best PPA results (or target results). The resultant final edge-triggered circuit is the synthesized edge-triggered circuit, as indicated by block 911. FIGS. 10A-B illustrate flowcharts 1000 and 1030, respectively, of a method of MIG synthesis, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs. Flowcharts 1000 and 1030 comprise the MIG optimization algorithms, including initialization, hierarchical synthesis, optimal synthesis, and other synthesis algorithms built on top of MIGs. The algorithm is flexible and allows the use of majority and/or minority gates (M-gates), wide-input M-gates, and single/multiple fan-in M-gates. Flowcharts 1000 and 1030 form the basis of block 402 of FIG. 4. The MIG synthesis algorithm of flowcharts 1000 and 1030 assumes that the logic circuit can be synthesized using a feed-forward network of M-gates and inverters. As discussed herein, a majority gate followed by an inverter is equivalent to a minority gate. Minority gates could be taken as the fundamental building block; a minority gate with one input is an inverter. Such a network of gates is equivalent to a DAG. Flowchart 1000 begins with a logic circuit input 1001. The input(s) can be in a number of formats including an HDL such as Verilog or VHDL describing either the functionality or the circuit connectivity of the circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, the maximum number of bits for optimal synthesis, K, and/or the maximum number of bits for hierarchical synthesis, H, etc.
At block 1002, the CAD tool computes a maximum M-gate fan-in and ignores fan-out constraints. A user specifies whether they want majority or minority gates as the basic building blocks. The number of inputs to the M-gate (aka the fan-in) is also to be specified. This could be a list of a single fan-in or of multiple fan-ins. Each fan-in is an odd number. The user also needs to specify whether the primary objective is area, energy, or delay minimization. This choice determines which heuristic is used in splitting the graph into subgraphs and the scoring of the current optimized graph, in accordance with various embodiments. At block 1003, the CAD tool performs the process of logic initialization. Logic initialization is described with reference to FIG. 11. In various embodiments, the input circuit from input 1001 is transformed by the logic initialization flow into a form that can be easily optimized using optimal or hierarchical synthesis. At block 1004, a determination is made whether the number of input bits is less than or equal to K. K is a small number, such as one in {4, 5, 6}. In some embodiments, K is less than or equal to 10. K represents the maximum bit width for which an optimal MIG can be found in a reasonable amount of time. If the number of input bits is less than or equal to K, the process proceeds to block 1005, where the logic circuit is synthesized optimally rather than using heuristics. Examples of methods for optimal synthesis are binary integer programming (BIP) or satisfiability (SAT) formulations and their associated solvers. The optimal synthesis output is then processed for inverter minimization at block 1006. The process then proceeds to block 1018, as indicated by transition letter A, which indicates the resultant synthesized MIG. At block 1019, after the MIG optimization, each M-gate is pruned to have the desired fan-in using a gate pruning algorithm. During post-synthesis, the fan-out requirements are honored using the buffering algorithm, which introduces inverters and buffers as needed. The resultant circuit is the synthesized MIG, as indicated by block 1020. In some embodiments, MIG synthesis ignores fan-out requirements during MIG optimization. In some embodiments, fan-out requirements are observed during the post-synthesis flow by a buffering algorithm. If the number of input bits is greater than K, the process proceeds to block 1007. At block 1007, the number of input bits is compared with H. H is a larger number, such as 20 or more. H represents the maximum bit width for which hierarchical synthesis can improve the optimality of the synthesized MIG. If the number of input bits is less than or equal to H, the process proceeds to block 1008, where hierarchical synthesis is performed. For example, when the number of input bits lies in (K, H], hierarchical synthesis is used. After hierarchical synthesis, the resultant circuit is the synthesized MIG, as indicated by block 1018. When the number of input bits exceeds H, multiple independent hierarchical syntheses are performed and the results are glued together, as indicated by block 1009. At block 1009, the graph is topologically split into non-overlapping H-MIG subgraphs, each with H input bits. These non-overlapping H-MIG subgraphs are listed as H-MIGs in a list, as indicated by block 1010. Each H-MIG is then processed until the list of H-MIGs is exhausted, as indicated by decision block 1011.
For that, the current H-MIG in the list is assigned the next H-MIG from the list at block 1012, and hierarchical synthesis is then performed on the current H-MIG, as indicated by block 1013. The output of hierarchical synthesis is a synthesized H-MIG (hierarchical MIG), as indicated by block 1014. The H-MIG is then added to a new graph at block 1015. This new graph is from the hierarchical synthesis flow of FIG. 26, in accordance with some embodiments. The process is then repeated iteratively for each H-MIG in the list of H-MIGs, and the synthesized H-MIGs are added to the new graph. Once all the H-MIGs are exhausted, the process proceeds to block 1017, as indicated by marker B. At block 1017, the CAD tool decides whether the new graph has a better synthesis objective. Note that in block 1009, the CAD tool also creates an empty (new) graph to which each synthesized H-MIG will be added. After processing all the H-MIGs, there should be two graphs: the current graph (either the initialized MIG or the graph from the previous iteration of the outside loop) and the new graph. In some embodiments, the CAD tool compares the two graphs to determine whether to continue improving or to stop. If the new graph has a better synthesis objective, the process proceeds to block 1009, as indicated by marker C. If the new graph does not have a better synthesis objective, then the process has produced the synthesized MIG, as indicated by block 1018. In some embodiments, the current graph and the new graph, as discussed with reference to FIGS. 10A-B, can be compared by extracting the gate count (or area, if the layout footprint of M-gates and inverters is known) or the depth (or delay, if the propagation delay of M-gates and inverters is known) from the graphs. If the new graph has improved PPA, the optimization continues; otherwise it is terminated, since achieving results better than the current graph may not be feasible. FIG. 11 illustrates flowchart 1100 of a method of the logic initialization flow for MIG synthesis (e.g., block 1003 of FIG. 10), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs. Flowchart 1100 provides methods for translating logic circuit inputs such as Verilog or netlists, graphs of higher-level blocks, Boolean expressions, and truth tables into truth tables (when the circuit is small) or a MIG (for larger circuits). In various embodiments, the logic initialization flow is responsible for mapping the different input forms of the logic function to the forms that the actual synthesis steps of the MIG synthesis algorithm can easily work with. For small circuits, the output of the logic initialization algorithm is a truth table, whereas for larger circuits, the output is a MIG. Flowchart 1100 begins with a logic circuit input 1101. The input(s) can be in a number of formats including an HDL such as Verilog or VHDL describing either the functionality or the circuit connectivity of the circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function.
The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, the maximum number of bits for optimal synthesis, K, and/or the maximum number of bits for hierarchical synthesis, H, etc. At block 1102, the CAD tool decides whether the number of input bits is less than or equal to K. As described herein, K is a small number, such as one in {4, 5, 6}. In some embodiments, K is less than or equal to 10. K represents the maximum bit width for which an optimal MIG can be found in a reasonable amount of time. As such, when the number of input bits is less than or equal to K, the process proceeds to block 1103, where it is determined whether the logic circuit, which is input to the logic initialization flow, is specified as a truth table. If the logic circuit is specified as a truth table, the truth table is saved, as indicated by block 1105. If the logic circuit is not specified as a truth table, simulation is performed on the logic circuit at block 1104 and truth table 1105 is derived from the simulation. As mentioned herein, truth tables can be derived for logic circuits with fewer inputs (e.g., less than 10). If the number of inputs is large (e.g., greater than K), then the process proceeds to block 1106. At block 1106, the CAD tool determines whether the logic circuit is specified in Verilog (or any other hardware description language) or as a netlist. If the logic circuit is specified in a hardware description language or as a netlist, the process proceeds to block 1107. At block 1107, the CAD tool determines whether the logic circuit is specified as HDL (e.g., Verilog). This determination is made in order to obtain a netlist if HDL is specified. As such, at block 1108, if it is determined that the logic circuit is specified as HDL, the CAD tool performs standard logic synthesis using any suitable tool, such as open-source or commercial tools, to obtain a netlist. At block 1109, the CAD tool maps the netlist to a MIG using M-gate standard cells to generate the MIG, as indicated by block 1110. As discussed herein, the M-gate standard cells include cells of various fan-in and fan-out for a number of different logic functions (e.g., AND, OR, NAND, etc.). These cells can be ferroelectric-based cells or non-ferroelectric-based cells (e.g., CMOS or other technologies). If it is determined that the logic is not specified in a hardware description language or as a netlist, the process proceeds to block 1111 from block 1106. At block 1111, the CAD tool decides whether the logic is specified as a graph of higher-level blocks. A graph of higher-level blocks is a graph containing connections between blocks that are bigger than a gate (e.g., two or more M-gates); such a graph describes the logic function at a coarser granularity. For example, in an array multiplier, a connection of full adders and half adders constitutes a graph of higher-level blocks. Given that the CAD tool knows the optimal MIG of a full adder and a half adder, the full adder and half adder blocks are replaced with their MIG equivalents and the MIGs are connected following the connections of the full adder and half adder blocks in the array multiplier. If the logic is specified as a graph of higher-level blocks, the process proceeds to block 1111a, where the graph of the blocks is mapped to a MIG using M-gate standard cells and/or functional unit block (FUB) cells (which are higher-level cells).
The resultant circuit is a MIG, as indicated by block 1110. If it is determined that the logic is not specified as a graph of higher-level blocks, the process proceeds from block 1111 to block 1112. At block 1112, the CAD tool decides whether the logic is specified as a truth table. If that is the case, the truth table is identified and saved, as illustrated by block 1113. If the logic is not specified as a truth table, then at block 1114, the CAD tool parses the Boolean expressions and simulates them to generate truth tables. These truth table(s) are saved, as illustrated by block 1113. Once the truth tables are identified, the CAD tool performs wide-input logic (WILK) initialization at block 1115 to generate the MIG. WILK is a heuristic for initializing a majority and/or minority inverter graph (MIG), in accordance with some embodiments. In various embodiments, WILK is a constructive approach that relies on the two-level logic formulation, the commutative and associative (symmetric) properties of disjunction (OR) and conjunction (AND), and the expressiveness of wide-input majority gates for initializing combinational circuits. In some embodiments, WILK uses wide-input M-gates based on the result of a sum-of-products (SOP) minimization algorithm. FIGS. 12A-B illustrate flowcharts 1200 and 1230, respectively, of a method of the wide-input logic initialization (WILK) flow (e.g., block 1115 of FIG. 11), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal-shaped blocks are inputs or outputs. Flowchart 1200 (and flowchart 1230) provides a heuristic for initializing a MIG using wide-input M-gates based on the result of a sum-of-products (SOP) minimization algorithm. Flowchart 1200 begins with a logic circuit input 1201. The input(s) can be in a number of formats including an HDL such as Verilog or VHDL describing either the functionality or the circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The truth tables can have multiple inputs and multiple outputs. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, a preference of majority or minority gates, the maximum number of bits for optimal synthesis, K, and/or the maximum number of bits for hierarchical synthesis, H, etc. At block 1202, the CAD tool performs a method of simplifying Boolean algebra expressions (e.g., a logic function). This can be performed by a Karnaugh map (K-map), the Quine-McCluskey (QMC) algorithm, or the Espresso heuristic. Any logic function can be represented by two levels of logic, as given by the minterm expansion $f(x_1, x_2, \ldots, x_n) = \bigvee_{c_1, c_2, \ldots, c_n} f(c_1, c_2, \ldots, c_n) \wedge x_1^{c_1} \wedge x_2^{c_2} \wedge \cdots \wedge x_n^{c_n}$, where $c_i$ is either 0 or 1. When $c_i$ is 1, $x_i^{c_i} = x_i$ (the input is used in its original form). When $c_i$ is 0, $x_i^{c_i} = \bar{x}_i$ (the input is used in its inverted form). The first level of logic is represented by at most $2^n$ AND gates ($\wedge$), one for each of the $2^n$ possible combinations of 0 and 1 for $c_1, c_2, \ldots, c_n$. The second level of logic is represented by a single OR gate ($\vee$). Each operand of the OR gate is a representation of a row in the truth table for $f(x_1, x_2, \ldots, x_n)$.
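The expansion can be illustrated with a few lines of Python that enumerate the minterms of a function from its truth table (an illustrative sketch only, without any of the minimization performed at block 1202):

from itertools import product

def minterms(f, n):
    """List each product term of f as a tuple of literals x_i or ~x_i."""
    terms = []
    for bits in product((0, 1), repeat=n):
        if f(*bits):  # one product term per truth-table row where f = 1
            terms.append(tuple("x%d" % (i + 1) if c == 1 else "~x%d" % (i + 1)
                               for i, c in enumerate(bits)))
    return terms

# f(x1, x2) = x1 XOR x2 expands to ~x1·x2 + x1·~x2.
print(minterms(lambda a, b: a ^ b, 2))  # [('~x1', 'x2'), ('x1', '~x2')]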
The two-level minterm expansion is a specific example of a sum of products (SOP) representation of logic. The number of literals $x_i^{c_i}$ in each minterm and the number of minterms can be minimized in a process known in the literature as sum-of-products (SOP) minimization. Karnaugh maps (K-maps) can be used for SOP minimization for a small number of input bits n (e.g., n ≤ 5). The Quine-McCluskey (QMC) algorithm can be used for slightly larger n (e.g., 5 < n < 8). For much larger n (e.g., n ≥ 8), heuristics such as the Espresso algorithm can be used for SOP minimization. Here, the techniques for simplifying Boolean expressions for SOP minimization are generally referred to as the K-map algorithm. The output of the K-map algorithm is a sum (OR gate) of product terms (AND gates). SOP can always be implemented using AND gates feeding into an OR gate. Likewise, a product-of-sums (POS) expression leads to OR gates feeding an AND gate. The POS expression gives a complement of the function (if F is the function, its complement is $\bar{F}$). The output of block1202is a list of product terms (minterms) as indicated by block1203. At block1204, the CAD tool tallies literals (e.g., terms x1, x2, . . . , xn) across the list of product terms. Based on the frequency of occurrence of the literals, each product term is ordered. For example, each product term in the list of product terms is ordered in descending order. A person skilled in the art would appreciate that the order can be an ascending order and the algorithm can be modified to reuse the ordered list accordingly. In some embodiments, some other heuristic ordering can be used that brings frequent cohorts of literals into close proximity within each minterm. The output of block1204is a list of ordered product terms as indicated by block1205. Each term in the ordered list of product terms is an AND gate. Due to the commutative and associative (symmetric) properties of OR/AND gates, a large fan-in OR/AND gate can be broken down into a sequence of smaller OR/AND gates. For example, OR(a, b, c, d)=OR(OR(a, b), OR(c, d)) or OR(a, b, c, d)=OR(OR(OR(a, b), c), d). The first breakdown, OR(OR(a, b), OR(c, d)), is logarithmic (depth oriented) whereas the second breakdown, OR(OR(OR(a, b), c), d), is linear (area oriented). A (2N−1)-input majority gate can represent an N-input AND gate, by tying (N−1) of the majority gate's inputs to a ground level. Similarly, a (2N−1)-input majority gate can represent an N-input OR gate, by tying (N−1) of the majority gate's inputs to a supply level. Since a majority gate can represent AND and OR gates, and the inputs to the AND and OR gates are either original or inverted forms of the input digital signals, any logic function can be represented by majority gates and inverters only. As such, wide-input majority gates provide flexibility in simplifying a given logic function. Given the list of input logic functions in the form of truth table(s) with a moderate number of input bits n (e.g., 16 input bits), the maximum fan-in F for the majority gate, and the desired PPA criterion, WILK initialization flow1200applies K-map at block1202for SOP minimization. The output of K-map is a list of product terms as indicated by block1203. To ensure re-use of majority gates during the construction of the initial MIG, WILK initialization flow1200tallies at block1204the literals across all product terms and orders each product term based on the frequency of its literals, from most frequent to least frequent, as indicated by block1205.
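The tally-and-order step of blocks1204and1205can be sketched as follows (a minimal illustration; the helper name order_product_terms and the string encoding of literals are assumptions, not the claimed implementation):

```python
# Count how often each literal appears across all product terms, then sort
# the literals inside each term from most frequent to least frequent so that
# frequent cohorts of literals end up adjacent and majority gates can be reused.
from collections import Counter

def order_product_terms(terms):
    """terms: list of product terms, each a list of literals such as
    'x1' or '~x1' ('~' marks the inverted form). Returns the terms with
    literals reordered by descending global frequency."""
    freq = Counter(lit for term in terms for lit in term)
    return [sorted(term, key=lambda lit: -freq[lit]) for term in terms]

terms = [['x1', 'x2', 'x3'], ['x2', 'x3'], ['~x1', 'x3']]
print(order_product_terms(terms))
# [['x3', 'x2', 'x1'], ['x3', 'x2'], ['x3', '~x1']]
```

Sorting by global literal frequency keeps frequent cohorts of literals adjacent within each term, which is what enables gate reuse in the subsequent majority-gate synthesis.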
This ordering ensures that, for a smaller maximum fan-in, the most frequent sets of literals are grouped together, fostering gate reuse. Thereafter, WILK initialization flow1200synthesizes each ordered product term using a set of majority gates, using the relationship between AND gates and majority gates stated above. At block1206, the CAD tool determines whether the logic is to be optimized (or simplified) for delay minimization, area, or energy. If the logic is to be optimized for delay minimization (e.g., shallow logic depth), then the process proceeds to block1207. When the maximum fan-in F is limited relative to the product term with p literals (when F < 2p − 1), more than one majority gate is needed. To ensure the depth is not adversely affected, the logarithmic breakdown of the product term as shown herein for p=7 and F=5 (which can represent 3-input AND gates) inFIG.13can be used. FIG.13illustrates graph1300for logarithmic breakdown of a product term for use in the wide-input logic initialization flow, in accordance with some embodiments. InFIG.13, one product term with seven literals is broken down into a sequence of majority gates. In this example, it takes a depth of 2, using 3 AND gates, to achieve the function represented by the 7-literal input product term. In this example, each AND gate is implemented as a majority gate that can be reused. Referring back toFIG.12A, if the logic is to be optimized for area or energy, the process proceeds to block1208from block1206. The linear breakdown of the product term illustrated inFIG.14increases the depth by 1. FIG.14illustrates graph1400for linear breakdown of a product term for use in the wide-input logic initialization flow, in accordance with some embodiments. In this example, it takes a depth of 3 levels of majority-based AND gates. In general, the linear breakdown has the advantage of keeping the high-frequency literals closer (within the same majority gate) and using fewer gates. The choice between linear and logarithmic breakdown depends on the tradeoff between area and delay. Referring back toFIGS.12A-B, after synthesizing the ordered product terms, WILK initialization flow1200synthesizes the sums (OR gates) of all the product terms. Again, the product terms across all sum terms are tallied (one sum term per output logic function) at block1209. Subsequently, the list of sum terms is ordered based on the frequency of their constituent product terms. The list of ordered sum terms is the output of block1209as indicated by block1210. In some embodiments, a set of majority gates, using the relationship between OR gates and majority gates stated above, is utilized to represent each sum term. When the maximum fan-in F is limited relative to the sum term with s product terms (when F < 2s − 1), more than one majority gate is needed. At block1211, the CAD tool determines whether the ordered sum terms are to be optimized (or simplified) for delay minimization, area, or energy. If the ordered sum terms are to be optimized for delay minimization (e.g., shallow logic depth), then the process proceeds to block1212. At block1212, the CAD tool uses logarithmic breakdown and majority gate synthesis of sum (OR) terms as illustrated inFIG.15. FIG.15illustrates graph1500for logarithmic breakdown of a sum term for use in the wide-input logic initialization flow, in accordance with some embodiments. Referring back toFIG.12B, if the logic is to be optimized for area or energy, the process proceeds to block1213from block1211, where the linear breakdown is applied. A sketch contrasting the two breakdown styles follows.
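The following minimal sketch (expression trees only; the helper names are illustrative) contrasts the two breakdowns of a wide OR; the same applies to AND:

```python
# Break a wide OR into 2-input ORs two ways. Each 2-input OR here stands for
# a 3-input majority gate with one input tied to the supply level (for AND,
# a majority gate with one input tied to ground).
def linear(inputs):
    """OR(a,b,c,d) = OR(OR(OR(a,b),c),d): depth grows linearly; frequent
    literals stay adjacent in the earliest gate."""
    expr = inputs[0]
    for x in inputs[1:]:
        expr = ('OR', expr, x)
    return expr

def logarithmic(inputs):
    """OR(a,b,c,d) = OR(OR(a,b),OR(c,d)): depth grows as log2 of fan-in."""
    if len(inputs) == 1:
        return inputs[0]
    mid = len(inputs) // 2
    return ('OR', logarithmic(inputs[:mid]), logarithmic(inputs[mid:]))

print(linear(['a', 'b', 'c', 'd']))
# ('OR', ('OR', ('OR', 'a', 'b'), 'c'), 'd')   -> depth 3
print(logarithmic(['a', 'b', 'c', 'd']))
# ('OR', ('OR', 'a', 'b'), ('OR', 'c', 'd'))   -> depth 2
```

For four inputs, the linear form yields depth 3 and the logarithmic form depth 2, which is the area-versus-delay tradeoff described above.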
The linear breakdown and majority gate synthesis of sum (OR) term(s) is illustrated with reference toFIG.16. FIG.16illustrates graph1600for linear breakdown of a sum term for use in the wide-input logic initialization flow, in accordance with some embodiments. The resultant output from blocks1212or1213is a MIG as indicated by block1214. While the WILK initialization flow is illustrated with reference to performing synthesis of product terms first and then the sum terms, the order can be reversed, in accordance with some embodiments. For example, the WILK initialization flow can be accomplished with reference to performing synthesis of a product-of-sums (POS) logic representation. FIG.17illustrates flowchart1700of a method for optimal synthesis flow, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowchart1700provides a mechanism for optimal MIG synthesis of relatively small circuits (e.g., number of inputs is less than or equal to K) using either area-oriented or delay-oriented algorithms depending on the primary synthesis objective. Flowchart1700begins with a set of inputs1701. The input(s) include MIG or truth tables, maximum fan-in, synthesis objective (e.g., PPA objectives), and maximum relative or absolute gate count. At block1702, the CAD tool decides whether to minimize delay. When delay minimization is the stated objective, the process proceeds to block1703where delay-oriented synthesis is performed as discussed with reference toFIG.20. In delay-oriented synthesis, one objective is depth minimization. If area or energy minimization or efficiency is the stated objective, the process proceeds to block1704where area-oriented optimal synthesis is performed as discussed with reference toFIG.18andFIG.19. In area-oriented optimal synthesis, one objective is to reduce the gate count of the logic. The resultant output of the delay-oriented synthesis or area-oriented optimal synthesis is MIG1705. FIGS.18A-Billustrate flowcharts1800and1830for area-oriented optimal synthesis flow (e.g., block1704), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowcharts1800and1830represent the area-oriented optimal synthesis flow used in obtaining area-optimal MIGs. The CAD tool receives inputs1801. The inputs include a given MIG or truth tables. The truth tables describe the outputs of a logic as a function of inputs. The inputs also include a maximum fan-in. At block1802, a determination is made about the description of the logic circuit. To accelerate the search for area-optimal MIGs, the area-oriented optimal synthesis of flowchart1800uses two paths for selecting the initial gate counts. When a MIG is specified, the number of M-gates in it is used as an upper bound on the number of M-gates for an area-optimal MIG. When a truth table is specified, a lower bound on the number of M-gates is obtained by the gate count initialization algorithm ofFIG.19.
As such, at block1803, the CAD tool initializes the gate count value using the gate count initializer flow fromFIG.19. The gate count initialization algorithm ofFIG.19takes advantage of the fact that when there are multiple outputs and they are not correlated with each other, with input variables, or with constant inputs, the search for an area-optimal MIG can be accelerated by skipping small gate counts and initializing the number of gates to a non-unit minimum value. At block1803, the logic depth is set to a large number (e.g., 100 or 1000 or more) so that it is not a binding constraint on synthesis. At block1804, the CAD tool creates a binary integer program (BIP) or satisfiability (SAT) problem and solves that problem using a solver. The purpose of finding a solution is to find the minimum number of gates (e.g., AND gates, OR gates, M-gates, etc.) needed for a solution that obeys or complies with the truth tables. In various embodiments, inverters or buffers are not counted as gates because they are too small compared to AND gates, OR gates, and M-gates. At a later stage in the process, inverter minimization is performed to optimize (e.g., reduce) the number of inverters while meeting timing constraints and logic function. At block1805, the CAD tool decides whether the problem (of obtaining the truth table function for the logic circuit) is feasible or satisfiable with the current gate count lower bound. If it is not feasible or satisfiable, the gate count is incremented (e.g., the gate count bound is increased by one or more) at block1806and the process of establishing the BIP or SAT problem and its solution is performed again. This process continues until the CAD tool determines that the problem is feasible or satisfiable with the new gate count. When the problem is feasible or satisfiable with the new gate count, the process proceeds to block1807. At block1807, the solution found by the solver is considered as the best solution that provides the least gate count to meet the function of the truth tables. The process then proceeds to get the best depth solution. Here, the best depth solution refers to the fastest delay possible from input to output. The process then proceeds to block1808as indicated by identifier E. Blocks1808,1809,1810,1811, and1812determine the best circuit topology in view of area and logic depth. At block1808, the initial depth value is decremented. This initial depth value may be a small number such as 10. In some embodiments, this initial depth value comes from the best solution with the area objective. From the best area solution, the CAD tool extracts the circuit depth from the interconnection of M-gates specified by the BIP or SAT solution. At block1809, the BIP or SAT problem is set up and solved using a solver. Any suitable solver can be used to solve the BIP or SAT problem. At block1810, the CAD tool determines whether the problem is feasible (e.g., solvable) or satisfiable using the decremented depth. The purpose of finding a solution is to find the minimum logic depth needed for a solution that obeys or complies with the truth tables for the optimized area. If a solution is found, then it means that a better solution may be possible. For example, the depth can be further decreased beyond its current limit. As such, at block1811, the current feasible or satisfiable solution is considered as the best solution for depth optimization, and then the depth count is decremented again to see if a better solution for depth is possible. A schematic sketch of this search pattern, gate count up to feasibility and then depth down to infeasibility, follows.
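The following schematic Python sketch (not the claimed flow itself) summarizes the two phases of blocks1804through1812; is_feasible is a toy placeholder standing in for building and solving the BIP or SAT problem:

```python
# Phase 1 increments the gate count from a lower bound until the problem
# becomes feasible; phase 2 then decrements the depth until it becomes
# infeasible. The depth is initially set very high so it does not bind.
LARGE_DEPTH = 100

def is_feasible(gate_count, depth):
    # Toy stand-in: pretend the target circuit needs at least 4 gates and,
    # at 4 gates, a depth of at least 3.
    return gate_count >= 4 and depth >= 3

def optimal_area_then_depth(gc_lower_bound):
    gc = gc_lower_bound
    while not is_feasible(gc, LARGE_DEPTH):   # phase 1: minimum gate count
        gc += 1
    depth = LARGE_DEPTH
    while is_feasible(gc, depth - 1):          # phase 2: minimum depth
        depth -= 1
    return gc, depth

print(optimal_area_then_depth(1))  # (4, 3) for the toy predicate
```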
This process repeats until the CAD tool determines that the problem is not feasible or satisfiable. In that case, the process proceeds to block1812where the current solution is marked as the best solution and MIG1813is formed using the optimized area and the updated lower depth. If at block1802the CAD tool determines that the logic is not specified as truth tables (e.g., the CAD tool input is a MIG), then the process proceeds to block1814. At block1814, the input MIG is translated to a feasible BIP solution or a satisfiable (SAT) solution. The feasible BIP or SAT solution is assigned as the best solution. In some embodiments, the initial gate count can be obtained directly from the MIG (number of M-gates in the graph) or extracted from the best feasible/SAT solution. In some embodiments, this initial gate count is obtained directly from the MIG. As in block1803, the depth is set to a large number (e.g., 100) so that it does not become a bottleneck to finding an optimized area (e.g., a reduced number of gates). At block1815, the current gate count is decremented (e.g., by one or more). The decremented amount herein can be fixed or programmable. In some embodiments, the search for the optimal gate count and later the circuit depth in the current flowcharts is a linear search with a step size of 1. Other search mechanisms such as bisection search may be used, where there are two extremes (e.g., low and high gate counts or circuit depths with opposite feasibility/satisfiability) which surround the optimum, and the interval between the two extremes is shrunk by a factor of 2 after each search iteration until only the optimum remains (interval size is 0). A linear search with step sizes greater than 1 needs a backtracking mechanism for when the optimum is overshot. For example, if the best gate count is 2 and the current gate count is 1 and the CAD tool steps by 2, then the CAD tool ends up at a gate count of 3, which will be feasible/SAT. Note that a gate count of 1 is not feasible/SAT. For the feasibility or satisfiability study between gate counts 1 and 3, since the feasibility/satisfiability of the problem at gate counts of 1 and 3 differ, the CAD tool tests the problem feasibility or satisfiability at a gate count of 2, in accordance with some embodiments. In some embodiments, a flowchart for linear search with step size > 1 or bisection search will be more cumbersome than the linear search with step size = 1. At block1816, the BIP or SAT problem is then solved using a problem solver. The problem is to find a circuit that functions according to the logic of the MIG with a reduced gate count. At block1817, the CAD tool determines whether the problem is feasible or satisfiable with the reduced gate count. If it is, this means that there may be more room for reducing the number of gates. At block1818, the current solution is assigned as the best solution, the gate count is then decremented at block1820, and the process of setting up the problem and finding the solution is repeated. This process is repeated until the solver can no longer find a feasible or satisfiable solution given the gate count. At that point, the minimum gate count is achieved. Thereafter, the process continues with finding the best depth for the logic (e.g., the lowest or shallowest depth possible given the reduced gate count). This process begins at block1819and follows blocks1808,1809,1810,1811, and1812as previously discussed.
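The bisection alternative mentioned above can be sketched as follows, assuming feasibility is monotone in the gate count (infeasible below the optimum, feasible at and above it):

```python
# Halve the interval between an infeasible lower extreme and a feasible
# upper extreme until it collapses on the smallest feasible value.
def bisect_min_feasible(lo, hi, feasible):
    """lo is infeasible, hi is feasible; returns the smallest feasible value."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Toy predicate: circuits with at least 5 gates are feasible.
print(bisect_min_feasible(1, 16, lambda gc: gc >= 5))  # 5
```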
FIG.19illustrates flowchart1900for gate count initialization for area-oriented optimal synthesis flow (e.g., block1803), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. When the logic functions are truth tables, the CAD tool takes advantage of the fact that when there are multiple outputs and they are not correlated with each other, with input variables, or with constant inputs, the search for an initial gate count can be accelerated by skipping small gate counts (e.g., 1, 2, etc.) and initializing the number of gates to a non-unit minimum value (e.g., 5). Flowchart1900begins with input truth tables1901. These truth tables define logic outputs as functions of logic inputs. An example of a truth table is illustrated in the table below for an adder, a minority function, and an inverted input. Here, the inputs are X1, X2, X3, while the outputs are X1_b (which is the inverse of X1), Min(X1, X2, X3), FACarry, and FASum.

TABLE 1
X1  X2  X3  X1_b  Min(X1, X2, X3)  FACarry  FASum
1   1   1   0     0                1        1
0   1   1   1     0                1        0
1   0   1   0     0                1        0
0   0   1   1     1                0        1
1   1   0   0     0                1        0
0   1   0   1     1                0        1
1   0   0   0     1                0        1
0   0   0   1     1                0        0

At block1902, the CAD tool determines whether the number of outputs represented by the truth tables is equal to 1. If there is only one output, then the gate count is initialized to 1 as indicated by block1903. If the number of outputs represented by the truth tables is greater than 1, then the process proceeds to block1904to determine a gate count value that can be used as a starting point for optimizing area. At block1904, the CAD tool initializes an empty list of uncorrelated truth table outputs (LUTT). This list is populated by reviewing the truth tables (e.g., outputs of the truth tables). At block1905, the CAD tool determines whether the list of outputs of the truth tables is exhausted. This process is done to iteratively pass through each output of the truth tables and determine whether the output can be added to the list of uncorrelated truth table outputs (LUTT). The outputs of the truth tables can be out1, out2, out3, and so on, for a number of inputs in1, in2, etc. At block1906, the first output (e.g., out1) is made the current output, and it is then checked at block1907whether the current output of the truth table or its inverted form is a constant, one of the inputs, or in LUTT. This process is done for each output of the truth table. If the current output of the truth table or its inverted form is a constant, one of the inputs, or in LUTT, the process proceeds to block1905, where the next output is made the current output and the check is made again. When the current output of the truth table or its inverted form is not a constant, is not one of the inputs, and is not in LUTT, then a new unique, non-constant, and non-input truth table output is identified, which is added to the LUTT at block1908. When the entire list of outputs of the truth tables is exhausted and checked for LUTT, the process identifies the gate count, which is the length of LUTT, as indicated by block1909.
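The gate count initializer of flowchart1900can be sketched as follows, using the columns of Table 1 (the helper name init_gate_count is illustrative, not the claimed implementation):

```python
# An output is skipped when it, or its inverse, is constant, equals an input
# column, or is already in the LUTT; the initial gate count is the LUTT length.
def init_gate_count(inputs, outputs):
    if len(outputs) == 1:
        return 1
    lutt = []
    n_rows = len(inputs[0])
    skip = [tuple(c) for c in inputs] + [(0,) * n_rows, (1,) * n_rows]
    for col in outputs:
        col = tuple(col)
        inv = tuple(1 - b for b in col)
        if col in skip or inv in skip or col in lutt or inv in lutt:
            continue
        lutt.append(col)
    return len(lutt)

# Columns of Table 1 (rows ordered X1X2X3 = 111, 011, 101, 001, 110, 010, 100, 000).
X1, X2, X3 = [1,0,1,0,1,0,1,0], [1,1,0,0,1,1,0,0], [1,1,1,1,0,0,0,0]
X1_b    = [0,1,0,1,0,1,0,1]  # inverse of X1: skipped
Min     = [0,0,0,1,0,1,1,1]  # added to LUTT
FACarry = [1,1,1,0,1,0,0,0]  # inverse of Min: skipped
FASum   = [1,0,0,1,0,1,1,0]  # added to LUTT
print(init_gate_count([X1, X2, X3], [X1_b, Min, FACarry, FASum]))  # 2
```

X1_b is skipped because it is the inverse of an input, and FACarry is skipped because it is the inverse of Min(X1, X2, X3), which is already in the LUTT, so the initial gate count for this example is 2 rather than 1.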
FIGS.20A-Billustrate flowcharts2000and2030for delay-oriented optimal synthesis flow (e.g., block1703), in accordance with some embodiments of the disclosure. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. The delay-oriented optimal synthesis flowcharts2000and2030kick-start the search for a delay-optimal MIG by using the area-oriented optimal synthesis of flowchart1800to establish the minimum number of M-gates and the maximum logic depth, and by bounding the search for the delay-optimal MIG. The flowchart begins with inputs2001, which include a given MIG or truth tables, a given maximum fan-in, and a given maximum relative or absolute gate count. At block2002, area-oriented optimal synthesis is performed to get a minimum count of gates for a logic function. Block2002performs the flowcharts ofFIG.18A,FIG.18B, andFIG.19to arrive at a best area-oriented MIG2003. Block2003gives the minimum bound on gate count and the upper bound on depth (obtained from the graph). At block2004, the CAD tool translates the area-oriented MIG to a feasible or satisfiable (SAT) solution. This solution is assigned as the best solution until the next best solution is determined. At block2004, the CAD tool extracts the initial gate count and depth from the best solution, and also computes the maximum absolute gate count GCmax. In some embodiments, the maximum absolute gate count GCmax is a finite maximum. The iterative process to find the lowest obtainable depth then starts at block2005, where the current depth number from the graph is decremented. The amount decremented may be fixed or programmable. FIGS.20A-Bshow the case of using a linear search with unit steps. In some embodiments, the search for the optimal depth can be accomplished by using a linear search with programmable non-unit steps combined with backtracking in case of overshooting the optimal depth, or by bisection search. At block2006, the CAD tool sets up the BIP or SAT problem and solves it (using any suitable solver) to arrive at a possible solution that satisfies the logic function for the given depth limit. If the CAD tool determines at block2007that the problem is feasible or satisfiable, then at block2008the solution is considered as the best solution and the depth is decremented to see if further delay minimization can be achieved. After an iterative process, the CAD tool will determine that the problem is not solvable because the solution is not feasible or satisfiable. In that case, the process proceeds to block2009as indicated by identifier F. Here, the gate count is incremented. The idea is that after obtaining the minimum depth for an optimized area, the gate count is increased and the depth analysis is redone to find an optimal depth, thereby trading off gate count for depth. A strictly minimum depth goal may result in a very wide logic, which may not be feasible to implement. As such, in some embodiments, the area and depth optimization are done iteratively to find the optimal depth for the logic circuit within a fixed area budget. At block2010, the CAD tool determines whether the gate count is less than or equal to GCmax. If it is, then at block2011, the CAD tool creates a BIP or SAT problem and solves it. At block2012, the CAD tool determines if the problem is feasible to solve or satisfiable. If not, then the gate count is incremented again and the process is repeated. If the problem is feasible to solve or satisfiable and the gate count is still below GCmax, then the solution is considered the best solution as indicated by block2013and the process of depth decrementing starts again as indicated by identifier G.
At block2013, the CAD tool assigns the feasible or satisfiable solution as the best solution. Here, GCmax is the fixed area budget. There is an inherent tradeoff between depth and gate count. Decreasing the depth usually increases the gate count and vice versa. GCmax serves as the overall stopping condition, so that the gate count (and area) does not grow ad infinitum. If the area budget has not been reached, the CAD tool can continue trying to decrease the depth. If at block2010it is determined that the gate count is greater than GCmax, then the best solution is used to generate the MIG at block2014. The final outcome is MIG2015, which is delay optimized (with a shallower depth) in view of the fixed area budget. FIG.21illustrates flowchart2100for synthesis problem formulation as a binary integer program (BIP), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. In some embodiments, the CAD tool extends a mixed integer program to a binary integer program by restricting the weights of a threshold gate to the set {−1, 0, 1} to reduce them to symmetric gates (of which majority and minority gates are a subset). In some embodiments, the CAD tool introduces a new set of constraints for minority gates and a new set of constraints for delay optimization. In some embodiments, the CAD tool allows the use of single or multiple fan-in M-gates. Synthesis problem formulation as a binary integer program (BIP) is a process in the area-oriented and delay-oriented optimal synthesis flows described herein. In some embodiments, the CAD tool receives inputs2101including the gate count and depth of the desired MIG, and the maximum fan-in. At block2102, the CAD tool formulates the BIP (binary integer program) problem using objective B-1 and constraints B-2 to B-11. At block2103, the CAD tool solves the BIP problem using a solver (e.g., open source or commercial solvers such as GUROBI, CPLEX, SCIP, etc.). The output of the solver is the BIP solution2104. Given $n$ binary input variables $x_1, x_2, \ldots, x_n$ and $M$ binary output logic functions $y_1, y_2, \ldots, y_M$, let $x_0$ be the constant representing the low binary state. Let there be $r$ M-gates laid out as a feed-forward network with a depth of $d$. Let $w_{ik}$ represent the weight of a connection from the $i$-th input variable to the $k$-th gate; $\alpha_{lk}$, the weight of a connection from the $l$-th gate to the $k$-th gate; $\pi_{im}$, the weight of the connection from the $i$-th input to the $m$-th output logic function; and $\phi_{km}$, the weight of a connection from the $k$-th gate to the $m$-th output logic function. If $w_{ik}=1$, $\alpha_{lk}=1$, $\pi_{im}=1$, $\phi_{km}=1$, a positive connection exists. If $w_{ik}=-1$, $\alpha_{lk}=-1$, $\pi_{im}=-1$, $\phi_{km}=-1$, an inverted connection exists (the input signal is inverted before connecting to the gate/output). If $w_{ik}=0$, $\alpha_{lk}=0$, $\pi_{im}=0$, $\phi_{km}=0$, no connection exists. Given $n$ binary input variables, there are a total of $2^n$ possible input configurations, corresponding to the rows of a truth table. When there are don't care (x) conditions, the number of truth table rows is less than $2^n$. Let $x_i^{(j)}$ represent the $j$-th truth table entry of the $i$-th input variable; $P_k^{(j)}$, the $j$-th truth table entry of the output from the $k$-th gate; and $y_m^{(j)}$, the $j$-th truth table entry of the $m$-th output logic function. Let $T_k$ represent the threshold of the $k$-th M-gate.
Introduce the binary variables $w_{ik}^{+}$, $w_{ik}^{-}$, $\alpha_{lk}^{+}$, $\alpha_{lk}^{-}$, $\pi_{im}^{+}$, $\pi_{im}^{-}$, $\phi_{km}^{+}$, $\phi_{km}^{-}$, $\beta_{lk}^{(j)+}$, $\beta_{lk}^{(j)-}$ such that $w_{ik}=w_{ik}^{+}-w_{ik}^{-}$, $\alpha_{lk}=\alpha_{lk}^{+}-\alpha_{lk}^{-}$, $\pi_{im}=\pi_{im}^{+}-\pi_{im}^{-}$, $\phi_{km}=\phi_{km}^{+}-\phi_{km}^{-}$, and $\beta_{lk}^{(j)}=\beta_{lk}^{(j)+}-\beta_{lk}^{(j)-}$. Let binary variable $\mu_{ik}$ represent the presence of a connection from the $i$-th input variable to the $k$-th gate; $\nu_{lk}$, the presence of a connection from the $l$-th gate to the $k$-th gate; $\chi_{im}$, the presence of a connection from the $i$-th input to the $m$-th output logic function; and $\psi_{km}$, the presence of a connection from the $k$-th gate to the $m$-th output logic function. Let $U$ be a large enough constant. Let $d_k$ be the depth of the $k$-th gate, where $d_1=1$. Assume $D$ is an upper bound on the depth of the circuit. Let $b_{lk}$ be an auxiliary binary variable that indicates an $l$-th gate that is one hop away from the $k$-th gate (that is, which gate achieves the maximum). Assume $b_k$ is an auxiliary binary variable that indicates a terminal gate on the critical path. The integer variables $d_k$ are encoded into binary variables, turning the integer linear program into a binary integer linear program. The binary integer program is given as the minimization of the objective

\sum_{k=1}^{r}\sum_{i=0}^{n}\mu_{ik}+\sum_{k=2}^{r}\sum_{l=1}^{k-1}\nu_{lk}   (B-1)

subject to the following constraints:

\sum_{i=0}^{n}w_{ik}x_i^{(j)}+\sum_{l=1}^{k-1}\alpha_{lk}P_l^{(j)}-T_k \ge P_k^{(j)}U-U   (B-2a)
-\sum_{i=0}^{n}w_{ik}x_i^{(j)}-\sum_{l=1}^{k-1}\alpha_{lk}P_l^{(j)}+T_k-1 \ge -P_k^{(j)}U   (B-2b)

for each $j=1,2,\ldots,2^n$ and $k=1,2,\ldots,r$, for majority gates, or

\sum_{i=0}^{n}w_{ik}x_i^{(j)}+\sum_{l=1}^{k-1}\alpha_{lk}P_l^{(j)}-T_k \ge -P_k^{(j)}U   (B-2c)
-\sum_{i=0}^{n}w_{ik}x_i^{(j)}-\sum_{l=1}^{k-1}\alpha_{lk}P_l^{(j)}+T_k-1 \ge P_k^{(j)}U-U   (B-2d)

for each $j$ and $k$, for minority gates;

P_l^{(j)}+\alpha_{lk}^{+}-2\beta_{lk}^{(j)+} \ge 0   (B-3a)
P_l^{(j)}+\alpha_{lk}^{-}-2\beta_{lk}^{(j)-} \ge 0   (B-3b)
P_l^{(j)}+\alpha_{lk}^{+}-\beta_{lk}^{(j)+} \le 1   (B-3c)
P_l^{(j)}+\alpha_{lk}^{-}-\beta_{lk}^{(j)-} \le 1   (B-3d)

for $j=1,2,\ldots,2^n$; $k=1,2,\ldots,r$; $l=1,2,\ldots,k-1$;

w_{ik}^{+}+w_{ik}^{-} \le \mu_{ik},  i=0,1,\ldots,n,  k=1,2,\ldots,r   (B-4a)
\alpha_{lk}^{+}+\alpha_{lk}^{-} \le \nu_{lk},  k=1,2,\ldots,r,  l=1,2,\ldots,k-1   (B-4b)
\pi_{im}^{+}+\pi_{im}^{-} \le \chi_{im},  i=0,1,\ldots,n,  m=1,2,\ldots,M   (B-4c)
\phi_{km}^{+}+\phi_{km}^{-} \le \psi_{km},  k=1,2,\ldots,r,  m=1,2,\ldots,M   (B-4d)

\sum_{i=0}^{n}\chi_{im}+\sum_{k=1}^{r}\psi_{km}=1,  m=1,2,\ldots,M   (B-5)

y_m^{(j)} \le x_i^{(j)}+(1-\chi_{im})+(1-\pi_{im}^{+})   (B-6a)
x_i^{(j)} \le y_m^{(j)}+(1-\chi_{im})+(1-\pi_{im}^{+})   (B-6b)
y_m^{(j)} \le (1-x_i^{(j)})+(1-\chi_{im})+\pi_{im}^{+}   (B-6c)
(1-x_i^{(j)}) \le y_m^{(j)}+(1-\chi_{im})+\pi_{im}^{+}   (B-6d)

for $j=1,2,\ldots,2^n$; $i=0,1,\ldots,n$; $m=1,2,\ldots,M$;

y_m^{(j)} \le P_k^{(j)}+(1-\psi_{km})+(1-\phi_{km}^{+})   (B-6e)
P_k^{(j)} \le y_m^{(j)}+(1-\psi_{km})+(1-\phi_{km}^{+})   (B-6f)
y_m^{(j)} \le (1-P_k^{(j)})+(1-\psi_{km})+\phi_{km}^{+}   (B-6g)
(1-P_k^{(j)}) \le y_m^{(j)}+(1-\psi_{km})+\phi_{km}^{+}   (B-6h)

for $j=1,2,\ldots,2^n$; $k=1,2,\ldots,r$; $m=1,2,\ldots,M$;

\sum_{i=0}^{n}\mu_{ik}+\sum_{l=1}^{k-1}\nu_{lk} \ge I,  k=1,2,\ldots,r   (B-7)
\sum_{k=l+1}^{r}\nu_{lk} \le F,  l=1,2,\ldots,r-1   (B-8)
T_k=0.5\left(\sum_{i=0}^{n}\mu_{ik}+\sum_{l=1}^{k-1}\nu_{lk}+1\right),  k=1,2,\ldots,r   (B-9)
d_k \ge d_l+1-D(1-\nu_{lk}),  k=1,2,\ldots,r,  l=1,2,\ldots,k-1   (B-10a)
d_k \le d_l+1+D(1-\nu_{lk})+D(1-b_{lk}),  k=1,2,\ldots,r,  l=1,2,\ldots,k-1   (B-10b)
\sum_{l=1}^{k-1}b_{lk} \le 1+(k-2)(1-\nu_{lk}),  k=1,2,\ldots,r   (B-10c)
1 \le \sum_{l=1}^{k-1}b_{lk}+(1-\nu_{lk}),  k=1,2,\ldots,r   (B-10d)
d \ge d_k,  k=1,2,\ldots,r   (B-11a)
d \le d_k+D(1-b_k),  k=1,2,\ldots,r   (B-11b)
\sum_{k=1}^{r}b_k=1   (B-11c)

While area minimization involves incrementing the number of gates $r$ until all constraints are satisfied, depth minimization involves incrementing the circuit delay $d$ until all constraints are satisfied. This may require a tradeoff of increasing $r$ beyond the minimum gate count.
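As a small numeric check of the encoding (illustrative only, with all weights fixed to +1), the following verifies that, with the threshold from B-9, constraints B-2a and B-2b admit exactly the majority value of the inputs for a 3-input gate:

```python
# For a fan-in of 3, B-9 gives T = 0.5 * (3 + 1) = 2. With a big-M constant U,
# B-2a reduces for P = 1 to sum >= T, and B-2b reduces for P = 0 to
# sum <= T - 1, which is precisely the majority condition.
from itertools import product

U, T = 10, 2  # big-M constant and threshold for fan-in 3 (B-9)

def satisfies_b2(x, P):
    S = sum(x)  # sum of w_ik * x_i with all weights +1
    return (S - T >= P * U - U) and (-S + T - 1 >= -P * U)

for x in product((0, 1), repeat=3):
    maj = int(sum(x) >= 2)
    assert satisfies_b2(x, maj)          # the correct output is allowed
    assert not satisfies_b2(x, 1 - maj)  # the wrong output is excluded
```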
FIG.22illustrates flowchart2200for synthesis problem formulation as a Boolean satisfiability (SAT) problem, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. SAT formulation is an alternative step to BIP formulation in the area-oriented and delay-oriented optimal synthesis flows described herein. In some embodiments, the SAT formulation extends the traditional SAT problem by allowing the use of minority gates instead of only majority gates, and by allowing the use of wide-input and single/multiple fan-in M-gates. At block2201, the CAD tool receives inputs2201, which include the gate count and depth of the desired MIG, and the maximum fan-in. It then uses these inputs at block2202to formulate a SAT (satisfiability) problem using the formulation of equations S-1 through S-10. At block2203, the SAT problem is solved using open-source or commercial solvers such as the Z3 solver. The output of the solver is the SAT solution (a variable assignment) or UNSAT, as indicated by block2204. Majority/Minority Inverter Graph synthesis can be formulated as a Boolean satisfiability problem with constraints reflective of the PPA requirements. According to the literature, an MIG of size $r$ over input variables $x_1, x_2, \ldots, x_n$ is a sequence $(x_{n+1}, x_{n+2}, \ldots, x_{n+r})$ of gates that combine previous gates using the majority function

x_i = \langle a_1, a_2, a_3 \rangle   (S-1)

for $n<i\le n+r$, where the three inputs to the gate are defined as

a_1 = x_{s_1^i}^{p_1^i}, \quad a_2 = x_{s_2^i}^{p_2^i}, \quad a_3 = x_{s_3^i}^{p_3^i},   (S-2)

where $0\le s_1^i<s_2^i<s_3^i<i$ are indexes pointing to the operands, and $0\le p_1^i,p_2^i,p_3^i\le 1$ with

p_1^i + p_2^i + p_3^i \ge 2   (S-3)

are the operands' polarities. The operands are ordered by their index, and at most one of the operands is complemented ($p_j^i=0$). To represent Boolean functions with fan-in less than three, e.g., AND and OR gates, the zero variable $x_0=0$ is defined. The output logic functions $f_1, f_2, \ldots, f_M$ constrain the output of the gates through $f_i=x_{s_i}^{p_i}$ for $1\le i\le M$, where $0\le s_i\le n+r$ indicates which input variable or gate realizes the $i$-th output function and $0\le p_i\le 1$ is the output polarity. The depth of the $i$-th gate is specified as

l_i = \max\{l_{s_1^i}, l_{s_2^i}, l_{s_3^i}\} + 1   (S-4)

for $n<i\le n+r$, where the depth of the input variables is set to 0 ($l_i=0$ for $i\le n$). Our formulation extends the literature in two ways. First, we enable the use of either majority or minority gates by representing $\langle\cdot\rangle$ as either the majority or the minority voting function. The minority voting function is the negation of the majority voting function. Second, and more importantly, we allow wide-input gates with any odd number of inputs greater than or equal to three. To enable wide-input gates, we define the $n$-input majority function $\langle\cdot\rangle_n$ either as the conjunction (AND) of the disjunctions (OR) of all $\binom{n}{(n+1)/2}$, $(n+1)/2$-sized combinations of the $n$ inputs

\langle a_1, a_2, \ldots, a_n \rangle_n = \bigwedge \left( a_{s_1} \vee a_{s_2} \vee \ldots \vee a_{s_{(n+1)/2}} \right)   (S-5)

or as the disjunction (OR) of the conjunctions (AND) of all $\binom{n}{(n+1)/2}$, $(n+1)/2$-sized combinations of the $n$ inputs

\langle a_1, a_2, \ldots, a_n \rangle_n = \bigvee \left( a_{s_1} \wedge a_{s_2} \wedge \ldots \wedge a_{s_{(n+1)/2}} \right)   (S-6)

where $(s_1, s_2, \ldots, s_{(n+1)/2})$ specifies the indexes of the size-$(n+1)/2$ subset of the input variables. While it is valid to allow each input to be complemented, this would lead to excess inverters in the graph.
Because the outputs from the gates can be complemented, not all the inputs should be complementable. To ensure that at most $(n-1)/2$ operands can be complemented, we constrain the input polarities to the $i$-th gate by the following Boolean expression:

\langle p_1^i, p_2^i, \ldots, p_n^i \rangle_n,   (S-7)

where $0\le p_1^i, p_2^i, \ldots, p_n^i\le 1$. To allow an M-gate to represent AND or OR gates of at least two inputs, only three operands are strictly ordered via $s_j^i$, where $j=1,2,\ldots,n$ is the input port of the M-gate and $i=n+1,n+2,\ldots,n+r$ is the gate number. The ordering of the input ports, with the last three strictly ordered, as in

0 \le s_1^i \le s_2^i \le \ldots \le s_{n-2}^i < s_{n-1}^i < s_n^i < i,   (S-8)

ensures the flexibility of the M-gate while reducing the redundancy of the representation. The depth of the $i$-th gate is now specified as

l_i = \max\{l_{s_1^i}, l_{s_2^i}, \ldots, l_{s_n^i}\} + 1   (S-9)

for $n<i\le n+r$. The depth of the MIG is the maximum level over all outputs, given as $\max_{f_i=x_{s_i}^{p_i}}\{l_{s_i}\}$, and must satisfy the depth constraint

\max_{f_i=x_{s_i}^{p_i}}\{l_{s_i}\} \le d,   (S-10)

where $d$ is the desired depth of the MIG.
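The two wide-input majority definitions in equations S-5 and S-6 can be checked directly against the counting definition of majority; the following minimal sketch (illustrative helper names) verifies both for 3- and 5-input gates:

```python
# S-6 is the OR of ANDs and S-5 the AND of ORs over all (n+1)/2-sized subsets
# of the inputs; both agree with "more than half the inputs are 1".
from itertools import combinations, product

def maj_count(bits):
    return int(sum(bits) > len(bits) // 2)

def maj_or_of_ands(bits):          # S-6: disjunction of conjunctions
    k = (len(bits) + 1) // 2
    return int(any(all(s) for s in combinations(bits, k)))

def maj_and_of_ors(bits):          # S-5: conjunction of disjunctions
    k = (len(bits) + 1) // 2
    return int(all(any(s) for s in combinations(bits, k)))

for n in (3, 5):
    for bits in product((0, 1), repeat=n):
        assert maj_count(bits) == maj_or_of_ands(bits) == maj_and_of_ors(bits)
```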
FIG.23illustrates flowchart2300for inverter minimization flow (e.g.,1006and1008), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Inverter minimization flow is performed following optimal synthesis1005in MIG synthesis flow1000. Inverter minimization flow is the second stage of logic optimization, focusing on the inverters. In some embodiments, inverter minimization flow assumes inverters are less expensive than M-gates and as such does not introduce new M-gates (beyond switching between majority and minority gates) or alter the connections between M-gates (beyond eliminating or introducing an inverter along a connection). Moreover, although the inverter minimization flow is exhaustive, it is efficient, since it is performed in a depth-wise manner on small circuits. At block2301, the CAD tool receives the input. The input here comprises a MIG and a synthesis objective (e.g., area, energy, or delay minimization). With these inputs, the CAD tool at block2302gets a list of M-gates for each depth in descending order {depth_max, . . . , 2, 1}. As such, a list of M-gates per graph depth is collected as indicated by block2303. In some embodiments, the order for the list of M-gates for each depth is in ascending order. At block2304, the CAD tool begins an iterative process to find whether the number of inverters can be reduced in the whole circuit along its critical path. In this process, the CAD tool processes the list of M-gates per graph depth and determines if inverter optimization is possible. At block2304a, the CAD tool selects the M-gates in the current depth and assumes that there are 'r' M-gates at this depth. Here, 'r' indicates the width of the current logic depth. If 'r' is too big, then 4^r will be huge and the search will be laborious. If all configurations of the entire MIG were considered, r would be the gate count, which can be large, making 4^r new MIGs a huge number. The depth-wise approach constrains 'r' to be the gate count per logic depth, which will be much smaller than the full circuit's gate count. Due to the equivalence property illustrated inFIG.24, the depth-wise approach gives an equivalent optimal configuration to the full circuit approach, but more efficiently in compute and space. At block2305, the CAD tool creates 4^r new MIGs from the best MIG using the four configurations illustrated inFIG.24, at each of the r M-gates. FIG.24illustrates equivalent forms2400of majority and minority functions, in accordance with various embodiments. Due to the self-duality property of majority and minority functions, functions2401,2402,2403, and2404are equivalent, where x are the input bits, y are the output bits, f is the majority/minority function (gate), and f_b is the corresponding minority/majority function. Stated plainly, to maintain the same functionality, an even number of {x, y, f} can be negated. By applying this property from the last level of logic recursively to the input level and keeping track of the inverter count for each application of the self-duality property, one can select a configuration with the minimal inverter count. The levels of logic are obtained by grouping the nodes on the graph by depth. Nodes that have the same depth belong to the same level of logic. In some embodiments, it is assumed that an M-gate is more expensive in PPA than a CMOS inverter and that there is compatibility between the M-gate technology and CMOS. This is in stark contrast to other beyond-CMOS technologies such as QCA, which are not compatible with and cannot use CMOS inverters. Such beyond-CMOS technologies, like quantum-dot cellular automata (QCA), have native inverter implementations, but such inverters are much more expensive than a majority gate. As such, some embodiments do not allow an increase in the M-gate count during inverter propagation. When counting the number of inverters, it is assumed that the inverters are connected to the source M-gate, so that multiple inverted connections to target M-gates only count as one inverter. Referring back toFIG.23, at block2306, the CAD tool simplifies each of the 4^r new MIGs by cancelling back-to-back inverters as illustrated inFIG.25. FIG.25illustrates the concept2500of inverter cancellation, in accordance with some embodiments. During inverter minimization, it can happen that two inverters lie on the connection between two M-gates. By the property of inversion, the two functions are equivalent: configuration2501can be minimized to configuration2502. As such, the back-to-back inverters cancel each other, leading to an inverter count decrease of 2. To avoid explosion in computation, inverter propagation is performed after the synthesis of each K-MIG as opposed to after the synthesis of the full logic circuit, in accordance with some embodiments. Referring back toFIG.23, at block2307, the CAD tool determines whether the synthesis objective is delay minimization. If delay minimization is the primary synthesis objective, the process proceeds to block2308where the delay along the timing-critical path is computed. At block2308, a MIG is selected with the smallest delay, and the best MIG is assigned as the selected MIG. The process then proceeds to block2304to perform inverter minimization for the next graph depth. If at block2307the CAD tool determines that delay minimization is not the primary synthesis objective (e.g., the objective is area or energy), the process proceeds to block2309. At block2309, the CAD tool counts the number of inverters in each of the 4^r new MIGs, and selects a MIG with the smallest inverter count. The CAD tool then assigns the best MIG to the selected MIG. The process then moves to block2304to check whether the entire depth list is processed for inverter minimization.
If so, then the last selected MIG is the best MIG as indicated by block2310. This selected MIG is minimized for inverters. In some embodiments, a similar process can also be performed for buffers (e.g., for buffer minimization). FIG.26illustrates flowchart2600of hierarchical synthesis flow (e.g.,1008and1013), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Hierarchical synthesis flow is a process in the MIG synthesis flow ofFIGS.10A-B. Hierarchical synthesis is used for larger circuits for which optimal synthesis usingFIG.10Abecomes time consuming, in accordance with some embodiments. At block2601, the CAD tool receives inputs including a MIG with n primary inputs, where n belongs to (K, H]. The inputs also include the maximum fan-in and the synthesis objective (e.g., area, energy, or delay minimization). The CAD tool then assigns the input MIG to the best MIG at block2602. When a logic circuit has ≤ H input bits, then at block2603, the CAD tool simulates the initialized MIG and annotates each edge in the graph with its truth table (signal). Note that the truth tables of the output edges from the graph represent the outputs of the logic circuit. The CAD tool simulates the best MIG by passing all 2^n input signal configurations (truth table input rows) through the graph and annotates each node with its input signals and output signal. The output of block2603is the annotated MIG as indicated by block2604. Here, 2^H represents the largest truth table that should be computed and stored in memory for each edge in the graph. Hierarchical synthesis flow takes advantage of don't care conditions (rows of the truth table that can be ignored during synthesis) in the internal subgraphs of the MIG to further decrease the PPA of the synthesized circuit. This occurs because as signals flow from the primary inputs of a circuit into its internal sections, the signals are shaped such that not all possibilities available at the periphery are present in the circuit's interior. When a logic circuit has more than H input bits, the CAD tool first topologically splits its underlying MIG into non-overlapping subgraphs, such that the number of input edges (bits) to each subgraph is less than or equal to H. Each of these subgraphs is called an H-MIG. This makes it computationally feasible for the CAD tool to simulate each H-MIG, annotate its edges with truth tables (signals), and take advantage of don't care conditions in synthesizing the H-MIG. In some embodiments, the synthesis of the H-MIGs is independent and can be performed in parallel. To ensure that the dependency structure of the logic components (nodes in the graph) is maintained, the nodes in the graph are first topologically sorted before they are segmented into H-MIGs, in accordance with some embodiments. The segmentation can be done greedily by adding nodes to a subgraph until the input bit condition of ≤ H is satisfied. It can also be done using other graph-cut heuristics, in accordance with some embodiments. In some embodiments, each H-MIG can be considered a smaller logic circuit. The edges connected to input pins of the original logic circuit or nodes in other upstream H-MIGs represent the input pins to the current H-MIG.
The edges connected to output pins of the original logic circuit or nodes in other downstream H-MIGs represent the output pins from the current H-MIG. Each H-MIG cannot be synthesized optimally due to NP-hardness, so in some embodiments, the CAD tool splits it into smaller synthesizable graphs called K-MIGs (or K-feasible cones, K-subgraphs) as indicated by block2605. In block2605, the new MIG is initialized with all terminal nodes (inputs and outputs). The CAD tool then splits the annotated MIG topologically into K-MIGs. In computational complexity theory, NP-hardness is the defining property of a class of problems. These problems are, informally, at least as hard as the hardest problems in NP. A simple example of an NP-hard problem is the subset sum problem. A greedy algorithm, or area- or delay-oriented heuristics, can be used to create the set of K-MIGs, in accordance with some embodiments. The greedy algorithm splits the H-MIG into K-MIGs by adding nodes to a subgraph until the input bit condition of ≤ K is satisfied. At block2606, an iterative process begins where the K-MIG list from block2605is processed until it is exhausted. At block2607, the next K-MIG in the list is selected as the current K-MIG. Here, the number of input edges to the K-MIGs is i ≤ K. At block2608, the CAD tool reduces the 2^n input truth table rows to at most 2^min(i,n) rows by selecting the unique rows. When the number of unique rows is less than 2^min(i,n), don't care conditions exist and can be taken advantage of. Each of the i inputs to a K-MIG will have 2^n entries (a truth table column) because n primary inputs to an H-MIG result in 2^n input signal configurations (truth table rows). Consider the 2^n-bit long bit strings for each of the i unique input connections (ignoring constant connections) to the K-MIG as the inputs in a new truth table for the K-MIG, and the 2^n-bit long bit strings for each of the output connections emanating from the K-MIG as the truth table outputs. The K-MIG's truth table has i input columns. This implies that at most 2^min(i,n) of the rows can be unique. In some embodiments, the number of unique truth table rows will be less than 2^min(i,n), which amounts to fewer restrictions in synthesizing the K-MIG and ultimately a more compact circuit. At block2609, the CAD tool performs optimal synthesis using the reduced truth table and the BIP or SAT formulation and associated solvers. To illustrate the reduction of the truth table from 2^n rows to at most 2^min(i,n) rows for a K-MIG with i unique input connections (ignoring constant connections), consider the MIG for a Majority-OR circuit2620inFIG.26B. Assume the full circuit is an H-MIG and the second majority gate (an OR gate) with its inputs and output in the dashed box is a K-MIG. In this example, n=4 and i=2 (note that here constant inputs don't count as input variables). The H-MIG truth table has 16 rows as shown in Table 2. Extracting the input and output columns of the K-MIG from the overall truth table, the truth table for the K-MIG is obtained as shown in Table 3. Removing duplicate rows, we obtain the reduced truth table shown in Table 4, which has 4 rows.
TABLE 2 (Overall truth table)
a  b  c  d  1  Y1  Y2
1  1  1  1  1  1   1
0  1  1  1  1  1   1
1  0  1  1  1  1   1
0  0  1  1  1  0   1
1  1  0  1  1  1   1
0  1  0  1  1  0   1
1  0  0  1  1  0   1
0  0  0  1  1  0   1
1  1  1  0  1  1   1
0  1  1  0  1  1   1
1  0  1  0  1  1   1
0  0  1  0  1  0   0
1  1  0  0  1  1   1
0  1  0  0  1  0   0
1  0  0  0  1  0   0
0  0  0  0  1  0   0

TABLE 3 (OR (second majority gate) truth table extracted from overall truth table)
d  1  Y1  Y2
1  1  1   1
1  1  1   1
1  1  1   1
1  1  0   1
1  1  1   1
1  1  0   1
1  1  0   1
1  1  0   1
0  1  1   1
0  1  1   1
0  1  1   1
0  1  0   0
0  1  1   1
0  1  0   0
0  1  0   0
0  1  0   0

TABLE 4 (OR reduced truth table)
d  1  Y1  Y2
1  1  1   1
1  1  0   1
0  1  1   1
0  1  0   0
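The reduction from Table 2 to Table 4 can be reproduced with a short sketch (the circuit model below, Y1 = Maj(a, b, c) and Y2 = Maj(d, 1, Y1) = d OR Y1, follows FIG.26B; the helper name is illustrative):

```python
# Rebuild Table 3 from the circuit of FIG. 26B and reduce it to Table 4 by
# keeping only unique rows; the duplicates are the don't-care slack that the
# K-MIG synthesis can exploit.
from itertools import product

def reduce_kmig_rows(rows):
    seen, reduced = set(), []
    for row in rows:
        if row not in seen:
            seen.add(row)
            reduced.append(row)
    return reduced

table3 = []
for d, c, b, a in product((1, 0), repeat=4):  # a toggles fastest, as in Table 2
    y1 = int(a + b + c >= 2)           # first majority gate: Y1 = Maj(a, b, c)
    table3.append((d, 1, y1, d | y1))  # K-MIG columns: d, constant 1, Y1, Y2

table4 = reduce_kmig_rows(table3)
print(table4)       # [(1, 1, 1, 1), (1, 1, 0, 1), (0, 1, 1, 1), (0, 1, 0, 0)]
print(len(table4))  # 4 rows, matching Table 4
```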
After synthesizing each K-MIG followed by inverter minimization at block2610, the CAD tool connects the optimal K-MIG to other optimally synthesized K-MIGs within a new H-MIG, using their input and output edges (ports). At block2611, the CAD tool adds the synthesized MIG to the new MIG by adding missing predecessor M-gates, connecting input edges to predecessor M-gates, and connecting output edges to successor terminal output nodes. For circuits with more than H input bits, once each new H-MIG is synthesized in parallel, the H-MIGs are connected with each other to create a new, bigger MIG. Once the new H-MIG (e.g., for a circuit with at most H input bits) or MIG (e.g., for a circuit with more than H input bits) is created, then at block2612the new H-MIG/MIG is compared to the current best H-MIG/MIG based on the synthesis PPA objective. At block2613, the CAD tool decides about the H-MIG/MIG. If the new H-MIG/MIG is better, it becomes the new best H-MIG/MIG and the optimization is repeated as indicated by blocks2614and2606. However, if the CAD tool determines at block2613that the new H-MIG/MIG is worse, the optimization is terminated, and the best H-MIG/MIG is returned as the optimal MIG as indicated by block2615. FIG.27illustrates flowchart2700for post-synthesis flow (e.g.,1019), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Post-synthesis flow of flowchart2700is performed by the MIG synthesis ofFIGS.10A-B. The post-synthesis flow of flowchart2700ensures that the fan-in and fan-out requirements are observed by the overall synthesis flow. Flowchart2700begins with inputs2701, which include a given MIG, a list of allowed M-gate fan-ins, and M-gate fan-out constraints. At block2702, the CAD tool applies the gate pruning algorithm ofFIG.28. At block2703, the CAD tool applies the buffering algorithm ofFIG.29. The output after applying the gate pruning algorithm and the buffering algorithm is the synthesized MIG as indicated by block2704. FIG.28illustrates flowchart2800for gate pruning algorithm flow, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. After the BIP/SAT solution is obtained and translated into an MIG, each gate can be simplified to the smallest possible input width (fan-in) by pruning input connections, or expanded to the next larger allowed fan-in, by using the gate pruning algorithm of flowchart2800. At block2801, the CAD tool receives inputs to simulate the MIG. The inputs include a given MIG and an allowed list of M-gate fan-ins (I list). At block2802, the CAD tool simulates the MIG and annotates each non-terminal node with input signals. At block2803, the CAD tool initializes the pruned MIG with all terminal nodes of the input (e.g., primary inputs and outputs). At block2804, the CAD tool gets a list of non-terminal nodes of the input MIG. The CAD tool then starts an iterative process (blocks2805through2811). Given a majority or minority inverter graph, each gate is simplified to the smallest possible width by pruning input connections using the following relation for M-gates:

M(x_1, x_2, \ldots, x_j, x_{j+1}, \bar{x}_{j+1}, x_{j+2}, \ldots, x_I) = M(x_1, x_2, \ldots, x_j, x_{j+2}, \ldots, x_I).   (GPA-1)

When a signal and its inverted form are inputs to an I-input M-gate, the pruned version will be an (I−2)-input gate. This pruning can be performed until no pair of a signal and its inverted form remains. In some embodiments, if a single, uniform fan-in is desired for circuit uniformity, following the gate pruning, a pair of source and ground signals can be connected to each pruned M-gate until the maximum fan-in is achieved. This corresponds to applying equation GPA-1 in reverse, from right to left, where $x_{j+1}$ is the ground signal. At block2805, the CAD tool checks if the list of non-terminal nodes of the input MIG is exhausted (e.g., all items in the list are processed). In the beginning of the flow, the CAD tool proceeds to block2806since it begins to process the list of non-terminal nodes. At block2806, the CAD tool selects the next node in the list as the current M-gate. At block2807, the CAD tool finds all input edge signals to the M-gate in the annotated MIG. At block2808, the CAD tool, using the cancellation property of M-gates in equation (GPA-1), eliminates pairs of edges with inverse signals to obtain the pruned M-gate. At block2809, the CAD tool makes a determination whether the pruned M-gate fan-in is in the I list. If the pruned M-gate fan-in is not in the I list, the process proceeds to block2810where the CAD tool adds a pair of source and ground input edges to the pruned M-gate until the fan-in is in the I list. At block2811, the CAD tool adds the pruned M-gate to the MIG. The process is repeated for all the non-terminal nodes of the input MIG. Once all the non-terminal nodes in the list are exhausted, a pruned MIG is achieved as indicated by block2812.
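The pruning rule GPA-1 can be sketched as follows (the '~'-prefixed literal encoding and the helper name are assumptions; the padding step assumes the allowed fan-in list contains an odd value at or above the pruned width, as M-gates have odd fan-in):

```python
# Repeatedly drop any pair consisting of a signal and its inverse from an
# M-gate's input list (GPA-1), then, if the resulting fan-in is not in the
# allowed list, pad with source/ground pairs, which leaves the majority
# function unchanged (GPA-1 applied in reverse).
def prune_mgate(inputs, allowed_fanins):
    """inputs: list of literals such as 'x1' or '~x1' ('~' marks inversion)."""
    pruned = list(inputs)
    changed = True
    while changed:
        changed = False
        for lit in pruned:
            inv = lit[1:] if lit.startswith('~') else '~' + lit
            if inv in pruned:
                pruned.remove(lit)
                pruned.remove(inv)
                changed = True
                break
    while len(pruned) not in allowed_fanins:
        pruned += ['VDD', 'GND']  # a supply/ground pair cancels in a majority
    return pruned

print(prune_mgate(['x1', 'x2', '~x2', 'x3', 'x4'], {3, 5}))
# ['x1', 'x3', 'x4'] -- the x2/~x2 pair cancels and fan-in 3 is allowed
```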
FIG.29illustrates flowchart2900for buffering algorithm flow, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. In some embodiments, to address the fan-out constraints on M-gates, it is first assumed that there are no fan-out constraints and the synthesis is performed. In the buffering algorithm of the post-synthesis flow shown in flowchart2900, this assumption is corrected by introducing inverters and buffers as needed to ensure the functionality of the circuit. When all outward connections (fan-out) from each M-gate are considered, if one of the connections is inverted, an inverter is already present in the circuit. As such, only one more inverter may be needed to buffer the non-inverted connections. On the other hand, if there are no inverted connections, a buffer (e.g., two inverters connected back-to-back) may be used. At block2901, the CAD tool receives input2901, which includes the MIG and the M-gate fan-out constraints. At block2902, the CAD tool initializes the buffered MIG with all terminal nodes of the input MIG (e.g., primary inputs and outputs). At block2903, the CAD tool gets a list of non-terminal nodes of the input MIG. The iterative process then begins at block2904, where the CAD tool checks whether the list of nodes is exhausted. At block2904, if the CAD tool determines that the list is not exhausted (or fully processed), then at block2905the CAD tool selects the next node in the list as the current M-gate. At block2906, the CAD tool finds all output edges from the current M-gate and groups them as inverted and non-inverted. At block2907, the CAD tool determines whether there are two groups of connections. If there are two groups of connections, then at block2908, the CAD tool determines whether the fan-out constraint is exceeded. If the fan-out constraint is exceeded, then at block2909, the CAD tool adds one inverter after the M-gate. The CAD tool then re-wires all inverted connections from after the first inverter and all the non-inverted connections from after the second inverter. At block2916, the CAD tool adds the buffered M-gate to the MIG and the process repeats. If the fan-out constraint is not exceeded, then at block2910, the CAD tool decides not to add buffers (e.g., no buffering is needed). The CAD tool then rewires all inverted connections from after the inverter. The process then proceeds to block2916, where the CAD tool adds the buffered M-gate to the MIG and the process repeats. If there are not two groups of connections (see block2907), then the process proceeds to block2911where the CAD tool determines whether the group is inverted. If the group is inverted, then at block2912, the CAD tool decides that no buffering is needed, and rewires all inverted connections from after the inverter. The process then proceeds to block2916, where the CAD tool adds the buffered M-gate to the MIG and the process repeats. If the group is not inverted (see block2911), then at block2913the CAD tool determines whether the fan-out constraint is exceeded. If the fan-out constraint is exceeded, then at block2914, the CAD tool adds a buffer after the M-gate and rewires all non-inverted connections from the buffer. The process then proceeds to block2916, where the CAD tool adds the buffered M-gate to the MIG and the process repeats. If the fan-out constraint is not exceeded (see block2913), then at block2915, the CAD tool decides that no buffering or rewiring is needed. The process then proceeds to block2916, where the CAD tool adds the buffered M-gate to the MIG and the process repeats. After the CAD tool determines that the node list is exhausted (see block2904), the buffered MIG is provided as indicated by block2917.
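The four-way buffering decision for a single M-gate can be summarized as follows (a schematic sketch; treating the fan-out constraint as a bound on the total number of output connections is an assumption here, not a detail taken from the flow):

```python
# Summarize the decision of blocks 2907 through 2915 for one M-gate, given
# its inverted and non-inverted fan-out counts and the fan-out limit.
def buffering_action(n_inverted, n_noninverted, max_fanout):
    both_groups = n_inverted > 0 and n_noninverted > 0
    exceeded = (n_inverted + n_noninverted) > max_fanout
    if both_groups:
        if exceeded:
            return ("add one inverter; drive inverted loads from the first "
                    "inverter and non-inverted loads from the second")
        return "no new buffer; drive inverted loads from the existing inverter"
    if n_inverted > 0:   # only inverted connections
        return "no buffering; drive inverted loads from the existing inverter"
    if exceeded:         # only non-inverted connections
        return "add a buffer (two back-to-back inverters) for the loads"
    return "no buffering or rewiring needed"

print(buffering_action(n_inverted=2, n_noninverted=5, max_fanout=4))
```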
FIG.30illustrates processor system3000with machine-readable storage media having instructions that, when executed, cause the processor to perform logic synthesis, in accordance with various embodiments. Elements of embodiments (e.g., the various flowcharts described herein) are also provided as a machine-readable medium (e.g., memory) for storing the computer-executable instructions (e.g., instructions to implement any other processes discussed herein). In some embodiments, computing platform3000comprises memory3001, processor3002, machine-readable storage media3003(also referred to as tangible machine-readable medium), communication interface3004(e.g., wireless or wired interface), and network bus3005coupled together as shown.

In some embodiments, processor3002is a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a general-purpose Central Processing Unit (CPU), or a low power logic implementing a simple finite state machine to perform the method of the various flowcharts, etc. In some embodiments, the various logic blocks of system3000are coupled together via network bus3005. Any suitable protocol may be used to implement network bus3005. In some embodiments, machine-readable storage medium3003includes instructions (also referred to as the program software code/instructions) for logic synthesis of a mix of CMOS gates and majority and minority logic circuits as described with reference to the various embodiments and flowcharts. Program software code/instructions associated with the flowcharts (and/or various embodiments) and executed to implement embodiments of the disclosed subject matter may be implemented as part of an operating system or a specific application, component, program, object, module, routine, or other sequence of instructions or organization of sequences of instructions referred to as "program software code/instructions," "operating system program software code/instructions," "application program software code/instructions," or simply "software," or as firmware embedded in a processor. In some embodiments, the program software code/instructions associated with the flowcharts of various embodiments are executed by system3000. In some embodiments, the program software code/instructions associated with the flowcharts of various embodiments are stored in a computer executable storage medium3003and executed by processor3002. Here, computer executable storage medium3003is a tangible machine-readable medium that can be used to store program software code/instructions and data that, when executed by a computing device, causes one or more processors (e.g., processor3002) to perform a method(s) as may be recited in one or more accompanying claims directed to the disclosed subject matter. The tangible machine-readable medium3003may include storage of the executable software program code/instructions and data in various tangible locations, including for example ROM, volatile RAM, non-volatile memory and/or cache and/or other tangible memory as referenced in the present application. Portions of this program software code/instructions and/or data may be stored in any one of these storage and memory devices. Further, the program software code/instructions can be obtained from other storage, including, e.g., through centralized servers or peer-to-peer networks and the like, including the Internet. Different portions of the software program code/instructions and data can be obtained at different times and in different communication sessions or in the same communication session. The software program code/instructions associated with the various flowcharts and data can be obtained in their entirety prior to the execution of a respective software program or application by the computing device. Alternatively, portions of the software program code/instructions and data can be obtained dynamically, e.g., just in time, when needed for execution. Alternatively, some combination of these ways of obtaining the software program code/instructions and data may occur, e.g., for different applications, components, programs, objects, modules, routines or other sequences of instructions or organization of sequences of instructions, by way of example.
Thus, it is not required that the data and instructions be on a tangible machine-readable medium in entirety at a particular instance of time. Examples of tangible computer-readable media3003include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks (DVDs), etc.), among others. The software program code/instructions may be temporarily stored in digital tangible communication links while implementing electrical, optical, acoustical or other forms of propagating signals, such as carrier waves, infrared signals, digital signals, etc. through such tangible communication links. In general, tangible machine-readable medium3003includes any tangible mechanism that provides (i.e., stores and/or transmits in digital form, e.g., data packets) information in a form accessible by a machine (i.e., a computing device), which may be included, e.g., in a communication device, a computing device, a network device, a personal digital assistant, a manufacturing tool, a mobile communication device, whether or not able to download and run applications and subsidized applications from the communication network, such as the Internet, e.g., an iPhone®, Galaxy®, Blackberry®, Android®, or the like, or any other device including a computing device. In one embodiment, the processor-based system is in the form of, or included within, a PDA (personal digital assistant), a cellular phone, a notebook computer, a tablet, a game console, a set top box, an embedded system, a TV (television), a personal desktop computer, etc. Alternatively, the traditional communication applications and subsidized application(s) may be used in some embodiments of the disclosed subject matter.

FIG.31illustrates 3-input majority gate3100with linear input capacitors and a non-linear output capacitor, in accordance with some embodiments. Logic gate3100comprises first, second, and third drivers3101,3102, and3103, respectively. These drivers can be analog drivers generating analog signals or digital drivers generating signals that toggle between ground and the power supply rail, or a combination of analog and digital drivers. For example, driver3101is a CMOS driver such as a buffer, inverter, NAND gate, NOR gate, etc., while driver3102is an amplifier generating a bias signal. The drivers provide input signals Vin1(and current I1), Vin2(and current I2), and Vin3(and current I3) to the three inputs of 3-input majority gate3104. In various embodiments, 3-input majority gate3104comprises three input nodes Vin1, Vin2, and Vin3. Here, signal names and node names are interchangeably used. For example, Vin1refers to node Vin1or signal Vin1depending on the context of the sentence. 3-input majority gate3104further comprises capacitors C1, C2, and C3. Here, resistors R1, R2, and R3are interconnect parasitic resistances coupled to capacitors C1, C2, and C3, respectively. In various embodiments, capacitors C1, C2, and C3are non-ferroelectric capacitors. In some embodiments, the non-ferroelectric capacitor includes one of: a dielectric capacitor, a para-electric capacitor, or a non-linear dielectric capacitor. A dielectric capacitor comprises first and second metal plates with a dielectric between them.
Examples of such dielectrics are: HfO, ABO3 perovskites, nitrides, oxy-fluorides, oxides, etc. A para-electric capacitor comprises first and second metal plates with a para-electric material between them. In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric materials to make paraelectric material. Examples of room temperature paraelectric material include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, PMN-PT based relaxor ferroelectrics. A non-linear dielectric capacitor comprises first and second metal plates with a non-linear dielectric between them. The range for the dielectric constant is 1.2 to 10000. The capacitors C1, C2, and C3can be implemented in MIM (metal-insulator-metal) capacitor technology, as transistor gate capacitors, or as a hybrid of metal capacitors and transistor capacitors. One terminal of the capacitors C1, C2, and C3is coupled to a common node cn. This common node is coupled to node n1, which is coupled to a first terminal of a non-linear polar capacitor3105. The majority function is performed at the common node cn, and the resulting voltage is projected on to capacitor3105. For example, the majority function of the currents (I1, I2, and I3) on node cn results in a resultant current that charges capacitor3105. Table 5 illustrates the majority function f(Majority Vin1, Vin2, Vin3).

TABLE 5
Vin1 Vin2 Vin3 cn (f(Majority Vin1, Vin2, Vin3))
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1

A capacitor with FE material (also referred to as a FEC) is a non-linear capacitor with its potential VF(QF) as a cubic function of its charge.FIG.32illustrates plot3200showing characteristics of a FEC. Plot3200is a charge-voltage (Q-V) plot for a block of Pb(Zr0.5Ti0.5)O3of area (100 nm)² and thickness 20 nm (nanometer). Plot3200shows local extrema at +/−Vo, indicated by the dashed lines. Here, the term Vcis the coercive voltage. In applying a potential V across the FEC, its charge can be unambiguously determined only for |V|>Vo. Otherwise, the charge of the FEC is subject to hysteresis effects. Referring back toFIG.31, in some embodiments, an odd number N of capacitors is coupled to a single FEC to form a majority gate. In this case, N=3. The measured charge on the FEC (QF) is the output of the majority gate. Solving for a steady-state solution, the parasitic resistors are ignored and the input potentials Vi(or Vin) are assumed to be constant. In this case, the charge across each linear capacitor (C1, C2, C3) is:

Qi = Ci·(Vi − VF) (1)

The charge summed at node cn and across FEC3105is expressed as:

QF = Σi Qi (2)
QF = Σi Ci·Vi − Σi Ci·VF (3)
QF = Σi Ci·Vi − C·VF(QF) (4)
VF(QF) = Σi (Ci/C)·Vi − QF/C (5)

Here, C = Σi Ci is the sum of the capacitances. In the limit C→∞, the following is achieved:

VF(QF) = Σi (Ci/C)·Vi = V̄ (6)

The potential across FEC3105is the average of all the input potentials weighted by the capacitances (e.g., C1, C2, and C3). When Ci = C/N are all equal, VFis just a simple mean. To ensure that

QF = VF⁻¹(V̄) (7)

is well defined, all possible values of V̄ must have magnitudes greater than Vc, the coercive potential. Assuming binary inputs of +/−Vs, the potential with the smallest magnitude is:

V̄ = Vs/N (8)

This occurs when (N+1)/2 of the inputs are +Vsand (N−1)/2 are −Vs.
Then,

Vs > N·Vc (9)

The output of the majority gate at node n1is expressed byFIG.33.FIG.33illustrates plot3300showing the output of a 3-input majority gate, in accordance with some embodiments. As an example, for N=3, the possible inputs are:

V̄ ∈ {−(3/3)·Vs, −(1/3)·Vs, +(1/3)·Vs, +(3/3)·Vs} (10)

Referring back toFIG.31, since capacitor3105is a non-linear polar capacitor, both terminals of the capacitor are pre-discharged to ground or to a known predetermined voltage via n-type pull-down transistors MN1and MN2, and p-type pull-up transistor MP1. The predetermined voltage can be programmable. The pre-determined voltage can be positive or negative. In some embodiments, n-type transistor MN1is coupled to node Vout_int1(internal Vout node) and is controllable by clock or reset signal Clk1. In some embodiments, n-type transistor MN2is coupled to node Vout_int2(internal Vout node) and is controllable by clock or reset signal Clk2. In some embodiments, p-type transistor MP1is coupled to node Vout_int2, and is controllable by Clk3b. In some embodiments, the n-type transistors MN1and MN2are replaced with p-type transistors to pre-charge both terminals (Vout_int1and Vout_int2) of capacitor3105to a supply voltage or another predetermined voltage, while the p-type transistor MP1is replaced with an n-type transistor coupled to ground or a negative supply rail. The predetermined voltage can be programmable. The pre-determined voltage can be positive or negative. In some embodiments, the pre-charge or pre-discharge of the terminals of capacitor3105(or nodes cn and n1) is done periodically by clock signals Clk1, Clk2, and Clk3b. The controls can also be non-clock signals that are generated by a control logic (not shown). For example, the control can be issued every predetermined or programmable time. In some embodiments, clock signals Clk1, Clk2, and Clk3bare issued in a reset phase, which is followed by an evaluation phase where inputs Vin1, Vin2, and Vin3are received, and the majority function is performed on them.FIG.34illustrates timing diagram3400for resetting the ferroelectric capacitor for the majority gate ofFIG.31, in accordance with some embodiments. Clk1has a pulse width larger than the pulse widths of Clk2and Clk3b. Clk3bis an inverse of Clk3(not shown). In some embodiments, Clk1is first asserted, which begins to discharge node Vout_int1. While node Vout_int1is being discharged, Clk2is asserted. Clk2may have a pulse width which is substantially half of the pulse width of Clk1. When Clk2is asserted, node Vout_int2is discharged. This sequence assures that both terminals of the non-linear polar material of capacitor3105are discharged sequentially. In various embodiments, before discharging node Vout_int2, Clk3bis de-asserted, which turns on transistor MP1, causing Vout_int2to be charged to a predetermined value (e.g., supply level). The pulse width of Clk3bis smaller than the pulse width of Clk1to ensure the Clk3bpulsing happens within the Clk1pulse window. This is useful to ensure non-linear polar capacitor3105is initialized to a known programmed state along with the other capacitors (e.g., C1, C2, C3), which are initialized to 0 V across them. The pulsing on Vout_int2creates the correct field across the non-linear polar capacitor3105in conjunction with Vout_int1to put it in the correct state, such that during operating mode, if Vout_int1goes higher than the Vc(coercive voltage) value, it triggers the switching of non-linear polar capacitor3105, thereby resulting in a voltage build-up on Vout_int2.
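The steady-state relations (5) through (10) can be checked numerically. The short Python sketch below evaluates the weighted average V̄ at node cn for every binary input combination of a 3-input gate with equal capacitances and confirms that its sign reproduces the majority function of Table 5. The ±Vs = ±1 V encoding and the Vc value are illustrative assumptions chosen only to satisfy equation (9).

```python
from itertools import product

N = 3
Vs = 1.0               # binary inputs encoded as +/-Vs (assumed)
Vc = 0.2               # coercive potential; must satisfy Vs > N*Vc, eq. (9)
C = [1.0] * N          # equal capacitances: V-bar is a simple mean, eq. (6)

assert Vs > N * Vc, "inputs too weak to unambiguously switch the FEC"

for bits in product([0, 1], repeat=N):
    volts = [Vs if b else -Vs for b in bits]
    vbar = sum(c * v for c, v in zip(C, volts)) / sum(C)   # eq. (6)
    majority = int(sum(bits) > N // 2)                     # Table 5
    assert (vbar > 0) == bool(majority)
    assert abs(vbar) >= Vs / N - 1e-12                     # eq. (8) lower bound
    print(bits, f"V-bar = {vbar:+.3f} V -> majority {majority}")
```

For N=3 the printed V̄ values take exactly the four levels of equation (10), i.e., ±(1/3)·Vs and ±(3/3)·Vs.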
In some embodiments, load capacitor CL is added to node Vout_int2. In some embodiments, load capacitor CL is a regular capacitor (e.g., a non-ferroelectric capacitor). The capacitance value of CL on Vout_int2is useful to ensure that the FE switching charge (of FE capacitor3105) provides the right voltage level. For a given FE size (area A), with polarization switching density (dP) and desired voltage swing of Vdd (supply voltage), the capacitance of CL should be approximately CL = dP·A/Vdd. There is a slight deviation from the above CL value as there is charge sharing on Vout_int2due to the dielectric component of FE capacitor3105. The charge sharing depends on the voltage on Vout_int1and on the capacitive divider ratio between the dielectric component of FE capacitor3105and the load capacitor CL. Note that the capacitance of CL can be the aggregate of all the capacitances on the Vout_int2node (e.g., parasitic routing capacitance on the node, gate capacitance of the output stage3106, and drain or source capacitance of the reset devices MN2and MP1). In some embodiments, for a given size of non-linear polar capacitor3105, the CL requirement can be met by the load capacitance of non-FE logic3106and the parasitic components alone, and a separate linear capacitor may not be needed.
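As a quick numeric illustration of the sizing relation CL ≈ dP·A/Vdd, the values below (switching polarization density, capacitor area, and supply swing) are representative assumptions chosen only to show the arithmetic; they are not device parameters from the disclosure.

```python
# Load-capacitor sizing sketch: CL ~= dP * A / Vdd
dP = 0.30            # switching polarization density, C/m^2 (assumed)
side = 100e-9        # FE capacitor side length, m -> area (100 nm)^2
A = side * side      # 1e-14 m^2
Vdd = 1.0            # desired output swing, V (assumed)

CL = dP * A / Vdd
print(f"CL ~= {CL:.2e} F ~= {CL * 1e15:.1f} fF")   # ~3.0e-15 F = 3.0 fF
```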
In some embodiments, the non-linear polar material of capacitor3105includes one of: ferroelectric (FE) material, para-electric material, relaxor ferroelectric, or non-linear dielectric. In various embodiments, para-electric material is the same as FE material but with chemical doping of the active ferroelectric ion by an ion with no polar distortion. In some cases, the non-polar ions are non-s orbital ions formed with p, d, f external orbitals. In some embodiments, non-linear dielectric materials are the same as para-electric materials, relaxors, and dipolar glasses. In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric material to make paraelectric material. Examples of room temperature paraelectric material include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, PMN-PT based relaxor ferroelectrics. In various embodiments, the FE material can be any suitable low voltage FE material that allows the FE material to switch its state by a low voltage (e.g., 100 mV). In some embodiments, the FE material comprises a perovskite of the type ABO3, where 'A' and 'B' are two cations of different sizes, and 'O' is oxygen, which is an anion that bonds to both the cations. Generally, the size of A atoms is larger than the size of B atoms. In some embodiments, the perovskite can be doped (e.g., by La or Lanthanides). Perovskites can be suitably doped to achieve a spontaneous distortion in a range of 0.3 to 2%. For example, for chemically substituted lead titanate such as Zr in Ti site; La, Nb in Ti site, the concentration of these substitutes is such that it achieves the spontaneous distortion in the range of 0.3 to 2%. For chemically substituted BiFeO3, BiCrO3, BiCoO3 class of materials, La or rare earth substitution into the Bi site can tune the spontaneous distortion. The threshold in the FE material has a highly non-linear transfer function in the polarization vs. voltage response. The threshold is related to a) the non-linearity of the switching transfer function; and b) the squareness of the FE switching. The non-linearity of the switching transfer function is the width of the derivative of the polarization vs. voltage plot. The squareness is defined by the ratio of the remnant polarization to the saturation polarization; perfect squareness will show a value of 1. The squareness of the FE switching can be suitably manipulated with chemical substitution. For example, in PbTiO3 a P-E (polarization-electric field) square loop can be modified by La or Nb substitution to create an S-shaped loop. The shape can be systematically tuned to ultimately yield a non-linear dielectric. The squareness of the FE switching can also be changed by the granularity of the FE layer. A perfect epitaxial, single crystalline FE layer will show higher squareness (e.g., a ratio closer to 1) compared to a poly crystalline FE. This perfect epitaxial layer can be accomplished using lattice matched bottom and top electrodes. In one example, BiFeO3 (BFO) can be epitaxially synthesized using a lattice matched SrRuO3 bottom electrode, yielding P-E loops that are square. Progressive doping with La will reduce the squareness. In some embodiments, the FE material is contacted with a conductive metal oxide that includes one of the conducting perovskite metallic oxides exemplified by: La—Sr—CoO3, SrRuO3, La—Sr—MnO3, YBa2Cu3O7, Bi2Sr2CaCu2O8, LaNiO3, and ReO3. In some embodiments, the FE material comprises a stack of layers including low voltage FE material between (or sandwiched between) conductive oxides. In various embodiments, when the FE material is a perovskite, the conductive oxides are of the type AA′BB′O3. A′ is a dopant for atomic site A; it can be an element from the Lanthanides series. B′ is a dopant for atomic site B; it can be an element from the transition metal elements, especially Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn. A′ may have the same valency as site A, with a different ferroelectric polarizability. In some embodiments, the FE material comprises hexagonal ferroelectrics of the type h-RMnO3, where R is a rare earth element such as: cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), and yttrium (Y). The ferroelectric phase is characterized by a buckling of the layered MnO5 polyhedra, accompanied by displacements of the Y ions, which lead to a net electric polarization. In some embodiments, hexagonal FE includes one of: YMnO3 or LuFeO3. In various embodiments, when the FE material comprises hexagonal ferroelectrics, the conductive oxides adjacent to the FE material are of the A2O3 (e.g., In2O3, Fe2O3) and AB2O3 type, where 'A' is a rare earth element and B is Mn. In some embodiments, the FE material comprises improper FE material. An improper ferroelectric is a ferroelectric where the primary order parameter is an order mechanism such as strain or buckling of the atomic order. Examples of improper FE material are the LuFeO3 class of materials or super lattices of ferroelectric and paraelectric materials PbTiO3 (PTO) and SnTiO3 (STO), respectively, and LaAlO3 (LAO) and STO, respectively. For example, a super lattice of [PTO/STO]n or [LAO/STO]n, where 'n' is between 1 and 100. While various embodiments here are described with reference to ferroelectric material for storing the charge state, the embodiments are also applicable for paraelectric material. For example, the capacitor of various embodiments can be formed using paraelectric material instead of ferroelectric material.
In some embodiments, the FE material includes one of: Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides, or their alloyed oxides. In some embodiments, the FE material includes one of: Al(1−x)Sc(x)N, Ga(1−x)Sc(x)N, Al(1−x)Y(x)N or Al(1−x−y)Mg(x)Nb(y)N, or x doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein 'x' is a fraction. In some embodiments, the FE material includes Bismuth ferrite (BFO), lead zirconate titanate (PZT), BFO with doping material, or PZT with doping material, wherein the doping material is one of Nb or La; and relaxor ferroelectrics such as PMN-PT. In some embodiments, the FE material includes Bismuth ferrite (BFO), or BFO with a doping material, wherein the doping material is Lanthanum or any element from the lanthanide series of the periodic table. In some embodiments, the FE material includes lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb. In some embodiments, the FE material includes a relaxor ferro-electric including one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST). In some embodiments, the FE material includes Hafnium oxides of the form Hf1−xExOy, where E can be Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y. In some embodiments, the FE material includes niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate. In some embodiments, the FE material comprises multiple layers. For example, alternating layers of [Bi2O2]2+, and pseudo-perovskite blocks (Bi4Ti3O12 and related Aurivillius phases), with perovskite layers that are n octahedral layers in thickness can be used. In some embodiments, the FE material comprises organic material, for example, Polyvinylidene fluoride or polyvinylidene difluoride (PVDF). The FE material is between two electrodes. These electrodes are conducting electrodes. In some embodiments, the electrodes are perovskite templated conductors. In such a templated structure, a thin layer (e.g., approximately 10 nm) of a perovskite conductor (such as SrRuO3) is coated on top of IrO2, RuO2, PdO2, or PtO2 (which have a non-perovskite structure but higher conductivity) to provide a seed or template for the growth of pure perovskite ferroelectric at low temperatures. In some embodiments, when the ferroelectric comprises hexagonal ferroelectric material, the electrodes can have hexagonal metals, spinels, or cubic metals. Examples of hexagonal metals include: PtCoO2, PdCoO2, and other delafossite structured hexagonal metallic oxides such as Al-doped ZnO. Examples of spinels include Fe3O4 and LiV2O4. Examples of cubic metals include Indium Tin Oxide (ITO) such as Sn-doped In2O3. The charge developed on node n1produces a voltage and current that is the output of the majority gate3104. Any suitable driver3106can drive this output. For example, a non-FE logic, FE logic, CMOS logic, BJT logic, etc. can be used to drive the output to a downstream logic. Examples of the drivers include inverters, buffers, NAND gates, NOR gates, XOR gates, amplifiers, comparators, digital-to-analog converters, analog-to-digital converters, etc. In some embodiments, output "out" is reset by driver3106via the Clk1signal.
For example, a NAND gate with one input coupled to Vout_int2and the other input coupled to Clk1can be used to reset "out" during a reset phase. WhileFIG.31illustrates a 3-input majority gate, the same concept can be extended to more than 3 inputs to make an N-input majority gate, where N is greater than 2. For example, a 5-input majority gate is similar to 3-input majority gate3104but for additional inputs Vin4and Vin5. These inputs can come from the same drivers (e.g., any one of drivers3101,3102, and3103) or from different drivers. Inputs Vin4and Vin5can be analog, digital, or a combination of them. For example, Vin4is a digital signal while Vin5is an analog signal. The additional inputs Vin4and Vin5are coupled to additional non-ferroelectric capacitors C4and C5, respectively (not shown). The composition and size of the capacitors C4and C5are similar to that of C1, C2, and C3. Here, resistors R4and R5are parasitic resistors. The majority function is performed at the common node cn, and the resulting voltage is projected on to capacitor3105. For example, the majority function of the currents (I1, I2, I3, I4, and I5) on node cn results in a resultant current that charges capacitor3105. Table 6 illustrates the majority function f(Majority Vin1, Vin2, Vin3, Vin4, Vin5) of a 5-input majority gate.

TABLE 6
Vin1 Vin2 Vin3 Vin4 Vin5 cn (f(Majority Vin1, Vin2, Vin3, Vin4, Vin5))
0 0 0 0 0 0
0 0 0 0 1 0
0 0 0 1 0 0
0 0 0 1 1 0
0 0 1 0 0 0
0 0 1 0 1 0
0 0 1 1 0 0
0 0 1 1 1 1
0 1 0 0 0 0
0 1 0 0 1 0
0 1 0 1 0 0
0 1 0 1 1 1
0 1 1 0 0 0
0 1 1 0 1 1
0 1 1 1 0 1
0 1 1 1 1 1
1 0 0 0 0 0
1 0 0 0 1 0
1 0 0 1 0 0
1 0 0 1 1 1
1 0 1 0 0 0
1 0 1 0 1 1
1 0 1 1 0 1
1 0 1 1 1 1
1 1 0 0 0 0
1 1 0 0 1 1
1 1 0 1 0 1
1 1 0 1 1 1
1 1 1 0 0 1
1 1 1 0 1 1
1 1 1 1 0 1
1 1 1 1 1 1
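The entries of Tables 5 and 6 follow directly from a vote count. The sketch below is a behavioral model of an N-input majority (and, with an inverting driver, minority) function, with no relation to the capacitive implementation details; it regenerates Table 6.

```python
from itertools import product

def majority(*inputs):
    """N-input majority: 1 when more than half of the inputs are 1 (N odd)."""
    assert len(inputs) % 2 == 1, "majority gates use an odd fan-in"
    return int(sum(inputs) > len(inputs) // 2)

def minority(*inputs):
    """Minority = inverted majority (inverting driver on the summing node)."""
    return 1 - majority(*inputs)

# Regenerate Table 6 (5-input majority).
for bits in product([0, 1], repeat=5):
    print(*bits, majority(*bits))
```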
FIG.35illustrates 3-input minority gate3500with non-linear input capacitors, in accordance with some embodiments. In some embodiments, 3-input majority gate3500comprises non-linear input capacitors C1n1, C2n1, and C3n1that receive digital signals a, b, and c, respectively. Here, signal names and node names are interchangeably used. For example, 'a' refers to node 'a' or signal 'a' depending on the context of the sentence. One end or terminal of capacitor C1n1is coupled to node a while the other end of capacitor C1n1is coupled to summing node Vs. The same is true for the other non-linear capacitors C2n1and C3n1as shown. In some embodiments, 3-input majority gate3500comprises a driver circuitry3501. In this example, driver circuitry3501is an inverter. In other embodiments, other types of driver circuitries can be used, such as a NAND gate, NOR gate, multiplexer, buffer, or other logic gates. The majority function is performed at summing node Vs as Majority(a,b,c). In this example, since driver3501is an inverter, the minority function is performed at output "out" as Minority(a,b,c). In some embodiments, in addition to the gate capacitance of driver circuitry3501, an additional linear capacitor CL is coupled to summing node Vs and ground as shown. In some embodiments, this linear capacitor CL is a non-ferroelectric capacitor. In some embodiments, the non-ferroelectric capacitor includes one of: a dielectric capacitor, a para-electric capacitor, or a non-linear dielectric capacitor. A dielectric capacitor comprises first and second metal plates with a dielectric between them. Examples of such dielectrics are: HfO, ABO3 perovskites, nitrides, oxy-fluorides, oxides, etc. A para-electric capacitor comprises first and second metal plates with a para-electric material between them. In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric materials to make paraelectric material. Examples of room temperature paraelectric material include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, PMN-PT based relaxor ferroelectrics. A non-linear dielectric capacitor comprises first and second metal plates with a non-linear dielectric between them. The range for the dielectric constant is 1.2 to 10000. The capacitor CL can be implemented in MIM (metal-insulator-metal) capacitor technology, as a transistor gate capacitor, or as a hybrid of metal capacitors and transistor capacitors. In some embodiments, the non-linear input capacitors C1n1, C2n1, and C3n1comprise non-linear polar material. In some embodiments, the non-linear polar material includes one of: ferroelectric (FE) material, para-electric material, relaxor ferroelectric, or non-linear dielectric. In various embodiments, para-electric material is the same as FE material but with chemical doping of the active ferroelectric ion by an ion with no polar distortion. In some cases, the non-polar ions are non-s orbital ions formed with p, d, f external orbitals. In some embodiments, non-linear dielectric materials are the same as para-electric materials, relaxors, and dipolar glasses. In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric material to make paraelectric material. Examples of room temperature paraelectric material include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, and PMN-PT based relaxor ferroelectrics. In various embodiments, the FE material can be any suitable low voltage FE material that allows the FE material to switch its state by a low voltage (e.g., 100 mV). In some embodiments, the FE material comprises a perovskite of the type ABO3, where 'A' and 'B' are two cations of different sizes, and 'O' is oxygen, which is an anion that bonds to both the cations. Generally, the size of A atoms is larger than the size of B atoms. In some embodiments, the perovskite can be doped (e.g., by La or Lanthanides). Perovskites can be suitably doped to achieve a spontaneous distortion in a range of 0.3 to 2%. For example, for chemically substituted lead titanate such as Zr in Ti site; La, Nb in Ti site, the concentration of these substitutes is such that it achieves the spontaneous distortion in the range of 0.3 to 2%. For chemically substituted BiFeO3, BiCrO3, BiCoO3 class of materials, La or rare earth substitution into the Bi site can tune the spontaneous distortion. In some embodiments, the perovskite includes one of: BaTiO3, PbTiO3, KNbO3, or NaTaO3. The threshold in the FE material has a highly non-linear transfer function in the polarization vs. voltage response. The threshold is related to: a) the non-linearity of the switching transfer function; and b) the squareness of the FE switching. The non-linearity of the switching transfer function is the width of the derivative of the polarization vs. voltage plot. The squareness is defined by the ratio of the remnant polarization to the saturation polarization; perfect squareness will show a value of 1. The squareness of the FE switching can be suitably manipulated with chemical substitution. For example, in PbTiO3 a P-E (polarization-electric field) square loop can be modified by La or Nb substitution to create an S-shaped loop.
The shape can be systematically tuned to ultimately yield a non-linear dielectric. The squareness of the FE switching can also be changed by the granularity of the FE layer. A perfect epitaxial, single crystalline FE layer will show higher squareness (e.g., a ratio closer to 1) compared to a poly crystalline FE. This perfect epitaxial layer can be accomplished using lattice matched bottom and top electrodes. In one example, BiFeO3 (BFO) can be epitaxially synthesized using a lattice matched SrRuO3 bottom electrode, yielding P-E loops that are square. Progressive doping with La will reduce the squareness. In some embodiments, the FE material is contacted with a conductive metal oxide that includes one of the conducting perovskite metallic oxides exemplified by: La—Sr—CoO3, SrRuO3, La—Sr—MnO3, YBa2Cu3O7, Bi2Sr2CaCu2O8, LaNiO3, and ReO3. In some embodiments, the FE material comprises a stack of layers including low voltage FE material between (or sandwiched between) conductive oxides. In various embodiments, when the FE material is a perovskite, the conductive oxides are of the type AA′BB′O3. A′ is a dopant for atomic site A; it can be an element from the Lanthanides series. B′ is a dopant for atomic site B; it can be an element from the transition metal elements, especially Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn. A′ may have the same valency as site A, with a different ferroelectric polarizability. In some embodiments, the FE material comprises hexagonal ferroelectrics of the type h-RMnO3, where R is a rare earth element such as: cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), and yttrium (Y). The ferroelectric phase is characterized by a buckling of the layered MnO5 polyhedra, accompanied by displacements of the Y ions, which lead to a net electric polarization. In some embodiments, hexagonal FE includes one of: YMnO3 or LuFeO3. In various embodiments, when the FE material comprises hexagonal ferroelectrics, the conductive oxides adjacent to the FE material are of the A2O3 (e.g., In2O3, Fe2O3) and AB2O3 type, where 'A' is a rare earth element and B is Mn. In some embodiments, the FE material comprises improper FE material. An improper ferroelectric is a ferroelectric where the primary order parameter is an order mechanism such as strain or buckling of the atomic order. Examples of improper FE material are the LuFeO3 class of materials or super lattices of ferroelectric and paraelectric materials PbTiO3 (PTO) and SnTiO3 (STO), respectively, and LaAlO3 (LAO) and STO, respectively. For example, a super lattice of [PTO/STO]n or [LAO/STO]n, where 'n' is between 1 and 100. While various embodiments here are described with reference to ferroelectric material for storing the charge state, the embodiments are also applicable for paraelectric material. For example, the capacitor of various embodiments can be formed using paraelectric material instead of ferroelectric material. In some embodiments, the FE material includes one of: Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides, or their alloyed oxides. In some embodiments, the FE material includes one of: Al(1−x)Sc(x)N, Ga(1−x)Sc(x)N, Al(1−x)Y(x)N or Al(1−x−y)Mg(x)Nb(y)N, or x doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein 'x' is a fraction.
In some embodiments, the FE material includes Bismuth ferrite (BFO), lead zirconate titanate (PZT), BFO with doping material, or PZT with doping material, wherein the doping material is one of Nb or La; and relaxor ferroelectrics such as PMN-PT. In some embodiments, the FE material includes Bismuth ferrite (BFO), or BFO with a doping material, wherein the doping material is Lanthanum or any element from the lanthanide series of the periodic table. In some embodiments, the FE material includes lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb. In some embodiments, the FE material includes a relaxor ferro-electric including one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST). In some embodiments, the FE material includes Hafnium oxides of the form Hf1−xExOy, where E can be Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y. In some embodiments, the FE material includes niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate. In some embodiments, the FE material comprises multiple layers. For example, alternating layers of [Bi2O2]2+, and pseudo-perovskite blocks (Bi4Ti3O12 and related Aurivillius phases), with perovskite layers that are n octahedral layers in thickness can be used. In some embodiments, the FE material comprises organic material, for example, Polyvinylidene fluoride or polyvinylidene difluoride (PVDF). The FE material is between two electrodes. These electrodes are conducting electrodes. In some embodiments, the electrodes are perovskite templated conductors. In such a templated structure, a thin layer (e.g., approximately 10 nm) of a perovskite conductor (such as SrRuO3) is coated on top of IrO2, RuO2, PdO2, or PtO2 (which have a non-perovskite structure but higher conductivity) to provide a seed or template for the growth of pure perovskite ferroelectric at low temperatures. In some embodiments, when the ferroelectric comprises hexagonal ferroelectric material, the electrodes can have hexagonal metals, spinels, or cubic metals. Examples of hexagonal metals include: PtCoO2, PdCoO2, and other delafossite structured hexagonal metallic oxides such as Al-doped ZnO. Examples of spinels include Fe3O4 and LiV2O4. Examples of cubic metals include Indium Tin Oxide (ITO) such as Sn-doped In2O3. The majority function is performed at the summing node Vs, and the resulting voltage is projected onto the capacitance of driver circuitry3501. For example, the majority function of the currents (Ia, Ib, and Ic) on node Vs results in a resultant current that charges the capacitance on node Vs. Table 7 illustrates the majority function f(Majority a, b, c).

TABLE 7
a b c Vs (f(Majority a, b, c))
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1

The charge developed on node Vs produces a voltage and current that is the output of the majority gate3500. Any suitable driver3501can drive this output. For example, a non-FE logic, FE logic, CMOS logic, BJT logic, etc. can be used to drive the output to a downstream logic. Examples of the drivers include inverters, buffers, NAND gates, NOR gates, XOR gates, amplifiers, comparators, digital-to-analog converters, analog-to-digital converters, multiplexers, etc.
WhileFIG.35illustrates a 3-input majority gate, the same concept can be extended to more than 3 inputs to make an N-input majority gate, where N is greater than 2. In various embodiments, 'N' is an odd number. For example, a 5-input majority gate is like 3-input majority gate3500but for additional inputs 'd' and 'e'. These inputs can come from the same drivers or from different drivers. In some embodiments, the 3-input majority gate can be configured as a fast inverter with a much faster propagation delay compared to a similarly sized (in terms of area footprint) CMOS inverter. This is particularly useful when the inputs have a significantly slower slope compared to the propagation delay through the non-linear input capacitors. One way to configure the 3-input majority gate as an inverter is to set one input to a logic high (e.g., b=1) and set another input to a logic low (e.g., c=0). The third input is the driving input which is to be inverted. The inversion will be at the Vs node. The same technique can also be applied to an N-input majority gate, where 'N' is 1 or any other odd number. In an N-input majority gate, (N−1)/2 inputs are set to '1' and (N−1)/2 inputs are set to '0', and one input is used to decide the inversion function. It will be appreciated that while the various embodiments are described as a majority gate, the same concepts are applicable to a minority gate. In a minority gate the driving circuitry is an inverting circuitry coupled to the summing node Vs. The minority function is seen at the output of the inverting circuitry. In some embodiments, a (2N−1)-input majority gate can operate as an N-input AND gate where (N−1) inputs of the majority gate are set to zero. The AND function will be seen at the summing node Vs. Similarly, N-input NAND, OR, and NOR gates can be realized. In various embodiments, the summing node Vs is driven by a driver circuitry (e.g., inverter, buffer, NAND gate, AND gate, OR gate, NOR gate, or any other logic circuitry). However, driver circuitry3501can be replaced with another majority or minority gate. In one such embodiment, the storage node Vs is directly coupled to a non-linear capacitor of another majority or minority gate. Any logic function f(x1, x2, . . . , xn) can be represented by two levels of logic as given by the min-term expansion:

f(x1, x2, . . . , xn) = ⋁C1, C2, . . . , Cn f(C1, C2, . . . , Cn) ∧ x1^C1 ∧ x2^C2 ∧ x3^C3 . . . ∧ xn^Cn,

where Ciis either 0 or 1. When Ciis 1, xi^Ci = xi(the input is used in its original form). When Ciis 0, xi^Ci = x̄i(the input is used in its inverted form). The first level of logic is represented by at most 2^n AND gates (∧), one for each of the 2^n possible combinations of 0 and 1 for C1, C2, . . . , Cn. The second level of logic is represented by a single OR gate (⋁). Each operand of the OR gate is a representation of a row in the truth table for f(x1, x2, . . . , xn). A (2N−1)-input majority gate can represent an N-input AND gate by tying (N−1) of the majority gate's inputs to a ground level. Similarly, a (2N−1)-input majority gate can represent an N-input OR gate by tying (N−1) of the majority gate's inputs to a supply level (Vdd). Since a majority gate can represent AND and OR gates, and the inputs to the AND and OR gates are either original or inverted forms of the input digital signals, any logic function can be represented by majority gates and inverters only, in accordance with some embodiments.
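These identities are easy to verify exhaustively. The Python sketch below checks that a (2N−1)-input majority gate with tied inputs reproduces an N-input AND and OR, and that the min-term expansion rebuilt from AND/OR (and hence, per the above, from majority gates and inverters) reproduces an arbitrary function. It is a behavioral check only; all names are illustrative.

```python
from itertools import product

def majority(*ins):
    return int(sum(ins) > len(ins) // 2)

N = 3
for xs in product([0, 1], repeat=N):
    # (2N-1)-input majority with (N-1) inputs tied to ground -> N-input AND
    assert majority(*xs, *([0] * (N - 1))) == int(all(xs))
    # ... with (N-1) inputs tied to supply -> N-input OR
    assert majority(*xs, *([1] * (N - 1))) == int(any(xs))

def minterm_expansion(f, n):
    """Two-level form: OR over rows C with f(C)=1 of the AND of literals
    x_i (when C_i=1) or NOT x_i (when C_i=0)."""
    rows = [c for c in product([0, 1], repeat=n) if f(*c)]
    def g(*xs):
        return int(any(all((x if c else 1 - x) for x, c in zip(xs, row))
                       for row in rows))
    return g

xor3 = lambda a, b, c: a ^ b ^ c
g = minterm_expansion(xor3, 3)
assert all(g(*xs) == xor3(*xs) for xs in product([0, 1], repeat=3))
print("majority-based AND/OR and min-term expansion verified")
```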
FIG.36illustrates 3-input majority gate3600with non-linear input capacitors, in accordance with some embodiments. In some embodiments, the summing node Vs is not coupled to a CMOS driver (e.g., buffer, inverter, NAND gate, or any other CMOS logic gate). In one example, Vs is coupled to another majority or minority gate. For instance, Vs is coupled to a terminal of another non-linear capacitor of another majority or minority gate.

FIG.37illustrates 3-input majority XOR gate3700with non-linear input capacitors, in accordance with some embodiments. XOR gate3700is a 2-input XOR gate that performs the XOR function on inputs a and b. In various embodiments, XOR gate3700comprises non-linear input capacitors C1n1, C2n1, C3n1, C4n1, C5n1, and C6n1, inverter3703, and non-linear output capacitors C7n1, C8n1, and C9n1. Capacitors C1n1, C2n1, and C3n1receive inputs a, b, and 0, and perform a majority AND function on node Vs1. Capacitors C4n1, C5n1, and C6n1receive inputs a, b, and Vdd, and perform a majority OR function on node Vs2. The AND output on node Vs1is inverted by inverter3703, and the resulting NAND output on node out1is received by output capacitor C7n1. The OR output on node Vs2is received by capacitor C8n1. Capacitor C9n1receives a predetermined input 0 in this example. The majority function on node out3is an AND of out1, out2, and 0. In some embodiments, instead of driving the voltage on node Vs2to out2directly, buffer3701is used between nodes Vs2and out2. In some embodiments, instead of driving output out3as the XOR output, buffer3702is used to output the XOR output on node out. In some embodiments, Vs2is directly connected to node out2. In some embodiments, out3is directly connected to node out. In some embodiments, linear or non-linear capacitors CL1, CL2, and CL3are added on the summing nodes Vs1, Vs2, and out3, respectively. By swapping the voltages '0' and 'Vdd', different logic functions can be realized, in accordance with various embodiments.
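The XOR construction ofFIG.37can be checked behaviorally: AND of NAND(a,b) and OR(a,b) equals XOR(a,b). The sketch below verifies this over all input pairs, with each level built from the 3-input majority primitive (one input tied to 0 for AND, to 1 for OR) and one inverter; it models logic values only, not the capacitor network.

```python
def majority3(a, b, c):
    return int(a + b + c >= 2)

def xor_from_majority(a, b):
    vs1  = majority3(a, b, 0)          # AND(a, b) on node Vs1
    out1 = 1 - vs1                     # inverter -> NAND(a, b) on out1
    out2 = majority3(a, b, 1)          # OR(a, b) on node Vs2/out2
    return majority3(out1, out2, 0)    # AND(out1, out2) on out3

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_majority(a, b) == (a ^ b)
print("XOR built from majority gates and one inverter verified")
```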
FIG.38illustrates a system-on-chip (SOC)3800having logic which is synthesized using the CAD tool of various embodiments. In some embodiments, SOC3800comprises memory3801having static random-access memory (SRAM) or FE based random access memory (FE-RAM), or any other suitable memory. The memory can be non-volatile (NV) or volatile memory. Memory3801may also comprise logic3803to control memory3802. For example, write and read drivers are part of logic3803. These drivers and other logic are implemented using the majority or threshold gates of various embodiments. The logic can comprise majority or threshold gates and traditional logic (e.g., CMOS based NAND, NOR, etc.). SOC3800further comprises a memory I/O (input-output) interface3804. The interface may be a double-data rate (DDR) compliant interface or any other suitable interface to communicate with a processor. Processor3805of SOC3800can be a single core or multiple core processor. Processor3805can be a general-purpose processor (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), or an Application Specific Integrated Circuit (ASIC) processor. In some embodiments, processor3805is an artificial intelligence (AI) processor (e.g., a dedicated AI processor, a processor circuitry, or a graphics processor configured as an AI processor). In various embodiments, processor3805(or processor circuitry3805) is configured to execute one or more instructions. AI is a broad area of hardware and software computations where data is analyzed, classified, and then a decision is made regarding the data. For example, a model describing classification of data for a certain property or properties is trained over time with large amounts of data.

The process of training a model requires large amounts of data and processing power to analyze the data. When a model is trained, weights or weight factors are modified based on outputs of the model. Once weights for a model are computed to a high confidence level (e.g., 95% or more) by repeatedly analyzing data and modifying weights to get the expected results, the model is deemed "trained." This trained model with fixed weights is then used to make decisions about new data. Training a model and then applying the trained model to new data is a hardware-intensive activity. In some embodiments, AI processor3805has reduced latency of computing the training model and using the training model, which reduces the power consumption of such AI processor systems. Processor3805may be coupled to a number of other chip-lets that can be on the same die as SOC3800or on separate dies. These chip-lets include connectivity circuitry3806, I/O controller3807, power management3808, display system3809, and peripheral connectivity3810. Connectivity3806represents hardware devices and software components for communicating with other devices. Connectivity3806may support various connectivity circuitries and standards. For example, connectivity3806may support GSM (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, 3rd Generation Partnership Project (3GPP) Universal Mobile Telecommunications Systems (UMTS) system or variations or derivatives, 3GPP Long-Term Evolution (LTE) system or variations or derivatives, 3GPP LTE-Advanced (LTE-A) system or variations or derivatives, Fifth Generation (5G) wireless system or variations or derivatives, 5G mobile networks system or variations or derivatives, 5G New Radio (NR) system or variations or derivatives, or other cellular service standards. In some embodiments, connectivity3806may support non-cellular standards such as WiFi. I/O controller3807represents hardware devices and software components related to interaction with a user. I/O controller3807is operable to manage hardware that is part of an audio subsystem and/or display subsystem. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of SOC3800. In some embodiments, I/O controller3807illustrates a connection point for additional devices that connect to SOC3800through which a user might interact with the system. For example, devices that can be attached to SOC3800might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications such as card readers or other devices. Power management3808represents hardware or software that performs power management operations, e.g., based at least in part on receiving measurements from power measurement circuitries, temperature measurement circuitries, charge level of a battery, and/or any other appropriate information that may be used for power management. By using the majority and threshold gates of various embodiments, non-volatility is achieved at the output of these logic gates. Power management3808may accordingly put such logic into a low power state without the worry of losing data. Power management3808may select a power state according to the Advanced Configuration and Power Interface (ACPI) specification for one or all components of SOC3800.
Display system3809represents hardware (e.g., display devices) and software (e.g., drivers) components that provide a visual and/or tactile display for a user to interact with processor3805. In some embodiments, display system3809includes a touch screen (or touch pad) device that provides both output and input to a user. Display system3809may include a display interface, which includes the particular screen or hardware device used to provide a display to a user. In some embodiments, the display interface includes logic separate from processor3805to perform at least some processing related to the display. Peripheral connectivity3810may represent hardware devices and/or software devices for connecting to peripheral devices such as printers, chargers, cameras, etc. Peripheral connectivity3810may support communication protocols, e.g., PCIe (Peripheral Component Interconnect Express), USB (Universal Serial Bus), Thunderbolt, High Definition Multimedia Interface (HDMI), Firewire, etc. Here, multiple non-silicon semiconductor material layers may be stacked within a single fin structure. The multiple non-silicon semiconductor material layers may include one or more "P-type" layers that are suitable (e.g., offer higher hole mobility than silicon) for P-type transistors. The multiple non-silicon semiconductor material layers may further include one or more "N-type" layers that are suitable (e.g., offer higher electron mobility than silicon) for N-type transistors. The multiple non-silicon semiconductor material layers may further include one or more intervening layers separating the N-type from the P-type layers. The intervening layers may be at least partially sacrificial, for example to allow one or more of a gate, source, or drain to wrap completely around a channel region of one or more of the N-type and P-type transistors. The multiple non-silicon semiconductor material layers may be fabricated, at least in part, with self-aligned techniques such that a stacked CMOS device may include both a high-mobility N-type and P-type transistor with a footprint of a single FET (field effect transistor). It is pointed out that those elements of the figures having the same reference numbers (or names) as the elements of any other figure can operate or function in any manner similar to that described, but are not limited to such. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. If the specification states a component, feature, structure, or characteristic "may," "might," or "could" be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the elements. If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional elements. The term "device" may generally refer to an apparatus according to the context of the usage of that term. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and/or passive elements, etc.
Generally, a device is a three-dimensional structure with a plane along the x-y direction and a height along the z direction of an x-y-z Cartesian coordinate system. The plane of the device may also be the plane of an apparatus which comprises the device. Throughout the specification, and in the claims, the term "connected" means a direct connection, such as electrical, mechanical, or magnetic connection between the things that are connected, without any intermediary devices. The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected or an indirect connection, through one or more passive or active intermediary devices. The term "adjacent" here generally refers to a position of a thing being next to (e.g., immediately next to or close to with one or more things between them) or adjoining another thing (e.g., abutting it). The term "circuit" or "module" may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on." The term "scaling" generally refers to converting a design (schematic and layout) from one process technology to another process technology and subsequently being reduced in layout area. The term "scaling" generally also refers to downsizing layout and devices within the same technology node. The term "scaling" may also refer to adjusting (e.g., slowing down or speeding up, i.e., scaling down or scaling up, respectively) of a signal frequency relative to another parameter, for example, power supply level. The terms "substantially," "close," "approximately," "near," and "about" generally refer to being within +/−10% of a target value. For example, unless otherwise specified in the explicit context of their use, the terms "substantially equal," "about equal," and "approximately equal" mean that there is no more than incidental variation between things so described. In the art, such variation is typically no more than +/−10% of a predetermined target value. Unless otherwise specified, the use of the ordinal adjectives "first," "second," and "third," etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking or in any other manner. For the purposes of the present disclosure, phrases "A and/or B" and "A or B" mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. For example, the terms "over," "under," "front side," "back side," "top," "bottom," "over," "under," and "on" as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures or materials within a device, where such physical relationships are noteworthy.
These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis and therefore may be relative to an orientation of a device. Hence, a first material "over" a second material in the context of a figure provided herein may also be "under" the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies. The term "between" may be employed in the context of the z-axis, x-axis or y-axis of a device. A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials. A material "between" two other materials may therefore be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material. A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices. Here, the term "backend" generally refers to a section of a die which is opposite of a "frontend" and where an IC (integrated circuit) package couples to IC die bumps. For example, high-level metal layers (e.g., metal layer6and above in a ten-metal stack die) and corresponding vias that are closer to a die package are considered part of the backend of the die. Conversely, the term "frontend" generally refers to a section of the die that includes the active region (e.g., where transistors are fabricated) and low-level metal layers and corresponding vias that are closer to the active region (e.g., metal layer5and below in the ten-metal stack die example). Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive. While the disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations of such embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. The embodiments of the disclosure are intended to embrace all such alternatives, modifications, and variations as fall within the broad scope of the appended claims. In addition, well known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown within the presented figures, for simplicity of illustration and discussion, and so as not to obscure the disclosure.
Further, arrangements may be shown in block diagram form in order to avoid obscuring the disclosure, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the present disclosure is to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting. The following examples illustrate the various embodiments. Any one example can be combined with other examples described herein. Example 1: A machine-readable storage media having machine-readable instructions stored thereon that when executed cause one or more machines to perform a method comprising: receiving one or more input files indicative of a logic function of a logic circuit; classifying, from the one or more input files, inputs, outputs, and state elements as terminal nodes; segmenting the logic circuit into sub-circuits with the terminal nodes as input and output ports; generating a list of combinational circuits and sequential circuits by analyzing feedback paths in each of the sub-circuits; for each item in the list, performing combinational circuit synthesis on an individual sub-circuit of the sub-circuits if it is determined that the individual sub-circuit does not include a feedback path; for each item in the list, performing sequential circuit synthesis on the individual sub-circuit if it is determined that the individual sub-circuit includes a feedback path; adding synthesized outputs from performing the combinational circuit synthesis and from performing the sequential circuit synthesis to a list of synthesized circuits; and wiring circuits, to generate a synthesized circuit, in the list of synthesized circuits using the inputs and the outputs of the logic circuit. Example 2: The machine-readable storage media of example 1, wherein performing the combinational circuit synthesis comprises: iteratively breaking each sub-circuit of the sub-circuits into non-overlapping blocks; selecting an option that maximizes power, performance, and area for a block of the non-overlapping blocks; for each block of the non-overlapping blocks, synthesizing the block in view of the selected option using standard CMOS logic gates and majority or minority gates of any fan-in or fan-out, or a combination of them, wherein synthesizing the block results in a synthesized block; adding the synthesized block to a list of synthesized blocks; and combining synthesized blocks from the list of synthesized blocks, to hierarchically create larger cells and a complete circuit, wherein a larger cell of the larger cells is larger than a block of the non-overlapping blocks. Example 3: The machine-readable storage media of example 1, wherein performing the combinational circuit synthesis comprises: for each sub-circuit of the sub-circuits, performing majority inverter graph (MIG) synthesis to generate a MIG with connected nodes of majority gates and inverter gates; and heuristically pattern matching the MIG, with a standard cell library comprising logic gates, to generate a synthesized circuit. Example 4: The machine-readable storage media of example 3, wherein the logic gates include an n-bit adder and an n-bit multiplier.
Example 5: The machine-readable storage media of example 3, wherein heuristically pattern matching the MIG comprises: ordering the logic gates in the standard cell library from largest to smallest or smallest to largest, to generate ordered logic gates; defining a current pattern as a representation of a current standard cell in the ordered logic gates; determining whether a match exists between the current pattern and a subgraph of the MIG; and replacing the subgraph of the MIG with the current pattern if the match exists. Example 6: The machine-readable storage media of example 1, wherein performing the sequential circuit synthesis comprises: determining whether the individual sub-circuit of the sub-circuits is level-triggered; and performing level-triggered sequential synthesis on the individual sub-circuit if it is determined that the individual sub-circuit of the sub-circuits is level-triggered. Example 7: The machine-readable storage media of example 6, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist or specified as a hardware description language; introducing an auxiliary primary input to the individual sub-circuit for each feedback connection from an output of the individual sub-circuit to an input of the individual sub-circuit, if it is determined that the individual sub-circuit is a netlist or specified as a hardware description language; and performing combinational circuit synthesis on the individual sub-circuit after the auxiliary primary input is introduced. Example 8: The machine-readable storage media of example 7, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate, wherein performing level-triggered sequential synthesis comprises: feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary primary input; or feedback wiring from an output of the first majority or minority gate to an input of the first majority or minority gate. Example 9: The machine-readable storage media of example 6, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist; for each previous state in a Boolean expression for the individual sub-circuit, introducing an auxiliary input if it is determined that the individual sub-circuit is not a netlist; performing combinational circuit synthesis on the individual sub-circuit after the auxiliary input is introduced, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate; and feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary input. Example 10: The machine-readable storage media of example 1, wherein performing the sequential circuit synthesis comprises: determining whether the individual sub-circuit of the sub-circuits is pulse-triggered; and performing pulse-triggered sequential synthesis on the individual sub-circuit if it is determined that the individual sub-circuit of the sub-circuits is pulse-triggered.
Example 11: The machine-readable storage media of example 10, wherein performing the pulse-triggered sequential synthesis comprises: performing level-triggered sequential synthesis to generate a latch circuit; duplicating the latch circuit to generate a duplicate latch circuit; placing the duplicate latch circuit in a back-to-back configuration with the latch circuit; and wiring a first clock to the latch circuit and a second clock to the duplicate latch circuit, wherein the second clock is an inverse of the first clock. Example 12: The machine-readable storage media of example 11, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist; introducing an auxiliary primary input to the individual sub-circuit for each feedback connection from an output of the individual sub-circuit to an input of the individual sub-circuit, if it is determined that the individual sub-circuit is a netlist; and performing combinational circuit synthesis on the individual sub-circuit after the auxiliary primary input is introduced. Example 13: The machine-readable storage media of example 12, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate, wherein performing level-triggered sequential synthesis comprises: feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary primary input. Example 14: The machine-readable storage media of example 11, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist; for each previous state in a Boolean expression for the individual sub-circuit, introducing an auxiliary input if it is determined that the individual sub-circuit is not a netlist; performing combinational circuit synthesis on the individual sub-circuit after the auxiliary input is introduced, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate; and feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary input. Example 15: The machine-readable storage media of example 1, wherein performing the sequential circuit synthesis comprises: determining whether the individual sub-circuit of the sub-circuits is edge-triggered; and performing edge-triggered sequential synthesis on the individual sub-circuit if it is determined that the individual sub-circuit of the sub-circuits is edge-triggered.
Example 16: The machine-readable storage media of example 15, wherein performing the edge-triggered sequential synthesis comprises: adding an auxiliary input to the individual sub-circuit, wherein the auxiliary input represents a delayed clock signal; initializing an empty list of synthesized circuits with a plurality of majority or minority gates with different fan-in; identifying a majority or minority gate, from the plurality of majority or minority gates, having a largest fan-in; iteratively performing level-triggered sequential synthesis on the individual sub-circuit after the auxiliary input is added and using the majority or minority gate starting with the largest fan-in and then using a next largest fan-in; for each circuit output obtained after performing level-triggered sequential synthesis, adding wire delay to the delayed clock signal to generate a wire delayed clock; and for each circuit output obtained after performing level-triggered sequential synthesis, connecting the wire delayed clock to a delay element to generate a plurality of synthesized circuits. Example 17: The machine-readable storage media of example 16, wherein performing the edge-triggered sequential synthesis comprises: checking for oscillation in the plurality of synthesized circuits; and identifying a synthesized circuit, from the plurality of synthesized circuits, that meets power, performance, and area objectives. Example 18: The machine-readable storage media of example 16, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist; introducing an auxiliary primary input to the individual sub-circuit for each feedback connection from an output of the individual sub-circuit to an input of the individual sub-circuit, if it is determined that the individual sub-circuit is a netlist; and performing combinational circuit synthesis on the individual sub-circuit after the auxiliary primary input is introduced. Example 19: The machine-readable storage media of example 18, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate, wherein performing level-triggered sequential synthesis comprises: feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary primary input. Example 20: The machine-readable storage media of example 16, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist; for each previous state in a Boolean expression for the individual sub-circuit, introducing an auxiliary input if it is determined that the individual sub-circuit is not a netlist; performing combinational circuit synthesis on the individual sub-circuit after the auxiliary input is introduced, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate; and feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary input.
Example 21: The machine-readable storage media of example 3, wherein performing the majority inverter graph (MIG) synthesis, to generate a MIG with connected nodes of majority gates and inverter gates, comprises: computing a maximum fan-in for majority or minority gates while ignoring fan-out constraints, wherein a maximum fan-in for a majority or minority gate is equal to a maximum number of input bits of the majority or minority gate; performing logic initialization for each sub-circuit of the sub-circuits; determining whether a number of input bits of a sub-circuit of the sub-circuits is less than or equal to K; if the number of input bits is less than or equal to K, applying optimal synthesis to the sub-circuit using one or more of a truth table of the sub-circuit, binary integer programming (BIP), or Boolean satisfiability; and performing inverter minimization in response to applying the optimal synthesis to generate a synthesized MIG circuit. Example 22: The machine-readable storage media of example 21, wherein K is less than 10. Example 23: The machine-readable storage media of example 21, wherein performing the majority inverter graph (MIG) synthesis comprises: if the number of input bits is greater than K, determining whether the number of input bits of a sub-circuit of the sub-circuits is less than or equal to H, where H is greater than K; and performing inverter minimization on the sub-circuit to generate the synthesized MIG circuit. Example 24: The machine-readable storage media of example 23, wherein H is 20 or more. Example 25: The machine-readable storage media of example 23, wherein performing the majority inverter graph (MIG) synthesis comprises: if the number of input bits is greater than H, independently applying a plurality of hierarchical syntheses to the sub-circuit and gluing together results from the plurality of hierarchical syntheses to generate the synthesized MIG circuit. Example 26: The machine-readable storage media of example 21, wherein performing the logic initialization comprises: determining whether the number of input bits of the sub-circuit of the sub-circuits is less than or equal to K; determining whether logic of the sub-circuit is specified as a truth table; outputting the truth table if it is determined that the logic of the sub-circuit is specified as a truth table; and simulating or determining the truth table if it is determined that the logic of the sub-circuit is not specified as a truth table. Example 27: The machine-readable storage media of example 21, wherein performing the logic initialization comprises: determining whether the logic of the sub-circuit is specified as a hardware description language or a netlist if the number of input bits of the sub-circuit of the sub-circuits is greater than K; determining whether logic of the sub-circuit is specified as a truth table if it is determined that the logic of the sub-circuit is specified as a hardware description language or a netlist; and mapping the netlist to a MIG using majority or minority gates from the standard cell library if it is determined that the logic is not specified as a truth table.
Example 28: The machine-readable storage media of example 21, wherein performing the logic initialization comprises: determining whether the logic of the sub-circuit is specified as a hardware description language or a netlist if the number of input bits of the sub-circuit of the sub-circuits is greater than K; determining whether logic of the sub-circuit is specified as a truth table if it is determined that the logic of the sub-circuit is specified as a hardware description language or a netlist; applying logic synthesis on the sub-circuit to obtain a netlist if it is determined that the logic is specified as a truth table; and mapping the netlist to a MIG using majority or minority gates from the standard cell library if it is determined that the logic is not specified as a truth table. Example 29: The machine-readable storage media of example 27, wherein performing the logic initialization comprises: determining whether the logic of the sub-circuit is specified as a graph of higher-level blocks, if it is determined that the logic of the sub-circuit is not specified as a hardware description language or a netlist; and mapping the graph of the higher-level blocks to a MIG using the majority or minority gates from the standard cell library if it is determined that the logic is specified as a graph. Example 30: The machine-readable storage media of example 29, wherein performing the logic initialization comprises: determining whether the logic of the sub-circuit is specified as a truth table, if it is determined that the logic is not specified as a graph; parsing and simulating for a truth table if it is determined that the logic of the sub-circuit is not specified as a truth table; and generating a MIG using wide-input majority or minority gates by applying the truth table. Example 31: The machine-readable storage media of example 29, wherein generating a MIG using wide-input majority or minority gates comprises: generating a list of product terms of the logic of the sub-circuit; ordering the product terms in descending or ascending order of literal frequency of the product terms, to generate a list of ordered product terms; determining whether there is delay minimization for the list of ordered product terms; and applying logarithmic breakdown and majority gate synthesis of each product term in the list of ordered product terms if delay minimization is possible. Example 32: The machine-readable storage media of example 31, wherein generating a MIG using wide-input majority or minority gates comprises: applying linear breakdown and majority gate synthesis of each product term in the list of ordered product terms if delay minimization is not possible. Example 33: The machine-readable storage media of example 31, wherein generating a MIG using wide-input majority or minority gates comprises: generating a list of sum terms of the logic of the sub-circuit; tallying the product terms across the list of sum terms; ordering the list of sum terms in descending or ascending order of product term frequency, to generate a list of ordered sum terms; determining whether there is delay minimization for the list of ordered sum terms; and applying logarithmic breakdown and majority gate synthesis of each sum term in the list of ordered sum terms if delay minimization is possible, to generate the MIG.
Example 34: The machine-readable storage media of example 33, wherein generating a MIG using wide-input majority or minority gates comprises: applying linear breakdown and majority gate synthesis of each sum term in the list of ordered sum terms if delay minimization is not possible, to generate the MIG. Example 35: The machine-readable storage media of example 31, wherein generating the list of product terms comprises: applying one or more of a Karnaugh map, a Quine-McCluskey algorithm, or an Espresso heuristic to the sub-circuit. Example 36: A machine-readable storage media having machine-readable instructions stored thereon that when executed cause one or more machines to perform a method comprising: receiving one or more input files indicative of a logic function; generating a graph from the one or more input files; identifying inputs, state elements, and outputs from the graph; segregating the graph into subgraphs by grouping logic components between the inputs and the state elements, between the state elements, between the state elements and the outputs, and between the inputs and the outputs; determining whether a subgraph from among the subgraphs includes a feedback path; performing combinational circuit synthesis on the subgraph if it is determined that the subgraph does not include a feedback path; performing sequential circuit synthesis on the subgraph if it is determined that the subgraph includes a feedback path; and synthesizing a circuit using outputs from the combinational circuit synthesis and the sequential circuit synthesis. Example 37: The machine-readable storage media of example 36, wherein performing combinational circuit synthesis or performing sequential circuit synthesis comprises: selecting standard CMOS logic gates and majority or minority gates of any fan-in or fan-out to synthesize the circuit. Example 38: The machine-readable storage media of example 36, wherein the majority or minority gates include non-linear polar material. Example 39: The machine-readable storage media of example 38, wherein the non-linear polar material includes one of: ferroelectric material, para-electric material, or non-linear dielectric.
Example 40: The machine-readable storage media of example 39, wherein the ferroelectric material includes one of: Bismuth ferrite (BFO), or BFO with a doping material wherein the doping material is one of Lanthanum or elements from the lanthanide series of the periodic table; Lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb; a relaxor ferroelectric which includes one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST); a perovskite which includes one of: BaTiO3, PbTiO3, KNbO3, or NaTaO3; a hexagonal ferroelectric which includes one of: YMnO3 or LuFeO3; hexagonal ferroelectrics of a type h-RMnO3, where R is a rare earth element which includes one of: cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), or yttrium (Y); Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides, or their alloyed oxides; Hafnium oxides such as Hf(1-x)ExOy, where E can be Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y; Al(1-x)Sc(x)N, Ga(1-x)Sc(x)N, Al(1-x)Y(x)N, or Al(1-x-y)Mg(x)Nb(y)N; y doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein 'x' is a fraction; Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate; or an improper ferroelectric which includes one of: [PTO/STO]n or [LAO/STO]n, where 'n' is between 1 and 100. Example 41: The machine-readable storage media of example 36, wherein the input is in one or more forms of Verilog, truth table, Boolean expression, graph, or netlist. Example 42: The machine-readable storage media of example 36, wherein performing combinational circuit synthesis on the subgraph comprises iteratively breaking the subgraph into smaller blocks. Example 43: The machine-readable storage media of example 42, wherein performing combinational circuit synthesis comprises: selecting an option that maximizes power, performance, and area for the block; for each block of the smaller blocks, synthesizing the block in view of the selected option, using CMOS cells, or a combination of CMOS cells and majority or minority gate cells, wherein synthesizing the block results in a synthesized block; and combining the synthesized block, associated with each block of the smaller blocks, to hierarchically create larger cells and a complete circuit. Example 44: The machine-readable storage media of example 36, wherein performing combinational circuit synthesis on the subgraph comprises: breaking the subgraph into blocks; selecting an option that maximizes power, performance, and area for each block of the blocks of the subgraph; performing majority-minority inverter graph (MIG) synthesis on the blocks to generate MIG subgraphs; and matching a functionality of one or more standard building block cells to sections of the MIG subgraphs that maximizes power, performance, and area of the sections of the MIG subgraphs.
Example 45: The machine-readable storage media of example 44, wherein performing combinational circuit synthesis on the subgraph comprises: replacing the sections of the MIG subgraphs with the one or more standard building block cells if a matching functionality is determined; and combining the replaced sections of the MIG subgraphs with other sections of the MIG subgraphs to generate a complete circuit. Example 46: The machine-readable storage media of example 44, wherein the one or more standard building block cells include CMOS cells, majority or minority gate cells, or a combination of CMOS cells and majority or minority gate cells. Example 47: The machine-readable storage media of example 44, wherein performing combinational circuit synthesis on the subgraph comprises: prioritizing and selecting one or more larger building block cells over the one or more standard building block cells if the one or more larger building block cells match a functionality of the sections of the MIG subgraphs that maximizes power, performance, and area of the sections of the MIG subgraphs; replacing the sections of the MIG subgraphs with the one or more larger building block cells if a matching functionality is determined; and combining the replaced sections of the MIG subgraphs with other sections of the MIG subgraphs to generate a complete circuit. Example 48: The machine-readable storage media of example 36, wherein performing sequential circuit synthesis on the subgraph comprises: determining if the subgraph indicates an edge-triggered sequential; adding an input variable to the subgraph to represent a previous output state, if it is determined that the subgraph indicates a non-edge-triggered sequential; applying a truth table for a latch to the subgraph with the input variable; performing majority-minority inverter graph (MIG) synthesis on the subgraph, in response to applying the truth table, to generate MIG subgraphs; modifying the MIG subgraphs by wiring an output of the latch to one or more nodes of a majority or minority gate cell to receive input from the previous output state; and generating a synthesized circuit in response to modifying the MIG subgraphs. Example 49: The machine-readable storage media of example 36, wherein performing sequential circuit synthesis on the subgraph comprises: determining if the subgraph indicates an edge-triggered sequential; if the subgraph indicates the edge-triggered sequential, determining if the edge-triggered sequential is a master-slave architecture; if the edge-triggered sequential is a master-slave architecture, adding an input variable to the subgraph to represent a previous output state; applying a truth table for a latch to the subgraph with the input variable; performing majority-minority inverter graph (MIG) synthesis on the subgraph, in response to applying the truth table, to generate MIG subgraphs; modifying the MIG subgraphs by wiring an output of the latch to one or more nodes of a majority or minority gate cell to receive input from the previous output state; duplicating the latch to generate a duplicated latch; coupling the latch with the duplicated latch to generate a master-slave architecture; wiring a clock to the latch and an inverted clock to the duplicated latch after modifying the MIG subgraphs; and generating a synthesized circuit in response to wiring the clock to the latch and the inverted clock to the duplicated latch.
Example 50: The machine-readable storage media of example 36, wherein performing sequential circuit synthesis on the subgraph comprises: determining if the subgraph indicates an edge-triggered sequential; if the subgraph indicates the edge-triggered sequential, determining if the edge-triggered sequential is a master-slave architecture; if the edge-triggered sequential is not a master-slave architecture, adding a first input variable to the subgraph to represent a previous output state; if the edge-triggered sequential is not a master-slave architecture, adding a second input variable to the subgraph to represent a delayed clock; applying a truth table for a flip-flop to the subgraph; performing majority-minority inverter graph (MIG) synthesis on the subgraph, in response to applying the truth table, to generate MIG subgraphs; modifying the MIG subgraphs by wiring an output of the flip-flop to one or more nodes of a majority or minority gate cell to receive input from the previous output state; modifying the MIG subgraphs by wiring the delayed clock as a clock to a delay element; and generating a synthesized circuit in response to wiring the output of the flip-flop and wiring the delayed clock as the clock to the delay element. Example 51: The machine-readable storage media of example 44, wherein performing majority-minority inverter graph (MIG) synthesis on the subgraph comprises: receiving inputs on the blocks of the subgraph; identifying a number of blocks in the subgraph; comparing the number of blocks with a first threshold; performing exact synthesis of the subgraph using one or more solvers if it is determined that the number of blocks is less than or equal to the first threshold; performing inverter minimization in response to the exact synthesis; and synthesizing the subgraph in response to performing inverter minimization. Example 52: The machine-readable storage media of example 44, wherein performing majority-minority inverter graph (MIG) synthesis on the subgraph comprises: identifying a number of blocks in the subgraph; comparing the number of blocks with a first threshold; comparing the number of blocks with a second threshold, if it is determined that the number of blocks is greater than the first threshold, wherein the second threshold is larger than the first threshold; simulating the subgraph to determine the signal flowing through each edge of the subgraph if it is determined that the number of blocks is less than or equal to the second threshold; topologically splitting the subgraph into first subgraphs, equivalent to the first threshold, using heuristics that maximize power, performance, and area for each block of the blocks of the subgraph; performing exact synthesis of each subgraph of the first subgraphs topologically, using the signal flowing through each edge of the graph, to generate synthesized first subgraphs; performing inverter minimization in response to the exact synthesis; adding the synthesized first subgraphs to a new graph; determining whether the new graph has better power, performance, and area than the subgraph on which MIG synthesis is performed; and synthesizing the synthesized first subgraphs if it is determined that the new graph is worse in power, performance, and area than the subgraph on which MIG synthesis is performed.
Example 53: The machine-readable storage media of example 52, wherein performing majority-minority inverter graph (MIG) synthesis on the subgraph comprises simulating the subgraph to determine the signal flowing through each edge of the subgraph if it is determined that the new graph is better in power, performance, and area than the subgraph on which MIG synthesis is performed. Example 54: The machine-readable storage media of example 52, wherein the signal flowing through each edge of the graph is determined by applying one or more solvers. Example 55: The machine-readable storage media of example 54, wherein the one or more solvers include: a satisfiability solver (SAT) and Mixed Integer Linear Programming (MIP). Example 56: The machine-readable storage media of example 54, wherein the one or more solvers include: a satisfiability solver (SAT) and Mixed Integer Linear Programming (MIP). Example 57: The machine-readable storage media of example 52, wherein the inputs include: gate type, maximum gate fan-in, area or delay target, and description of blocks. Example 58: The machine-readable storage media of example 54, wherein the description of blocks includes one or more of: Verilog, graph netlist, or truth table. Example 59: The machine-readable storage media of example 44, wherein performing majority-minority inverter graph (MIG) synthesis on the subgraph comprises: identifying a number of blocks in the subgraph; comparing the number of blocks with a first threshold; comparing the number of blocks with a second threshold, if it is determined that the number of blocks is greater than the first threshold, wherein the second threshold is larger than the first threshold; topologically splitting the subgraph into second subgraphs, wherein each second subgraph has blocks less than or equal to the second threshold; simulating each of the second subgraphs to determine the signal flowing through each edge of the second subgraph; topologically splitting each of the second subgraphs into third subgraphs using heuristics that maximize power, performance, and area for each block of the blocks of the second subgraphs; performing exact synthesis of each subgraph of the third subgraphs topologically, using the signal flowing through each edge of the second subgraph and by applying one or more solvers, wherein the exact synthesis of each subgraph of the third subgraphs generates exacted third subgraphs; performing inverter minimization in response to the exact synthesis; adding the synthesized second subgraphs to the exacted third subgraphs, to generate a new graph; determining whether the new graph has better power, performance, and area than the subgraph on which MIG synthesis is performed; and synthesizing the new graph if it is determined that the new graph is worse in power, performance, and area than the subgraph on which MIG synthesis is performed. Example 60: The machine-readable storage media of example 59, wherein performing majority-minority inverter graph (MIG) synthesis on the subgraph comprises: topologically splitting the subgraph if it is determined that the new graph has better power, performance, and area than the subgraph on which MIG synthesis is performed. Example 61: The machine-readable storage media of example 59, wherein the second subgraphs are overlapping subgraphs.
Example 62: A machine-readable storage media having machine-readable instructions stored thereon that when executed cause one or more machines to perform a method comprising: receiving one or more input files indicative of a logic function; generating a graph from the one or more input files; identifying inputs, state elements, and outputs from the graph; segregating the graph into subgraphs by grouping logic components between the inputs and the state elements, between the state elements, between the state elements and the outputs, and between the inputs and the outputs; determining whether a subgraph from among the subgraphs includes a feedback path; selecting standard CMOS logic gates and majority or minority gates of any fan-in or fan-out; performing combinational circuit synthesis on the subgraph, using the selected standard CMOS logic gates and majority or minority gates, if it is determined that the subgraph does not include a feedback path; performing sequential circuit synthesis on the subgraph, using the selected standard CMOS logic gates and majority or minority gates, if it is determined that the subgraph includes a feedback path; and synthesizing a circuit using outputs from the combinational circuit synthesis and the sequential circuit synthesis. An abstract is provided that will allow the reader to ascertain the nature and gist of the technical disclosure. The abstract is submitted with the understanding that it will not be used to limit the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. | 213,620 |
11861279 | DETAILED DESCRIPTION Some embodiments provide a CAD tool to create digital logic with optimized power, performance (e.g., delay), and area (PPA) for various standard cells and functional blocks (FUBs) using various optimization approaches. In some embodiments, the CAD tool is capable of receiving a number of inputs that describe a given logic circuit. These inputs can be in a hardware description language (HDL) such as Verilog or VHDL, a netlist, a graph of higher-level blocks, Boolean expressions, or truth tables. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, etc. Given these inputs that describe a logic circuit, the CAD tool separates the circuit into a combinational logic component (or circuit) and a sequential logic component (or circuit). For the combinational logic synthesis, the CAD tool breaks the circuit down into different blocks informed by the high-level design and optimizes and synthesizes each block separately, in accordance with some embodiments. Various embodiments use multiple ways to optimize the combinational logic using majority and/or minority inverter graph (MIG) synthesis and optimization, including mapping portions of the optimized MIG to standard cells. The majority or minority gates here can be ferroelectric capacitor-based majority or minority gates, in accordance with some embodiments. However, the embodiments are not limited to ferroelectric capacitor-based majority or minority gates, and any technology used for making majority or minority gates is applicable here. A majority or minority gate is a universal gate and can be used to build all types of standard cells and building blocks. Depending upon the logic function, M-gate-based synthesis may also provide a smaller overall gate count. The CAD tool of some embodiments uses M-gates to optimize PPA in at least two ways. One way is to use M-gates as fundamental gates to replace any type of existing gate with 1:1 mapping if advantageous. Another way is to use these gates to reduce the number of gate counts wherever possible. For M-gate-based synthesis, in one technique, the majority gate is a basic building block; inverters are introduced to build minority gates as needed, and buffers and/or inverters are sometimes introduced to provide higher fan-outs for the circuits. In some embodiments, the scheme unfolds feedback loops in sequential logic, resulting in combinational logic, and applies logic synthesis techniques to produce a few candidate solutions. For example, the CAD tool synthesizes sequential circuits by transforming them into combinational logic via unfolding loops, synthesizing the resultant combinational logic, and recreating the loops afterwards. Among the various synthesized versions of the sequential circuits, the CAD tool goes through each solution (e.g., each synthesized circuit in this context), checks its functionality to avoid any race conditions, and returns the most optimal functional solution. The scheme of various embodiments uses wide-input M-gates in combination with CMOS gates. This leads to fewer gates and a smaller logic depth. In some embodiments, inverter minimization is performed as a post-processing activity following M-gate optimization, which does not change the number of M-gates or the logic depth.
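The universality of M-gates noted above is easy to check concretely. The following is a minimal illustrative sketch in Python (not part of the CAD tool; the function names are hypothetical) of an odd fan-in majority function and the standard identities by which a majority gate, given constants and inversion, reproduces AND, OR, and NOT:

    # Illustrative sketch: an n-input majority function and the identities
    # that make it a universal gate when constants and inversion are available.
    def maj(*bits):
        """Return 1 if more than half of the (odd number of) input bits are 1."""
        assert len(bits) % 2 == 1, "M-gates here are assumed to have odd fan-in"
        return int(sum(bits) > len(bits) // 2)

    def minority(*bits):
        """A minority gate is a majority gate followed by an inverter."""
        return 1 - maj(*bits)

    for a in (0, 1):
        for b in (0, 1):
            assert maj(a, b, 0) == (a & b)  # MAJ(a, b, 0) = AND(a, b)
            assert maj(a, b, 1) == (a | b)  # MAJ(a, b, 1) = OR(a, b)
    assert minority(0) == 1 and minority(1) == 0  # one-input minority = inverter

The last assertion mirrors the observation, made later in this description, that a minority gate with one input is an inverter.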
Regarding inverter minimization, the CAD tool of various embodiments minimizes the number of inverters in the circuit or along a critical timing path. In some embodiments, the CAD tool accounts for design feedback that involves adding extra CMOS buffers or inverters to drive a large fan-out (or load). Together, the various mechanisms of the scheme provide an improved PPA over known logic synthesis schemes. The CAD tool of various embodiments has the capability to synthesize both combinational and sequential FUBs. In some embodiments, the CAD tool applies a gate pruning algorithm to facilitate both single and multiple fan-in M-gate synthesis. Some embodiments use an extended satisfiability (SAT) formulation to employ both majority and minority gates as well as wide-input M-gates. In some embodiments, the CAD tool uses a binary integer programming (BIP) framework for logic optimization of a MIG. As such, a specialized framework is established where threshold gate weights are −1, 0, and 1, which allows for the creation of optimal majority or minority inverter graphs. The BIP framework also allows for depth optimization to be explicitly captured in the program constraints. In some embodiments, the framework allows the use of either single fan-in or multiple fan-in M-gates for synthesis. The CAD tool of some embodiments provides inverter minimization per block (e.g., per standard cell or FUB), and thus provides a direct benefit within each block; for example, by reducing the number of inverters, power savings can be realized. In some embodiments, inverter minimization is performed as a post-synthesis step to reduce the total number of inverters in the block or along a critical timing path of the block. In some embodiments, fan-out constraints and requirements per M-gate are enforced as a post-synthesis step by adding inverters and buffers, as needed, to drive a higher fan-out. In some embodiments, hierarchical synthesis is performed to further optimize synthesized circuits by taking advantage of "don't care" input conditions in interior sub-blocks. In some embodiments, the CAD tool uses gate count initialization in optimal synthesis to accelerate the search for an optimal MIG. There are many technical effects of the various embodiments. For example, the CAD tool of various embodiments can take M-gates with a large fan-in (e.g., 3, 5, 7, or more inputs) and a standard cell library and produce optimally synthesized logic circuits. Other technical effects will be evident from the various embodiments and figures. In the following description, numerous details are discussed to provide a more thorough explanation of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art, that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present disclosure. Note that in the corresponding drawings of the embodiments, signals are represented with lines. Some lines may be thicker, to indicate more constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. Such indications are not intended to be limiting. Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or a logical unit.
Any represented signal, as dictated by design needs or preferences, may actually comprise one or more signals that may travel in either direction and may be implemented with any suitable type of signal scheme. FIG. 1 illustrates top-level architecture 100 of a computer-aided design (CAD) tool for logic synthesis of a mix of CMOS gates and majority and minority logic gates of various fan-in and/or fan-out, in accordance with some embodiments. Architecture 100 comprises iterative wrapper 101 with logic synthesis core 101a, which is the nucleus of the CAD tool. In various embodiments, iterative wrapper 101 has access to a variety of cells to perform logic synthesis. These cells include standard CMOS cells 102, such as a CMOS-based inverter, NAND gate, NOR gate, XOR gate, flip-flop (FF), latch, multiplexer, and complex gates including a half-adder, etc. These CMOS cells can be part of a standard library for a particular process technology node. In some embodiments, iterative wrapper 101 has access to a standard library of majority gates 103 that have 'x' fan-in and 'y' fan-out, where 'x' is 3 or more, and where 'y' is 1 or more. In some embodiments, the majority gates comprise ferroelectric capacitors to receive 3 or more inputs, where the ferroelectric capacitors are coupled together at another end. In some embodiments, the majority gates comprise non-ferroelectric input capacitors that receive 3 or more inputs, wherein the non-ferroelectric capacitors are coupled together at another end, which is coupled to a ferroelectric capacitor. In some embodiments, iterative wrapper 101 has access to a standard library of minority gates 104 that have 'x' fan-in and 'y' fan-out, where 'x' is 3 or more, and where 'y' is 1 or more. Minority gates 104 are essentially majority gates with an output inverter. The majority gate 103 and minority gate 104 libraries can include basic cells such as a NAND gate, NOR gate, XOR gate, flip-flop (FF), adder, etc. In some embodiments, iterative wrapper 101 receives inputs 105 representing a logic circuit that is to be synthesized. The inputs can be in a number of formats including a hardware description language (HDL) such as Verilog or VHDL, a netlist, a graph of higher-level blocks, Boolean expressions, or truth tables. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, etc. Given these inputs that describe a logic circuit, the CAD tool separates the circuit into a combinational logic component (or circuit) and a sequential logic component (or circuit). The output of iterative wrapper 101 is synthesized circuit 106, which includes a mix of CMOS standard cells and majority and/or minority logic gates of various fan-in and fan-out to provide the most optimal circuit design for use in a processor or an integrated circuit (IC). FIG. 2 illustrates flowchart 200 of a method of logic synthesis using a majority or minority inverter graph (MIG) having majority and minority logic gates of various fan-in and/or fan-out and existing standard cells, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoid-shaped blocks are inputs or outputs.
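Referring back to inputs 105 of FIG. 1, the set of knobs the iterative wrapper consumes can be pictured as a single configuration record. The sketch below is a hypothetical Python rendering for illustration only; the field names and defaults are assumptions, not the tool's actual interface:

    # Hypothetical configuration record for the inputs 105 described above;
    # the names and defaults are illustrative assumptions only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SynthesisInputs:
        circuit: str                    # HDL text, netlist, Boolean expression, or truth table
        circuit_format: str             # e.g., "verilog", "vhdl", "netlist", "graph", "truth_table"
        mgate_fan_ins: List[int] = field(default_factory=lambda: [3, 5, 7])  # allowed M-gate fan-ins
        max_fan_out: int = 4            # fan-out constraint per M-gate
        prefer_minority: bool = False   # preference of majority vs. minority gates
        ppa_objective: str = "balanced" # e.g., "power", "performance", "area", "balanced"

    cfg = SynthesisInputs(circuit="out = (a & b) | c;", circuit_format="boolean")

Capturing the constraints this way makes explicit that fan-in and fan-out limits and the PPA objective travel with the circuit description through every stage of the flow.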
Flowchart 200 provides a top-level view of the overall logic synthesis flow. Here, the logic synthesis scheme uses a majority and/or minority inverter graph (MIG) and existing standard cells, and works for both sequential and combinational logic. The logic synthesis scheme allows wide inputs (e.g., an odd number of 3 or more) and multiple fan-in inputs (e.g., the optimized MIG can contain M-gates with a varied number of inputs). Block 201 represents inputs for a logic circuit that is to be synthesized using a mix of CMOS and majority and/or minority gates. The inputs can be in a number of formats including an HDL such as Verilog or VHDL either describing the functionality or circuit connectivity, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, etc. Given these inputs that describe a logic circuit, the CAD tool separates the circuit into a combinational logic component (or circuit) and a sequential logic component (or circuit). At block 202, the CAD tool identifies the inputs of the logic, state elements (e.g., latches and flip-flops), and outputs of the logic for segmentation from input 201. In some embodiments, the inputs and outputs of the logic function are assumed to require state elements such as flip-flops and latches. In some embodiments, big logic blocks are broken down and pipelining is implemented with intermediate state elements as needed to meet clocking and throughput requirements, as indicated by block 203. This is done with the delay and energy requirements of each component taken into consideration. In some embodiments, the breakdown with state elements may be performed during a post-processing phase depending upon the synthesized results and the delay and energy constraints of the overall logic function unit. At block 203, the inputs, outputs, and state elements are classified as terminal nodes. The CAD tool then segments the logic circuit into sub-circuits with the terminal nodes as ports (e.g., input and output ports). Each sub-circuit is combinational, in accordance with various embodiments. In various embodiments, the CAD tool creates a list of separate combinational circuits and sequential components or circuits, and initializes an empty synthesized list. Once the combinational and sequential logic blocks are identified separately, specific synthesis flows for combinational and sequential logic blocks are used to optimize for the PPA requirements. The list of combinational circuits (or components) and sequential circuits (or components) is saved, as indicated by block 204. At block 205, a determination is made whether the list of combinational circuits (or components) and sequential circuits (or components) is exhausted. This check is made to go through each circuit in the list and classify it as a combinational circuit or a sequential circuit. If the list is not exhausted, the process proceeds to block 206, where the current circuit is assigned as the next circuit in the list, and then that circuit is analyzed at block 207 to determine whether it is combinational. In a circuit without state elements or with only input and output registers, a region between the inputs and outputs comprises combinational circuit(s). In a pipelined circuit, the region between consecutive pipeline registers comprises combinational circuit(s). By defining inputs, outputs, and state elements as terminal nodes, the circuit can be segmented into sub-circuits (or sub-graphs) with input and output terminals. The sub-circuits are combinational circuits, in accordance with various embodiments.
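A minimal sketch of this terminal-node segmentation (blocks 202 and 203) follows; the graph encoding and all names are hypothetical illustrations, not the tool's implementation:

    # Hypothetical sketch of blocks 202-203: cut the circuit graph at the
    # terminal nodes (inputs, outputs, and state elements) so that each
    # remaining region is purely combinational.
    from collections import deque

    def segment(fanout, terminals):
        """fanout maps each node to the nodes it drives; terminals holds the
        primary inputs/outputs and state elements (latches and flip-flops)."""
        seen, regions = set(), []
        for start in sorted(terminals):
            for successor in fanout.get(start, ()):
                if successor in terminals or successor in seen:
                    continue
                region, queue = [], deque([successor])
                while queue:  # flood-fill forward up to the next terminal nodes
                    node = queue.popleft()
                    if node in seen or node in terminals:
                        continue
                    seen.add(node)
                    region.append(node)
                    queue.extend(fanout.get(node, ()))
                regions.append(region)
        return regions  # each region is one combinational sub-circuit

    g = {"in1": ["g1"], "in2": ["g1"], "g1": ["g2"], "g2": ["ff1"], "ff1": ["g3"], "g3": ["out1"]}
    print(segment(g, {"in1", "in2", "ff1", "out1"}))  # [['g3'], ['g1', 'g2']]

Each returned region would then flow to block 208 or block 209, depending on whether feedback is detected within it.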
For combinational circuits, combinational circuit synthesis is applied at block 208. For circuits identified as sequential circuits (e.g., because they have a feedback loop), sequential component synthesis is applied at block 209. The synthesized circuits from block 208 and block 209 are added to a list of synthesized circuits, as indicated by block 210. The process then proceeds to block 205, where it is determined whether the list of circuits is exhausted. If not, the process continues iteratively as discussed herein. If the list of circuits is exhausted, the process proceeds to block 211. At block 211, circuits in the list of synthesized circuits are wired using input and output terminals of the original logic circuit that is read from inputs 201. The resultant output after wiring the circuits in the list of synthesized circuits is the synthesized circuit of the original logic circuit, as indicated by block 212. The processes for combinational circuit synthesis of block 208 and sequential circuit synthesis of block 209 are discussed with reference to subsequent figures herein, in accordance with some embodiments. FIG. 3 illustrates flowchart 300 of a method for combinational logic synthesis (e.g., block 208) using a top-down approach, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoid-shaped blocks are inputs or outputs. Flowchart 300 illustrates a method that breaks down a logic function into small sub-circuits, performs logic synthesis with those, and then combines them to produce the final results. Flowchart 300 can be used in isolation (e.g., independently) or as part of flowchart 200 to optimize a combinational circuit. Flowchart 300 begins with inputs for a combinational circuit, as indicated by block 301. The inputs can be in a number of formats including an HDL such as Verilog or VHDL either describing the functionality or circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, etc. At block 302, the CAD tool (e.g., iterative wrapper 101) iteratively breaks the combinational circuit into non-overlapping smaller blocks. For example, the combinational circuit is segmented into non-overlapping smaller blocks until the blocks are small enough to be synthesized by either using only standard cells or the MIG cells from the MIG synthesis tool. The MIG synthesis tool mixes standard cells and MIG cells. At block 302, the CAD tool also keeps track of input and output connections of the smaller blocks. In some embodiments, the CAD tool initializes the empty list of synthesized small blocks.
Here, initializing generally refers to creating an empty list that is used to store synthesized circuits. Here, small blocks generally refer to sub-circuits with a maximum of K inputs, where K is, for example, 5 or 6. The output of block 302 is a list of small combinational circuit blocks, indicated by block 304. This list of small combinational circuit blocks is then iteratively processed, and the best synthesized sub-circuit or block is selected based on PPA to be the cell for the small block. This process is indicated by blocks 305, 306, and 307. At block 305, the CAD tool determines whether the list of small combinational circuit blocks is exhausted (e.g., whether all small blocks in the list are processed). As each block is processed in the list, the current block is assigned to the next block in the list so that the next block is processed, as indicated by block 306. This process continues until all blocks in the list 304 are processed. At block 307, the current block is synthesized using standard cells and/or a combination of standard cells and MIG cells using MIG synthesis tools. The standard cell set also comprises circuit representations of bigger building blocks such as adders and multipliers. In some embodiments, the synthesis results are compared with synthesized blocks that implement the same functionality, and the best circuit is chosen based on PPA constraints. In one example, if there is an off-the-shelf CMOS synthesis tool available, its synthesis of the small block can be compared to the MIG synthesis tool's result and the better circuit is selected based on PPA constraints, since M-gates here are compatible with CMOS logic gates. The synthesized block that gives the best PPA (e.g., that meets the PPA objectives as closely as possible) is then selected and added to the list of synthesized small blocks, as indicated by block 308. The process then proceeds to block 305, the next circuit block becomes the current block, and the process is repeated, filling the list of synthesized small blocks. When the entire list of small combinational circuit blocks (block 304) is processed (or exhausted), the process proceeds to block 309. At block 309, the synthesized small block cells in the list of synthesized small blocks are combined to hierarchically create bigger cells and finally the full combinational circuit. For example, the small synthesized block cells are rolled up to represent the full synthesized combinational circuit 310. FIG. 4 illustrates flowchart 400 of a method for combinational logic synthesis (e.g., block 208) using a bottom-up approach, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoid-shaped blocks are inputs or outputs. Flowchart 400 illustrates a method that, unlike the top-down approach of FIG. 3, may not use separation of a combinational circuit into non-overlapping subblocks prior to MIG synthesis. Rather, the entire combinational circuit is passed to the MIG synthesis tool, which transforms the circuit into a majority and/or minority inverter graph (MIG) and then optimizes the graph based on the PPA requirement.
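For reference, one plausible encoding of such a MIG (purely an illustration; the class and names below are assumptions, not the tool's data model) is a graph whose nodes are majority gates and whose fan-in edges may carry inversions:

    # Illustrative MIG encoding: each node is a majority gate; each fan-in
    # edge may be inverted, which is how minority gates and inverters are
    # expressed within the graph.
    from dataclasses import dataclass, field
    from typing import List, Tuple, Union

    @dataclass
    class MigNode:
        # Each fan-in is (source, inverted); a source is a primary input name
        # or another MigNode. Odd fan-ins of 3, 5, 7, ... are assumed.
        fanins: List[Tuple[Union[str, "MigNode"], bool]] = field(default_factory=list)

        def eval(self, assignment):
            votes = 0
            for src, inverted in self.fanins:
                v = assignment[src] if isinstance(src, str) else src.eval(assignment)
                votes += (1 - v) if inverted else v
            return int(votes > len(self.fanins) // 2)

    # MAJ(a, b, NOT c) on one assignment: votes are 1 + 0 + 1 = 2 of 3.
    node = MigNode([("a", False), ("b", False), ("c", True)])
    print(node.eval({"a": 1, "b": 0, "c": 0}))  # 1

Because such a network is feed-forward, it corresponds to the directed acyclic graph (DAG) view mentioned below.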
FIG.4illustrates flowchart400of a method for combinational logic synthesis (e.g., block208) using a bottom-up approach, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowchart400illustrates a method that, unlike the top-down approach ofFIG.3, may not use separation of a combinational circuit into non-overlapping subblocks prior to MIG synthesis. Rather, the entire combinational circuit is passed to the MIG synthesis tool which transforms the circuit into a majority and/or minority inverter graph (MIG) and then optimizes the graph based on the PPA requirement. After MIG optimization, subgraphs of the optimized MIG are functionally mapped to building block cells with the best PPA, where the building block cells can be based on existing standard cells or a combination of existing standard cells and MIG cells. In flowchart400, the full logic function is synthesized and pattern matching is used to map sections of the MIG to standard cells. Flowchart400can be used in isolation (e.g., independently) or as part of flowchart200to optimize a combinational circuit. Flowchart400begins with inputs for a combinational circuit as indicated by block401. The inputs can be in a number of formats including HDL such as Verilog or VHDL either describing the functionality or circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, etc. At block402, the CAD tool performs MIG synthesis. In some embodiments, the MIG synthesis scheme assumes that the logic circuit can be synthesized using a feed-forward network of M-gates and inverters. A majority gate followed by an inverter is equivalent to a minority gate. Minority gates could be made a fundamental building block. A minority gate with one input is an inverter. Such a network of gates is equivalent to a directed acyclic graph (DAG). MIG synthesis relies on logic initialization, hierarchical synthesis, optimal synthesis, inverter minimization, and post-synthesis algorithms. In various embodiments, MIG synthesis is a flexible algorithm that allows the use of majority and/or minority gates (M-gates), and wide-input and single and/or multiple fan-in M-gates. The output of MIG synthesis is a MIG as indicated by block403. At block404, the CAD tool applies heuristic pattern matching with a standard cell library. Heuristic pattern matching comprises mapping sections of the MIG to standard cells. The standard cell library comprises gates or higher-level blocks such as n-bit adders, n-bit multipliers, multiplexers, decoders, etc. as indicated by block405. These standard cell library gates or higher-level blocks are input to the heuristic pattern matching scheme of block404. The output of the heuristic pattern matching scheme is the synthesized combinational circuit as indicated by block406. FIG.5illustrates flowchart500of a method for heuristic pattern matching (e.g., block404) with a standard cell library, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowchart500illustrates a method of heuristic pattern matching with a standard cell library. Here, a pattern-matching heuristic is used in the bottom-up approach ofFIG.4for mapping sections of the MIG to standard cells.
Flowchart500shows one heuristic for solving the problem, by ordering the cells in the cell library in descending order of size (e.g., number of input and output ports), selecting one cell at a time until the library is exhausted, and functionally matching the selected cell to portions of the MIG. Flowchart500begins with block501(e.g., MIG block403). Block501includes the MIG input and the standard cell library comprising gates and/or higher-level blocks such as n-bit adders, n-bit multipliers, etc. At block502, the CAD tool orders the standard cells according to size from largest to smallest. A person skilled in the art would appreciate that the order of cells can be flipped. For example, the cells can be ordered from a smallest size to a largest size instead. The size may be determined by a total device count and/or total device size per cell. In some embodiments, the size may be determined by a layout footprint of the cell. The ordered list of standard cells is then iteratively processed for a match. This iterative process comprises blocks503,504,505,506,507, and508, which adopt a greedy algorithm. At block503, a determination is made regarding whether the ordered cells of the standard cell library are exhausted. In the beginning of the process, the library is not exhausted, and the process proceeds to block504, where the current standard cell is assigned the next (e.g., the first) standard cell in the ordered list. One by one, each cell in the list is traversed. At block505, a current pattern is used as a representation for the current standard cell in the ordered list. The current pattern comprises characteristics of a subgraph of a MIG. The characteristics can be a set of truth tables. From the set of truth tables, the number of inputs, the number of outputs, and the functionality can be easily extracted. The characteristics could also be the Boolean formulas for the outputs, in accordance with some embodiments. At block506, the CAD tool uses the current pattern to find a matching subgraph in the MIG. A determination is made regarding the match at block507. If the current pattern matches a subgraph in the MIG, the process proceeds to block508where the matching subgraph of the MIG is replaced with the current standard cell. The process then proceeds to block503and is repeated until the entire list of ordered cells is exhausted with this matching process. If the current pattern does not match a subgraph in the MIG, the process proceeds to block503. After the ordered list of cells is exhausted, the final MIG represents the synthesized combinational circuit509.
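By way of a non-limiting illustration, the greedy loop of blocks502-508can be sketched in Python as follows. The graph and cell representations are hypothetical; an actual tool would match on truth tables or Boolean formulas of MIG subgraphs as described above.

def greedy_pattern_match(mig_subgraphs, cell_library, size_of, matches, replace):
    """Sketch of FIG. 5: order library cells from largest to smallest and
    greedily replace functionally matching MIG subgraphs with each cell."""
    ordered = sorted(cell_library, key=size_of, reverse=True)   # block 502
    for cell in ordered:                                        # blocks 503/504
        # Block 505: the cell's pattern (e.g., its truth tables) is used
        # to search the MIG for a functionally equivalent subgraph.
        for subgraph in list(mig_subgraphs):                    # block 506
            if matches(cell, subgraph):                         # block 507
                replace(mig_subgraphs, subgraph, cell)          # block 508
    return mig_subgraphs                                        # block 509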
FIG.6illustrates high-level flowchart600of sequential logic synthesis (e.g., block209), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowchart600illustrates a method where the sequential circuit is analyzed and synthesized according to its classification. If a feedback loop is found in a circuit of a logic block to be synthesized, the circuit may be a sequential circuit. Depending on the circuit's response to a clock, the sequential circuit can be edge-triggered, pulse-triggered, or level-triggered. For each type of sequential circuit classification, a particular synthesis process is used. Flowchart600can be used in isolation (e.g., independently) or as part of flowchart200to optimize a sequential circuit. Flowchart600starts with the description of the sequential circuit. The description is provided as inputs601. The inputs can be in a number of formats including HDL such as Verilog or VHDL either describing the functionality or circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, etc. In one example, the inputs for the sequential circuits are specified as a netlist or Boolean expressions. Any tool that can convert an input description (e.g., HDL) into a netlist can be used to prepare input for flowchart600, in accordance with some embodiments. One reason for using netlists and Boolean expressions is that standard truth tables and graphs of higher-level blocks may not capture and reveal the feedback loop in the sequential circuit. At block602, the CAD tool determines whether the sequential circuit is level-triggered. Examples of level-triggered sequential circuits are latches. The latches can be level-high or level-low latches. If the sequential circuit is level-triggered, the process proceeds to block607for level-triggered sequential synthesis. If the circuit is not level-triggered, the process proceeds to block603where a determination is made regarding whether the circuit is pulse-triggered. If the sequential circuit is pulse-triggered, the process proceeds to block605where pulse-triggered sequential synthesis is performed. Examples of pulse-triggered sequential circuits include back-to-back coupled latches configured as a D flip-flop (D-FF), where each latch is controlled by a different clock (e.g., a clock and an inverse of the clock). If the sequential circuit is not pulse-triggered, it is expected to be edge-triggered. In that case, the CAD tool performs edge-triggered sequential synthesis. Examples of edge-triggered sequential circuits are rising-edge D-FFs and falling-edge D-FFs. The sequential circuits can have scan gadgets for debug or design-for-test (DFT). The output of edge-triggered sequential synthesis604, pulse-triggered sequential synthesis605, or level-triggered sequential synthesis607is a synthesized sequential circuit606. FIG.7illustrates flowchart700of a method of level-triggered sequential logic synthesis (e.g., block607), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowchart700provides an optimal way of synthesizing level-sensitive sequential components such as latches. Flowchart700begins with sequential circuit input701. The inputs can be in a number of formats including HDL such as Verilog or VHDL either describing the functionality or circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function.
The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, etc. In one example, the inputs for the sequential circuits are specified as a netlist or Boolean expressions. Any tool that can convert an input description (e.g., HDL) into a netlist can be used to prepare input for flowchart700, in accordance with some embodiments. One reason for using netlists and Boolean expressions is that standard truth tables and graphs of higher-level blocks may not capture and reveal the feedback loop in the sequential circuit. At block701a, the CAD tool determines whether the logic defined as part of input701is specified as HDL (e.g., Verilog). If the logic is specified as an HDL, then at block701b, the CAD tool applies logic synthesis on the HDL to obtain a netlist. Any suitable logic synthesis tool may be used (e.g., commercially available logic synthesis tools may be used). If the logic is not specified as an HDL, then at block702, the CAD tool determines whether the sequential circuit is described by a netlist. If the sequential circuit is described as a netlist, the process proceeds to block703. At block703, for each feedback connection from the output of a cell to the input of a cell, an auxiliary primary input is introduced to represent a previous output state. This makes the circuit a combinational circuit, as indicated by block705. If the sequential circuit is not described as a netlist, then for each previous state in the Boolean expression, an auxiliary input variable is introduced. One reason for adding the auxiliary input is to convert a directed cyclic graph to a directed acyclic graph (DAG) for MIG synthesis, which assumes the input graph to be a DAG. The auxiliary input is an additional input variable or loop variable that represents a previous value of the output. As such, the auxiliary input effectively breaks a loop in the graph, turning it into a combinational circuit as indicated by block705. At block706, combinational circuit synthesis is performed on the combinational circuit as described with reference toFIG.3andFIG.4. At block707, post combinational circuit synthesis is performed. In some embodiments, during post combinational circuit synthesis, the loop variables are replaced by connections from the output(s) to the gates which receive input from the loop variables. For example, feedback wiring, from corresponding output M-gates to M-gates receiving input from the auxiliary input variables, is made. The resultant output is a synthesized circuit708. If the sequential circuit is not described as a netlist, the process proceeds to block704. At block704, the CAD tool introduces an auxiliary input variable for each previous state in the Boolean expression. This makes a combinational circuit as indicated by block705.
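By way of a non-limiting illustration, the loop breaking of blocks703/704can be sketched in Python as follows, using a hypothetical netlist model that maps each net to the list of nets driving it. The "prev_" naming of the loop variables is illustrative only; the post-synthesis rewiring of block707would later reconnect each loop variable to the output it represents.

def drives_transitively(netlist, start, target, seen=None):
    """Return True if 'target' appears among the transitive drivers of 'start'."""
    seen = set() if seen is None else seen
    for d in netlist.get(start, []):
        if d == target:
            return True
        if d not in seen:
            seen.add(d)
            if drives_transitively(netlist, d, target, seen):
                return True
    return False

def break_feedback(netlist):
    """Sketch of blocks 703/704: replace each feedback connection with an
    auxiliary primary input (loop variable) representing the previous
    output state, turning the directed cyclic graph into a DAG."""
    aux_map = {}
    for net, drivers in netlist.items():
        for i, d in enumerate(drivers):
            if d == net or drives_transitively(netlist, d, net):
                aux = "prev_" + d               # auxiliary input variable
                aux_map[aux] = d
                drivers[i] = aux                # break the loop here
    return netlist, aux_map

# Toy usage: a cross-coupled NOR latch expressed as net -> driving nets.
latch = {"q": ["r", "qb"], "qb": ["s", "q"]}
print(break_feedback(latch))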
FIG.8illustrates flowchart800of a method of pulse-triggered sequential logic synthesis, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowchart800provides an optimal way of synthesizing pulse-triggered sequential components such as D-FFs. In a pulse-triggered sequential circuit, there are back-to-back latches (e.g., a first latch coupled to a second latch), with data passing into the first latch on a high or low clock level and from the first latch to the second latch on a corresponding low or high clock level. In a pulse-triggered circuit, the clock signal is inverted for the second latch relative to the first latch. Flowchart800begins with sequential circuit input801. The inputs can be in a number of formats including HDL such as Verilog or VHDL either describing the functionality or circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, etc. In one example, the inputs for the sequential circuits are specified as a netlist or Boolean expressions. Any tool that can convert an input description (e.g., HDL) into a netlist can be used to prepare input for flowchart800, in accordance with some embodiments. One reason for using netlists and Boolean expressions is that standard truth tables and graph(s) of higher-level blocks may not capture and reveal the feedback loop in the sequential circuit. For each latch of the back-to-back latches, the CAD tool performs level-triggered sequential synthesis as described with reference toFIG.7and as indicated by block802. The output of level-triggered sequential synthesis is a latch as indicated by block803. At block804, the synthesized latch is duplicated (e.g., a copy is made) and connected back-to-back with the synthesized latch (e.g., first latch) of block803. Then, an inverted clock is provided to the duplicated latch (e.g., second latch) compared to the first latch. The resultant circuit is a synthesized pulse-triggered sequential circuit (e.g., a D-FF) as indicated by block805.
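By way of a non-limiting illustration, the duplication and back-to-back coupling of block804can be sketched behaviorally in Python as follows. The Latch class is a simple level-transparent model standing in for the synthesized latch of block803, not a gate-level MIG.

class Latch:
    """Level-triggered latch: transparent when its enable is high."""
    def __init__(self):
        self.q = 0
    def sample(self, d, enable):
        if enable:
            self.q = d
        return self.q

class PulseTriggeredDFF:
    """Block 804: a duplicated latch coupled back-to-back with the first,
    with the clock inverted for the second latch (rising-edge D-FF)."""
    def __init__(self):
        self.first, self.second = Latch(), Latch()
    def sample(self, d, clk):
        m = self.first.sample(d, enable=1 - clk)   # captures while clk is low
        return self.second.sample(m, enable=clk)   # launches while clk is high

dff = PulseTriggeredDFF()
for clk, d in [(0, 1), (1, 1), (0, 0), (1, 0)]:
    print(f"clk={clk} d={d} q={dff.sample(d, clk)}")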
FIG.9illustrates flowchart900of a method of edge-triggered sequential logic synthesis, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowchart900provides an optimal way of synthesizing edge-triggered sequential components such as rising-edge or falling-edge D flip-flops (D-FFs). In some embodiments, edge-triggering is accounted for by transforming to level-triggered. This transformation is done by introducing a new input variable that represents a delayed version of the clock. While some embodiments use an unrolling technique for sequential circuits, it is possible that some of the synthesized results for edge-triggered circuits may contain a race condition which causes the output of the circuit to be unstable and continue to fluctuate. This happens because of the time dependence in sequential circuits from the previous clock cycle. To handle this problem, some embodiments generate multiple synthesis solutions with given PPAs. During a post-processing phase, in some embodiments, each of the circuit solutions is simulated for stability and the final result is selected based on correct functionality and according to the best PPA results. Flowchart900begins with sequential circuit input901. The inputs can be in a number of formats including HDL such as Verilog or VHDL either describing the functionality or circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, etc. At block902, the CAD tool adds an auxiliary input variable to represent a delayed clock signal. The delayed clock signal is used to capture the concept of an edge. For example, the clock (clk) and delayed clock (dclk) are used to capture an edge of input data. Here, clk and dclk are used to capture the clock edge (a transition from low to high or from high to low). Starting at time t=0, assume clk is low for an interval of T/2, high for an interval of T/2, and then low for another T/2 interval. Assume a delay of τ. Then dclk(t)=clk(t−τ) will be high at t=0 for an interval of τ (reflecting the high phase of clk just before t=0). It will then be low for an interval of T/2, high for an interval of T/2, and then low for another T/2 interval. Consider the first rising edge, at t=T/2. During the hold time, right after the edge, clk will be high while dclk will be trailing it at a low level. Here, (clk=high, dclk=low) represents a rising edge in the truth table. Consider the next edge, a falling edge, at t=T/2+T/2=T. During the hold time, right after the edge, clk will be low while dclk will be trailing it at a high level. Here, (clk=low, dclk=high) represents a falling edge in the truth table.
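By way of a non-limiting illustration, the (clk, dclk) edge encoding described above can be checked with the following Python sketch, using the T/2 phases and delay τ assumed above (the waveform is treated as periodic, so clk is high just before t=0).

def clk_wave(t, period=1.0):
    """Square clock per the description above: low on [0, T/2), high on [T/2, T)."""
    return 1 if (t % period) >= period / 2 else 0

tau = 0.05                                   # delay of the auxiliary input
for t in [0.50, 0.52, 1.00, 1.02]:           # samples just after each edge
    clk = clk_wave(t)
    dclk = clk_wave(t - tau)                 # dclk(t) = clk(t - tau)
    kind = {(1, 0): "rising edge", (0, 1): "falling edge"}.get((clk, dclk), "level")
    print(f"t={t:.2f} clk={clk} dclk={dclk} -> {kind}")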
At block903, the CAD tool initializes an empty list of synthesized circuits. Here, initialize generally refers to starting an empty list. Synthesized circuits will be added to the empty list later. The circuits correspond to different fan-ins and PPA. For example, a circuit can correspond to a fan-in list of [3, 5] and area or delay requirements. Synthesis is then performed using a maximum fan-in of 3 with an area requirement and with a delay requirement. Then the CAD tool synthesizes using a maximum fan-in of 5 with an area requirement and with a delay requirement. This gives at most 4 synthesized circuits. For each of the 4 synthesized circuits, the CAD tool can also keep all the discarded circuits from inverter minimization. This gives a large number of circuits with the same M-gate connections but with majority and minority gates substituted and inverters added or removed. In some embodiments, the M-gate fan-in list is processed until it is exhausted as indicated by block904. Each fan-in is selected in turn to be the maximum allowed fan-in for edge-triggered sequential logic synthesis. The list of fan-ins allows the CAD tool to create candidate synthesized circuits, one (or more if all the discarded candidates of inverter minimization are considered) for each fan-in, since it is not known ahead of time whether the synthesized circuit will be stable. At block905, the maximum fan-in for synthesis is assigned to be the value of the current fan-in from the M-gate fan-in list. At block906, level-triggered sequential synthesis is applied using the maximum M-gate fan-in.FIG.7illustrates a method for level-triggered sequential synthesis. The output of level-triggered sequential synthesis is then processed at block907where the delayed clock (e.g., dclk) is wired by connecting the clock to a delay element (e.g., a buffer). The resultant circuit is a synthesized circuit908. At block909, the synthesized circuit is added to the list of synthesized circuits, and the process is iteratively performed again with the next fan-in from the M-gate fan-in list, and so on until the entire list is exhausted as determined by block904. Once the list is exhausted, the process proceeds to block910, where post-processing is done to check for oscillations in each of the M-gates. The post-processing can be done using any suitable circuit simulator such as SPICE or its derivative (e.g., SPICE-like) simulators. One reason for such possible oscillations is that some of the synthesized MIGs for edge-triggered circuits may contain a race condition which causes the output of the circuit to be unstable and continue to fluctuate. This happens because of the time dependence in sequential circuits from the previous clock cycle. During block910, the edge-triggered circuit obtained at block908is checked for stability and the final edge-triggered circuit is selected based on correct functionality and according to the best PPA results (or target results). The resultant final edge-triggered circuit is the synthesized edge-triggered circuit as indicated by block911. FIGS.10A-Billustrate flowcharts1000and1030, respectively, of a method of MIG synthesis, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowcharts1000and1030comprise MIG optimization algorithms including initialization, hierarchical synthesis, optimal synthesis, and other synthesis algorithms built on top of MIGs. The algorithm is a flexible algorithm that allows the use of majority and/or minority gates (M-gates), and wide-input and single and/or multiple fan-in M-gates. Flowcharts1000and1030form the basis of block402ofFIG.4. The MIG synthesis algorithm of flowcharts1000and1030assumes that the logic circuit can be synthesized using a feed-forward network of M-gates and inverters. As discussed herein, a majority gate followed by an inverter is equivalent to a minority gate. Minority gates could be made a fundamental building block. A minority gate with one input is an inverter. Such a network of gates is equivalent to a DAG. Flowchart1000begins with a logic circuit input1001. The input(s) can be in a number of formats including HDL such as Verilog or VHDL either describing the functionality or circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, a maximum number of bits for optimal synthesis, K, and/or a maximum number of bits for hierarchical synthesis, H, etc.
At block1002, the CAD tool computes a maximum M-gate fan-in, and ignores fan-out constraints. A user specifies whether they want majority or minority gates as the basic building blocks. The number of inputs to the M-gate (aka the fan-in) is also to be specified. This could be a list of a single fan-in or of multiple fan-ins. Each fan-in is an odd number. The user also needs to specify whether the primary objective is area, energy, or delay minimization. This choice determines which heuristic is used in splitting the graph into subgraphs and the scoring of the current optimized graph, in accordance with various embodiments. At block1003, the CAD tool performs the process of logic initialization. Logic initialization is described with reference toFIG.11. In various embodiments, the input circuit from input1001is transformed by the logic initialization flow into a form that can be easily optimized using optimal or hierarchical synthesis. At block1004, a determination is made as to whether the number of input bits is less than or equal to K. K is a small number such as in {4, 5, 6}. In some embodiments, K is less than or equal to 10. K represents the maximum bit width for which an optimal MIG can be found in a reasonable amount of time. If the number of input bits is less than or equal to K, the process proceeds to block1005where the logic circuit is synthesized optimally rather than using heuristics. Examples of methods for optimal synthesis are binary integer programming (BIP) or satisfiability (SAT) formulation and associated solvers. The optimal synthesis output is then processed for inverter minimization at block1006. The process then proceeds to block1018, as indicated by transition letter A, which indicates the resultant synthesized MIG. At block1019, after the MIG optimization, each M-gate is pruned to have the desired fan-in using a gate pruning algorithm. During post-synthesis, the fan-out requirements are honored using the buffering algorithm which introduces inverters and buffers as needed. The resultant circuit is the synthesized MIG as indicated by block1020. In some embodiments, MIG synthesis ignores fan-out requirements during MIG optimization. In some embodiments, fan-out requirements are observed during the post-synthesis flow by a buffering algorithm. If the number of input bits is greater than K, the process proceeds to block1007. At block1007, the number of input bits is compared with H. H is a larger number such as 20 or more. H represents the maximum bit width for which hierarchical synthesis can improve the optimality of the synthesized MIG. If the number of input bits is less than or equal to H, the process proceeds to block1008where hierarchical synthesis is performed. For example, when the number of input bits lies in (K, H], hierarchical synthesis is used. After hierarchical synthesis, the resultant circuit is the synthesized MIG as indicated by block1018.
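By way of a non-limiting illustration, the size-based dispatch of blocks1004and1007can be sketched in Python as follows. The three synthesis routines are hypothetical placeholders for the optimal (BIP/SAT) path, the hierarchical path, and the split-and-glue path described next; K and H take the example values given above.

def dispatch_mig_synthesis(num_input_bits, optimal, hierarchical,
                           split_and_glue, K=6, H=20):
    """Sketch of FIG. 10A: choose a synthesis regime by input bit width."""
    if num_input_bits <= K:            # block 1004 -> optimal synthesis (1005)
        return optimal(num_input_bits)
    if num_input_bits <= H:            # block 1007 -> hierarchical (1008)
        return hierarchical(num_input_bits)
    # Beyond H: split into non-overlapping H-input subgraphs, synthesize
    # each hierarchically, and glue the results together.
    return split_and_glue(num_input_bits)

print(dispatch_mig_synthesis(
    num_input_bits=4,
    optimal=lambda n: f"optimal({n})",
    hierarchical=lambda n: f"hierarchical({n})",
    split_and_glue=lambda n: f"split+glue({n})",
))   # -> optimal(4)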
When the number of input bits exceeds H, multiple independent hierarchical syntheses are performed and the results are glued together, as indicated by block1009. At block1009, the graph is topologically split into non-overlapping H-MIG subgraphs, each with H input bits. These non-overlapping H-MIG subgraphs are listed as H-MIGs in a list as indicated by block1010. Each H-MIG is then processed until the list of H-MIGs is exhausted, as indicated by decision block1011. For that, the current H-MIG in the list is assigned the next H-MIG from the list at block1012, and then hierarchical synthesis is performed on the current H-MIG as indicated by block1013. The output of hierarchical synthesis is a synthesized H-MIG (hierarchical MIG) as indicated by block1014. The H-MIG is then added to a new graph at block1015. This new graph is from a hierarchical synthesis flow ofFIG.26, in accordance with some embodiments. The process is then repeated iteratively for each H-MIG in the list of H-MIGs and the synthesized H-MIGs are added to the new graph. Once all the H-MIGs are exhausted, the process proceeds to block1017as indicated by marker B. At block1017, the CAD tool decides whether the new graph has a better synthesis objective. Note that in block1009, the CAD tool also creates an empty (new) graph to which each synthesized H-MIG will be added. After processing all the H-MIGs, there are two graphs: the current graph (either the initialized MIG or the graph from the previous iteration of the outside loop) and the new graph. In some embodiments, the CAD tool compares the two graphs to determine whether to continue improving or to stop. If the new graph has a better synthesis objective, the process proceeds to block1009as indicated by marker C. If the new graph does not have a better synthesis objective, then the process has the synthesized MIG as indicated by block1018. In some embodiments, the current graph and the new graph, as discussed with reference toFIGS.10A-B, can be compared by extracting the gate count (or area, if the layout footprint of M-gates and inverters is known) or the depth (or delay, if the propagation delay of M-gates and inverters is known) from the graphs. If the new graph has improved PPA, the optimization continues; otherwise, it is terminated, since achieving results better than the current graph may not be feasible. FIG.11illustrates flowchart1100of a method of logic initialization flow for MIG synthesis (e.g., block1003ofFIG.10), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowchart1100provides methods for translating logic circuit inputs such as Verilog or netlists, graphs of higher-level blocks, Boolean expressions, and truth tables into truth tables (when the circuit is small) or a MIG (for larger circuits). In various embodiments, the logic initialization flow is responsible for mapping the different input forms of the logic function to the forms that the actual synthesis steps of the MIG synthesis algorithm can easily work with. For small circuits, the output of the logic initialization algorithm is a truth table, whereas for larger circuits, the output is a MIG. Flowchart1100begins with a logic circuit input1101. The input(s) can be in a number of formats including HDL such as Verilog or VHDL either describing the functionality or circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function.
The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, a maximum number of bits for optimal synthesis, K, and/or a maximum number of bits for hierarchical synthesis, H, etc. At block1102, the CAD tool decides whether the number of input bits is less than or equal to K. As described herein, K is a small number such as in {4, 5, 6}. In some embodiments, K is less than or equal to 10. K represents the maximum bit width for which an optimal MIG can be found in a reasonable amount of time. As such, when the number of input bits is less than or equal to K, the process proceeds to block1103where it is determined whether the logic circuit, which is input to the logic initialization flow, is specified as a truth table. If the logic circuit is specified as a truth table, the truth table is saved as indicated by block1105. If the logic circuit is not specified as a truth table, simulation is performed on the logic circuit at block1104and truth table1105is derived from the simulation. As mentioned herein, truth tables can be derived for logic circuits with fewer inputs (e.g., less than 10). If the number of inputs is large (e.g., greater than K), then the process proceeds to block1106. At block1106, the CAD tool determines whether the logic circuit is specified as Verilog (or in any other hardware description language) or as a netlist. If the logic circuit is specified in a hardware description language or as a netlist, the process proceeds to block1107. At block1107, the CAD tool determines whether the logic circuit is specified as HDL (e.g., Verilog). This determination is made to obtain a netlist if such HDL is specified. As such, at block1108, if it is determined that the logic circuit is specified as HDL, the CAD tool performs standard logic synthesis using any suitable tool, such as open-source or commercial tools, to obtain a netlist. At block1109, the CAD tool maps the netlist to a MIG using M-gate standard cells to generate the MIG as indicated by block1110. As discussed herein, the M-gate standard cells include cells of various fan-in and fan-out for a number of different logic functions (e.g., AND, OR, NAND, etc.). These cells can be ferroelectric based cells or non-ferroelectric based cells (e.g., CMOS or other technologies). If it is determined that the logic is not specified in a hardware description language or as a netlist, the process proceeds to block1111from block1106. At block1111, the CAD tool decides whether the logic is specified as a graph of higher-level blocks. Such a graph represents a logic function. A graph of higher-level blocks is a graph containing connections between blocks that are bigger than a gate (e.g., two or more M-gates). For example, in an array multiplier, a connection of full adders and half adders constitutes a graph of higher-level blocks. Given that the CAD tool knows the optimal MIG of a full adder and a half adder, the full adder and half adder blocks are replaced with their MIG equivalents and the MIGs are connected following the connections of the full adder and half adder blocks in the array multiplier. If the logic is specified as a graph of higher-level blocks, the process proceeds to block1111a, where the graph of the blocks is mapped to a MIG using M-gate standard cells and/or functional unit block (FUB) cells (which are higher-level cells).
The resultant circuit is a MIG as indicated by block1110. If it is determined that the logic is not specified as a graph of higher-level blocks, the process proceeds from block1111to block1112. At block1112, the CAD tool decides whether the logic is specified as a truth table. If that is the case, the truth table is identified and saved as illustrated by block1113. If the logic is not specified as a truth table, then at block1114, the CAD tool parses the Boolean expressions and simulates them to generate truth tables. These truth table(s) are saved as illustrated by block1113. Once the truth tables are identified, the CAD tool performs wide-input logic (WILK) initialization at block1115to generate the MIG. WILK is a heuristic for initializing a majority and/or minority inverter graph (MIG), in accordance with some embodiments. In various embodiments, WILK is a constructive approach that relies on a two-level logic formulation, the commutative and associative (symmetric) properties of disjunction (OR) and conjunction (AND), and the expressiveness of wide-input majority gates for initializing combinational circuits. In some embodiments, WILK uses wide-input M-gates based on the result of a sum-of-products (SOP) minimization algorithm. FIGS.12A-Billustrate flowcharts1200and1230, respectively, of a method of wide-input logic initialization (WILK) flow (e.g., block1115ofFIG.11), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowchart1200(and flowchart1230) provide a heuristic for initializing a MIG using wide-input M-gates based on the result of a sum-of-products (SOP) minimization algorithm. Flowchart1200begins with a logic circuit input1201. The input(s) can be in a number of formats including HDL such as Verilog or VHDL either describing the functionality or circuit connectivity of the combinational circuit, a netlist of various gates (such as NAND, NOR, Minority, Majority, AND, OR), a graph of higher-level blocks, Boolean expressions, or truth tables for the function. The truth tables can have multiple inputs and multiple outputs. The inputs also include a list of narrow and/or wide-input majority or minority gates (herein referred to as M-gates), fan-in constraints of the M-gates, fan-out constraints of the M-gates, PPA objectives, preference of majority or minority gates, a maximum number of bits for optimal synthesis, K, and/or a maximum number of bits for hierarchical synthesis, H, etc. At block1202, the CAD tool performs a method of simplifying Boolean algebra expressions (e.g., a logic function). This can be performed by a Karnaugh map (K-map), the Quine-McCluskey (QMC) algorithm, or the Espresso heuristic. Any logic function can be represented by two levels of logic as given by the minterm expansion:

f(x_1, x_2, \ldots, x_n) = \bigvee_{c_1, c_2, \ldots, c_n} f(c_1, c_2, \ldots, c_n) \wedge x_1^{c_1} \wedge x_2^{c_2} \wedge \cdots \wedge x_n^{c_n},

where c_i is either 0 or 1. When c_i is 1, x_i^{c_i} = x_i (the input is used in its original form). When c_i is 0, x_i^{c_i} = \bar{x}_i (the input is used in its inverted form). The first level of logic is represented by at most 2^n AND gates (\wedge), one for each of the 2^n possible combinations of 0 and 1 for c_1, c_2, \ldots, c_n. The second level of logic is represented by a single OR gate (\vee). Each operand of the OR gate is a representation of a row in the truth table for f(x_1, x_2, \ldots, x_n).
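By way of a non-limiting illustration, the minterm expansion above can be realized by the following Python sketch, which enumerates the truth-table rows on which a function evaluates to 1; the full-adder carry (a majority function) is used as the example function.

from itertools import product

def minterms(f, n):
    """Two-level minterm expansion: one product term per truth-table row
    where f is 1. Each term is a list of (input index, inverted?) pairs,
    i.e., x_i appears in true form when c_i = 1 and inverted when c_i = 0."""
    terms = []
    for bits in product([0, 1], repeat=n):
        if f(*bits):
            terms.append([(i, bit == 0) for i, bit in enumerate(bits)])
    return terms

# Example: full-adder carry, i.e., the majority of three inputs.
carry = lambda a, b, c: int(a + b + c >= 2)
for term in minterms(carry, 3):
    print(" AND ".join(("NOT " if inv else "") + f"x{i + 1}" for i, inv in term))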
The two-level minterm expansion is a specific example of a sum-of-products (SOP) representation of logic. The number of literals x_i^{c_i} in each minterm and the number of minterms can be minimized in a process known in the literature as sum-of-products (SOP) minimization. Karnaugh maps (K-maps) can be used for SOP minimization for a small number of input bits n (e.g., n≤5). The Quine-McCluskey (QMC) algorithm can be used for slightly larger n (e.g., 5<n<8). For much larger n (e.g., n≥8), heuristics such as the Espresso algorithm can be used for SOP minimization. Here, the techniques for simplifying Boolean expressions for SOP minimization are generally referred to as the K-map algorithm. The output of the K-map algorithm is a sum (OR gate) of product terms (AND gates). An SOP can always be implemented using AND gates feeding into an OR gate. Likewise, a product-of-sums (POS) expression leads to OR gates feeding an AND gate. The POS expression gives a complement of the function (if F is the function, its complement will be F′). The output of block1202is a list of product terms (minterms) as indicated by block1203. At block1204, the CAD tool tallies literals (e.g., terms x_1, x_2, \ldots, x_n) across the list of product terms. Based on the frequency of occurrence of the literals, each product term is ordered. For example, each product term in the list of product terms is ordered in a descending order. A person skilled in the art would appreciate that the order can be an ascending order and the algorithm can be modified to use the ordered list accordingly. In some embodiments, some other heuristic ordering can be used that brings frequent cohorts of literals in close proximity within each minterm. The output of block1204is a list of ordered product terms as indicated by block1205. Each term in the ordered product terms is an AND gate. Due to the commutative and associative (symmetric) properties of OR/AND gates, a large fan-in OR/AND gate can be broken down into a sequence of smaller OR/AND gates. For example, OR(a, b, c, d)=OR(OR(a, b), OR(c, d)) or OR(a, b, c, d)=OR(OR(OR(a, b), c), d). The first breakdown, OR(OR(a, b), OR(c, d)), is logarithmic (depth oriented) whereas the second breakdown, OR(OR(OR(a, b), c), d), is linear (area oriented). A (2N−1)-input majority gate can represent an N-input AND gate by tying (N−1) of the majority gate's inputs to a ground level. Similarly, a (2N−1)-input majority gate can represent an N-input OR gate by tying (N−1) of the majority gate's inputs to a supply level. Since a majority gate can represent AND and OR gates, and the inputs to the AND and OR gates are either original or inverted forms of the input digital signals, any logic function can be represented by majority gates and inverters only. As such, wide-input majority gates provide flexibility in simplifying a given logic function. Given the list of input logic functions in the form of truth table(s) with a moderate number of input bits n (e.g., 16 input bits), the maximum fan-in F for the majority gate, and the desired PPA criterion, WILK initialization flow1200applies the K-map at block1202for SOP minimization. The output of the K-map is a list of product terms as indicated by block1203. To ensure re-use of majority gates during the construction of the initial MIG, WILK initialization flow1200tallies, at block1204, the literals across all product terms and orders each product term based on the frequency of its literals from most frequent to least frequent as indicated by block1205. This ensures that for a smaller maximum fan-in, the most frequent sets of literals are grouped together, fostering gate reuse.
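By way of a non-limiting illustration, the majority-gate identities stated above can be verified exhaustively with the following Python sketch for N=3 (a 5-input majority gate emulating 3-input AND and OR gates).

from itertools import product

def majority(*bits):
    """Output 1 iff more than half of the (odd number of) inputs are 1."""
    return 1 if sum(bits) > len(bits) // 2 else 0

def and_n(*xs):
    # N data inputs plus (N-1) inputs tied to ground -> (2N-1)-input majority
    return majority(*xs, *([0] * (len(xs) - 1)))

def or_n(*xs):
    # N data inputs plus (N-1) inputs tied to supply -> (2N-1)-input majority
    return majority(*xs, *([1] * (len(xs) - 1)))

for bits in product([0, 1], repeat=3):
    assert and_n(*bits) == int(all(bits))
    assert or_n(*bits) == int(any(bits))
print("3-input AND/OR via 5-input majority gates verified")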
Thereafter, WILK initialization flow1200synthesizes each ordered product term using a set of majority gates, using the relationship between AND gates and majority gates stated above. At block1206, the CAD tool determines whether the logic is to be optimized (or simplified) for delay minimization, or for area or energy. If the logic is to be optimized for delay minimization (e.g., shallow logic depth), then the process proceeds to block1207. When the maximum fan-in F is limited relative to a product term with p literals (when F<2p−1), more than one majority gate is needed. To ensure the depth is not adversely affected, the logarithmic breakdown of the product term shown inFIG.13for p=7 and F=5 (where a 5-input majority gate can represent a 3-input AND gate) can be used.FIG.13illustrates graph1300for logarithmic breakdown of a product term for use in the wide-input logic initialization flow, in accordance with some embodiments. InFIG.13, one product term with seven literals is broken down into a sequence of majority gates. In this example, it takes a depth of 2 and 3 AND gates to achieve the function represented by the 7-literal product term. In this example, each AND gate is implemented as a majority gate that can be reused. Referring back toFIG.12A, if the logic is to be optimized for area or energy, the process proceeds to block1208from block1206. The linear breakdown of the product term illustrated inFIG.14increases the depth by 1.FIG.14illustrates graph1400for linear breakdown of a product term for use in the wide-input logic initialization flow, in accordance with some embodiments. In this example, it takes a depth of 3 levels of majority-based AND gates. In general, the linear breakdown has the advantage of keeping the high-frequency literals closer (within the same majority gate) and using fewer gates. The choice between linear and logarithmic breakdown depends on the tradeoff between area and delay. Referring back toFIGS.12A-B, after synthesizing the ordered product terms, WILK initialization flow1200synthesizes the sums (OR gates) of all the product terms. Again, the product terms across all sum terms are tallied (one sum term per output logic function) at block1209. Subsequently, the list of sum terms is ordered based on the frequency of their constituent product terms. The list of ordered sum terms is the output of block1209as indicated by block1210. In some embodiments, a set of majority gates, using the relationship between OR gates and majority gates stated above, is utilized to represent each sum term. When the maximum fan-in F is limited relative to a sum term with s product terms (when F<2s−1), more than one majority gate is needed. At block1211, the CAD tool determines whether the ordered sum terms are to be optimized (or simplified) for delay minimization, or for area or energy. If the ordered sum terms are to be optimized for delay minimization (e.g., shallow logic depth), then the process proceeds to block1212. At block1212, the CAD tool uses logarithmic breakdown and majority gate synthesis of the sum (OR) terms as illustrated inFIG.15.FIG.15illustrates graph1500for logarithmic breakdown of a sum term for use in the wide-input logic initialization flow, in accordance with some embodiments. Referring back toFIG.12B, if the logic is to be optimized for area or energy, the process proceeds to block1213from block1211. The linear breakdown and majority gate synthesis of the sum (OR) term(s) is illustrated with reference toFIG.16.FIG.16illustrates graph1600for linear breakdown of a sum term for use in the wide-input logic initialization flow, in accordance with some embodiments.
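By way of a non-limiting illustration, the two breakdowns can be sketched in Python as follows for an associative gate under a fan-in limit; with p=7 operands and 3-input gates, the logarithmic form reaches depth 2 with 3 gates (cf.FIG.13) and the linear form reaches depth 3 with 3 gates (cf.FIG.14). The nested-tuple representation is illustrative only.

def logarithmic(op, operands, fan_in):
    """Depth-oriented breakdown: group operands into chunks of at most
    fan_in per level, e.g., OR(OR(a, b), OR(c, d))."""
    if len(operands) <= fan_in:
        return (op, *operands)
    chunks = [operands[i:i + fan_in] for i in range(0, len(operands), fan_in)]
    grouped = [c[0] if len(c) == 1 else (op, *c) for c in chunks]
    return logarithmic(op, grouped, fan_in)

def linear(op, operands, fan_in):
    """Area-oriented breakdown: fold operands into one running gate,
    e.g., OR(OR(OR(a, b), c), d); depth grows by one per gate."""
    expr = (op, *operands[:fan_in])
    for i in range(fan_in, len(operands), fan_in - 1):
        expr = (op, expr, *operands[i:i + fan_in - 1])
    return expr

literals = list("abcdefg")                  # p = 7 literals
print(logarithmic("AND", literals, 3))      # depth 2, three 3-input gates
print(linear("AND", literals, 3))           # depth 3, three 3-input gates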
The resultant output from blocks1212or1213is a MIG as indicated by block1214. While the WILK initialization flow is illustrated with reference to performing synthesis of product terms first and then the sum terms, the order can be reversed, in accordance with some embodiments. For example, the WILK initialization flow can be accomplished with reference to performing synthesis of a product-of-sums (POS) logic representation. FIG.17illustrates flowchart1700of a method for optimal synthesis flow, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowchart1700provides a mechanism for optimal MIG synthesis of relatively small circuits (e.g., where the number of inputs is less than or equal to K) using either area-oriented or delay-oriented algorithms depending on the primary synthesis objective. Flowchart1700begins with a set of inputs1701. The input(s) include a MIG or truth tables, a maximum fan-in, a synthesis objective (e.g., PPA objectives), and a maximum relative or absolute gate count. At block1702, the CAD tool decides whether to minimize delay. When delay minimization is the stated objective, the process proceeds to block1703where delay-oriented synthesis is performed as discussed with reference toFIG.20. In delay-oriented synthesis, one objective is depth minimization. If area or energy minimization or efficiency is the stated objective, the process proceeds to block1704where area-oriented optimal synthesis is performed as discussed with reference toFIG.18andFIG.19. In area-oriented optimal synthesis, one objective is to reduce the gate count of a logic. The resultant output of the delay-oriented synthesis or area-oriented optimal synthesis is MIG1705. FIGS.18A-Billustrate flowcharts1800and1830for area-oriented optimal synthesis flow (e.g., block1704), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Flowcharts1800and1830represent the area-oriented optimal synthesis flow used in obtaining area-optimal MIGs. The CAD tool receives inputs1801. The inputs include a given MIG or truth tables. The truth tables describe the outputs of a logic as a function of its inputs. The inputs also include a maximum fan-in. At block1802, a determination is made about the description of the logic circuit. To accelerate the search for area-optimal MIGs, the area-oriented optimal synthesis of flowchart1800uses two paths for selecting the initial gate counts. When a MIG is specified, the number of M-gates in it is used as an upper bound on the number of M-gates for an area-optimal MIG. When a truth table is specified, a lower bound on the number of M-gates is obtained by the gate count initialization algorithm ofFIG.19.
As such, at block1803, the CAD tool initializes the gate count value using the gate count initializer flow ofFIG.19. The gate count initialization algorithm ofFIG.19takes advantage of the fact that when there are multiple outputs and they are not correlated with each other, with input variables, or with constant inputs, the search for an area-optimal MIG can be accelerated by skipping small gate counts and initializing the number of gates to a non-unit minimum value. At block1803, the logic depth is set to a large number (e.g., 100 or 1000 or more) so it is not a binding constraint that affects synthesis. At block1804, the CAD tool creates a binary integer program (BIP) or satisfiability (SAT) problem and solves that problem using a solver. The purpose of finding a solution is to find a minimum number of gates (e.g., AND gates, OR gates, M-gates, etc.) that are needed to find a solution that obeys or complies with the truth tables. In various embodiments, inverters or buffers are not counted as gates because they are too small compared to AND gates, OR gates, and M-gates. At a later stage in the process, inverter minimization is performed to optimize (e.g., reduce) the number of inverters while meeting timing constraints and the logic function. At block1805, the CAD tool decides whether the problem (of obtaining the truth table function for the logic circuit) is feasible or satisfiable with the current gate count lower bound. If it is not feasible or satisfiable, the gate count is incremented (e.g., the gate count bound is increased by one or more) at block1806and the process of establishing the BIP or SAT problem and its solution is performed again. This process continues until the CAD tool determines that the problem is feasible or satisfiable with the new gate count. When the problem is feasible or satisfiable with the new gate count, the process proceeds to block1807. At block1807, the solution found by the solver is considered the best solution that provides the least gate count to meet the function of the truth tables. The process then proceeds to get the best depth solution. Here, the best depth solution refers to the fastest delay possible from input to output. The process then proceeds to block1808as indicated by identifier E. Blocks1808,1809,1810,1811, and1812determine the best circuit topology in view of area and logic depth. At block1808, the initial depth value is decremented. This initial depth value may be a small number such as 10. In some embodiments, this initial depth value comes from the best solution with the area objective. From the best area solution, the CAD tool extracts the circuit depth from the interconnection of M-gates specified by the BIP or SAT solution. At block1809, a BIP or SAT problem is set up and solved using a solver. Any suitable solver can be used to solve the BIP or SAT problem. At block1810, the CAD tool determines whether the problem is feasible (e.g., solvable) or satisfiable using the decremented depth. The purpose of finding a solution is to find the minimum logic depth needed to find a solution that obeys or complies with the truth tables for the optimized area. If a solution is found, then it means that a better solution may be possible. For example, the depth can be further decreased beyond its current limit. As such, at block1811, the current feasible or satisfiable solution is considered the best solution for depth optimization, and then the depth count is decremented again to see if a better solution for depth is possible. The process then repeats until the CAD tool determines that the problem is not feasible or satisfiable. In that case, the process proceeds to block1812where the current solution is marked as the best solution and MIG1813is formed using the optimized area and the updated lower depth.
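By way of a non-limiting illustration, the two linear searches just described can be sketched in Python as follows, with a hypothetical feasible(gate_count, depth) oracle standing in for the BIP/SAT solver calls of blocks1804and1809.

def optimize_area_then_depth(feasible, gate_count, depth=100):
    """Sketch of FIGS. 18A-B: grow the gate count from its lower bound
    until the problem is feasible/satisfiable, then shrink the depth
    while a solution still exists."""
    while not feasible(gate_count, depth):     # blocks 1804-1805
        gate_count += 1                        # block 1806
    while depth > 1 and feasible(gate_count, depth - 1):   # blocks 1809-1810
        depth -= 1                             # blocks 1808/1811
    return gate_count, depth                   # best area, then best depth

# Toy oracle: pretend 5 gates are required, with depth 3 achievable there.
oracle = lambda gates, depth: gates >= 5 and depth >= 3
print(optimize_area_then_depth(oracle, gate_count=1))     # -> (5, 3)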
If at block1802the CAD tool determines that the logic is not specified as truth tables (e.g., the CAD tool input is a MIG), then the process proceeds to block1814. At block1814, the input MIG is translated to a feasible solution for the BIP or a satisfiable (SAT) solution. The feasible BIP or SAT solution is assigned as the best solution. In some embodiments, the initial gate count can be obtained directly from the MIG (the number of M-gates in the graph) or extracted from the best feasible/SAT solution. Like in block1803, the depth is set to a large number (e.g., 100) so that it does not become a bottleneck to finding an optimized area (e.g., a reduced number of gates). At block1815, the current gate count is decremented (e.g., by one or more). The decrement amount can be fixed or programmable. In some embodiments, the search for the optimal gate count, and later the circuit depth, in the current flowcharts is a linear search with a step size of 1. Other search mechanisms such as bisection search may be used, where there are two extremes (e.g., low and high gate counts or circuit depths with opposite feasibility/satisfiability) which surround the optimum, and the interval between the two extremes is shrunk by a factor of 2 after each search iteration until only the optimum remains (the interval size is 0). Linear search with step sizes greater than 1 needs a backtracking mechanism for when the optimum is overshot. For example, if the best gate count is 2 and the current gate count is 1 and the CAD tool steps by 2, then the CAD tool ends up at a gate count of 3, which will be feasible/SAT. Note that a gate count of 1 is not feasible/SAT. Since the feasibility/satisfiability of the problem at gate counts of 1 and 3 are different, the CAD tool tests the problem feasibility or satisfiability at a gate count of 2, in accordance with some embodiments. In some embodiments, a flowchart for linear search with step size >1 or bisection search will be more cumbersome than the linear search with step size=1. At block1816, the BIP or SAT problem is then solved using a problem solver. The problem is to find a circuit that functions according to the logic of the MIG with a reduced gate count. At block1817, the CAD tool determines whether the problem is feasible or satisfiable with the reduced gate count. If it is, this means that there may be more room for reducing the number of gates. At block1818, the current solution is assigned as the best solution and then the gate count is decremented at block1820, and the process of setting up the problem and finding the solution is repeated. This process is repeated until the problem can no longer find a feasible or satisfiable solution given the gate count. At that point, the minimum gate count is achieved. Thereafter, the process continues with finding the best depth for the logic (e.g., the lowest or shallowest depth possible given the reduced gate count). This process begins at block1819and follows blocks1808,1809,1810,1811, and1812as previously discussed.
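By way of a non-limiting illustration, the bisection alternative mentioned above can be sketched in Python as follows, assuming feasibility is monotone in the searched quantity (gate count or depth).

def bisect_min_feasible(feasible, lo, hi):
    """Return the smallest value in (lo, hi] for which feasible() is True,
    where lo is known infeasible and hi is known feasible; the interval
    is halved each iteration until its size is 0."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid        # the optimum is at mid or below
        else:
            lo = mid        # the optimum is above mid
    return hi

print(bisect_min_feasible(lambda gates: gates >= 5, lo=0, hi=16))   # -> 5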
FIG.19illustrates flowchart1900for gate count initialization for the area-oriented optimal synthesis flow (e.g., block1803), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. When the logic functions are truth tables, the CAD tool takes advantage of the fact that when there are multiple outputs and they are not correlated with each other, with input variables, or with constant inputs, the search for an initial gate count can be accelerated by skipping small gate counts (e.g., 1, 2, etc.) and initializing the number of gates to a non-unit minimum value (e.g., 5). Flowchart1900begins with input truth tables1901. These truth tables define logic outputs as a function of logic inputs. An example of a truth table is illustrated in the table below for an adder, a minority function, and an inverted input. Here, the inputs are X1, X2, X3, while the outputs are X1_b (which is the inverse of X1), Min(X1, X2, X3), FACarry, and FASum.

TABLE 1
X1  X2  X3  X1_b  Min(X1, X2, X3)  FACarry  FASum
1   1   1   0     0                1        1
0   1   1   1     0                1        0
1   0   1   0     0                1        0
0   0   1   1     1                0        1
1   1   0   0     0                1        0
0   1   0   1     1                0        1
1   0   0   0     1                0        1
0   0   0   1     1                0        0

At block1902, the CAD tool determines whether the number of outputs represented by the truth tables is equal to 1. If there is only one output, then the gate count is initialized to 1 as indicated by block1903. If the number of outputs represented by the truth tables is greater than 1, then the process proceeds to block1904to determine a gate count value that can be used as a starting point for optimizing area. At block1904, the CAD tool initializes an empty list of uncorrelated truth table outputs (LUTT). This list is populated by reviewing the truth tables (e.g., the outputs of the truth tables). At block1905, the CAD tool determines whether the list of outputs of the truth tables is exhausted. This process is done to iteratively pass through each output of the truth tables and determine whether the output can be added to the list of uncorrelated truth table outputs (LUTT). The outputs of a truth table can be out1, out2, out3, and so on for a number of inputs in1, in2, etc. At block1906, the first output (e.g., out1) is made the current output and it is then checked, at block1907, whether the current output of the truth table or its inverted form is a constant, one of the inputs, or in the LUTT. This process is done for each output of the truth table. If the current output of the truth table or its inverted form is a constant, one of the inputs, or in the LUTT, the process proceeds to block1905and the next output is made the current output and the check is made again. When the current output of the truth table or its inverted form is not a constant, is not one of the inputs, and is not in the LUTT, then a new unique, non-constant, and non-input truth table output is identified, which is added to the LUTT at block1908. When the entire list of outputs of the truth tables is exhausted and checked for the LUTT, the process identifies the gate count, which is the length of the LUTT, as indicated by block1909.
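By way of a non-limiting illustration, the LUTT construction of blocks1902-1909can be sketched in Python as follows, applied to the columns of TABLE 1 (each signal is represented by the tuple of its truth-table column, read top to bottom).

def init_gate_count(inputs, outputs):
    """Sketch of FIG. 19: the initial gate count is the number of
    mutually uncorrelated, non-constant, non-input outputs (the LUTT)."""
    if len(outputs) == 1:
        return 1                                       # block 1903
    lutt = []                                          # block 1904
    inv = lambda col: tuple(1 - b for b in col)
    rows = len(outputs[0])
    constants = [(0,) * rows, (1,) * rows]
    for out in outputs:                                # blocks 1905-1907
        if any(v in constants or v in inputs or v in lutt
               for v in (out, inv(out))):
            continue                                   # correlated output: skip
        lutt.append(out)                               # block 1908
    return len(lutt)                                   # block 1909

# Columns of TABLE 1 above:
X1      = (1, 0, 1, 0, 1, 0, 1, 0)
X2      = (1, 1, 0, 0, 1, 1, 0, 0)
X3      = (1, 1, 1, 1, 0, 0, 0, 0)
X1_b    = (0, 1, 0, 1, 0, 1, 0, 1)
Min123  = (0, 0, 0, 1, 0, 1, 1, 1)
FACarry = (1, 1, 1, 0, 1, 0, 0, 0)
FASum   = (1, 0, 0, 1, 0, 1, 1, 0)

# X1_b is the inverse of an input and FACarry is the inverse of Min123,
# so only Min123 and FASum enter the LUTT: the initial gate count is 2.
print(init_gate_count([X1, X2, X3], [X1_b, Min123, FACarry, FASum]))  # -> 2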
FIGS.20A-Billustrate flowcharts2000and2030for delay-oriented optimal synthesis flow (e.g., block1703), in accordance with some embodiments of the disclosure. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. The delay-oriented optimal synthesis flowcharts2000and2030kick-start the search for a delay-optimal MIG by using the area-oriented optimal synthesis of flowchart1800to establish the minimum number of M-gates and the maximum logic depth, and to bound the search for the delay-optimal MIG. The flowchart begins with inputs2001, which include a given MIG or truth tables, a given maximum fan-in, and a given maximum relative or absolute gate count. At block2002, area-oriented optimal synthesis is performed to get a minimum count of gates for a logic function. Block2002performs the flowcharts ofFIG.18A,FIG.18B, andFIG.19to arrive at a best area-oriented MIG2003. Block2003gives the lower bound on gate count and the upper bound on depth (obtained from the graph). At block2004, the CAD tool translates the area-oriented MIG to a feasible or satisfiable (SAT) solution. This solution is assigned as the best solution until the next best solution is determined. At block2004, the CAD tool extracts the initial gate count and depth from the best solution, and also computes the maximum absolute gate count GCmax. In some embodiments, the maximum absolute gate count GCmax is finite. The iterative process to find the lowest obtainable depth then starts at block2005, where the current depth number from the graph is decremented. The amount decremented may be fixed or programmable.FIGS.20A-Bshow the case of using a linear search with unit steps. In some embodiments, the search for the optimal depth can be accomplished by using a linear search with programmable non-unit steps combined with backtracking in case of overshooting the optimal depth, or by bisection search. At block2006, the CAD tool sets up the BIP or SAT problem and solves it (using any suitable solver) to arrive at a possible solution that satisfies the logic function for the given depth limit. If the CAD tool determines at block2007that the problem is feasible or satisfiable, then at block2008the solution is considered as the best solution and the depth is decremented to see if further delay minimization can be achieved. After an iterative process, the CAD tool will determine that the problem is not solvable because the solution is not feasible or satisfiable. In that case, the process proceeds to block2009as indicated by identifier F. Here, the gate count is incremented. The idea is that after obtaining the minimum depth for an optimized area, the gate count is increased and the depth analysis is redone to find an optimal depth, thereby trading off gate count for depth. A strictly minimum depth goal may result in a very wide logic, which may not be feasible to implement. As such, in some embodiments, the area and depth optimization are done iteratively to find the optimal depth for the logic circuit within a fixed area budget. At block2010, the CAD tool determines whether the gate count is less than or equal to GCmax. If it is, then at block2011, the CAD tool creates a BIP or SAT problem and solves it. At block2012, the CAD tool determines if the problem is feasible to solve or satisfiable. If not, then the gate count is incremented again and the process is repeated. If the problem is feasible to solve or satisfiable and the gate count is still below GCmax, then the solution is considered the best solution as indicated by block2013and the process of depth decrementing starts again as indicated by identifier G. A sketch of this nested search appears below.
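A hedged sketch of the nested search of FIGS.20A-B follows. The callback `solve(gate_count, depth)` is hypothetical and stands in for the BIP/SAT setup-and-solve steps (blocks2006and2011); it returns a solution object when feasible/satisfiable and None otherwise.

```python
# Sketch of the delay-oriented search: shrink the depth until infeasible,
# then trade gate count for depth up to the area budget gc_max.

def delay_oriented_search(gc0, d0, gc_max, solve):
    """Start from the area-optimal (gc0, d0) point and minimize depth."""
    best = (solve(gc0, d0), gc0, d0)       # (solution, gate count, depth)
    gc, depth = gc0, d0
    while depth > 1:
        target = depth - 1                 # block 2005: decrement the depth
        sol = solve(gc, target)            # blocks 2006-2007
        while sol is None:                 # infeasible: trade gates for depth
            gc += 1                        # block 2009: increment gate count
            if gc > gc_max:                # block 2010: area budget exhausted
                return best                # block 2014: emit the best MIG
            sol = solve(gc, target)        # blocks 2011-2012
        best, depth = (sol, gc, target), target   # blocks 2008/2013
    return best
```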
At block2013, the CAD tool assigns the feasible or satisfiable solution as the best solution. Here, GCmax is the fixed area budget. There is an inherent tradeoff between depth and gate count: decreasing the depth usually increases the gate count, and vice versa. GCmax serves as the overall stopping condition so that the gate count (and area) does not grow ad infinitum. If the area budget has not been reached, the CAD tool can continue trying to decrease the depth. If at block2010it is determined that the gate count is greater than GCmax, then the best solution is used to generate the MIG at block2014. The final outcome is MIG2015, which is delay optimized (with reduced depth) in view of the fixed area budget.

FIG.21illustrates flowchart2100for synthesis problem formulation as a binary integer program (BIP), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. In some embodiments, the CAD tool extends a mixed integer program to a binary integer program by restricting the weights of a threshold gate to the set {−1, 0, 1} to reduce them to symmetric gates (of which majority and minority gates are a subset). In some embodiments, the CAD tool introduces a new set of constraints for minority gates and a new set of constraints for delay optimization. In some embodiments, the CAD tool allows the use of single or multiple fan-in M-gates. Synthesis problem formulation as a binary integer program (BIP) is a process in the area-oriented and delay-oriented optimal synthesis flows described herein. In some embodiments, the CAD tool receives inputs2101including the gate count and depth of the desired MIG, and the maximum fan-in. At block2102, the CAD tool formulates the BIP problem using objective B-1 and constraints B-2 to B-11. At block2103, the CAD tool solves the BIP problem using a solver (e.g., open source or commercial solvers such as GUROBI, CPLEX, SCIP, etc.). The output of the solver is the BIP solution2104.

Given n binary input variables $x_1, x_2, \ldots, x_n$ and M binary output logic functions $y_1, y_2, \ldots, y_M$, let $x_0$ be the constant representing the low binary state. Let there be r M-gates laid out as a feed-forward network with a depth of d. Let $w_{ik}$ represent the weight of a connection from the i-th input variable to the k-th gate; $\alpha_{lk}$, the weight of a connection from the l-th gate to the k-th gate; $\pi_{im}$, the weight of the connection from the i-th input to the m-th output logic function; and $\phi_{km}$, the weight of a connection from the k-th gate to the m-th output logic function. If $w_{ik}=1$, $\alpha_{lk}=1$, $\pi_{im}=1$, or $\phi_{km}=1$, a positive connection exists. If $w_{ik}=-1$, $\alpha_{lk}=-1$, $\pi_{im}=-1$, or $\phi_{km}=-1$, an inverted connection exists (the input signal is inverted before connecting to the gate/output). If $w_{ik}=0$, $\alpha_{lk}=0$, $\pi_{im}=0$, or $\phi_{km}=0$, no connection exists. Given n binary input variables, there are a total of $2^n$ possible input configurations, corresponding to the rows of a truth table. When there are don't care (x) conditions, the number of truth table rows is less than $2^n$. Let $x_i^{(j)}$ represent the j-th truth table entry of the i-th input variable; $P_k^{(j)}$, the j-th truth table entry of the output from the k-th gate; and $y_m^{(j)}$, the j-th truth table entry of the m-th output logic function. Let $T_k$ represent the threshold of the k-th M-gate.
Introduce the binary variables $w_{ik}^{+}$, $w_{ik}^{-}$, $\alpha_{lk}^{+}$, $\alpha_{lk}^{-}$, $\pi_{im}^{+}$, $\pi_{im}^{-}$, $\phi_{km}^{+}$, $\phi_{km}^{-}$, $\beta_{lk}^{(j)+}$, $\beta_{lk}^{(j)-}$ such that $w_{ik}=w_{ik}^{+}-w_{ik}^{-}$, $\alpha_{lk}=\alpha_{lk}^{+}-\alpha_{lk}^{-}$, $\pi_{im}=\pi_{im}^{+}-\pi_{im}^{-}$, $\phi_{km}=\phi_{km}^{+}-\phi_{km}^{-}$, and $\beta_{lk}^{(j)}=\beta_{lk}^{(j)+}-\beta_{lk}^{(j)-}$. Let binary variable $\mu_{ik}$ represent the presence of a connection from the i-th input variable to the k-th gate; $\nu_{lk}$, the presence of a connection from the l-th gate to the k-th gate; $\chi_{im}$, the presence of a connection from the i-th input to the m-th output logic function; and $\psi_{km}$, the presence of a connection from the k-th gate to the m-th output logic function. Let U be a large enough constant. Let $d_k$ be the depth of the k-th gate, where $d_1=1$. Assume D is an upper bound on the depth of the circuit. Let $b_{lk}$ be an auxiliary binary variable that indicates an l-th gate that is one hop away from the k-th gate (that is, which gate achieves the maximum). Assume $b_k$ is an auxiliary binary variable that indicates a terminal gate on the critical path. The integer variables $d_k$ are encoded into binary variables, turning the integer linear program into a binary integer linear program. The binary integer program is given as the minimization of the objective

$\sum_{k=1}^{r} \sum_{i=0}^{n} \mu_{ik} + \sum_{k=2}^{r} \sum_{l=1}^{k-1} \nu_{lk}$    (B-1)

subject to the following constraints. For majority gates,

$\sum_{i=0}^{n} w_{ik} x_i^{(j)} + \sum_{l=1}^{k-1} \alpha_{lk} P_l^{(j)} - T_k \ge P_k^{(j)} U - U$,    (B-2a)
$-\sum_{i=0}^{n} w_{ik} x_i^{(j)} - \sum_{l=1}^{k-1} \alpha_{lk} P_l^{(j)} + T_k - 1 \ge -P_k^{(j)} U$,    (B-2b)

for each $j = 1, 2, \ldots, 2^n$ and $k = 1, 2, \ldots, r$; or, for minority gates,

$\sum_{i=0}^{n} w_{ik} x_i^{(j)} + \sum_{l=1}^{k-1} \alpha_{lk} P_l^{(j)} - T_k \ge -P_k^{(j)} U$,    (B-2c)
$-\sum_{i=0}^{n} w_{ik} x_i^{(j)} - \sum_{l=1}^{k-1} \alpha_{lk} P_l^{(j)} + T_k - 1 \ge P_k^{(j)} U - U$,    (B-2d)

for each j and k. Further, for $j = 1, \ldots, 2^n$, $k = 1, \ldots, r$, and $l = 1, \ldots, k-1$:

$P_k^{(j)} + \alpha_{lk}^{+} - 2\beta_{lk}^{(j)+} \ge 0$,    (B-3a)
$P_k^{(j)} + \alpha_{lk}^{-} - 2\beta_{lk}^{(j)-} \ge 0$,    (B-3b)
$P_k^{(j)} + \alpha_{lk}^{+} - \beta_{lk}^{(j)+} \le 1$,    (B-3c)
$P_k^{(j)} + \alpha_{lk}^{-} - \beta_{lk}^{(j)-} \le 1$.    (B-3d)

The connection-presence variables are linked to the weights:

$w_{ik}^{+} + w_{ik}^{-} \le \mu_{ik}$,  $i = 0, 1, \ldots, n$, $k = 1, \ldots, r$,    (B-4a)
$\alpha_{lk}^{+} + \alpha_{lk}^{-} \le \nu_{lk}$,  $k = 1, \ldots, r$, $l = 1, \ldots, k-1$,    (B-4b)
$\pi_{im}^{+} + \pi_{im}^{-} \le \chi_{im}$,  $i = 0, 1, \ldots, n$, $m = 1, \ldots, M$,    (B-4c)
$\phi_{km}^{+} + \phi_{km}^{-} \le \psi_{km}$,  $k = 1, \ldots, r$, $m = 1, \ldots, M$.    (B-4d)

Each output logic function is realized by exactly one input or gate:

$\sum_{i=0}^{n} \chi_{im} + \sum_{k=1}^{r} \psi_{km} = 1$,  $m = 1, \ldots, M$.    (B-5)

For $j = 1, \ldots, 2^n$, $i = 0, \ldots, n$, $k = 1, \ldots, r$, and $m = 1, \ldots, M$:

$y_m^{(j)} \le x_i^{(j)} + (1 - \chi_{im}) + (1 - \pi_{im}^{+})$,    (B-6a)
$x_i^{(j)} \le y_m^{(j)} + (1 - \chi_{im}) + (1 - \pi_{im}^{+})$,    (B-6b)
$y_m^{(j)} \le (1 - x_i^{(j)}) + (1 - \chi_{im}) + \pi_{im}^{+}$,    (B-6c)
$(1 - x_i^{(j)}) \le y_m^{(j)} + (1 - \chi_{im}) + \pi_{im}^{+}$,    (B-6d)
$y_m^{(j)} \le P_k^{(j)} + (1 - \psi_{km}) + (1 - \phi_{km}^{+})$,    (B-6e)
$P_k^{(j)} \le y_m^{(j)} + (1 - \psi_{km}) + (1 - \phi_{km}^{+})$,    (B-6f)
$y_m^{(j)} \le (1 - P_k^{(j)}) + (1 - \psi_{km}) + \phi_{km}^{+}$,    (B-6g)
$(1 - P_k^{(j)}) \le y_m^{(j)} + (1 - \psi_{km}) + \phi_{km}^{+}$.    (B-6h)

With I the maximum fan-in and F the maximum fan-out, the fan-in, fan-out, and threshold constraints are:

$\sum_{i=0}^{n} \mu_{ik} + \sum_{l=1}^{k-1} \nu_{lk} \le I$,  $k = 1, \ldots, r$,    (B-7)
$\sum_{k=l+1}^{r} \nu_{lk} \le F$,  $l = 1, \ldots, r-1$,    (B-8)
$T_k = 0.5\left(\sum_{i=0}^{n} \mu_{ik} + \sum_{l=1}^{k-1} \nu_{lk} + 1\right)$,  $k = 1, \ldots, r$.    (B-9)

The depth constraints are:

$d_k \ge d_l + 1 - D(1 - \nu_{lk})$,  $k = 1, \ldots, r$, $l = 1, \ldots, k-1$,    (B-10a)
$d_k \le d_l + 1 + D(1 - \nu_{lk}) + D(1 - b_{lk})$,  $k = 1, \ldots, r$, $l = 1, \ldots, k-1$,    (B-10b)
$\sum_{l=1}^{k-1} b_{lk} \le 1 + (k-2)(1 - \nu_{lk})$,  $k = 1, \ldots, r$,    (B-10c)
$1 \le \sum_{l=1}^{k-1} b_{lk} + (1 - \nu_{lk})$,  $k = 1, \ldots, r$,    (B-10d)
$d \ge d_k$,  $k = 1, \ldots, r$,    (B-11a)
$d \le d_k + D(1 - b_k)$,  $k = 1, \ldots, r$,    (B-11b)
$\sum_{k=1}^{r} b_k = 1$.    (B-11c)

While area minimization involves incrementing the number of gates r until all constraints are satisfied, depth minimization involves incrementing the circuit delay d until all constraints are satisfied. This may require a tradeoff of increasing r beyond the minimum gate count.
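For concreteness, the following is a deliberately tiny skeleton showing how constraints of this form can be handed to an off-the-shelf solver; it uses Z3's Optimize engine via its Python API (GUROBI, CPLEX, or SCIP would be analogous). Only objective (B-1), linking constraint (B-4a), fan-in bound (B-7), and threshold definition (B-9) are wired up; a full program would add B-2 through B-11 in the same style. The dimensions are toy values, not a complete synthesis run.

```python
from z3 import Optimize, Int, Sum, sat

n, r, max_fanin = 3, 2, 3   # inputs, gates, fan-in bound (assumed values)

def binvar(opt, name):
    v = Int(name)
    opt.add(v >= 0, v <= 1)  # binary variable encoded as a bounded integer
    return v

opt = Optimize()
mu, nu, wp, wn, T = {}, {}, {}, {}, {}
for k in range(1, r + 1):
    T[k] = Int(f"T_{k}")
    for i in range(n + 1):               # i = 0 is the constant input x0
        mu[i, k] = binvar(opt, f"mu_{i}_{k}")
        wp[i, k] = binvar(opt, f"wp_{i}_{k}")
        wn[i, k] = binvar(opt, f"wn_{i}_{k}")
        opt.add(wp[i, k] + wn[i, k] <= mu[i, k])                  # (B-4a)
    for l in range(1, k):
        nu[l, k] = binvar(opt, f"nu_{l}_{k}")
    fanin = Sum([mu[i, k] for i in range(n + 1)] +
                [nu[l, k] for l in range(1, k)])
    opt.add(fanin <= max_fanin)                                    # (B-7)
    opt.add(2 * T[k] == fanin + 1)       # (B-9) scaled by 2; forces odd fan-in
opt.minimize(Sum(list(mu.values()) + list(nu.values())))           # (B-1)
print(opt.check() == sat)                # True: the toy instance is feasible
```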
FIG.22illustrates flowchart2200for synthesis problem formulation as a Boolean satisfiability (SAT) problem, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. SAT formulation is an alternative step to BIP formulation in the area-oriented and delay-oriented optimal synthesis flows described herein. In some embodiments, the SAT formulation extends the traditional SAT problem by allowing the use of minority gates instead of only majority gates, and by allowing the use of wide-input and single/multiple fan-in M-gates. At block2201, the CAD tool receives inputs2201, which include the gate count and depth of the desired MIG, and the maximum fan-in. It then uses these inputs at block2202to formulate a SAT (satisfiability) problem using objective S-1 and constraints through S-10. At block2203, the SAT problem is solved using open-source or commercial solvers such as the Z3 solver. The output of the solver is the SAT solution (a variable assignment) or UNSAT, as indicated by block2204.

Majority/minority inverter graph synthesis can be formulated as a Boolean satisfiability problem with constraints reflective of the PPA requirements. According to the literature, an MIG of size r over input variables $x_1, x_2, \ldots, x_n$ is a sequence $(x_{n+1}, x_{n+2}, \ldots, x_{n+r})$ of gates that combine previous gates using the majority function

$x_i = \langle a_1, a_2, a_3 \rangle$    (S-1)

for $n < i \le n+r$, where the three inputs to the gate are defined as

$a_1 = x_{s_1^i}^{p_1^i}$,  $a_2 = x_{s_2^i}^{p_2^i}$,  $a_3 = x_{s_3^i}^{p_3^i}$,    (S-2)

where $0 \le s_1^i < s_2^i < s_3^i < i$ are indexes pointing to the operands, and $0 \le p_1^i, p_2^i, p_3^i \le 1$ are the operands' polarities, with

$p_1^i + p_2^i + p_3^i \ge 2$.    (S-3)

The operands are ordered by their index and at most one of the operands is complemented ($p_j^i = 0$). To represent Boolean functions with fan-in less than three, e.g., AND and OR gates, the zero variable $x_0 = 0$ is defined. The output logic functions $f_1, f_2, \ldots, f_M$ constrain the output of the gates through $f_i = x_{s_i}^{p_i}$ for $1 \le i \le M$, where $0 \le s_i \le n+r$ indicates which input variable or gate realizes the i-th output function and $0 \le p_i \le 1$ is the output polarity. The depth of the i-th gate is specified as

$l_i = \max\{l_{s_1^i}, l_{s_2^i}, l_{s_3^i}\} + 1$    (S-4)

for $n < i \le n+r$, where the depth of the input variables is set to 0 ($l_i = 0$ for $i \le n$). Our formulation extends the literature in two ways. First, we enable the use of either majority or minority gates by representing $\langle\cdot\rangle$ as either the majority or the minority voting function. The minority voting function is the negation of the majority voting function. Second, and more importantly, we allow wide-input gates with any odd number of inputs greater than or equal to three. To enable wide-input gates, we define the n-input majority function $\langle\cdot\rangle_n$ either as the conjunction (AND) of the disjunctions (OR) of all $\binom{n}{(n+1)/2}$, (n+1)/2-sized combinations of the n inputs

$\langle a_1, a_2, \ldots, a_n \rangle_n = \bigwedge (a_{s_1} \vee a_{s_2} \vee \ldots \vee a_{s_{(n+1)/2}})$    (S-5)

or as the disjunction of the conjunctions of all $\binom{n}{(n+1)/2}$, (n+1)/2-sized combinations of the n inputs

$\langle a_1, a_2, \ldots, a_n \rangle_n = \bigvee (a_{s_1} \wedge a_{s_2} \wedge \ldots \wedge a_{s_{(n+1)/2}})$    (S-6)

where $(s_1, s_2, \ldots, s_{(n+1)/2})$ specifies the indexes of the size-(n+1)/2 subset of the input variables. While it is valid to allow each input to be complemented, this would lead to excess inverters in the graph. Because the outputs from the gates can be complemented, not all the inputs should be complementable. To ensure that at most (n−1)/2 operands can be complemented, we constrain the input polarities to the i-th gate by the following Boolean expression:

$\langle p_1^i, p_2^i, \ldots, p_n^i \rangle_n$,    (S-7)

where $0 \le p_1^i, p_2^i, \ldots, p_n^i \le 1$. To allow an M-gate to represent at least two-input AND or OR gates, only three operands are strictly ordered via $s_j^i$, where $j = 1, 2, \ldots, n$ is the input port of the M-gate while $i = n+1, n+2, \ldots, n+r$ is the gate number. The strict ordering of the last three input ports, as in

$0 \le s_1^i \le s_2^i \le \ldots \le s_{(n-2)}^i < s_{(n-1)}^i < s_n^i < i$,    (S-8)

ensures the flexibility of the M-gate while reducing the redundancy of the representation. The depth of the i-th gate is now specified as

$l_i = \max\{l_{s_1^i}, l_{s_2^i}, \ldots, l_{s_n^i}\} + 1$    (S-9)

for $n < i \le n+r$. The depth of the MIG is the maximum level over all outputs and must satisfy the depth constraint

$\max_{f_i = x_{s_i}^{p_i}} \{l_{s_i}\} \le d$,    (S-10)

where d is the desired depth of the MIG.
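The following is an illustrative Z3 check in the spirit of constraints S-1 to S-3: for a single 3-input gate over (x1, x2, x3), it searches for operand polarities (p1, p2, p3) with at most one complemented operand so that the gate matches a target truth table. A real encoding would also search the operand indexes $s_j^i$ and chain r gates; this fragment only shows the flavor, and the helper names are assumptions.

```python
from z3 import Solver, Bool, BoolVal, And, Or, Not, If, Sum, sat

def maj3(a, b, c):
    return Or(And(a, b), And(a, c), And(b, c))

def target(a, b, c):
    return maj3(Not(a), b, c)       # toy target: majority with x1 inverted

s = Solver()
p = [Bool(f"p{j}") for j in range(3)]            # True = non-inverted operand
s.add(Sum([If(pj, 1, 0) for pj in p]) >= 2)      # (S-3): at most one inverted
for bits in range(8):                            # all 2^3 truth-table rows
    xv = [BoolVal(bool(bits >> j & 1)) for j in range(3)]
    ops = [If(p[j], xv[j], Not(xv[j])) for j in range(3)]
    s.add(maj3(*ops) == target(*xv))             # gate must match the target
if s.check() == sat:
    print(s.model())                # expect p0 = False, p1 = p2 = True
```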
FIG.23illustrates flowchart2300for inverter minimization flow (e.g.,1006and1008), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Inverter minimization flow is performed following optimal synthesis1005in MIG synthesis flow1000. Inverter minimization flow is the second stage of logic optimization, focusing on the inverters. In some embodiments, inverter minimization flow assumes inverters are less expensive than M-gates and as such does not introduce new M-gates (beyond switching between majority and minority gates) or alter the connections between M-gates (beyond eliminating or introducing an inverter along a connection). Moreover, although the inverter minimization flow is exhaustive, it is efficient, since it is performed in a depth-wise manner on small circuits. At block2301, the CAD tool receives the input. The input here comprises a MIG and a synthesis objective (e.g., area, energy, or delay minimization). With these inputs, the CAD tool at block2302gets a list of M-gates for each depth in descending order {depthmax, . . . , 2, 1}. As such, a list of M-gates per graph depth is collected as indicated by block2303. In some embodiments, the order for the list of M-gates for each depth is in ascending order. At block2304, the CAD tool begins an iterative process to find whether the number of inverters can be reduced in the whole circuit along its critical path. In this process, the CAD tool processes the list of M-gates per graph depth and determines if inverter optimization is possible. At block2304a, the CAD tool selects the M-gates in the current depth and assumes that there are 'r' M-gates at this depth. Here, 'r' indicates the width of the current logic depth. If 'r' is too big, then 4^r will be huge and the search will be laborious. If all configurations of the entire MIG are considered, r will be the gate count, which can be large, making 4^r new MIGs a huge number. The depth-wise approach constrains 'r' to be the gate count per logic depth, which will be much smaller than the full circuit's gate count. Due to the equivalence property illustrated inFIG.24, the depth-wise approach gives an equivalent optimal configuration to the full circuit approach, but more efficiently in compute and space. At block2305, the CAD tool creates 4^r new MIGs from the best MIG using the four configurations illustrated inFIG.24, at each of the r M-gates. FIG.24illustrates equivalent forms2400of majority and minority function, in accordance with various embodiments.
Due to the self-duality property of majority and minority functions, functions2401,2402,2403, and2404are equivalent, where x are the input bits, y are the output bits, ƒ is the majority/minority function (gate), and ƒ_b is the corresponding minority/majority function. Stated plainly, to maintain the same functionality, an even number of {x, y, ƒ} can be negated. By applying this property from the last level of logic recursively to the input level and keeping track of the inverter count for each application of the self-duality property, one can select a configuration with the minimal inverter count. The levels of logic are obtained by grouping the nodes on the graph by depth. Nodes that have the same depth belong to the same level of logic. In some embodiments, it is assumed that an M-gate is more expensive in PPA than a CMOS inverter and that there is compatibility between the M-gate technology and CMOS. This is in stark contrast to other beyond-CMOS technologies such as quantum-dot cellular automata (QCA), which are not compatible with and cannot use CMOS inverters. Such beyond-CMOS technologies have native inverter implementations, but such inverters are much more expensive than a majority gate. As such, some embodiments do not allow an increase in the M-gate count during inverter propagation. When counting the number of inverters, it is assumed that the inverters are connected to the source M-gate, so that multiple inverted connections to target M-gates only count as one inverter. Referring back toFIG.23, at block2306, the CAD tool simplifies each of the 4^r new MIGs by cancelling back-to-back inverters as illustrated inFIG.25.FIG.25illustrates the concept2500of inverter cancellation, in accordance with some embodiments. During inverter minimization it can happen that two inverters end up along the connection between two M-gates. By the property of inversion, the two configurations are functionally equivalent: configuration2501can be minimized to configuration2502. As such, the back-to-back inverters cancel each other, leading to an inverter count decrease of 2. To avoid explosion in computation, inverter propagation is performed after the synthesis of each K-MIG as opposed to after the synthesis of the full logic circuit, in accordance with some embodiments. Referring back toFIG.23, at block2307, the CAD tool determines whether the synthesis objective is delay minimization. If delay minimization is the primary synthesis objective, the process proceeds to block2308where the delay along the timing critical path is computed. At block2308, a MIG is selected with the smallest delay, and the best MIG is assigned as the selected MIG. The process then proceeds to block2304to perform inverter minimization for the next graph depth. If at block2307the CAD tool determines that delay minimization is not the primary synthesis objective (e.g., the objective is area or energy), the process proceeds to block2309. At block2309, the CAD tool counts the number of inverters in each of the 4^r new MIGs, and selects a MIG with the smallest inverter count. The CAD tool then assigns the best MIG to the selected MIG. The process then moves to block2304to check whether the entire depth list is processed for inverter minimization. If so, then the last selected MIG is the best MIG as indicated by block2310. This selected MIG is minimized for inverters. In some embodiments, a similar process can also be performed for buffers (e.g., for buffer minimization). A sketch of the per-depth enumeration follows.
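The following is a hedged sketch of the per-depth enumeration at blocks2304-2305. The helpers `apply_config` and `cost` are hypothetical, standing in for graph rewriting with the four FIG.24 forms (including the back-to-back inverter cancellation of FIG.25) and for the objective (inverter count, or critical-path delay when minimizing delay).

```python
from itertools import product

def minimize_at_depth(best_mig, gates, apply_config, cost):
    """Try all 4^r FIG. 24 configurations for the r gates at one depth."""
    best_cost = cost(best_mig)
    for configs in product(range(4), repeat=len(gates)):    # 4^r candidates
        candidate = best_mig
        for gate, cfg in zip(gates, configs):
            candidate = apply_config(candidate, gate, cfg)   # FIG. 24 forms
        c = cost(candidate)
        if c < best_cost:                                    # keep the cheapest
            best_mig, best_cost = candidate, c
    return best_mig
```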
FIG.26illustrates flowchart2600of hierarchical synthesis flow (e.g.,1008and1013), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Hierarchical synthesis flow is a process in the MIG synthesis flow ofFIGS.10A-B. Hierarchical synthesis is used for larger circuits for which optimal synthesis usingFIG.10Abecomes time consuming, in accordance with some embodiments. At block2601, the CAD tool receives inputs including a MIG with n primary inputs, where n ∈ (K, H]. The inputs also include the maximum fan-in and the synthesis objective (e.g., area, energy, or delay minimization). The CAD tool then assigns the input MIG as the best MIG at block2602. When a logic circuit has ≤H input bits, then at block2603, the CAD tool simulates the initialized MIG and annotates each edge in the graph with its truth table (signal). Note that the truth tables of the output edges from the graph represent the outputs of the logic circuit. The CAD tool simulates the best MIG by passing all 2^n input signal configurations (truth table input rows) through the graph and annotates each node with its input signals and output signal. The output of block2603is the annotated MIG as indicated by block2604. Here, 2^H represents the largest truth table that should be computed and stored in memory for each edge in the graph. Hierarchical synthesis flow takes advantage of don't care conditions (rows of the truth table that can be ignored during synthesis) in the internal subgraphs of the MIG to further decrease the PPA of the synthesized circuit. This occurs because, as signals flow from the primary inputs of a circuit into its internal sections, the signals are shaped such that not all possibilities available at the periphery are present in the circuit's interior. When a logic circuit has more than H input bits, the CAD tool first topologically splits its underlying MIG into non-overlapping subgraphs, such that the number of input edges (bits) to each subgraph is less than or equal to H. Each of these subgraphs is called an H-MIG. This makes it computationally feasible for the CAD tool to simulate each H-MIG, annotate its edges with truth tables (signals), and take advantage of don't care conditions in synthesizing the H-MIG. In some embodiments, the synthesis of the H-MIGs is independent and can be performed in parallel. To ensure that the dependency structure of the logic components (nodes in the graph) is maintained, the nodes in the graph are first topologically sorted before they are segmented into H-MIGs, in accordance with some embodiments. The segmentation can be done greedily by adding nodes to a subgraph until the input bit condition of ≤H is satisfied. It can also be done using other graph-cut heuristics, in accordance with some embodiments. A sketch of the topological segmentation is shown after this paragraph. In some embodiments, each H-MIG can be considered a smaller logic circuit. The edges connected to input pins of the original logic circuit or to nodes in other upstream H-MIGs represent the input pins to the current H-MIG. The edges connected to output pins of the original logic circuit or to nodes in other downstream H-MIGs represent the output pins from the current H-MIG.
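The following is a minimal sketch of the topological-sort-plus-greedy segmentation described above. Graphs are plain dicts mapping node to successors/predecessors; the names and data layout are illustrative assumptions, not the patent's data model.

```python
from collections import deque

def topo_sort(succ):
    """Kahn's algorithm; assumes every node appears as a key of `succ`."""
    indeg = {v: 0 for v in succ}
    for outs in succ.values():
        for v in outs:
            indeg[v] += 1
    queue = deque(v for v, d in indeg.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return order

def greedy_segment(succ, pred, primary_inputs, H):
    """Pack topologically sorted gates into subgraphs with <= H input edges."""
    segments, current = [], []
    for v in topo_sort(succ):
        if v in primary_inputs:
            continue
        trial = current + [v]
        # inputs to the subgraph = edges arriving from outside the subgraph
        ins = {u for t in trial for u in pred[t] if u not in trial}
        if len(ins) <= H or not current:   # a lone node is always accepted
            current = trial
        else:
            segments.append(current)
            current = [v]
    if current:
        segments.append(current)
    return segments
```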
Each H-MIG cannot be synthesized optimally due to NP hardness, so in some embodiments, the CAD tool splits it into smaller synthesizable graphs called K-MIGs (or K-feasible cones, K-subgraphs) as indicated by block2605. In block2605, the new MIG is initialized with all terminal nodes (inputs and outputs). The CAD tool then splits the annotated MIG topologically into K-MIGs. In computational complexity theory, NP-hardness is the defining property of a class of problems. This class of problems is informally at least as hard as the hardest problems in NP. A simple example of an NP-hard problem is the subset sum problem. A greedy algorithm or area- or delay-oriented heuristics can be used to create the set of K-MIGs, in accordance with some embodiments. The greedy algorithm splits the H-MIG into K-MIGs by adding nodes to a subgraph until the input bit condition of ≤K is satisfied. At block2606, an iterative process begins where the K-MIG list from block2605is processed until it is exhausted. At block2607, the next K-MIG in the list is selected as the current K-MIG. Here, the number of input edges to the K-MIGs is i≤K. At block2608, the CAD tool reduces the 2^n input truth table rows to at most 2^min(i, n) rows by selecting the unique rows. When the number of unique rows is less than 2^min(i, n), don't care conditions exist and can be taken advantage of. Each of the i inputs to a K-MIG will have 2^n entries (a truth table column) because n primary inputs to an H-MIG result in 2^n input signal configurations (truth table rows). Let us consider the 2^n-bit long bit strings for each of the i unique input connections (ignoring constant connections) to the K-MIG as the inputs in a new truth table for the K-MIG, and the 2^n-bit long bit strings for each of the output connections emanating from the K-MIG as the truth table outputs. The K-MIG's truth table has i input columns. This implies that at most 2^min(i, n) of the rows can be unique. In some embodiments, the number of unique truth table rows will be less than 2^min(i, n), which amounts to fewer restrictions in synthesizing the K-MIG and ultimately a more compact circuit. At block2609, the CAD tool performs optimal synthesis using the reduced truth table and the BIP or SAT formulation and associated solvers. To illustrate the reduction of the truth table from 2^n rows to ≤2^min(i, n) rows for a K-MIG with i unique input connections (ignoring constant connections), consider the MIG for a Majority-OR circuit2620inFIG.26B. Assume the full circuit is an H-MIG and the second majority gate (an OR gate) with its inputs and output in the dashed box is a K-MIG. In this example, n=4 and i=2 (note, here constant inputs do not count as input variables). The H-MIG truth table has 16 rows as shown in Table 2. Extracting the input and output columns of the K-MIG from the overall truth table, the truth table for the K-MIG is obtained as shown in Table 3. Removing duplicate rows, we obtain the reduced truth table shown in Table 4, which has 4 rows.

TABLE 2
Overall truth table
a  b  c  d  1  Y1  Y2
1  1  1  1  1  1   1
0  1  1  1  1  1   1
1  0  1  1  1  1   1
0  0  1  1  1  0   1
1  1  0  1  1  1   1
0  1  0  1  1  0   1
1  0  0  1  1  0   1
0  0  0  1  1  0   1
1  1  1  0  1  1   1
0  1  1  0  1  1   1
1  0  1  0  1  1   1
0  0  1  0  1  0   0
1  1  0  0  1  1   1
0  1  0  0  1  0   0
1  0  0  0  1  0   0
0  0  0  0  1  0   0

TABLE 3
OR (second majority gate) truth table extracted from overall truth table
d  1  Y1  Y2
1  1  1   1
1  1  1   1
1  1  1   1
1  1  0   1
1  1  1   1
1  1  0   1
1  1  0   1
1  1  0   1
0  1  1   1
0  1  1   1
0  1  1   1
0  1  0   0
0  1  1   1
0  1  0   0
0  1  0   0
0  1  0   0

TABLE 4
OR reduced truth table
d  1  Y1  Y2
1  1  1   1
1  1  0   1
0  1  1   1
0  1  0   0
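Under the assumption that truth-table columns are bit-tuples (as in the earlier sketches), the row reduction at block2608 amounts to projecting the H-MIG rows onto the K-MIG's columns and deduplicating; the constant input column is ignored here, and the helper name is illustrative.

```python
def reduce_kmig_table(input_cols, output_cols):
    """Keep the unique (inputs, outputs) rows of a K-MIG's projected table."""
    rows = set()
    n_rows = len(input_cols[0])
    for j in range(n_rows):                       # 2^n H-MIG rows
        ins = tuple(col[j] for col in input_cols)
        outs = tuple(col[j] for col in output_cols)
        rows.add((ins, outs))
    return sorted(rows)                           # <= 2^min(i, n) unique rows

# Tables 2-4 example: the OR gate's non-constant columns (d, Y1) and output
# Y2 reduce from 16 rows to the 4 unique rows of Table 4.
d  = (1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0)
Y1 = (1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0)
Y2 = (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0)
print(reduce_kmig_table([d, Y1], [Y2]))           # 4 rows, matching Table 4
```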
After synthesizing each K-MIG, followed by inverter minimization at block2610, the CAD tool connects the optimal K-MIG to other optimally synthesized K-MIGs within a new H-MIG, using their input and output edges (ports). At block2611, the CAD tool adds the synthesized MIG to the new MIG by adding missing predecessor M-gates, connecting input edges to predecessor M-gates, and connecting output edges to successor terminal output nodes. For circuits with input bits >H, once each new H-MIG is synthesized in parallel, the H-MIGs are connected with each other to create a new, bigger MIG. Once the new H-MIG (e.g., circuit with input bits ≤H) or MIG (e.g., circuit with input bits >H) is created, then at block2612the new H-MIG/MIG is compared to the current best H-MIG/MIG based on the synthesis PPA objective. At block2613, the CAD tool decides about the H-MIG/MIG. If the new H-MIG/MIG is better, it becomes the new best H-MIG/MIG and the optimization is repeated as indicated by blocks2614and2606. However, if the CAD tool determines at block2613that the new H-MIG/MIG is worse, the optimization is terminated, and the best H-MIG/MIG is returned as the optimal MIG as indicated by block2615. FIG.27illustrates flowchart2700for post-synthesis flow (e.g.,1019), in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. Post-synthesis flow of flowchart2700is performed by MIG synthesis ofFIGS.10A-B. The post-synthesis flow of flowchart2700ensures that the fan-in and fan-out requirements are observed by the overall synthesis flow. Flowchart2700begins with inputs2701, which include a given MIG, a list of allowed M-gate fan-ins, and M-gate fan-out constraints. At block2702, the CAD tool applies the gate pruning algorithm ofFIG.28. At block2703, the CAD tool applies the buffering algorithm ofFIG.29. The output after applying the gate pruning algorithm and the buffering algorithm is the synthesized MIG as indicated by block2704. FIG.28illustrates flowchart2800for gate pruning algorithm flow, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. After the BIP/SAT solution is obtained and translated into an MIG, each gate can be simplified to the smallest possible input width (fan-in) by pruning input connections, or expanded to the next larger allowed fan-in, by using the gate pruning algorithm of flowchart2800. At block2801, the CAD tool receives input to simulate the MIG. The inputs include a given MIG and an allowed list of M-gate fan-ins (Ilist). At block2802, the CAD tool simulates the MIG and annotates each non-terminal node with input signals.
At block2803, the CAD tool initializes the pruned MIG with all terminal nodes of the input MIG (e.g., primary inputs and outputs). At block2804, the CAD tool gets a list of non-terminal nodes of the input MIG. The CAD tool then starts an iterative process (blocks2805through2811). Given a majority or minority inverter graph, each gate is simplified to the smallest possible width by pruning input connections using the following relation for M-gates:

$M(x_1, x_2, \ldots, x_j, x_{j+1}, \bar{x}_{j+1}, x_{j+2}, \ldots, x_I) = M(x_1, x_2, \ldots, x_j, x_{j+2}, \ldots, x_I)$.    (GPA-1)

When a signal and its inverted form are inputs to an I-input M-gate, the pruned version will be an (I−2)-input gate. This pruning can be performed until no pair of a signal and its inverted form remains. In some embodiments, if a single fan-in is desired for circuit uniformity, then following the gate pruning, a pair of source and ground signals can be connected to each pruned M-gate until the maximum fan-in is achieved. This corresponds to applying equation GPA-1 in reverse, from right to left, where $x_{j+1}$ is the ground signal. At block2805, the CAD tool checks if the list of non-terminal nodes of the input MIG is exhausted (e.g., all items in the list are processed). In the beginning of the flow, the CAD tool proceeds to block2806since it begins to process the list of non-terminal nodes. At block2806, the CAD tool selects the next node in the list as the current M-gate. At block2807, the CAD tool finds all input edge signals to the M-gate in the annotated MIG. At block2808, the CAD tool, using the cancellation property of M-gates in equation (GPA-1), eliminates pairs of edges with inverse signals to obtain the pruned M-gate. At block2809, the CAD tool determines whether the pruned M-gate fan-in is in the Ilist. If the pruned M-gate fan-in is not in the Ilist, the process proceeds to block2810where the CAD tool adds a pair of source and ground input edges to the pruned M-gate until the fan-in is in the Ilist. At block2811, the CAD tool adds the pruned M-gate to the MIG. The process is repeated for all the non-terminal nodes of the input MIG. Once all the non-terminal nodes in the list are exhausted, a pruned MIG is achieved as indicated by block2812. A sketch of the cancellation and padding steps follows.
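The following is a minimal sketch of the cancellation step (GPA-1), dropping complementary input pairs and then padding with source/ground pairs until the fan-in is in the allowed list. Inputs are (signal, inverted) pairs; SRC/GND are illustrative stand-ins for tie-high/tie-low. Termination of the padding loop assumes the Ilist contains a fan-in of the same parity as, and no smaller than, the pruned width.

```python
SRC, GND = ("const", False), ("const", True)

def prune_mgate(inputs, ilist):
    pruned = list(inputs)
    changed = True
    while changed:                          # repeat until no (x, x_bar) pair
        changed = False
        for sig, inv in list(pruned):
            if (sig, not inv) in pruned:
                pruned.remove((sig, inv))   # GPA-1: the pair cancels out
                pruned.remove((sig, not inv))
                changed = True
                break
    while len(pruned) not in ilist:
        pruned += [SRC, GND]                # GPA-1 applied in reverse
    return pruned

# Example: a 5-input gate with a complementary pair prunes to 3 inputs.
print(prune_mgate([("a", False), ("b", False), ("b", True),
                   ("c", False), ("d", False)], {1, 3, 5}))
```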
FIG.29illustrates flowchart2900for buffering algorithm flow, in accordance with some embodiments. While various blocks are shown in a particular order, the order can be modified. For example, some blocks can be performed before others or performed simultaneously. The various blocks can be performed by software, hardware, or a combination of both. Here, the trapezoidal shaped blocks are inputs or outputs. In some embodiments, to address the fan-out constraints on M-gates, it is first assumed that there are no fan-out constraints and the synthesis is performed. In the buffering algorithm of the post-synthesis flow shown in flowchart2900, this assumption is corrected by introducing inverters and buffers as needed to ensure the functionality of the circuit. When all outward connections (fan-out) from each M-gate are considered, if one of the connections is inverted, an inverter is already present in the circuit. As such, only one more inverter may be needed to buffer the non-inverted connections. On the other hand, if there are no inverted connections, a buffer (e.g., two inverters connected back-to-back) may be used. At block2901, the CAD tool receives input2901, which includes the MIG and the M-gate fan-out constraints. At block2902, the CAD tool initializes the buffered MIG with all terminal nodes of the input MIG (e.g., primary inputs and outputs). At block2903, the CAD tool gets a list of non-terminal nodes of the input MIG. The iterative process then begins at block2904, where the CAD tool checks whether the list of nodes is exhausted. At block2904, if the CAD tool determines that the list is not exhausted (or processed), then at block2905the CAD tool selects the next node in the list as the current M-gate. At block2906, the CAD tool finds all output edges from the current M-gate, grouped as inverted and non-inverted. At block2907, the CAD tool determines whether there are two groups of connections. If there are two groups of connections, then at block2908the CAD tool determines whether the fan-out constraint is exceeded. If the fan-out constraint is exceeded, then at block2909the CAD tool adds one inverter after the M-gate. The CAD tool then re-wires all inverted connections from after the first inverter and all the non-inverted connections from after the second inverter. At block2916, the CAD tool adds the buffered M-gate to the MIG and the process repeats. If the fan-out constraint is not exceeded, then at block2910the CAD tool decides not to add buffers (e.g., no buffering needed). The CAD tool then rewires all inverted connections from after the inverter. The process then proceeds to block2916, where the CAD tool adds the buffered M-gate to the MIG and the process repeats. If there are not two groups of connections (see block2907), then the process proceeds to block2911where the CAD tool determines whether the single group is inverted. If the group is inverted, then at block2912the CAD tool decides that no buffering is needed, and rewires all inverted connections from after the inverter. The process then proceeds to block2916, where the CAD tool adds the buffered M-gate to the MIG and the process repeats. If the group is not inverted (see block2911), then at block2913the CAD tool determines whether the fan-out constraint is exceeded. If the fan-out constraint is exceeded, then at block2914the CAD tool adds a buffer after the M-gate and rewires all non-inverted connections from after the buffer. The process then proceeds to block2916, where the CAD tool adds the buffered M-gate to the MIG and the process repeats. If the fan-out constraint is not exceeded (see block2913), then at block2915the CAD tool decides that no buffering or rewiring is needed. The process then proceeds to block2916, where the CAD tool adds the buffered M-gate to the MIG and the process repeats. After the CAD tool determines that the node list is exhausted (see block2904), the buffered MIG is provided as indicated by block2917. The decision structure is sketched below.
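The per-gate decision of FIG.29 can be summarized in a short, hedged sketch. Here `inv` and `pos` are the inverted and non-inverted fan-out edge groups of one M-gate, `limit` is the fan-out constraint, and the return value names the extra elements to insert; the representation is an assumption for illustration.

```python
def buffering_decision(inv, pos, limit):
    if inv and pos:                         # block 2907: two groups exist
        if len(inv) + len(pos) > limit:     # block 2908: constraint exceeded
            # one inverter already serves the inverted group; add a second
            # inverter and split the load across the two (block 2909)
            return ["inverter"]
        return []                           # block 2910: rewire only
    if inv:                                 # block 2911: only inverted edges
        return []                           # block 2912: reuse the inverter
    if len(pos) > limit:                    # block 2913: only non-inverted
        return ["buffer"]                   # block 2914: back-to-back inverters
    return []                               # block 2915: nothing to do
```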
FIG.30illustrates processor system3000with machine-readable storage media having instructions that, when executed, cause the processor to perform logic synthesis, in accordance with various embodiments. Elements of embodiments (e.g., the various flowcharts described herein) are also provided as a machine-readable medium (e.g., memory) for storing the computer-executable instructions (e.g., instructions to implement any other processes discussed herein). In some embodiments, computing platform3000comprises memory3001, processor3002, machine-readable storage media3003(also referred to as tangible machine-readable medium), communication interface3004(e.g., wireless or wired interface), and network bus3005coupled together as shown. In some embodiments, processor3002is a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a general-purpose Central Processing Unit (CPU), or low power logic implementing a simple finite state machine to perform the methods of the various flowcharts, etc. In some embodiments, the various logic blocks of system3000are coupled together via network bus3005. Any suitable protocol may be used to implement network bus3005. In some embodiments, machine-readable storage medium3003includes instructions (also referred to as the program software code/instructions) for logic synthesis of a mix of CMOS gates and majority and minority logic circuits as described with reference to various embodiments and flowcharts. Program software code/instructions associated with the flowcharts (and/or various embodiments) and executed to implement embodiments of the disclosed subject matter may be implemented as part of an operating system or a specific application, component, program, object, module, routine, or other sequence of instructions or organization of sequences of instructions referred to as "program software code/instructions," "operating system program software code/instructions," "application program software code/instructions," or simply "software" or firmware embedded in processor. In some embodiments, the program software code/instructions associated with the flowcharts of various embodiments are executed by system3000. In some embodiments, the program software code/instructions associated with the flowcharts of various embodiments are stored in a computer executable storage medium3003and executed by processor3002. Here, computer executable storage medium3003is a tangible machine-readable medium that can be used to store program software code/instructions and data that, when executed by a computing device, causes one or more processors (e.g., processor3002) to perform a method(s) as may be recited in one or more accompanying claims directed to the disclosed subject matter. The tangible machine-readable medium3003may include storage of the executable software program code/instructions and data in various tangible locations, including for example ROM, volatile RAM, non-volatile memory and/or cache, and/or other tangible memory as referenced in the present application. Portions of this program software code/instructions and/or data may be stored in any one of these storage and memory devices. Further, the program software code/instructions can be obtained from other storage, including, e.g., through centralized servers or peer-to-peer networks and the like, including the Internet. Different portions of the software program code/instructions and data can be obtained at different times and in different communication sessions or in the same communication session. The software program code/instructions associated with the various flowcharts and data can be obtained in their entirety prior to the execution of a respective software program or application by the computing device. Alternatively, portions of the software program code/instructions and data can be obtained dynamically, e.g., just in time, when needed for execution. Alternatively, some combination of these ways of obtaining the software program code/instructions and data may occur, e.g., for different applications, components, programs, objects, modules, routines or other sequences of instructions or organization of sequences of instructions, by way of example.
Thus, it is not required that the data and instructions be on a tangible machine-readable medium in their entirety at a particular instance of time. Examples of tangible computer-readable media3003include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic storage media, and optical storage media (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs), etc.), among others. The software program code/instructions may be temporarily stored in digital tangible communication links while implementing electrical, optical, acoustical or other forms of propagating signals, such as carrier waves, infrared signals, digital signals, etc., through such tangible communication links. In general, tangible machine-readable medium3003includes any tangible mechanism that provides (i.e., stores and/or transmits in digital form, e.g., data packets) information in a form accessible by a machine (i.e., a computing device), which may be included, e.g., in a communication device, a computing device, a network device, a personal digital assistant, a manufacturing tool, or a mobile communication device, whether or not able to download and run applications and subsidized applications from the communication network, such as the Internet, e.g., an iPhone®, Galaxy®, Blackberry®, Android®, or the like, or any other device including a computing device. In one embodiment, the processor-based system is in the form of, or included within, a PDA (personal digital assistant), a cellular phone, a notebook computer, a tablet, a game console, a set top box, an embedded system, a TV (television), a personal desktop computer, etc. Alternatively, the traditional communication applications and subsidized application(s) may be used in some embodiments of the disclosed subject matter.
Examples of such dielectrics are: HfO, ABO3 perovskites, nitrides, oxy-fluorides, oxides, etc. A para-electric capacitor comprises first and second metal plates with a para-electric material between them. In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric materials to make paraelectric material. Examples of room temperature paraelectric material include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, and PMN-PT based relaxor ferroelectrics. A non-linear dielectric capacitor comprises first and second metal plates with a non-linear dielectric material between them. The range for the dielectric constant is 1.2 to 10000. The capacitors C1, C2, and C3can be implemented in MIM (metal-insulator-metal) capacitor technology, as transistor gate capacitors, or as a hybrid of metal capacitors and transistor capacitors. One terminal of each of the capacitors C1, C2, and C3is coupled to a common node cn. This common node is coupled to node n1, which is coupled to a first terminal of a non-linear polar capacitor3105. The majority function is performed at the common node cn, and the resulting voltage is projected on to capacitor3105. For example, the majority function of the currents (I1, I2, and I3) on node cn results in a resultant current that charges capacitor3105. Table 5 illustrates the majority function f(Majority Vin1, Vin2, Vin3).

TABLE 5
Vin1  Vin2  Vin3  cn (f(Majority Vin1, Vin2, Vin3))
0     0     0     0
0     0     1     0
0     1     0     0
0     1     1     1
1     0     0     0
1     0     1     1
1     1     0     1
1     1     1     1

A capacitor with FE material (also referred to as a FEC) is a non-linear capacitor with its potential $V_F(Q_F)$ being a cubic function of its charge.FIG.32illustrates plot3200showing characteristics of a FEC. Plot3200is a charge-voltage (Q-V) plot for a block of Pb(Zr0.5Ti0.5)O3 of area (100 nm)^2 and thickness 20 nm (nanometers). Plot3200shows local extrema at +/−Vo indicated by the dashed lines. Here, the term Vc is the coercive voltage. In applying a potential V across the FEC, its charge can be unambiguously determined only for |V| > Vo. Otherwise, the charge of the FEC is subject to hysteresis effects. Referring back toFIG.31, in some embodiments, an odd number N of capacitors are coupled to a single FEC to form a majority gate. In this case, N=3. The measured charge on the FEC ($Q_F$) is the output of the majority gate. Solving for a steady-state solution, the parasitic resistors are ignored and the input potentials $V_i$ (or Vin) are assumed to be constant. In this case, the charge across each linear capacitor (C1, C2, C3) is:

$Q_i = C_i (V_i - V_F)$    (1)

The charge summed at node cn and across FEC3105is expressed as:

$Q_F = \sum_i Q_i$    (2)
$Q_F = \sum_i C_i V_i - \sum_i C_i V_F$    (3)
$Q_F = \sum_i C_i V_i - C\,V_F(Q_F)$    (4)
$V_F(Q_F) = \sum_i \frac{C_i}{C} V_i - \frac{Q_F}{C}$    (5)

Here, $C = \sum_i C_i$ is the sum of the capacitances. In the limit $C \to \infty$, the following is achieved:

$V_F(Q_F) = \sum_i \frac{C_i}{C} V_i = \bar{V}$    (6)

The potential across FEC3105is the average of all the input potentials weighted by the capacitances (e.g., C1, C2, and C3). When $C_i = C/N$ are all equal, $V_F$ is just a simple mean. To ensure that

$Q_F = V_F^{-1}(\bar{V})$    (7)

is well defined, all possible values of $\bar{V}$ must have magnitudes greater than Vc, the coercive potential. Assuming binary inputs of +/−Vs, the potential with the smallest magnitude is:

$\bar{V} = V_s / N$    (8)

This occurs when (N+1)/2 of the inputs are +Vs and (N−1)/2 are −Vs.
Then,

$V_s > N\,V_c$    (9)

The output of the majority gate at node n1is illustrated byFIG.33.FIG.33illustrates plot3300showing the output of a 3-input majority gate, in accordance with some embodiments. As an example, for N=3, the possible inputs are:

$\bar{V} \in \{-\tfrac{3}{3}V_s, -\tfrac{1}{3}V_s, +\tfrac{1}{3}V_s, +\tfrac{3}{3}V_s\}$    (10)

Referring back toFIG.31, since capacitor3105is a non-linear polar capacitor, both terminals of the capacitor are pre-discharged to ground or to a known predetermined voltage via n-type pull-down transistors MN1and MN2, and p-type pull-up transistor MP1. The predetermined voltage can be programmable and can be positive or negative. In some embodiments, n-type transistor MN1is coupled to node Vout_int1(internal Vout node) and is controllable by clock or reset signal Clk1. In some embodiments, n-type transistor MN2is coupled to node Vout_int2(internal Vout node) and is controllable by clock or reset signal Clk2. In some embodiments, p-type transistor MP1is coupled to node Vout_int2and is controllable by Clk3b. In some embodiments, the n-type transistors MN1and MN2are replaced with p-type transistors to pre-charge both terminals (Vout_int1and Vout_int2) of capacitor3105to a supply voltage or another predetermined voltage, while the p-type transistor MP1is replaced with an n-type transistor coupled to ground or a negative supply rail. The predetermined voltage can be programmable and can be positive or negative. In some embodiments, the pre-charge or pre-discharge of the terminals of capacitor3105(or nodes cn and n1) is done periodically by clock signals Clk1, Clk2, and Clk3b. The controls can be non-clock signals that are generated by a control logic (not shown). For example, the control can be issued every predetermined or programmable time. In some embodiments, clock signals Clk1, Clk2, and Clk3bare issued in a reset phase, which is followed by an evaluation phase where inputs Vin1, Vin2, and Vin3are received, and the majority function is performed on them.FIG.34illustrates timing diagram3400for resetting the ferroelectric capacitor for the majority gate ofFIG.31, in accordance with some embodiments. Clk1has a pulse width larger than the pulse widths of Clk2and Clk3b. Clk3bis an inverse of Clk3(not shown). In some embodiments, Clk1is first asserted, which begins to discharge node Vout_int1. While node Vout_int1is being discharged, Clk2is asserted. Clk2may have a pulse width that is substantially half of the pulse width of Clk1. When Clk2is asserted, node Vout_int2is discharged. This sequence assures that both terminals of the non-linear polar material of capacitor3105are discharged sequentially. In various embodiments, before discharging node Vout_int2, Clk3bis de-asserted, which turns on transistor MP1, causing Vout_int2to be charged to a predetermined value (e.g., the supply level). The pulse width of Clk3bis smaller than the pulse width of Clk1to ensure that the Clk3bpulsing happens within the Clk1pulse window. This is useful to ensure that non-linear polar capacitor3105is initialized to a known programmed state along with the other capacitors (e.g., C1, C2, C3), which are initialized to 0 V across them. The pulsing on Vout_int2creates the correct field across the non-linear polar capacitor3105in conjunction with Vout_int1to put it in the correct state, such that during the operating mode, if Vout_int1goes higher than the Vc value (coercive voltage value), it triggers the switching of non-linear polar capacitor3105, thereby resulting in a voltage build-up on Vout_int2.
In some embodiments, load capacitor CL is added to node Vout_int2. In some embodiments, load capacitor CL is a regular capacitor (e.g., a non-ferroelectric capacitor). The capacitance value of CL on Vout_int2is useful to ensure that the FE switching charge (of FE capacitor3105) provides the right voltage level. For a given FE size (area A), with polarization switching density (dP) and a desired voltage swing of Vdd (supply voltage), the capacitance of CL should be approximately CL = dP*A/Vdd. There is a slight deviation from the above CL value because there is charge sharing on Vout_int2due to the dielectric component of FE capacitor3105. The charge sharing responds relative to the voltage on Vout_int1and the capacitor divider ratio between the dielectric component of FE capacitor3105and the load capacitor (CL). Note, the capacitance of CL can be an aggregate of all the capacitances (e.g., parasitic routing capacitance on the node, gate capacitance of output stage3106, and drain or source capacitance of the reset devices (e.g., MN2and MP1) on the Vout_int2node). In some embodiments, for a given size of non-linear polar capacitor3105, the CL requirement can be met by just the load capacitance of non-FE logic3106and the parasitic component itself, so that a separate linear capacitor may not be needed. A worked sizing example is shown below.
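The following is a worked example of the sizing rule CL ≈ dP*A/Vdd quoted above. The numeric values are placeholder assumptions, not characterized device data.

```python
# Load-capacitor sizing example for CL = dP * A / Vdd.
dP = 0.3            # polarization switching density, C/m^2 (= 30 uC/cm^2, assumed)
A = (100e-9) ** 2   # ferroelectric capacitor area, m^2 (100 nm x 100 nm)
Vdd = 1.0           # desired output swing, V (assumed)

CL = dP * A / Vdd
print(f"CL ~ {CL * 1e15:.3f} fF")   # ~ 3 fF for these assumptions
```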
In some embodiments, the non-linear polar material of capacitor3105includes one of: ferroelectric (FE) material, para-electric material, relaxor ferroelectric, or non-linear dielectric. In various embodiments, para-electric material is the same as FE material but with chemical doping of the active ferroelectric ion by an ion with no polar distortion. In some cases, the non-polar ions are non-s orbital ions formed with p, d, or f external orbitals. In some embodiments, non-linear dielectric materials are the same as para-electric materials, relaxors, and dipolar glasses. In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric material to make paraelectric material. Examples of room temperature paraelectric material include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, and PMN-PT based relaxor ferroelectrics. In various embodiments, the FE material can be any suitable low voltage FE material that allows the FE material to switch its state by a low voltage (e.g., 100 mV). In some embodiments, the FE material comprises a perovskite of the type ABO3, where 'A' and 'B' are two cations of different sizes, and 'O' is oxygen, which is an anion that bonds to both the cations. Generally, the size of the A atoms is larger than the size of the B atoms. In some embodiments, the perovskite can be doped (e.g., by La or lanthanides). Perovskites can be suitably doped to achieve a spontaneous distortion in a range of 0.3 to 2%. For example, for chemically substituted lead titanate, such as Zr in the Ti site or La or Nb in the Ti site, the concentration of these substitutes is such that it achieves the spontaneous distortion in the range of 0.3 to 2%. For the chemically substituted BiFeO3, BiCrO3, BiCoO3 class of materials, La or rare earth substitution into the Bi site can tune the spontaneous distortion. The threshold in the FE material has a highly non-linear transfer function in the polarization vs. voltage response. The threshold is related to a) the non-linearity of the switching transfer function; and b) the squareness of the FE switching. The non-linearity of the switching transfer function is the width of the derivative of the polarization vs. voltage plot. The squareness is defined by the ratio of the remnant polarization to the saturation polarization; perfect squareness will show a value of 1. The squareness of the FE switching can be suitably manipulated with chemical substitution. For example, in PbTiO3, a P-E (polarization-electric field) square loop can be modified by La or Nb substitution to create an S-shaped loop. The shape can be systematically tuned to ultimately yield a non-linear dielectric. The squareness of the FE switching can also be changed by the granularity of the FE layer. A perfect epitaxial, single crystalline FE layer will show higher squareness (e.g., a ratio closer to 1) compared to a polycrystalline FE. Such perfect epitaxy can be accomplished using lattice-matched bottom and top electrodes. In one example, BiFeO3 (BFO) can be epitaxially synthesized using a lattice-matched SrRuO3 bottom electrode, yielding P-E loops that are square. Progressive doping with La will reduce the squareness. In some embodiments, the FE material is contacted with a conductive metal oxide that includes one of the conducting perovskite metallic oxides exemplified by: La—Sr—CoO3, SrRuO3, La—Sr—MnO3, YBa2Cu3O7, Bi2Sr2CaCu2O8, LaNiO3, and ReO3. In some embodiments, the FE material comprises a stack of layers including low voltage FE material between (or sandwiched between) conductive oxides. In various embodiments, when the FE material is a perovskite, the conductive oxides are of the type AA′BB′O3. A′ is a dopant for atomic site A; it can be an element from the Lanthanides series. B′ is a dopant for atomic site B; it can be an element from the transition metal elements, especially Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, or Zn. A′ may have the same valency as site A, with a different ferroelectric polarizability. In some embodiments, the FE material comprises hexagonal ferroelectrics of the type h-RMnO3, where R is a rare earth element such as: cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), and yttrium (Y). The ferroelectric phase is characterized by a buckling of the layered MnO5 polyhedra, accompanied by displacements of the Y ions, which lead to a net electric polarization. In some embodiments, hexagonal FE includes one of: YMnO3 or LuFeO3. In various embodiments, when the FE material comprises hexagonal ferroelectrics, the conductive oxides adjacent to the FE material are of the A2O3 (e.g., In2O3, Fe2O3) and AB2O3 type, where 'A' is a rare earth element and B is Mn. In some embodiments, the FE material comprises improper FE material. An improper ferroelectric is a ferroelectric where the primary order parameter is an order mechanism such as strain or buckling of the atomic order. Examples of improper FE material are the LuFeO3 class of materials or super lattices of ferroelectric and paraelectric materials PbTiO3 (PTO) and SnTiO3 (STO), respectively, and LaAlO3 (LAO) and STO, respectively. For example, a super lattice of [PTO/STO]n or [LAO/STO]n, where 'n' is between 1 to 100. While various embodiments here are described with reference to ferroelectric material for storing the charge state, the embodiments are also applicable for paraelectric material. For example, the capacitor of various embodiments can be formed using paraelectric material instead of ferroelectric material.
In some embodiments, the FE material includes one of: Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides or their alloyed oxides. In some embodiments, FE material includes one of: Al(1-x)Sc(x)N, Ga(1-x)Sc(x)N, Al(1-x)Y(x)N or Al(1-x-y)Mg(x)Nb(y)N, y doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein 'x' is a fraction. In some embodiments, the FE material includes Bismuth ferrite (BFO), lead zirconate titanate (PZT), BFO with doping material, or PZT with doping material, wherein the doping material is one of Nb or La; and relaxor ferroelectrics such as PMN-PT. In some embodiments, the FE material includes Bismuth ferrite (BFO), or BFO with a doping material, wherein the doping material is one of Lanthanum, or any element from the lanthanide series of the periodic table. In some embodiments, the FE material includes lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb. In some embodiments, the FE material includes a relaxor ferroelectric including one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST). In some embodiments, the FE material includes Hafnium oxides of the form Hf1-x Ex Oy, where E can be Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y. In some embodiments, the FE material includes Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate. In some embodiments, the FE material comprises multiple layers. For example, alternating layers of [Bi2O2]2+, and pseudo-perovskite blocks (Bi4Ti3O12 and related Aurivillius phases), with perovskite layers that are n octahedral layers in thickness can be used. In some embodiments, the FE material comprises organic material, for example, Polyvinylidene fluoride or polyvinylidene difluoride (PVDF). The FE material is between two electrodes. These electrodes are conducting electrodes. In some embodiments, the electrodes are perovskite templated conductors. In such a templated structure, a thin layer (e.g., approximately 10 nm) of a perovskite conductor (such as SrRuO3) is coated on top of IrO2, RuO2, PdO2, or PtO2 (which have a non-perovskite structure but higher conductivity) to provide a seed or template for the growth of pure perovskite ferroelectric at low temperatures. In some embodiments, when the ferroelectric comprises hexagonal ferroelectric material, the electrodes can have hexagonal metals, spinels, or cubic metals. Examples of hexagonal metals include: PtCoO2, PdCoO2, and other delafossite structured hexagonal metallic oxides such as Al-doped ZnO. Examples of spinels include Fe3O4 and LiV2O4. Examples of cubic metals include Indium Tin Oxide (ITO) such as Sn-doped In2O3. The charge developed on node n1produces a voltage and current that is the output of the majority gate3104. Any suitable driver3106can drive this output. For example, a non-FE logic, FE logic, CMOS logic, BJT logic, etc. can be used to drive the output to a downstream logic. Examples of the drivers include inverters, buffers, NAND gates, NOR gates, XOR gates, amplifiers, comparators, digital-to-analog converters, analog-to-digital converters, etc. In some embodiments, output "out" is reset by driver3106via the Clk1signal.
For example, a NAND gate with one input coupled to Vout_int2and the other input coupled to Clk1can be used to reset "out" during a reset phase. WhileFIG.31illustrates a 3-input majority gate, the same concept can be extended to more than 3 inputs to make an N-input majority gate, where N is greater than 2. For example, a 5-input majority gate is similar to 3-input majority gate3104but for additional inputs Vin4and Vin5. These inputs can come from the same drivers (e.g., any one of drivers3101,3102,3103) or from different drivers. Inputs Vin4and Vin5can be analog, digital, or a combination of them. For example, Vin4is a digital signal while Vin5is an analog signal. The additional inputs Vin4and Vin5are coupled to additional non-ferroelectric capacitors C4and C5, respectively (not shown). The composition and size of the capacitors C4and C5are similar to that of C1, C2, and C3. Here, resistors R4and R5are parasitic resistors. The majority function is performed at the common node cn, and the resulting voltage is projected on to capacitor3105. For example, the majority function of the currents (I1, I2, I3, I4, and I5) on node cn results in a resultant current that charges capacitor3105. Table 6 illustrates the majority function f(Majority Vin1, Vin2, Vin3, Vin4, Vin5) of a 5-input majority gate.

TABLE 6

Vin1  Vin2  Vin3  Vin4  Vin5  cn (f(Majority Vin1, Vin2, Vin3, Vin4, Vin5))
0     0     0     0     0     0
0     0     0     0     1     0
0     0     0     1     0     0
0     0     0     1     1     0
0     0     1     0     0     0
0     0     1     0     1     0
0     0     1     1     0     0
0     0     1     1     1     1
0     1     0     0     0     0
0     1     0     0     1     0
0     1     0     1     0     0
0     1     0     1     1     1
0     1     1     0     0     0
0     1     1     0     1     1
0     1     1     1     0     1
0     1     1     1     1     1
1     0     0     0     0     0
1     0     0     0     1     0
1     0     0     1     0     0
1     0     0     1     1     1
1     0     1     0     0     0
1     0     1     0     1     1
1     0     1     1     0     1
1     0     1     1     1     1
1     1     0     0     0     0
1     1     0     0     1     1
1     1     0     1     0     1
1     1     0     1     1     1
1     1     1     0     0     1
1     1     1     0     1     1
1     1     1     1     0     1
1     1     1     1     1     1

FIG.35illustrates 3-input minority gate3500with non-linear input capacitors, in accordance with some embodiments. In some embodiments, 3-input majority gate3500comprises non-linear input capacitors C1nl, C2nl, and C3nlthat receive digital signals a, b, and c, respectively. Here, signal names and node names are interchangeably used. For example, 'a' refers to node 'a' or signal 'a' depending on the context of the sentence. One end or terminal of capacitor C1nlis coupled to node a while the other end of capacitor C1nlis coupled to summing node Vs. The same is true for other non-linear capacitors C2nland C3nlas shown. In some embodiments, 3-input majority gate3500comprises a driver circuitry3501. In this example, driver circuitry3501is an inverter. In other embodiments, other types of driver circuitries can be used such as NAND gate, NOR gate, multiplexer, buffer, and other logic gates. The majority function is performed at summing node Vs as Majority(a,b,c). In this example, since driver3501is an inverter, the minority function is performed at output "out" as Minority(a,b,c). In some embodiments, in addition to the gate capacitance of driver circuitry3501, an additional linear capacitor CL is coupled to summing node Vs and ground as shown. In some embodiments, this linear capacitor CL is a non-ferroelectric capacitor. In some embodiments, the non-ferroelectric capacitor includes one of: dielectric capacitor, para-electric capacitor, or non-linear dielectric capacitor. A dielectric capacitor comprises first and second metal plates with a dielectric between them. Examples of such dielectrics are: HfO, ABO3 perovskites, nitrides, oxy-fluorides, oxides, etc. A para-electric capacitor comprises first and second metal plates with a para-electric material between them. In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric materials to make paraelectric material.
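For reference, Table 6 can be regenerated with a short behavioral model of the N-input majority function performed at node cn (a sketch of the logic function only, not of the circuit):

```python
from itertools import product

def majority(*bits: int) -> int:
    """Behavioral N-input majority: 1 when more than half the inputs are 1."""
    return int(sum(bits) > len(bits) / 2)

# Regenerate Table 6 for the 5-input case.
for vin in product((0, 1), repeat=5):
    print(*vin, majority(*vin))
```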
Examples of room temperature paraelectric material include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, and PMN-PT based relaxor ferroelectrics. A non-linear dielectric capacitor comprises first and second metal plates with a non-linear dielectric material between them. The range for the dielectric constant is 1.2 to 10000. The capacitor CL can be implemented as MIM (metal-insulator-metal) capacitor technology, transistor gate capacitor, or a hybrid of metal capacitors or transistor capacitor. In some embodiments, the non-linear input capacitors C1nl, C2nl, and C3nlcomprise non-linear polar material. In some embodiments, the non-linear polar material includes one of: ferroelectric (FE) material, para-electric material, relaxor ferroelectric, or non-linear dielectric. In various embodiments, para-electric material is the same as FE material but with chemical doping of the active ferroelectric ion by an ion with no polar distortion. In some cases, the non-polar ions are non-s orbital ions formed with p, d, f external orbitals. In some embodiments, non-linear dielectric materials are the same as para-electric materials, relaxors, and dipolar glasses. In some embodiments, f-orbital materials (e.g., lanthanides) are doped to the ferroelectric material to make paraelectric material. Examples of room temperature paraelectric material include: SrTiO3, Ba(x)Sr(y)TiO3 (where x is −0.5, and y is 0.95), HfZrO2, Hf—Si—O, La-substituted PbTiO3, and PMN-PT based relaxor ferroelectrics. In various embodiments, the FE material can be any suitable low voltage FE material that allows the FE material to switch its state by a low voltage (e.g., 100 mV). In some embodiments, the FE material comprises a perovskite of the type ABO3, where 'A' and 'B' are two cations of different sizes, and 'O' is oxygen, which is an anion that bonds to both the cations. Generally, the size of the A atoms is larger than the size of the B atoms. In some embodiments, the perovskite can be doped (e.g., by La or Lanthanides). Perovskites can be suitably doped to achieve a spontaneous distortion in a range of 0.3 to 2%. For example, for chemically substituted lead titanate such as Zr in the Ti site, or La or Nb in the Ti site, the concentration of these substitutes is such that it achieves the spontaneous distortion in the range of 0.3 to 2%. For chemically substituted BiFeO3, BiCrO3, BiCoO3 class of materials, La or rare earth substitution into the Bi site can tune the spontaneous distortion. In some embodiments, the perovskite includes one of: BaTiO3, PbTiO3, KNbO3, or NaTaO3. Threshold in the FE material has a highly non-linear transfer function in the polarization vs. voltage response. The threshold is related to: a) non-linearity of the switching transfer function; and b) the squareness of the FE switching. The non-linearity of the switching transfer function is the width of the derivative of the polarization vs. voltage plot. The squareness is defined by the ratio of the remnant polarization to the saturation polarization; perfect squareness will show a value of 1. The squareness of the FE switching can be suitably manipulated with chemical substitution. For example, in PbTiO3 a P-E (polarization-electric field) square loop can be modified by La or Nb substitution to create an S-shaped loop.
The shape can be systematically tuned to ultimately yield a non-linear dielectric. The squareness of the FE switching can also be changed by the granularity of the FE layer. A perfect epitaxial, single crystalline FE layer will show higher squareness (e.g., ratio is closer to 1) compared to a poly crystalline FE. This perfect epitaxial growth can be accomplished using lattice matched bottom and top electrodes. In one example, BiFeO3 (BFO) can be epitaxially synthesized using a lattice matched SrRuO3 bottom electrode yielding P-E loops that are square. Progressive doping with La will reduce the squareness. In some embodiments, the FE material is contacted with a conductive metal oxide that includes one of the conducting perovskite metallic oxides exemplified by: La—Sr—CoO3, SrRuO3, La—Sr—MnO3, YBa2Cu3O7, Bi2Sr2CaCu2O8, LaNiO3, and ReO3. In some embodiments, the FE material comprises a stack of layers including low voltage FE material between (or sandwiched between) conductive oxides. In various embodiments, when the FE material is a perovskite, the conductive oxides are of the type AA′BB′O3. A′ is a dopant for atomic site A; it can be an element from the Lanthanides series. B′ is a dopant for atomic site B; it can be an element from the transition metal elements, especially Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, or Zn. A′ may have the same valency as site A, with a different ferroelectric polarizability. In some embodiments, the FE material comprises hexagonal ferroelectrics of the type h-RMnO3, where R is a rare earth element such as: cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), and yttrium (Y). The ferroelectric phase is characterized by a buckling of the layered MnO5 polyhedra, accompanied by displacements of the Y ions, which lead to a net electric polarization. In some embodiments, hexagonal FE includes one of: YMnO3 or LuFeO3. In various embodiments, when the FE material comprises hexagonal ferroelectrics, the conductive oxides adjacent to the FE material are of A2O3 (e.g., In2O3, Fe2O3) and AB2O3 type, where 'A' is a rare earth element and B is Mn. In some embodiments, the FE material comprises improper FE material. An improper ferroelectric is a ferroelectric where the primary order parameter is an order mechanism such as strain or buckling of the atomic order. Examples of improper FE material are the LuFeO3 class of materials or super lattices of ferroelectric and paraelectric materials, e.g., PbTiO3 (PTO) and SnTiO3 (STO), respectively, and LaAlO3 (LAO) and STO, respectively. For example, a super lattice of [PTO/STO]n or [LAO/STO]n, where 'n' is between 1 and 100. While various embodiments here are described with reference to ferroelectric material for storing the charge state, the embodiments are also applicable for paraelectric material. For example, the capacitor of various embodiments can be formed using paraelectric material instead of ferroelectric material. In some embodiments, the FE material includes one of: Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides or their alloyed oxides. In some embodiments, FE material includes one of: Al(1-x)Sc(x)N, Ga(1-x)Sc(x)N, Al(1-x)Y(x)N or Al(1-x-y)Mg(x)Nb(y)N, y doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein 'x' is a fraction.
In some embodiments, the FE material includes Bismuth ferrite (BFO), lead zirconate titanate (PZT), BFO with doping material, or PZT with doping material, wherein the doping material is one of Nb or La; and relaxor ferroelectrics such as PMN-PT. In some embodiments, the FE material includes Bismuth ferrite (BFO), or BFO with a doping material, wherein the doping material is one of Lanthanum, or any element from the lanthanide series of the periodic table. In some embodiments, the FE material includes lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb. In some embodiments, the FE material includes a relaxor ferroelectric including one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST). In some embodiments, the FE material includes Hafnium oxides of the form Hf1-x Ex Oy, where E can be Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y. In some embodiments, the FE material includes Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate. In some embodiments, the FE material comprises multiple layers. For example, alternating layers of [Bi2O2]2+, and pseudo-perovskite blocks (Bi4Ti3O12 and related Aurivillius phases), with perovskite layers that are n octahedral layers in thickness can be used. In some embodiments, the FE material comprises organic material, for example, Polyvinylidene fluoride or polyvinylidene difluoride (PVDF). The FE material is between two electrodes. These electrodes are conducting electrodes. In some embodiments, the electrodes are perovskite templated conductors. In such a templated structure, a thin layer (e.g., approximately 10 nm) of a perovskite conductor (such as SrRuO3) is coated on top of IrO2, RuO2, PdO2, or PtO2 (which have a non-perovskite structure but higher conductivity) to provide a seed or template for the growth of pure perovskite ferroelectric at low temperatures. In some embodiments, when the ferroelectric comprises hexagonal ferroelectric material, the electrodes can have hexagonal metals, spinels, or cubic metals. Examples of hexagonal metals include: PtCoO2, PdCoO2, and other delafossite structured hexagonal metallic oxides such as Al-doped ZnO. Examples of spinels include Fe3O4 and LiV2O4. Examples of cubic metals include Indium Tin Oxide (ITO) such as Sn-doped In2O3. The majority function is performed at the summing node Vs, and the resulting voltage is projected on to the capacitance of driver circuitry3501. For example, the majority function of the currents (Ia, Ib, and Ic) on node Vs results in a resultant current that charges the gate capacitance of driver circuitry3501. Table 7 illustrates the majority function f(Majority a, b, c).

TABLE 7

a  b  c  Vs (f(Majority a, b, c))
0  0  0  0
0  0  1  0
0  1  0  0
0  1  1  1
1  0  0  0
1  0  1  1
1  1  0  1
1  1  1  1

The charge developed on node Vs produces a voltage and current that is the output of the majority gate3500. Any suitable driver3501can drive this output. For example, a non-FE logic, FE logic, CMOS logic, BJT logic, etc. can be used to drive the output to a downstream logic. Examples of the drivers include inverters, buffers, NAND gates, NOR gates, XOR gates, amplifiers, comparators, digital-to-analog converters, analog-to-digital converters, multiplexers, etc.
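A rough first-order model of the charge summation at node Vs: with equal input capacitors, Vs settles near the average of the driven input voltages, and the driver circuitry thresholds it. This sketch ignores the non-linear switching charge of the polar capacitors; it is only meant to show why summation at Vs yields the majority value:

```python
def summing_node_voltage(vins):
    """Capacitor-divider approximation: Vs is near the mean input voltage."""
    return sum(vins) / len(vins)

def majority_out(vins, vdd=1.0):
    """Threshold Vs at Vdd/2, as the driver circuitry effectively does."""
    return int(summing_node_voltage(vins) > vdd / 2)

# Reproduces Table 7: any two (or three) high inputs pull Vs above Vdd/2.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            vs = summing_node_voltage([a, b, c])
            print(a, b, c, f"Vs={vs:.2f}", "->", majority_out([a, b, c]))
```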
WhileFIG.35illustrates a 3-input majority gate, the same concept can be extended to more than 3 inputs to make an N-input majority gate, where N is greater than 2. In various embodiments, 'N' is an odd number. For example, a 5-input majority gate is like the 3-input majority gate3500but for additional inputs 'd' and 'e'. These inputs can come from the same drivers or from different drivers. In some embodiments, the 3-input majority gate can be configured as a fast inverter with a much faster propagation delay compared to a similar sized (in terms of area footprint) CMOS inverter. This is particularly useful when the inputs have a significantly slower slope compared to the propagation delay through the non-linear input capacitors. One way to configure the 3-input majority gate as an inverter is to set one input to a logic high (e.g., b=1) and set another input to a logic low (e.g., c=0). The third input is the driving input which is to be inverted. The inversion will be at the Vs node. The same technique can also be applied to an N-input majority gate, where 'N' is 1 or any other odd number. In an N-input majority gate, (N−1)/2 inputs are set to '1' and (N−1)/2 inputs are set to '0', and one input is used to decide the inversion function. It will be appreciated that while the various embodiments are described as a majority gate, the same concepts are applicable to a minority gate. In a minority gate the driving circuitry is an inverting circuitry coupled to the summing node Vs. The minority function is seen at the output of the inverting circuitry. In some embodiments, a (2N−1)-input majority gate can operate as an N-input AND gate where (N−1) inputs of the majority gate are set to zero. The AND function will be seen at the summing node Vs. Similarly, N-input NAND, OR, NOR gates can be realized. In various embodiments, the summing node Vs is driven by a driver circuitry (e.g., inverter, buffer, NAND gate, AND gate, OR gate, NOR gate, or any other logic circuitry). However, driver circuitry3501can be replaced with another majority or minority gate. In one such embodiment, the storage node Vs is directly coupled to a non-linear capacitor of another majority or minority gate. Any logic function ƒ(x1, x2, . . . , xn) can be represented by two levels of logic as given by the min-term expansion: ƒ(x1, x2, . . . , xn) = ∨C1, C2, . . . , Cn [ƒ(C1, C2, . . . , Cn) ∧ x1^C1 ∧ x2^C2 ∧ x3^C3 . . . ∧ xn^Cn], where Ci is either 0 or 1. When Ci is 1, xi^Ci = xi (the input is used in its original form). When Ci is 0, xi^Ci = x̄i (the input is used in its inverted form). The first level of logic is represented by at most 2^n AND gates (∧), one for each of the 2^n possible combinations of 0 and 1 for C1, C2, . . . , Cn. The second level of logic is represented by a single OR gate (∨). Each operand of the OR gate is a representation of a row in the truth table for ƒ(x1, x2, . . . , xn). A (2N−1)-input majority gate can represent an N-input AND gate, by tying (N−1) of the majority gate's inputs to a ground level. Similarly, a (2N−1)-input majority gate can represent an N-input OR gate, by tying (N−1) of the majority gate's inputs to a supply level (Vdd). Since a majority gate can represent AND and OR gates, and the inputs to the AND and OR gates are either original or inverted forms of the input digital signals, any logic function can be represented by majority gates and inverters only, in accordance with some embodiments. FIG.36illustrates 3-input majority gate3600with non-linear input capacitors, in accordance with some embodiments.
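The identities above (majority-as-AND/OR and the min-term expansion) can be checked with a short sketch; the helper names below are illustrative only:

```python
from itertools import product

def majority(*bits):
    return int(sum(bits) > len(bits) / 2)

def and_n(*xs):
    # N-input AND from a (2N-1)-input majority gate with N-1 inputs tied to 0.
    return majority(*xs, *([0] * (len(xs) - 1)))

def or_n(*xs):
    # N-input OR from a (2N-1)-input majority gate with N-1 inputs tied to 1.
    return majority(*xs, *([1] * (len(xs) - 1)))

def minterm_expansion(f, n):
    """g(x) = OR over all C with f(C)=1 of AND_i (x_i if C_i=1 else NOT x_i)."""
    def g(*x):
        terms = []
        for c in product((0, 1), repeat=n):
            if f(*c):
                literals = [xi if ci else 1 - xi for xi, ci in zip(x, c)]
                terms.append(and_n(*literals))
        return or_n(*terms) if terms else 0
    return g

# Verify the expansion reproduces an arbitrary function, e.g., 3-input XOR.
xor3 = lambda a, b, c: a ^ b ^ c
g = minterm_expansion(xor3, 3)
assert all(g(*x) == xor3(*x) for x in product((0, 1), repeat=3))
print("min-term expansion via majority-based AND/OR verified")
```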
In some embodiments, the summing node Vs is not coupled to a CMOS driver (e.g., buffer, inverter, NAND gate, or any other CMOS logic gate). In one example, Vs is coupled to another majority or minority gate. For instance, Vs is coupled to a terminal of another non-linear capacitor of another majority or minority gate. FIG.37illustrates 3-input majority XOR gate3700with non-linear input capacitors, in accordance with some embodiments. XOR gate3700is a 2-input XOR gate that performs the XOR function on inputs a and b. In various embodiments, XOR gate3700comprises non-linear input capacitors C1nl, C2nl, C3nl, C4nl, C5nl, and C6nl, inverter3703, and non-linear output capacitors C7nl, C8nl, and C9nl. Capacitors C1nl, C2nl, and C3nlreceive inputs a, b, and 0, and perform a majority AND function on node Vs1. Capacitors C4nl, C5nl, and C6nlreceive inputs a, b, and Vdd, and perform a majority OR function on node Vs2. Inverter3703inverts the AND output on node Vs1 to produce the NAND output on node out1, which is received by output capacitor C7nl. The OR output on node Vs2 is received by capacitor C8nl. Capacitor C9nlreceives a predetermined input 0 in this example. The majority function on node out3 is Majority(out1, out2, 0), which is an AND of out1 and out2. In some embodiments, instead of driving voltage on node Vs2 to out2, buffer3701is used between nodes Vs2 and out2. In some embodiments, instead of driving output out3 as the XOR output, buffer3702is used to output the XOR output on node out. In some embodiments, Vs2 is directly connected to node out2. In some embodiments, out3 is directly connected to node out. In some embodiments, linear or non-linear capacitors CL1, CL2, and CL3are added on the summing nodes Vs1, Vs2, and out3, respectively. By swapping the voltages '0' and 'Vdd', different logic functions can be realized, in accordance with various embodiments. FIG.38illustrates a system-on-chip3800having logic which is synthesized using the CAD tool of various embodiments. In some embodiments, SOC3800comprises memory3801having static random-access memory (SRAM) or FE based random-access memory (FE-RAM), or any other suitable memory. The memory can be non-volatile (NV) or volatile memory. Memory3801may also comprise logic3803to control memory3802. For example, write and read drivers are part of logic3803. These drivers and other logic are implemented using the majority or threshold gates of various embodiments. The logic can comprise majority or threshold gates and traditional logic (e.g., CMOS based NAND, NOR etc.). SOC3800further comprises a memory I/O (input-output) interface3804. The interface may be a double-data rate (DDR) compliant interface or any other suitable interface to communicate with a processor. Processor3805of SOC3800can be a single-core or multi-core processor. Processor3805can be a general-purpose processor (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), or an Application Specific Integrated Circuit (ASIC) processor. In some embodiments, processor3805is an artificial intelligence (AI) processor (e.g., a dedicated AI processor, a processor circuitry, a graphics processor configured as an AI processor). In various embodiments, processor3805(or processor circuitry3805) is configured to execute one or more instructions. AI is a broad area of hardware and software computations where data is analyzed, classified, and then a decision is made regarding the data. For example, a model describing classification of data for a certain property or properties is trained over time with large amounts of data.
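A behavioral sketch of the FIG. 37 topology (ignoring buffers 3701/3702, which only restore drive): Vs1 = Majority(a, b, 0) = AND(a, b); inverter 3703 gives out1 = NAND(a, b); Vs2 = Majority(a, b, Vdd) = OR(a, b); and Majority(out1, out2, 0) = AND(out1, out2) = XOR(a, b):

```python
def majority3(a, b, c):
    return int(a + b + c >= 2)

def xor_from_majority(a, b):
    out1 = 1 - majority3(a, b, 0)    # NAND(a, b): inverter 3703 on node Vs1
    out2 = majority3(a, b, 1)        # OR(a, b) on node Vs2 (third input = Vdd)
    return majority3(out1, out2, 0)  # AND(out1, out2) on node out3

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_from_majority(a, b))
        assert xor_from_majority(a, b) == (a ^ b)
```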
The process of training a model requires large amounts of data and processing power to analyze the data. When a model is trained, weights or weight factors are modified based on outputs of the model. Once weights for a model are computed to a high confidence level (e.g., 95% or more) by repeatedly analyzing data and modifying weights to get the expected results, the model is deemed "trained." This trained model with fixed weights is then used to make decisions about new data. Training a model and then applying the trained model to new data is a hardware-intensive activity. In some embodiments, AI processor3805has reduced latency of computing the training model and using the training model, which reduces the power consumption of such AI processor systems. Processor3805may be coupled to a number of other chip-lets that can be on the same die as SOC3800or on separate dies. These chip-lets include connectivity circuitry3806, I/O controller3807, power management3808, display system3809, and peripheral connectivity3810. Connectivity3806represents hardware devices and software components for communicating with other devices. Connectivity3806may support various connectivity circuitries and standards. For example, connectivity3806may support GSM (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, 3rd Generation Partnership Project (3GPP) Universal Mobile Telecommunications Systems (UMTS) system or variations or derivatives, 3GPP Long-Term Evolution (LTE) system or variations or derivatives, 3GPP LTE-Advanced (LTE-A) system or variations or derivatives, Fifth Generation (5G) wireless system or variations or derivatives, 5G mobile networks system or variations or derivatives, 5G New Radio (NR) system or variations or derivatives, or other cellular service standards. In some embodiments, connectivity3806may support non-cellular standards such as WiFi. I/O controller3807represents hardware devices and software components related to interaction with a user. I/O controller3807is operable to manage hardware that is part of an audio subsystem and/or display subsystem. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of SOC3800. In some embodiments, I/O controller3807illustrates a connection point for additional devices that connect to SOC3800through which a user might interact with the system. For example, devices that can be attached to the SOC3800might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications such as card readers or other devices. Power management3808represents hardware or software that performs power management operations, e.g., based at least in part on receiving measurements from power measurement circuitries, temperature measurement circuitries, charge level of battery, and/or any other appropriate information that may be used for power management. By using majority and threshold gates of various embodiments, non-volatility is achieved at the output of these logic. Power management3808may accordingly put such logic into a low power state without the worry of losing data. Power management may select a power state according to the Advanced Configuration and Power Interface (ACPI) specification for one or all components of SOC3800.
Display system3809represents hardware (e.g., display devices) and software (e.g., drivers) components that provide a visual and/or tactile display for a user to interact with the processor3805. In some embodiments, display system3809includes a touch screen (or touch pad) device that provides both output and input to a user. Display system3809may include a display interface, which includes the particular screen or hardware device used to provide a display to a user. In some embodiments, the display interface includes logic separate from processor3805to perform at least some processing related to the display. Peripheral connectivity3810may represent hardware devices and/or software devices for connecting to peripheral devices such as printers, chargers, cameras, etc. Peripheral connectivity3810may support communication protocols, e.g., PCIe (Peripheral Component Interconnect Express), USB (Universal Serial Bus), Thunderbolt, High Definition Multimedia Interface (HDMI), Firewire, etc. Here, multiple non-silicon semiconductor material layers may be stacked within a single fin structure. The multiple non-silicon semiconductor material layers may include one or more "P-type" layers that are suitable (e.g., offer higher hole mobility than silicon) for P-type transistors. The multiple non-silicon semiconductor material layers may further include one or more "N-type" layers that are suitable (e.g., offer higher electron mobility than silicon) for N-type transistors. The multiple non-silicon semiconductor material layers may further include one or more intervening layers separating the N-type from the P-type layers. The intervening layers may be at least partially sacrificial, for example to allow one or more of a gate, source, or drain to wrap completely around a channel region of one or more of the N-type and P-type transistors. The multiple non-silicon semiconductor material layers may be fabricated, at least in part, with self-aligned techniques such that a stacked CMOS device may include both a high-mobility N-type and P-type transistor with a footprint of a single FET (field effect transistor). It is pointed out that those elements of the figures having the same reference numbers (or names) as the elements of any other figure can operate or function in any manner similar to that described, but are not limited to such. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. If the specification states a component, feature, structure, or characteristic "may," "might," or "could" be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the elements. If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional elements. The term "device" may generally refer to an apparatus according to the context of the usage of that term. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and/or passive elements, etc.
Generally, a device is a three-dimensional structure with a plane along the x-y direction and a height along the z direction of an x-y-z Cartesian coordinate system. The plane of the device may also be the plane of an apparatus which comprises the device. Throughout the specification, and in the claims, the term "connected" means a direct connection, such as electrical, mechanical, or magnetic connection between the things that are connected, without any intermediary devices. The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected or an indirect connection, through one or more passive or active intermediary devices. The term "adjacent" here generally refers to a position of a thing being next to (e.g., immediately next to or close to with one or more things between them) or adjoining another thing (e.g., abutting it). The term "circuit" or "module" may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of "a," "an," and "the" includes plural references. The meaning of "in" includes "in" and "on." The term "scaling" generally refers to converting a design (schematic and layout) from one process technology to another process technology and subsequently being reduced in layout area. The term "scaling" generally also refers to downsizing layout and devices within the same technology node. The term "scaling" may also refer to adjusting (e.g., slowing down or speeding up, i.e., scaling down or scaling up, respectively) a signal frequency relative to another parameter, for example, power supply level. The terms "substantially," "close," "approximately," "near," and "about" generally refer to being within +/−10% of a target value. For example, unless otherwise specified in the explicit context of their use, the terms "substantially equal," "about equal" and "approximately equal" mean that there is no more than incidental variation between things so described. In the art, such variation is typically no more than +/−10% of a predetermined target value. Unless otherwise specified, the use of the ordinal adjectives "first," "second," and "third," etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. For the purposes of the present disclosure, phrases "A and/or B" and "A or B" mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. For example, the terms "over," "under," "front side," "back side," "top," "bottom," "over," "under," and "on" as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures or materials within a device, where such physical relationships are noteworthy.
These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis and therefore may be relative to an orientation of a device. Hence, a first material "over" a second material in the context of a figure provided herein may also be "under" the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two materials or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies. The term "between" may be employed in the context of the z-axis, x-axis or y-axis of a device. A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials. A material "between" two other materials may therefore be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material. A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices. Here, the term "backend" generally refers to a section of a die which is opposite of a "frontend" and where an IC (integrated circuit) package couples to IC die bumps. For example, high-level metal layers (e.g., metal layer6and above in a ten-metal stack die) and corresponding vias that are closer to a die package are considered part of the backend of the die. Conversely, the term "frontend" generally refers to a section of the die that includes the active region (e.g., where transistors are fabricated) and low-level metal layers and corresponding vias that are closer to the active region (e.g., metal layer5and below in the ten-metal stack die example). Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive. While the disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations of such embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. The embodiments of the disclosure are intended to embrace all such alternatives, modifications, and variations as fall within the broad scope of the appended claims. In addition, well known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown within the presented figures, for simplicity of illustration and discussion, and so as not to obscure the disclosure.
Further, arrangements may be shown in block diagram form in order to avoid obscuring the disclosure, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the present disclosure is to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting. The following examples illustrate the various embodiments. Any one example can be combined with other examples described herein. Example 1: A machine-readable storage media having machine-readable instructions stored thereon that when executed cause one or more machines to perform a method comprising: receiving one or more input files indicative of a logic function of a logic circuit; classifying, from the one or more input files, inputs, outputs, and state elements as terminal nodes; segmenting the logic circuit into sub-circuits with the terminal nodes as input and output ports; generating a list of combinational circuits and sequential circuits by analyzing feedback paths in each of the sub-circuits; for each item in the list, performing combinational circuit synthesis on an individual sub-circuit of the sub-circuits if it is determined that the individual sub-circuit does not include a feedback path; for each item in the list, performing sequential circuit synthesis on the individual sub-circuit if it is determined that the individual sub-circuit includes a feedback path; adding synthesized outputs from performing the combinational circuit synthesis and from performing the sequential circuit synthesis to a list of synthesized circuits; and wiring circuits, to generate a synthesized circuit, in the list of synthesized circuits using the inputs and the outputs of the logic circuit. Example 2: The machine-readable storage media of example 1, wherein performing the combinational circuit synthesis comprises: iteratively breaking each sub-circuit of the sub-circuits into non-overlapping blocks; selecting an option that maximizes power, performance, and area for a block of the non-overlapping blocks; for each block of the non-overlapping blocks, synthesizing the block in view of the selected option using standard CMOS logic gates and majority or minority gates of any fan-in or fan-out, or a combination of them, wherein synthesizing the block results in a synthesized block; adding the synthesized block to a list of synthesized blocks; and combining synthesized blocks from the list of synthesized blocks, to hierarchically create larger cells and a complete circuit, wherein a larger cell of the larger cells is larger than a block of the non-overlapping blocks. Example 3: The machine-readable storage media of example 1, wherein performing the combinational circuit synthesis comprises: for each sub-circuit of the sub-circuits, performing majority inverter graph (MIG) synthesis to generate a MIG with connected nodes of majority gates and inverter gates; and heuristically pattern matching the MIG, with a standard cell library comprising logic gates, to generate a synthesized circuit. Example 4: The machine-readable storage media of example 3, wherein the logic gates include an n-bit adder and an n-bit multiplier.
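A high-level sketch of the Example 1 flow, using hypothetical data structures (a networkx digraph standing in for the netlist; the actual synthesis and wiring steps are stubbed out):

```python
import networkx as nx  # assumed helper library for graph handling

def segment_and_dispatch(netlist: nx.DiGraph, terminals: set):
    """Segment at terminal nodes, then route each sub-circuit to combinational
    or sequential synthesis based on whether it contains a feedback path."""
    dispatched = []
    # Removing terminal nodes (inputs, outputs, state elements) leaves the
    # sub-circuits as weakly connected components bounded by terminal ports.
    core = netlist.subgraph(n for n in netlist if n not in terminals)
    for comp in nx.weakly_connected_components(core):
        sub = netlist.subgraph(comp)
        if nx.is_directed_acyclic_graph(sub):
            dispatched.append(("combinational", sorted(comp)))
        else:
            dispatched.append(("sequential", sorted(comp)))  # feedback path
    return dispatched  # wiring of the synthesized circuits omitted here

# A toy netlist: g1 and g2 form a feedback loop between a terminal input/output.
g = nx.DiGraph([("in1", "g1"), ("g1", "g2"), ("g2", "g1"), ("g2", "out1")])
print(segment_and_dispatch(g, {"in1", "out1"}))  # [('sequential', ['g1', 'g2'])]
```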
Example 5: The machine-readable storage media of example 3, wherein heuristically pattern matching the MIG comprises: ordering the logic gates in the standard cell library from largest to smallest or smallest to largest, to generate ordered logic gates; defining a current pattern as a representation of a current standard cell in the ordered logic gates; determining whether a match exists between the current pattern and a subgraph of the MIG; and replacing the subgraph of the MIG with the current pattern if the match exists. Example 6: The machine-readable storage media of example 1, wherein performing the sequential circuit synthesis comprises: determining whether the individual sub-circuit of the sub-circuits is level-triggered; and performing level-triggered sequential synthesis on the individual sub-circuit if it is determined that the individual sub-circuit of the sub-circuits is level-triggered. Example 7: The machine-readable storage media of example 6, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist or specified as a hardware description language; introducing an auxiliary primary input to the individual sub-circuit for each feedback connection from an output of the individual sub-circuit to an input of the individual sub-circuit, if it is determined that the individual sub-circuit is a netlist or specified as a hardware description language; and performing combinational circuit synthesis on the individual sub-circuit after the auxiliary primary input is introduced. Example 8: The machine-readable storage media of example 7, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate, wherein performing level-triggered sequential synthesis comprises: feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary primary input; or feedback wiring from an output of the first majority or minority gate to an input of the first majority or minority gate. Example 9: The machine-readable storage media of example 6, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist; for each previous state in a Boolean expression for the individual sub-circuit, introducing an auxiliary input if it is determined that the individual sub-circuit is not a netlist; performing combinational circuit synthesis on the individual sub-circuit after the auxiliary input is introduced, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate; and feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary input. Example 10: The machine-readable storage media of example 1, wherein performing the sequential circuit synthesis comprises: determining whether the individual sub-circuit of the sub-circuits is pulse-triggered; and performing pulse-triggered sequential synthesis on the individual sub-circuit if it is determined that the individual sub-circuit of the sub-circuits is pulse-triggered.
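A sketch of the largest-first pattern matching of Example 5, with a hypothetical tuple representation for MIG nodes and '*'-prefixed wildcard leaves in the cell patterns (names and library contents are illustrative only):

```python
def match(pattern, node, binds):
    """Structurally match a pattern tree against a MIG subtree.
    Leaves starting with '*' are wildcards that bind whole subtrees."""
    if isinstance(pattern, str) and pattern.startswith("*"):
        binds[pattern] = node
        return True
    if isinstance(pattern, str) or isinstance(node, str):
        return pattern == node
    (ptype, pkids), (ntype, nkids) = pattern, node
    return (ptype == ntype and len(pkids) == len(nkids)
            and all(match(p, n, binds) for p, n in zip(pkids, nkids)))

def map_to_cells(node, cells):
    """cells: (name, pattern) pairs ordered from largest to smallest."""
    if isinstance(node, str):          # primary input or constant
        return node
    for name, pattern in cells:        # try the largest cells first
        binds = {}
        if match(pattern, node, binds):
            return (name, tuple(map_to_cells(v, cells) for v in binds.values()))
    raise ValueError("no matching cell")  # library must cover plain MAJ3/INV

# Hypothetical library: a fused AND2 pattern tried before generic MAJ3 and INV.
CELLS = [
    ("AND2", ("MAJ3", ("*a", "*b", "0"))),
    ("MAJ3", ("MAJ3", ("*a", "*b", "*c"))),
    ("INV",  ("INV",  ("*a",))),
]
mig = ("INV", (("MAJ3", ("x", "y", "0")),))   # an inverted AND2, i.e., NAND2
print(map_to_cells(mig, CELLS))               # ('INV', (('AND2', ('x', 'y')),))
```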
Example 11: The machine-readable storage media of example 10, wherein performing the pulse-triggered sequential synthesis comprises: performing level-triggered sequential synthesis to generate a latch circuit; duplicating the latch circuit to generate a duplicate latch circuit; placing the duplicate latch circuit in back-to-back configuration with the latch circuit; and wiring a first clock to the latch circuit and a second clock to the duplicate latch circuit, wherein the second clock is an inverse of the first clock. Example 12: The machine-readable storage media of example 11, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist; introducing an auxiliary primary input to the individual sub-circuit for each feedback connection from an output of the individual sub-circuit to an input of the individual sub-circuit, if it is determined that the individual sub-circuit is a netlist; and performing combinational circuit synthesis on the individual sub-circuit after the auxiliary primary input is introduced. Example 13: The machine-readable storage media of example 12, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate, wherein performing level-triggered sequential synthesis comprises: feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary primary input. Example 14: The machine-readable storage media of example 11, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist; for each previous state in a Boolean expression for the individual sub-circuit, introducing an auxiliary input if it is determined that the individual sub-circuit is not a netlist; performing combinational circuit synthesis on the individual sub-circuit after the auxiliary input is introduced, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate; and feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary input. Example 15: The machine-readable storage media of example 1, wherein performing the sequential circuit synthesis comprises: determining whether the individual sub-circuit of the sub-circuits is edge-triggered; and performing edge-triggered sequential synthesis on the individual sub-circuit if it is determined that the individual sub-circuit of the sub-circuits is edge-triggered.
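Example 11's construction can be checked behaviorally: duplicate a level-triggered latch, place the copies back-to-back, and clock the duplicate with the inverted clock; the pair then updates only on a clock edge. A minimal sketch (the latch model is an assumed abstraction, not the majority-gate implementation):

```python
class LevelLatch:
    """Transparent when its clock input is 1; holds its state when 0."""
    def __init__(self):
        self.q = 0
    def step(self, d, clk):
        if clk:
            self.q = d
        return self.q

master, slave = LevelLatch(), LevelLatch()   # latch and its duplicate

def flipflop_step(d, clk):
    qm = master.step(d, clk)        # first clock drives the latch circuit
    return slave.step(qm, 1 - clk)  # second clock is the inverse of the first

# Changes on D while clk is high are absorbed by the master; the slave only
# updates when clk falls, giving falling-edge-triggered behavior.
for d, clk in [(1, 1), (0, 1), (0, 0), (1, 0), (1, 1), (1, 0)]:
    print(f"d={d} clk={clk} q={flipflop_step(d, clk)}")
```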
Example 16: The machine-readable storage media of example 15, wherein performing the edge-triggered sequential synthesis comprises: adding an auxiliary input to the individual sub-circuit, wherein the auxiliary input represents a delayed clock signal; initializing an empty list of synthesized circuits with a plurality of majority or minority gates with different fan-in; identifying a majority or minority gate, from the plurality of majority or minority gates, having a largest fan-in; iteratively performing level-triggered sequential synthesis on the individual sub-circuit after the auxiliary input is added and using the majority or minority gate starting with the largest fan-in and then using a next largest fan-in; for each circuit output obtained after performing level-triggered sequential synthesis, adding wire delay to the delayed clock signal to generate a wire delayed clock; and for each circuit output obtained after performing level-triggered sequential synthesis, connecting the wire delayed clock to a delay element to generate a plurality of synthesized circuits. Example 17: The machine-readable storage media of example 16, wherein performing the edge-triggered sequential synthesis comprises: checking for oscillation in the plurality of synthesized circuits; and identifying a synthesized circuit, from the plurality of synthesized circuits, that meets power, performance, and area objectives. Example 18: The machine-readable storage media of example 16, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist; introducing an auxiliary primary input to the individual sub-circuit for each feedback connection from an output of the individual sub-circuit to an input of the individual sub-circuit, if it is determined that the individual sub-circuit is a netlist; and performing combinational circuit synthesis on the individual sub-circuit after the auxiliary primary input is introduced. Example 19: The machine-readable storage media of example 18, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate, wherein performing level-triggered sequential synthesis comprises: feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary primary input. Example 20: The machine-readable storage media of example 16, wherein performing the level-triggered sequential synthesis comprises: determining whether the individual sub-circuit is a netlist; for each previous state in a Boolean expression for the individual sub-circuit, introducing an auxiliary input if it is determined that the individual sub-circuit is not a netlist; performing combinational circuit synthesis on the individual sub-circuit after the auxiliary input is introduced, wherein performing the combinational circuit synthesis comprises introducing a first majority or minority gate and a second majority or minority gate; and feedback wiring from an output of the first majority or minority gate to the second majority or minority gate receiving input from the auxiliary input.
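One common use of the auxiliary delayed clock of Example 16 is to derive a short pulse at a clock edge (pulse = clk AND NOT(delayed clk)), which can make a level-triggered latch behave as if edge-triggered; this reading is an assumption for illustration, with the delay modeled as a one-sample shift:

```python
def pulse_from_clock(clk_samples, delay=1):
    """Generate an edge pulse: clk AND NOT(clk delayed by `delay` samples)."""
    delayed = [0] * delay + clk_samples[:-delay]   # wire/delay element
    return [c & (1 - d) for c, d in zip(clk_samples, delayed)]

clk = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
print(pulse_from_clock(clk))   # [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
```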
Example 21: The machine-readable storage media of example 3, wherein performing the majority inverter graph (MIG) synthesis, to generate a MIG with connected nodes of majority gates and inverter gates, comprises: computing a maximum fan-in for majority or minority gates while ignoring fan-out constraints, wherein a maximum fan-in for a majority or minority gate is equal to a maximum number of input bits of the majority or minority gate; performing logic initialization for each sub-circuit of the sub-circuits; determining whether a number of input bits of a sub-circuit of the sub-circuits is less than or equal to K; if the number of input bits is less than or equal to K, applying optimal synthesis to the sub-circuit using one or more of a truth table of the sub-circuit, a binary integer programming (BIP), or a Boolean satisfiability; and performing inverter minimization in response to applying the optimal synthesis to generate a synthesized MIG circuit. Example 22: The machine-readable storage media of example 21, wherein K is less than 10. Example 23: The machine-readable storage media of example 21, wherein performing the majority inverter graph (MIG) synthesis comprises: if the number of input bits is greater than K, determining whether the number of input bits of a sub-circuit of the sub-circuits is less than or equal to H, where H is greater than K; and performing inverter minimization on the sub-circuit to generate the synthesized MIG circuit. Example 24: The machine-readable storage media of example 23, wherein H is 20 or more. Example 25: The machine-readable storage media of example 23, wherein performing the majority inverter graph (MIG) synthesis comprises: if the number of input bits is greater than H, independently applying a plurality of hierarchical synthesis to the sub-circuit, wherein results from the plurality of hierarchical synthesis are glued together to generate the synthesized MIG circuit. Example 26: The machine-readable storage media of example 21, wherein performing the logic initialization comprises: determining whether the number of input bits of the sub-circuit of the sub-circuits is less than or equal to K; determining whether logic of the sub-circuit is specified as a truth table; outputting the truth table if it is determined that the logic of the sub-circuit is specified as a truth table; and simulating or determining the truth table if it is determined that the logic of the sub-circuit is not specified as a truth table. Example 27: The machine-readable storage media of example 21, wherein performing the logic initialization comprises: determining whether the logic of the sub-circuit is specified as a hardware description language or a netlist if the number of input bits of the sub-circuit of the sub-circuits is greater than K; determining whether logic of the sub-circuit is specified as a truth table if it is determined that the logic of the sub-circuit is specified as a hardware description language or a netlist; and mapping the netlist to a MIG using majority or minority gates from the standard cell library if it is determined that the logic is not specified as a truth table.
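The size-based dispatch of Examples 21 through 25 reduces to threshold tests on the sub-circuit's input-bit count; in the sketch below, K and H are assumed values consistent with examples 22 and 24, and the three branches stand in for the actual synthesis routines:

```python
K, H = 8, 20   # assumed thresholds (example 22: K < 10; example 24: H >= 20)

def synthesize_mig(num_input_bits: int) -> str:
    """Pick a MIG synthesis strategy based on sub-circuit input width."""
    if num_input_bits <= K:
        return "optimal synthesis (truth table / BIP / Boolean satisfiability)"
    elif num_input_bits <= H:
        return "heuristic MIG synthesis with inverter minimization"
    else:
        return "independent hierarchical syntheses, results glued together"

for n in (4, 15, 64):
    print(n, "->", synthesize_mig(n))
```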
Example 28: The machine-readable storage media of example 21, wherein performing the logic initialization comprises: determining whether the logic of the sub-circuit is specified as a hardware description language or a netlist if the number of input bits of the sub-circuit of the sub-circuits is greater than K; determining whether logic of the sub-circuit is specified as a truth table if it is determined that the logic of the sub-circuit is specified as a hardware description language or a netlist; applying logic synthesis on the sub-circuit to obtain a netlist if it is determined that the logic is specified as a truth table; and mapping the netlist to a MIG using majority or minority gates from the standard cell library if it is determined that the logic is not specified as a truth table. Example 29: The machine-readable storage media of example 27, wherein performing the logic initialization comprises: determining whether the logic of the sub-circuit is specified as a graph of higher-level blocks, if it is determined that the logic of the sub-circuit is not specified as a hardware description language or a netlist; and mapping the graph of the higher-level blocks to a MIG using the majority or minority gates from the standard cell library if it is determined that the logic is specified as a graph. Example 30: The machine-readable storage media of example 29, wherein performing the logic initialization comprises: determining whether the logic of the sub-circuit is specified as a truth table, if it is determined that the logic is not specified as a graph; parsing and simulating for a truth table if it is determined that the logic of the sub-circuit is not specified as a truth table; and generating a MIG using wide-input majority or minority gates by applying the truth table. Example 31: The machine-readable storage media of example 29, wherein generating a MIG using wide-input majority or minority gates comprises: generating a list of product terms of the logic of the sub-circuit; ordering the product terms in descending or ascending order of literal frequency of the product terms, to generate a list of ordered product terms; determining whether there is delay minimization for the list of ordered product terms; and applying logarithmic breakdown and majority gate synthesis of each product term in the list of ordered product terms if delay minimization is possible. Example 32: The machine-readable storage media of example 31, wherein generating a MIG using wide-input majority or minority gates comprises: applying linear breakdown and majority gate synthesis of each product term in the list of ordered product terms if delay minimization is not possible. Example 33: The machine-readable storage media of example 31, wherein generating a MIG using wide-input majority or minority gates comprises: generating a list of sum terms of the logic of the sub-circuit; tallying the product terms across the list of sum terms; ordering the list of sum terms in descending or ascending order of product term frequency, to generate a list of ordered sum terms; determining whether there is delay minimization for the list of ordered sum terms; and applying logarithmic breakdown and majority gate synthesis of each sum term in the list of ordered sum terms if delay minimization is possible, to generate the MIG.
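The breakdown choice in Examples 31 and 32 trades depth for structure: a wide product term decomposed into majority-AND gates (MAJ3 with one input tied to 0) as a balanced tree has logarithmic depth (minimizing delay), while a chain has linear depth. A sketch with literals represented as strings (the node encoding is illustrative only):

```python
def maj_and2(x, y):
    return ("MAJ3", (x, y, "0"))    # 2-input AND realized as Majority(x, y, 0)

def logarithmic_breakdown(literals):
    """Balanced tree of majority-AND gates: depth ~ log2(n)."""
    level = list(literals)
    while len(level) > 1:
        nxt = [maj_and2(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])   # carry the odd literal up one level
        level = nxt
    return level[0]

def linear_breakdown(literals):
    """Chain of majority-AND gates: depth ~ n."""
    acc = literals[0]
    for lit in literals[1:]:
        acc = maj_and2(acc, lit)
    return acc

def depth(node):
    return 0 if isinstance(node, str) else 1 + max(depth(k) for k in node[1])

term = ["a", "b", "c", "d", "e", "f", "g", "h"]   # an 8-literal product term
print("logarithmic depth:", depth(logarithmic_breakdown(term)))  # 3
print("linear depth:     ", depth(linear_breakdown(term)))       # 7
```

The same decomposition applies to sum terms (Examples 33 and 34) with the majority-OR form, Majority(x, y, 1).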
Example 34: The machine-readable storage media of claim33, wherein generating a MIG using wide-input majority or minority gates comprises: applying linear breakdown and majority gate synthesis of each sum term in the list of ordered sum terms if delay minimization is not possible, to generate the MIG.

Example 35: The machine-readable storage media of claim31, wherein generating the list of product terms comprises: applying one or more of a Karnaugh map, the Quine-McCluskey algorithm, or the Espresso heuristic on the sub-circuit.

Example 36: A machine-readable storage media having machine-readable instructions stored thereon that when executed cause one or more machines to perform a method comprising: receiving one or more input files indicative of a logic function; generating a graph from the one or more input files; identifying inputs, state elements, and outputs from the graph; segregating the graph into subgraphs by grouping logic components between the inputs and the state elements, between the state elements, between the state elements and the outputs, and between the inputs and the outputs; determining whether a subgraph from among the subgraphs includes a feedback path; performing combinational circuit synthesis on the subgraph if it is determined that the subgraph does not include a feedback path; performing sequential circuit synthesis on the subgraph if it is determined that the subgraph includes a feedback path; and synthesizing a circuit using outputs from the combinational circuit synthesis and the sequential circuit synthesis.

Example 37: The machine-readable storage media of claim36, wherein performing combinational circuit synthesis or performing sequential circuit synthesis comprises: selecting standard CMOS logic gates and majority or minority gates of any fan-in or fan-out to synthesize the circuit.

Example 38: The machine-readable storage media of claim36, wherein the majority or minority gates include non-linear polar material.

Example 39: The machine-readable storage media of claim38, wherein the non-linear polar material includes one of: ferroelectric material, para-electric material, or non-linear dielectric.
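Example 36 routes each subgraph to combinational or sequential synthesis according to whether it contains a feedback path. One standard way to make that decision is a depth-first search for a back edge, sketched below; the graph encoding is an assumption for illustration only.

    # Sketch of the feedback-path test of Example 36: a cyclic subgraph goes to
    # sequential synthesis, an acyclic one to combinational synthesis.
    def has_feedback_path(fanout):
        # fanout: node -> list of driven nodes; True if the subgraph has a cycle
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {v: WHITE for v in fanout}

        def dfs(v):
            color[v] = GRAY
            for w in fanout.get(v, ()):
                if color.get(w, WHITE) == GRAY:     # back edge: feedback path
                    return True
                if color.get(w, WHITE) == WHITE and dfs(w):
                    return True
            color[v] = BLACK
            return False

        return any(color[v] == WHITE and dfs(v) for v in fanout)

    latch = {"d": ["g1"], "g1": ["q"], "q": ["g1"]}   # q feeds back into g1
    adder = {"a": ["s"], "b": ["s"], "s": []}          # purely feed-forward
    assert has_feedback_path(latch) and not has_feedback_path(adder)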
Example 40: The machine-readable storage media of claim39, wherein the ferroelectric material includes one of: Bismuth ferrite (BFO), BFO with a doping material wherein the doping material is one of Lanthanum, or elements from the lanthanide series of the periodic table; Lead zirconium titanate (PZT), or PZT with a doping material, wherein the doping material is one of La or Nb; relaxor ferroelectric which includes one of lead magnesium niobate (PMN), lead magnesium niobate-lead titanate (PMN-PT), lead lanthanum zirconate titanate (PLZT), lead scandium niobate (PSN), Barium Titanium-Bismuth Zinc Niobium Tantalum (BT-BZNT), or Barium Titanium-Barium Strontium Titanium (BT-BST); perovskite which includes one of: BaTiO3, PbTiO3, KNbO3, or NaTaO3; hexagonal ferroelectric which includes one of: YMnO3 or LuFeO3; hexagonal ferroelectrics of a type h-RMnO3, where R is a rare earth element which includes one of: cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), or yttrium (Y); Hafnium (Hf), Zirconium (Zr), Aluminum (Al), Silicon (Si), their oxides or their alloyed oxides; Hafnium oxides such as Hf(1-x)E(x)O(y), where E can be Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y; Al(1-x)Sc(x)N, Ga(1-x)Sc(x)N, Al(1-x)Y(x)N or Al(1-x-y)Mg(x)Nb(y)N, y doped HfO2, where x includes one of: Al, Ca, Ce, Dy, Er, Gd, Ge, La, Sc, Si, Sr, Sn, or Y, wherein ‘x’ is a fraction; Niobate type compounds LiNbO3, LiTaO3, Lithium iron Tantalum Oxy Fluoride, Barium Strontium Niobate, Sodium Barium Niobate, or Potassium strontium niobate; or improper ferroelectric which includes one of: [PTO/STO]n or [LAO/STO]n, where ‘n’ is between 1 and 100.

Example 41: The machine-readable storage media of claim36, wherein the input is in one or more forms of Verilog, truth table, Boolean expression, graph, or netlist.

Example 42: The machine-readable storage media of claim36, wherein performing combinational circuit synthesis on the subgraph comprises iteratively breaking the subgraph into smaller blocks.

Example 43: The machine-readable storage media of claim42, wherein performing combinational circuit synthesis comprises: selecting an option that maximizes power, performance, and area for the block; for each block of the smaller blocks, synthesizing the block in view of the selected option, using CMOS cells, or a combination of CMOS cells and majority or minority gate cells, wherein synthesizing the block results in a synthesized block; and combining the synthesized block, associated with each block of the smaller blocks, to hierarchically create larger cells and a complete circuit.

Example 44: The machine-readable storage media of claim36, wherein performing combinational circuit synthesis on the subgraph comprises: breaking the subgraph into blocks; selecting an option that maximizes power, performance, and area for each block of the blocks of the subgraph; performing majority-minority inverter graph (MIG) synthesis on the blocks to generate MIG subgraphs; and matching a functionality of one or more standard building block cells to sections of the MIG subgraphs that maximizes power, performance, and area of the sections of the MIG subgraphs.
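Example 44 matches the functionality of standard building block cells to sections of the MIG subgraphs. For small sections, functional matching can be done by comparing truth tables, as in the sketch below; the cell library and the integer truth-table encoding are illustrative assumptions.

    # Sketch of the functionality matching of Example 44. Truth tables are
    # packed into integers: bit i holds the output for input pattern i.
    CELL_LIBRARY = {          # hypothetical two-input building-block cells
        0b1000: "AND2",
        0b1110: "OR2",
        0b0110: "XOR2",
        0b0001: "NOR2",
    }

    def truth_table(section_fn, num_inputs):
        tt = 0
        for pattern in range(1 << num_inputs):
            bits = [(pattern >> i) & 1 for i in range(num_inputs)]
            tt |= section_fn(*bits) << pattern
        return tt

    def match_cell(section_fn, num_inputs):
        # returns the matching cell name, or None if no replacement exists
        return CELL_LIBRARY.get(truth_table(section_fn, num_inputs))

    print(match_cell(lambda a, b: a & b, 2))   # AND2
    print(match_cell(lambda a, b: a ^ b, 2))   # XOR2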
Example 45: The machine-readable storage media of claim44, wherein performing combinational circuit synthesis on the subgraph comprises: replacing the sections of the MIG subgraphs with the one or more standard building block cells if a matching functionality is determined; and combining the replaced sections of the MIG subgraphs with other sections of the MIG subgraphs to generate a complete circuit.

Example 46: The machine-readable storage media of claim44, wherein the one or more standard building block cells include CMOS cells, majority or minority gate cells, or a combination of CMOS cells and majority or minority gate cells.

Example 47: The machine-readable storage media of claim44, wherein performing combinational circuit synthesis on the subgraph comprises: prioritizing and selecting one or more larger building block cells over the one or more standard building block cells if the one or more larger building block cells match a functionality of the sections of the MIG subgraphs that maximizes power, performance, and area of the sections of the MIG subgraphs; replacing the sections of the MIG subgraphs with the one or more larger building block cells if a matching functionality is determined; and combining the replaced sections of the MIG subgraphs with other sections of the MIG subgraphs to generate a complete circuit.

Example 48: The machine-readable storage media of claim36, wherein performing sequential circuit synthesis on the subgraph, comprises: determining if the subgraph indicates an edge triggered sequential; adding an input variable to the subgraph to represent a previous output state, if it is determined that the subgraph indicates a non-edge triggered sequential; applying a truth table for a latch to the subgraph with the input variable; performing majority-minority inverter graph (MIG) synthesis to the subgraph, in response to applying the truth table, to generate MIG subgraphs; modifying the MIG subgraphs by wiring an output of the latch to one or more nodes of a majority or minority gate cell to receive input from the previous output state; and generating a synthesized circuit in response to modifying the MIG subgraphs.

Example 49: The machine-readable storage media of claim36, wherein performing sequential circuit synthesis on the subgraph, comprises: determining if the subgraph indicates an edge triggered sequential; if the subgraph indicates the edge triggered sequential, determining if the edge triggered sequential is a master-slave architecture; if the edge triggered sequential is a master-slave architecture, adding an input variable to the subgraph to represent a previous output state; applying a truth table for a latch to the subgraph with the input variable; performing majority-minority inverter graph (MIG) synthesis to the subgraph, in response to applying the truth table, to generate MIG subgraphs; modifying the MIG subgraphs by wiring an output of the latch to one or more nodes of a majority or minority gate cell to receive input from the previous output state; duplicating the latch to generate a duplicated latch; coupling the latch with the duplicated latch to generate a master-slave architecture; wiring a clock to the latch and an inverted clock to the duplicated latch after modifying the MIG subgraphs; and generating a synthesized circuit in response to wiring the clock to the latch and the inverted clock to the duplicated latch.
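Example 48 handles a non-edge-triggered (level-sensitive) element by adding an input variable for the previous output state and applying a latch truth table before MIG synthesis. The sketch below only builds that augmented truth table, assuming a transparent D latch; the MIG synthesis step and the wiring of the synthesized output back to the q_prev inputs are left abstract.

    # Sketch of the state-variable augmentation of Example 48: q_prev is the
    # added input variable representing the previous output state. The table
    # produced here is what would be handed to the MIG synthesis step, whose
    # output node is then wired back to the q_prev inputs.
    def latch_next_state(d, en, q_prev):
        # transparent D latch: follow d while enabled, otherwise hold state
        return d if en else q_prev

    augmented_truth_table = [
        ((d, en, q_prev), latch_next_state(d, en, q_prev))
        for d in (0, 1) for en in (0, 1) for q_prev in (0, 1)
    ]
    for inputs, q_next in augmented_truth_table:
        print(inputs, "->", q_next)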
Example 50: The machine-readable storage media of claim36, wherein performing sequential circuit synthesis on the subgraph, comprises: determining if the subgraph indicates an edge triggered sequential; if the subgraph indicates the edge triggered sequential, determining if the edge triggered sequential is a master-slave architecture; if the edge triggered sequential is not a master-slave architecture, adding a first input variable to the subgraph to represent a previous output state; if the edge triggered sequential is not a master-slave architecture, adding a second input variable to the subgraph to represent a delayed clock; applying a truth table for a flip-flop to the subgraph; performing majority-minority inverter graph (MIG) synthesis to the subgraph, in response to applying the truth table, to generate MIG subgraphs; modifying the MIG subgraphs by wiring an output of the flip-flop to one or more nodes of a majority or minority gate cell to receive input from the previous output state; modifying the MIG subgraphs by wiring the delayed clock as a clock to a delay element; and generating a synthesized circuit in response to wiring the output of the flip-flop and wiring the delayed clock as the clock to the delay element.

Example 51: The machine-readable storage media of claim44, wherein performing majority-minority inverter graph (MIG) synthesis to the subgraph, comprises: receiving inputs on the blocks of the subgraph; identifying a number of blocks in the subgraph; comparing the number of blocks with a first threshold; exacting synthesis of the subgraph using one or more solvers if it is determined that the number of blocks is less than or equal to the first threshold; performing inverter minimization in response to exacting synthesis; and synthesizing the subgraph in response to performing inverter minimization.

Example 52: The machine-readable storage media of claim44, wherein performing majority-minority inverter graph (MIG) synthesis to the subgraph, comprises: identifying a number of blocks in the subgraph; comparing the number of blocks with a first threshold; comparing the number of blocks with a second threshold, if it is determined that the number of blocks is greater than the first threshold, wherein the second threshold is larger than the first threshold; simulating the subgraph to determine signal flowing through each edge of the subgraph if it is determined that the number of blocks is less than or equal to the second threshold; topologically splitting the subgraph into first subgraphs, equivalent to the first threshold, using heuristics that maximize power, performance, and area for each block of the blocks of the subgraph; exacting synthesis of each subgraph of the first subgraphs topologically, using the signal flowing through each edge of the graph, to generate synthesized first subgraphs; performing inverter minimization in response to exacting synthesis; adding the synthesized first subgraphs to a new graph; determining whether the new graph has better power, performance, and area than the subgraph to which MIG synthesis is performed; and synthesizing the synthesized first subgraphs if it is determined that the new graph is worse in power, performance, and area than the subgraph to which MIG synthesis is performed.
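Examples 51 and 52 gate the synthesis strategy on the number of blocks and, after splitting and exact ("exacting") synthesis, keep the new graph only when it is not worse in power, performance, and area. A schematic sketch follows; the threshold, the helpers, and the toy PPA score are all hypothetical stand-ins rather than the claimed solver-based flow.

    # Schematic sketch of the threshold-and-split flow of Examples 51-52.
    # exact_synthesis, inverter_minimization, split_topologically and
    # ppa_score are hypothetical stand-ins for illustration only.
    FIRST_THRESHOLD = 8

    def exact_synthesis(block):
        return {"blocks": [block], "exact": True}

    def inverter_minimization(graph):
        graph["inverters_minimized"] = True
        return graph

    def split_topologically(subgraph, limit):
        blocks = subgraph["blocks"]
        return [{"blocks": blocks[i:i + limit]}
                for i in range(0, len(blocks), limit)]

    def ppa_score(graph):
        return len(graph["blocks"])        # toy proxy: fewer blocks is better

    def mig_synthesis(subgraph):
        if len(subgraph["blocks"]) <= FIRST_THRESHOLD:
            return inverter_minimization(exact_synthesis(subgraph))
        parts = split_topologically(subgraph, FIRST_THRESHOLD)
        new_graph = {"blocks": [mig_synthesis(p) for p in parts]}
        # keep the new graph only if it is not worse than the original
        return new_graph if ppa_score(new_graph) <= ppa_score(subgraph) \
                         else subgraph

    print(mig_synthesis({"blocks": list(range(20))}))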
Example 53: The machine-readable storage media of claim52, wherein performing majority-minority inverter graph (MIG) synthesis to the subgraph, comprises simulating the subgraph to determine the signal flowing through each edge of the subgraph if it is determined that the new graph is better in power, performance, and area than the subgraph to which MIG synthesis is performed.

Example 54: The machine-readable storage media of claim52, wherein the signal flowing through each edge of the graph is determined by applying one or more solvers.

Example 55: The machine-readable storage media of claim54, wherein the one or more solvers include: satisfiability solver (SAT) and Mixed Integer Linear Programming (MIP).

Example 56: The machine-readable storage media of claim54, wherein the one or more solvers include: satisfiability solver (SAT) and Mixed Integer Linear Programming (MIP).

Example 57: The machine-readable storage media of claim52, wherein the inputs include: gate type, maximum gate fan-in, area or delay target, and description of blocks.

Example 58: The machine-readable storage media of claim57, wherein the description of blocks includes one or more of: Verilog, graph netlist, or truth table.

Example 59: The machine-readable storage media of claim44, wherein performing majority-minority inverter graph (MIG) synthesis to the subgraph, comprises: identifying a number of blocks in the subgraph; comparing the number of blocks with a first threshold; comparing the number of blocks with a second threshold, if it is determined that the number of blocks is greater than the first threshold, wherein the second threshold is larger than the first threshold; topologically splitting the subgraph into second subgraphs, wherein each second subgraph has blocks less than or equal to the second threshold; simulating each of the second subgraphs to determine the signal flowing through each edge of the second subgraph; topologically splitting each of the second subgraphs into third subgraphs using heuristics that maximize power, performance, and area for each block of the blocks of the second subgraphs; exacting synthesis of each subgraph of the third subgraphs topologically, using the signal flowing through each edge of the second subgraph and by applying one or more solvers, wherein exacting synthesis of each subgraph of the third subgraphs generates exacted third subgraphs; performing inverter minimization in response to exacting synthesis; adding the synthesized second subgraphs to the exacted third subgraphs, to generate a new graph; determining whether the new graph has better power, performance, and area than the subgraph to which MIG synthesis is performed; and synthesizing the new graph if it is determined that the new graph is worse in power, performance, and area than the subgraph to which MIG synthesis is performed.

Example 60: The machine-readable storage media of claim59, wherein performing majority-minority inverter graph (MIG) synthesis to the subgraph, comprises: topologically splitting the subgraph if it is determined that the new graph has better power, performance, and area than the subgraph to which MIG synthesis is performed.

Example 61: The machine-readable storage media of claim59, wherein the second subgraphs are overlapping subgraphs.
Example 62: A machine-readable storage media having machine-readable instructions stored thereon that when executed cause one or more machines to perform a method comprising: receiving one or more input files indicative of a logic function; generating a graph from the one or more input files; identifying inputs, state elements, and outputs from the graph; segregating the graph into subgraphs by grouping logic components between the inputs and the state elements, between the state elements, between the state elements and the outputs, and between the inputs and the outputs; determining whether a subgraph from among the subgraphs includes a feedback path; selecting standard CMOS logic gates and majority or minority gates of any fan-in or fan-out; performing combinational circuit synthesis on the subgraph, using the selected standard CMOS logic gates and majority or minority gates, if it is determined that the subgraph does not include a feedback path; performing sequential circuit synthesis on the subgraph, using the selected standard CMOS logic gates and majority or minority gates, if it is determined that the subgraph includes a feedback path; and synthesizing a circuit using outputs from the combinational circuit synthesis and the sequential circuit synthesis.

An abstract is provided that will allow the reader to ascertain the nature and gist of the technical disclosure. The abstract is submitted with the understanding that it will not be used to limit the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. | 213,407 |
11861280 | DETAILED DESCRIPTION Hereinafter, embodiments are described in detail with reference to the accompanying drawings.

FIG.1is a block diagram illustrating a neural network system for generating a verification vector according to an embodiment. A neural network system100may infer information included in input data by training (or learning) a neural network or by analyzing the input data by using the neural network. The neural network system100may determine a situation based on the inferred information, or may control components of an electronic device on which the neural network system100is mounted. For example, the neural network system100may be applied to a circuit design device or a circuit design system for designing and verifying circuits, and in addition, the neural network system100may be mounted on one of various kinds of electronic devices. In an embodiment, the neural network system100ofFIG.1may include an application processor.

Referring toFIG.1, the neural network system100may include a central processing unit (CPU)110, a neural network device120, a memory130, an interface140, a bus150, and a graphics processing unit (GPU)160. The neural network system100may further include an input/output module, a security module, a power control device, etc., and may further include various types of processors. According to an embodiment, some or all of the components of the neural network system100(for example, the CPU110, the neural network device120, the memory130, and the interface140) may be formed in one semiconductor chip. For example, the neural network system100may be implemented as a system-on-chip (SoC). The components of the neural network system100may communicate with each other via the bus150.

The CPU110may control an overall operation of the neural network system100. The CPU110may include one processor core (or single core) or a plurality of processor cores (or multi-core). The CPU110may process or execute programs and/or data stored in a storage area such as the memory130. For example, the CPU110may execute an application program and control the neural network device120to perform neural network-based tasks according to an execution of the application program. The neural network may include at least one of various types of neural network models such as a convolution neural network (CNN), a region with CNN (R-CNN), a region proposal network (RPN), a recurrent neural network (RNN), a stacking-based deep neural network (S-DNN), a state-space DNN (S-SDNN), a deconvolution network, a deep belief network (DBN), a restricted Boltzmann machine (RBM), a fully convolutional network, a long short-term memory (LSTM) network, and a classification network.

The GPU160may speed up a computational operation of the neural network system100. The GPU160may include the plurality of processor cores (or the multi-core), may operate by being connected to another GPU (not shown) via the CPU110, peripheral component interconnect express (PCIe), or NVLINK, and may accelerate universal math operations via compute unified device architecture (CUDA). The GPU160may process or execute programs and/or data stored in a storage area such as the memory130.

The neural network device120may perform a neural network operation based on the received input data. Furthermore, the neural network device120may generate an information signal based on a result of the neural network operation.
The neural network device120may be implemented as a neural network operation accelerator, a coprocessor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), etc. The neural network device120may include a verification vector generator20ofFIG.2that generates the verification vector for verifying the circuit design.

The verification vector generator20according to an embodiment may generate verification vectors suitable for verification of each of a plurality of circuit blocks included in a circuit design through coverage-based reinforcement learning. The coverage may denote a performance measure of a test vector used to search for the verification vector suitable for verification of a circuit block. When the test vector is input to the circuit block, the coverage may be determined (or computed) based on a state transition of the circuit block, and details of the coverage are described with reference toFIGS.3A and3B.

The verification vector generator20according to an embodiment may perform the reinforcement learning through neural network computation based on the coverage corresponding to the test vector, which is determined based on the state transition of the circuit block generated by inputting the test vector to the circuit block, and may thereby determine the verification vector suitable for the circuit block. In other words, the verification vector generator20may generate a reward according to a change of the coverage and apply the generated reward to the reinforcement learning such that the test vector is changed in a direction in which the coverage, which serves as an evaluation standard, continuously increases through the reinforcement learning. As a result, the verification vector generator20may determine the test vector corresponding to a coverage equal to or greater than a reference coverage as the verification vector.

The verification vector generator20according to an embodiment may decrease information loss by compressing a repetitive pattern of the test vector used in the reinforcement learning by using a data lossless compression scheme (for example, run-length encoding), and may keep the amount of data at a level that a machine learning model is capable of processing. Details of the data lossless compression scheme are described with reference toFIG.4.

The verification vector generator20according to an embodiment may generate the test vector that conforms to verification characteristics of a parameter causing the state transition of the circuit block in the circuit design. When the circuit block has first verification characteristics, the verification vector generator20may perform a first reinforcement learning of a first method, and when the circuit block has second verification characteristics, the verification vector generator20may perform a second reinforcement learning of a second method. In other words, the verification vector generator20may reduce time and simulation cost to determine the verification vector by performing a different reinforcement learning according to the verification characteristics of the circuit block. Details of the verification characteristics of the circuit block are described with reference toFIG.5, and details of various methods of the reinforcement learning are described with reference toFIGS.7A and10A, and the like.

In addition, the neural network device120may further include a design verifier (not shown).
The design verifier (not shown) may perform verification of the circuit design by using the verification vector generated by the verification vector generator20.

The memory130may store programs and/or data used in the neural network system100. The memory130may also store computation parameters (for example, reward values, weight values, bias values, etc.) for the neural network, parameters for quantization of the neural network (for example, scale factors, bias values, etc.), input data (for example, the test vector), and output data (for example, a state of the circuit block). The memory130may include dynamic random-access memory (DRAM), but is not limited thereto. The memory130may include at least one of a volatile memory and a nonvolatile memory. In this specification, the phrase “at least one of A and B” includes “only one A”, “only one B”, and “both A and B”. The non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), a flash memory, etc. The volatile memory may include dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), phase-change RAM (PRAM), magnetic RAM (MRAM), resistive RAM (RRAM), ferroelectric RAM (FeRAM), etc. In an embodiment, the memory130may include at least one of a hard disk drive (HDD), a solid state drive (SSD), a compact flash (CF) card, a secure digital (SD) card, a micro secure digital (micro-SD) card, an extreme digital (xD) card, and a memory stick.

The interface140may receive information about the circuit design generated in the circuit design device or the circuit design system on which the neural network system100is mounted and provide the received information to the neural network device120. The neural network device120may perform the simulation for the reinforcement learning by using the information about the circuit design.

The neural network device120according to an embodiment may efficiently generate the verification vector suitable for each circuit block in the circuit design by performing the reinforcement learning based on a clear evaluation criterion, namely the coverage of the test vector, and may reduce the amount of computations and reduce non-uniformity by applying the data lossless compression scheme to the test vector. In addition, the neural network device120may reduce the simulation cost by adaptively performing efficient reinforcement learning according to the characteristics related to the parameter causing the state transition of the circuit block.

FIG.2is a block diagram for explaining an operation of the verification vector generator20according to an embodiment. Referring toFIG.2, the verification vector generator20may include an actor22and a simulator24and may perform the reinforcement learning for generating the verification vector suitable for verification of the circuit block by using configurations of the actor22and the simulator24. The actor22may provide the simulator24with an action AT that includes the test vector. The simulator24may include factors that are affected by the action AT as a state or environment in which a certain system (for example, the circuit design) is loaded. The simulator24may perform a simulation operation of inputting the test vector included in the action AT to at least one circuit block (or a target circuit block) in the circuit design and outputting the state of the circuit block.
The simulator24may check the state transition ST of the circuit block according to the input of the test vector (that is, against the accumulated state transition history of the circuit block) to determine whether a new state transition ST of the circuit block has occurred by the test vector, and may generate a reward RW according to a result of the checking and provide the generated reward RW to the actor22. In an embodiment, the simulator24may compare the state transition history of the circuit block accumulated by the reinforcement learning before the test vector is input to the circuit block, with the state transition ST of the circuit block generated by the test vector after the test vector is input, and generate the reward RW that has a positive or negative value based on a result of the comparison. The simulator24may generate the weighted reward RW based on a coverage CV corresponding to the test vector. The simulator24may generate a different reward RW according to the degree of change between the coverage CV corresponding to the test vector and the coverage CV corresponding to the state transition history of the circuit block. Details of the reward RW generation are described with reference toFIG.3B.

FIG.3Ais a diagram for explaining the coverage CV according to an embodiment, andFIG.3Bis a diagram for explaining a coverage-based reward generation method according to an embodiment. Referring toFIG.3A, the verification vector generator20inFIG.2may generate a plurality of verification vectors for verification of a circuit design CD. First, the circuit design CD may be divided into a first block group BLG_A and a second block group BLG_B according to an arrangement position of a circuit block, the usage of the circuit block, circuit characteristics of the circuit block, verification characteristics related to the parameter causing the state transition ST of the circuit block, etc. The verification vector generator20inFIG.2may perform different types of reinforcement learning for each of the first and second block groups BLG_A and BLG_B. For example, in the verification vector generator20inFIG.2, a first configuration of the test vector for finding the verification vector of the first block group BLG_A may be different from a second configuration of the test vector for finding the verification vector of the second block group BLG_B. However, the verification vector generator20inFIG.2may employ the same coverage CV as an evaluation reference in performing the reinforcement learning for each of the first and second block groups BLG_A and BLG_B.

The circuit design CD illustrated inFIG.3Ais merely an example embodiment and is not limited thereto, and may include more or fewer circuit blocks than are shown inFIG.3A. The first block group BLG_A may include a first circuit block BL_A1, a second circuit block BL_A2, and a third circuit block BL_A3, and the second block group BLG_B may include a first circuit block BL_B1, a second circuit block BL_B2, a third circuit block BL_B3, and a fourth circuit block BL_B4. In an embodiment, when the circuit design CD includes a DRAM circuit design, the first block group BLG_A may include mode register set (MRS) blocks in which the state transition ST of the circuit block occurs due to setting of MRS values, and the second block group BLG_B may include command blocks in which the state transition ST of the circuit block occurs through an input of commands.
A verification vector generation method for a case in which the circuit design CD is a DRAM circuit design is described in detail with reference toFIG.5. However, this case is merely illustrative and is not limited thereto, and the technical idea of the inventive concept may be applied to the reinforcement learning that determines the verification vector of any of various types of circuit designs.

Hereinafter, descriptions are given assuming that the third circuit block BL_A3in the first block group BLG_A is a target of the reinforcement learning. The third circuit block BL_A3may include a first logic circuit LC1, a second logic circuit LC2, a third logic circuit LC3, a fourth logic circuit LC4, and a fifth logic circuit LC5. A state of the third circuit block BL_A3may be defined by the input or output of the first through fifth logic circuits LC1through LC5. In other words, the state of the third circuit block BL_A3may be defined by a first input IN1of the first logic circuit LC1, a second input IN2of the second logic circuit LC2, a third input IN3of the fourth logic circuit LC4, an output Q1of the fourth logic circuit LC4, and an output Q2of the fifth logic circuit LC5. Possible state transitions ST of the third circuit block BL_A3in the embodiment shown inFIG.3Amay have 32 cases as summarized in a state table ST TB. However, the third circuit block BL_A3may have a state transition ST that has no possibility of occurrence due to an original design purpose, and this case may be considered when the coverage CV is computed.

The coverage CV according to embodiments may be determined on a circuit block basis, and the coverage CV may be determined by observing not only the state transition ST of one node in the circuit block, but also a change of a set including some or all of the inputs or outputs of logic circuits in the circuit block. In an embodiment, in the case of the MRS block, a state of the MRS block may be defined by the inputs or outputs of the logic circuits in which the MRS value in the MRS block is set. In addition, in the case of the command block, the state of the command block may be defined by the inputs or outputs of the logic circuits constituting a counter in the command block (for example, a refresh counter in the case of DRAM), a read command first in first out (FIFO), a write command FIFO, etc.

The coverage CV may indicate whether a new state transition ST has occurred in the circuit block when the test vector is input to the circuit block, and the verification vector generator20inFIG.2may perform coverage-based reinforcement learning until the coverage CV gradually approaches a reference coverage and finally exceeds the reference coverage. The reference coverage may be set in advance and may be set differently depending on the circuit design verification environment and the type of the circuit design. The coverage CV corresponding to the test vector may be defined as a value obtained by dividing the sum of (i) the number of new state transitions ST generated by inputting the test vector to the circuit block and (ii) the number of un-duplicated state transitions ST in the state transition history of the circuit block accumulated by the reinforcement learning before the test vector is input, by the number of all possible state transitions ST of the circuit block.
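Using the numbers of the FIG.3B walkthrough that follows, the coverage definition above and the sign-and-magnitude rule for the reward RW can be sketched as below; the state strings other than ‘00101’ and the penalty magnitude are illustrative assumptions.

    # Sketch of the coverage definition and the sign/magnitude rule for the
    # reward RW; states are 5-bit strings as in the state table ST TB.
    TOTAL_TRANSITIONS = 32          # all possible transitions of block BL_A3

    def coverage(history, new_transitions):
        seen = set(history)                       # duplicates count once
        new = set(new_transitions) - seen
        return (len(new) + len(seen)) / TOTAL_TRANSITIONS

    def reward(cv_prev, cv_new, penalty=-1.0):
        delta = cv_new - cv_prev
        return delta if delta > 0 else penalty    # proportional when positive

    history = ["00101", "00101", "00111", "01100"]   # 3 un-duplicated entries
    cv_prev = len(set(history)) / TOTAL_TRANSITIONS  # 3/32
    cv_n = coverage(history, ["10001", "11011"])     # (2 + 3)/32 = 5/32
    print(cv_prev, cv_n, reward(cv_prev, cv_n))      # positive reward

    history += ["10001", "11011"]
    cv_next = coverage(history, ["00101"])           # duplicate: stays 5/32
    print(cv_next, reward(cv_n, cv_next))            # negative reward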
However, this is merely an example embodiment, and is not limited thereto, and the coverage CV may be computed by various mathematical expressions in which the technical idea of the inventive concept is reflected.

Referring further toFIG.3B, when, in a first case Case_1, the verification vector generator20inFIG.2inputs an Nthtest vector Test_VectorNto the third circuit block BL_A3, two new Nthstate transitions ST_TVNmay be generated as compared with an (N−1)thstate transition history ST_HistoryN−1of the third circuit block BL_A3. Before the Nthtest vector Test_VectorNis input to the third circuit block BL_A3, the (N−1)thstate transition history ST_HistoryN−1of the third circuit block BL_A3may include four state transitions ST, but because ‘00101’ corresponds to the duplicated state transition, the (N−1)thstate transition history ST_HistoryN−1may include three un-duplicated state transitions ST. Accordingly, an (N−1)thcoverage CVN−1corresponding to the (N−1)thstate transition history ST_HistoryN−1of the third circuit block BL_A3may have a value of ‘3/32’. The Nthcoverage CVNcorresponding to the Nthtest vector Test_VectorNmay have a value of ‘5/32’, which is obtained by dividing the sum of the number (2) of new Nthstate transitions ST generated by the Nthtest vector Test_VectorNand the number (3) of un-duplicated state transitions ST of the (N−1)thstate transition history ST_HistoryN−1by the number (32) of all possible state transitions ST of the third circuit block BL_A3. Since the Nthcoverage CVNcorresponding to the Nthtest vector Test_VectorNhas increased to be greater than the previous (N−1)thcoverage CVN−1, the verification vector generator20inFIG.2may generate an Nthreward RWNhaving a positive value and apply the generated Nthreward RWNto the reinforcement learning. Furthermore, the verification vector generator20inFIG.2may generate the Nthreward RWN, which has a value proportional to an amount of change between the Nthcoverage CVNcorresponding to the Nthtest vector Test_VectorNand the previous (N−1)thcoverage CVN−1.

In a second case Case_2, when an (N+1)thtest vector Test_VectorN+1is input to the third circuit block BL_A3, the verification vector generator20inFIG.2may compare a generated (N+1)thstate transition of the third circuit block BL_A3with the Nthstate transition history ST_HistoryNand determine that it is a duplicate. Before the (N+1)thtest vector Test_VectorN+1is input to the third circuit block BL_A3, the Nthstate transition history ST_HistoryNof the third circuit block BL_A3may include six state transitions ST, but because ‘00101’ corresponds to the duplicated state transitions ST, the Nthstate transition history ST_HistoryNmay include five un-duplicated state transitions ST. Accordingly, the Nthcoverage CVNcorresponding to the Nthstate transition history ST_HistoryNof the third circuit block BL_A3may have a value of ‘5/32’. Because an (N+1)thcoverage CVN+1corresponding to the (N+1)thtest vector Test_VectorN+1does not have a new state transition ST generated by the (N+1)thtest vector Test_VectorN+1, the (N+1)thcoverage CVN+1may have the same value of ‘5/32’ as the previous Nthcoverage CVN. Since the (N+1)thcoverage CVN+1corresponding to the (N+1)thtest vector Test_VectorN+1is the same as the previous Nthcoverage CVN, the verification vector generator20inFIG.2may generate an (N+1)threward RWN+1having a negative value and apply the generated (N+1)threward RWN+1to the reinforcement learning.
In a third case Case_3, when an (N+2)thtest vector Test_VectorN+2is input to the third circuit block BL_A3, the verification vector generator20inFIG.2may compare a generated (N+2)thstate transition of the third circuit block BL_A3with the (N+1)thstate transition history ST_HistoryN+1and identify a new (N+2)thstate transition ST_TVN+2. Before the (N+2)thtest vector Test_VectorN+2is input to the third circuit block BL_A3, the (N+1)thstate transition history ST_HistoryN+1of the third circuit block BL_A3may include seven state transitions ST, but because ‘00101’ and ‘00111’ correspond to the duplicated state transitions, the (N+1)thstate transition history ST_HistoryN+1may include five un-duplicated state transitions ST. Accordingly, the (N+1)thcoverage CVN+1corresponding to the (N+1)thstate transition history ST_HistoryN+1of the third circuit block BL_A3may have a value of ‘5/32’. The (N+2)thcoverage CVN+2corresponding to the (N+2)thtest vector Test_VectorN+2may have a value of ‘6/32’, which is obtained by dividing the sum of the number (1) of new (N+2)thstate transitions ST_TVN+2generated by the (N+2)thtest vector Test_VectorN+2and the number (5) of un-duplicated state transitions ST of the (N+1)thstate transition history ST_HistoryN+1by the number (32) of all possible state transitions ST of the third circuit block BL_A3. Since the (N+2)thcoverage CVN+2corresponding to the (N+2)thtest vector Test_VectorN+2has increased to be greater than the previous (N+1)thcoverage CVN+1, the verification vector generator20inFIG.2may generate an (N+2)threward RWN+2having a positive value and apply the generated (N+2)threward RWN+2to the reinforcement learning. Furthermore, the verification vector generator20inFIG.2may generate the (N+2)threward RWN+2, which has a value proportional to an amount of change between the (N+2)thcoverage CVN+2corresponding to the (N+2)thtest vector Test_VectorN+2and the previous (N+1)thcoverage CVN+1. At this time, the Nthreward RWNof the first case Case_1may have a value greater than the (N+2)threward RWN+2of the third case Case_3. As described with reference toFIG.3B, the verification vector generator20inFIG.2may perform the coverage-based reinforcement learning until the coverage is greater than or equal to the reference coverage.

FIG.4is a diagram for explaining a data lossless compression scheme applied to a test vector in reinforcement learning according to an embodiment. Hereinafter, descriptions are given assuming that the test vector includes a plurality of commands. Referring toFIG.4, a test vector VVa may include a plurality of commands (CMD_11, CMD_12, CMD_13, CMD_21, CMD_31, and CMD_32) that are arranged on a time axis and may include a clock signal CLK corresponding to each of the plurality of commands (CMD_11, CMD_12, CMD_13, CMD_21, CMD_31, and CMD_32). The commands (CMD_11, CMD_12, and CMD_13) may be duplicate commands having an identical value of ‘110001’, and the commands (CMD_31and CMD_32) may be duplicate commands having an identical value of ‘110010’. In other words, the test vector VVa may include commands that are repeated in succession, and these repeated commands may be a factor that inefficiently increases the amount of computations during the reinforcement learning for finding a verification vector. The verification vector generator20inFIG.1according to an embodiment may apply a data lossless compression scheme on the test vector VVa and generate a compressed test vector VVb.
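One such lossless scheme is the run-length encoding described next: each run of consecutive duplicate commands is stored once together with its repetition count NR. A minimal sketch follows, with the command values taken from the FIG.4 example.

    # Minimal sketch of run-length encoding a command stream: each run of
    # consecutive duplicate commands is kept once together with its
    # repetition count NR, and decoding restores the stream exactly.
    def rle_encode(commands):
        runs = []
        for cmd in commands:
            if runs and runs[-1][0] == cmd:
                runs[-1][1] += 1              # extend the current run
            else:
                runs.append([cmd, 1])         # start a new run
        return [(cmd, nr) for cmd, nr in runs]

    def rle_decode(runs):
        return [cmd for cmd, nr in runs for _ in range(nr)]

    vva = ["110001", "110001", "110001", "010010", "110010", "110010"]
    vvb = rle_encode(vva)
    print(vvb)                # [('110001', 3), ('010010', 1), ('110010', 2)]
    assert rle_decode(vvb) == vva             # lossless compression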
In one example, the verification vector generator20inFIG.1may encode the test vector VVa by using a run-length encoding scheme and generate the compressed test vector VVb. The compressed test vector VVb may include a plurality of commands (CMD_1, CMD_2, and CMD_3) that are different from each other, and the number of repetitions NR of each of the commands (CMD_1, CMD_2, and CMD_3). In other words, in the example shown inFIG.4, the value ‘110001’ of CMD_1has an NR value of 3 and thus is repeated 3 times, the value ‘010010’ of CMD_2has an NR value of 1 and is repeated 1 time, and the value ‘110010’ of CMD_3has an NR value of 2 and is repeated 2 times. The verification vector generator20inFIG.1may reduce a load of the reinforcement learning computation by using the compressed test vector VVb when performing the reinforcement learning according to the technical idea of the inventive concept.

FIG.5is a diagram for explaining verification characteristics and a reinforcement learning method of a circuit block according to an embodiment. Although the circuit design CD described below is assumed to include a DRAM circuit design, this is merely an example and is not limited thereto. The technical idea of the inventive concept may be applicable in determining the verification vector of various circuit designs. Referring toFIG.5, the circuit design CD may include a first target group TG1, a second target group TG2, and a third target group TG3. The first target group TG1may include a first MRS block BL_1A and a first command block BL_1B connected thereto, the second target group TG2may include a second MRS block BL_2A and a second command block BL_2B connected thereto, and the third target group TG3may include a third MRS block BL_3A and a third command block BL_3B connected thereto.

In an embodiment, the verification vector generator20inFIG.1may perform the reinforcement learning sequentially or in parallel to determine the verification vectors suitable for each of the first through third target groups TG1through TG3. In addition, the verification vector generator20inFIG.1may determine the order of determination of the verification vectors for the circuit blocks in the first through third target groups TG1through TG3according to the verification characteristics related to the parameter causing the state transition of the circuit block. When the verification vector generator20inFIG.1performs the reinforcement learning for the first target group TG1, the verification vector generator20inFIG.1may first perform the reinforcement learning for determining the verification vector of the first MRS block BL_1A and may then perform the reinforcement learning for determining the verification vector of the first command block BL_1B in a state where the determined verification vector is input to the first MRS block BL_1A. The reason for this is that a state of the first command block BL_1B tends to be dependent on MRS values that are set in the first MRS block BL_1A. In addition, the verification vector generator20inFIG.1may efficiently determine the verification vectors of each circuit block by applying a different reinforcement learning scheme according to verification characteristics.
In an embodiment, the verification vector generator20inFIG.1may set a reference coverage used for determining the verification vector of each of the circuit blocks (BL_1A through BL_3A, and BL_1B through BL_3B) individually for each of the circuit blocks (BL_1A through BL_3A, and BL_1B through BL_3B), or may set the same reference coverage for particular circuit blocks.

As an example, the verification vector generator20inFIG.1may individually perform the reinforcement learning for determining the verification vectors of the first through third MRS blocks BL_1A through BL_3A. For example, when the reinforcement learning for determining the first verification vector of the first MRS block BL_1A is performed, and the coverage corresponding to the test vector is equal to or greater than the first reference coverage, the test vector may be determined as the first verification vector. When the reinforcement learning for determining a second verification vector of the second MRS block BL_2A is performed, and the coverage corresponding to the test vector is equal to or greater than a second reference coverage, the test vector may be determined as the second verification vector. When the reinforcement learning for determining a third verification vector of the third MRS block BL_3A is performed, and the coverage corresponding to the test vector is equal to or greater than a third reference coverage, the test vector may be determined as the third verification vector. Here, the first reference coverage, the second reference coverage, and the third reference coverage may be identical to each other or may be different from each other.

As another example, the verification vector generator20inFIG.1may perform the reinforcement learning in such a manner that at least two reinforcement learnings among the reinforcement learnings for determining the verification vectors of the first through third MRS blocks BL_1A through BL_3A are related to each other. In other words, the verification vectors of the first through third MRS blocks BL_1A through BL_3A may be determined when an average calculated over the following coverages is equal to or greater than one reference coverage: a coverage corresponding to the test vector in performing the reinforcement learning to determine the first verification vector of the first MRS block BL_1A, a coverage corresponding to the test vector in performing the reinforcement learning to determine the second verification vector of the second MRS block BL_2A, and a coverage corresponding to the test vector in performing the reinforcement learning to determine the third verification vector of the third MRS block BL_3A. The average computation may correspond to any one of average computation schemes such as an arithmetic mean, a geometric mean, a harmonic mean, and a weighted mean. The technical idea of the inventive concept that is the basis of the above examples may be applied to the reinforcement learning for determining the verification vectors of the first through third command blocks BL_1B through BL_3B, and furthermore, may be applied to other types of circuit blocks and circuit designs.

FIG.6is a block diagram for explaining an operation of a verification vector generator200according to an embodiment. The verification vector generator200described below may perform the reinforcement learning to determine a verification vector for the DRAM circuit design.
However, this is merely an example and is not limited thereto, and the verification vector generator200may perform the reinforcement learning to determine verification vectors for various circuit designs. Referring toFIG.6, the verification vector generator200may include an MRS setter210, a simulator220, an emulator230, and a command generator240.

The MRS setter210may generate or change a test vector to perform the reinforcement learning for the MRS block. In other words, the MRS setter210may generate a test vector including MRS values that are set in the MRS block and provide the generated test vector to at least one of the simulator220and the emulator230. In addition, the MRS setter210may change at least one of the MRS values included in the test vector in a direction in which the coverage corresponding to the test vector is improved, based on the rewards RW accumulated through the reinforcement learning and the MRS value change pattern information corresponding to each reward RW. Thus, the MRS setter210may change at least one of the MRS values included in the test vector when changing the test vector based on the reinforcement learning.

In an embodiment, the simulator220may receive the test vector from the MRS setter210and perform a simulation operation of inputting the received test vector to the MRS block. The simulator220may provide the MRS setter210with the reward RW, which is generated based on the state transition ST-related information of the MRS block generated during the simulation operation, and the coverage corresponding to the test vector determined based on the state transition ST-related information, and may thereby apply the generated reward RW to the reinforcement learning of the MRS block. In other words, the MRS setter210may perform the reinforcement learning for the MRS block by performing the operation of the actor22ofFIG.2.

In another embodiment, the emulator230may receive a plurality of test vectors including all possible MRS values from the MRS setter210and may generate an approximate model (AM)232by using the plurality of test vectors. For example, the emulator230may generate the AM232in advance by performing a machine learning based on a supervised learning algorithm on a state transition ST trend of the MRS circuit block by using the plurality of test vectors. The emulator230may generate one AM232corresponding to a plurality of MRS blocks, or a plurality of approximate models AM respectively corresponding to the MRS blocks. The AM232may be stored in a memory area of the emulator230or in a separate memory area. The MRS setter210may provide a test vector to the AM232when performing the reinforcement learning for the MRS block, and may receive at least one reward RW and the coverage CV corresponding to the test vector without complicated computations from the AM232. In other words, to simplify the computation for the reinforcement learning, the emulator230may generate the AM232by modeling, in advance, the trend of the state transition ST of the MRS block according to all the values that the test vector is capable of having, and the MRS setter210may easily and quickly perform the reinforcement learning for the MRS block by using the AM232. Detailed contents of the reinforcement learning for the MRS block are described with reference toFIG.7A.

The command generator240may generate or change the test vector to perform the reinforcement learning for the command block.
In other words, the command generator240may generate the test vector including a plurality of commands and provide the generated test vector to at least one of the simulator220and the emulator230. In addition, the command generator240may change at least one of the commands included in the test vector in a direction in which the coverage CV corresponding to the test vector is improved, based on the rewards RW accumulated through the reinforcement learning and the command pattern information corresponding to each reward RW. Thus, the command generator240may add at least one command to the test vector when changing the test vector based on the reinforcement learning.

In an embodiment, the simulator220may receive the test vector from the command generator240and perform a simulation operation of inputting the received test vector to the command block. By providing the command generator240with the reward RW, which is generated based on the state transition ST-related information of the command block generated during the simulation operation, and the coverage CV corresponding to the test vector determined based on the state transition ST-related information, the simulator220may apply the generated reward RW to the reinforcement learning of the command block. In other words, the command generator240may perform the reinforcement learning for the command block by performing the operation of the actor22ofFIG.2.

In another embodiment, the simulator220may receive a start test vector that includes a plurality of commands from the command generator240and may generate a state transition history of the command block generated by inputting the start test vector. Thereafter, the simulator220may generate the reward RW for each command by back-tracking the generated state transition history of the command block and perform a reward mapping for each command. The state transition history may have a form of a simulation log file, and the simulator220may generate the reward RW for each command based on a log back-tracking scheme for the simulation log file. The simulator220may provide a reward mapping result for each command to the command generator240, and the reward mapping result for each command may be applied to the reinforcement learning of the command block. The command generator240may add at least one command to the test vector in a direction in which the coverage CV corresponding to the test vector is improved based on the reinforcement learning and may provide the added command to the simulator220. Thereafter, the simulator220may perform the simulation operation in which the added command is input to the command block, and, by providing the command generator240with the reward RW that is generated based on the state transition ST-related information of the command block that has been generated and the coverage corresponding to the test vector that has been determined based on the state transition ST-related information, may apply the generated reward RW to the reinforcement learning of the command block. Details of the reinforcement learning for the command block are described with reference toFIG.10A.

The embodiment described with reference toFIG.6is an example, and is not limited thereto, and the reinforcement learning for determining the verification vector of the command block may be performed by using the AM232generated from the emulator230by using the plurality of test vectors including all possible command patterns.
In addition, the reinforcement learning for determining the verification vector of the MRS block may be performed by performing the back-tracking operation on the state transition history of the MRS block generated by an initial test vector having various MRS values and a reward mapping operation for each MRS value. Furthermore, the AM232, the back-tracking, and the reward mapping described above may also be used for the reinforcement learning for determining verification vectors for different kinds of circuit blocks and circuit designs.

FIG.7Ais a flowchart of the reinforcement learning operation for determining the verification vector of the MRS block according to an embodiment, andFIG.7Bis a flowchart of an embodiment of operation S120in the flowchart ofFIG.7A. Referring toFIG.7A, the verification vector generator20may perform the reinforcement learning according to at least one episode to determine the verification vector of the MRS block. First, starting from any Mth(M is an integer equal to or greater than 1) episode, the verification vector generator20may generate an Nth(N is an integer equal to or greater than 1) test vector that is set by the NthMRS values (S100). The verification vector generator20may check the state transition of the MRS block (S110). For example, the verification vector generator20may perform the simulation operation of inputting the Nthtest vector to the MRS block and check the state transition of the MRS block. The verification vector generator20may determine whether the Nthcoverage CVNcorresponding to the Nthtest vector (for example, a cumulative coverage from the first test vector to the Nthtest vector) is equal to or greater than a reference coverage VTH1(S120). When a result of the determination of operation S120is ‘No’, the verification vector generator20may generate the reward RW having a positive or negative value depending on whether a new state transition of the MRS block has been generated by the Nthtest vector (S130). The verification vector generator20may determine whether the current N has reached a reference number NREF(S140). When a result of the determination of operation S140is ‘No’, the verification vector generator20may increase N by 1 (S150) and repeatedly perform operation S100based on the reward RW generated in operation S130. When the result of the determination of operation S120is ‘Yes’, the verification vector generator20may determine the first through Nthtest vectors as verification vectors (S160). When the result of the determination of operation S140is ‘Yes’, the verification vector generator20may initialize N to 1 and increase M by 1 (S170). Thereafter, the verification vector generator20may prepare the reinforcement learning corresponding to a next (M+1)thepisode by performing the reinforcement learning by using the rewards RW generated through the Mthepisode. In other words, operations S100through S150corresponding to the (M+1)thepisode may be performed based on the reinforcement learning that has been performed in the Mthepisode. In an embodiment, the reinforcement learning may use a scheme such as a policy gradient.

The Nthtest vector may be determined as the first verification vector. However, since the coverage corresponding to the first verification vector does not satisfy the condition of being equal to or greater than the reference coverage VTH1, the verification vector generator20may determine a second verification vector that complements the first verification vector.
The verification vector generator20may select, as the second verification vector, a test vector that complements the first verification vector among the plurality of test vectors generated while performing the reinforcement learning. However, this is merely an example and is not limited thereto, and the number of test vectors selected to complement the first verification vector may be plural.

Referring toFIG.7B, whileFIG.7Aillustrates an embodiment of the reinforcement learning for the first MRS block BL_1A, the verification vector generator20may perform the reinforcement learning for the second MRS block BL_2A and the third MRS block BL_3A in parallel. The verification vector generator20may obtain coverages CV for the first through third MRS blocks BL_1A through BL_3A (S121). The coverages CV for the first through third MRS blocks BL_1A through BL_3A may include a first coverage corresponding to a certain test vector input to the first MRS block BL_1A, a second coverage corresponding to a certain test vector input to the second MRS block BL_2A, and a third coverage corresponding to a certain test vector input to the third MRS block BL_3A. The verification vector generator20may compute an average of the coverages CV for the first through third MRS blocks BL_1A through BL_3A (S122). Thereafter, the computed average may be applied to the Nthcoverage CVNin operation S120inFIG.7Aand compared with the reference coverage VTH1. Since descriptions thereof have been given above, detailed descriptions thereof are omitted.

FIG.8Ais a diagram for explaining a method of generating the AM232according to an embodiment, andFIG.8Bis a diagram for explaining a method of performing the reinforcement learning by using the AM232according to an embodiment. Hereinafter, contents ofFIGS.8A and8Bare described with reference toFIG.6.

Referring toFIG.8A, the emulator230may receive a plurality of test vectors including all possible MRS values (MRS_values_1through MRS_values_K (K is an integer equal to or greater than 1)) from the MRS setter210, sequentially output all the MRS values (MRS_values_1through MRS_values_K) according to various patterns to the first through third MRS blocks BL_1A through BL_3A, and learn the trend of the state transition ST of each of the first through third MRS blocks BL_1A through BL_3A by using an AM generator234. In an embodiment, the AM generator234may perform the machine learning based on a policy gradient algorithm for the state transition ST trends of the first through third MRS blocks BL_1A through BL_3A, and as a result, may generate the AM232suitable for the first through third MRS blocks BL_1A through BL_3A. Although the description with reference toFIG.8Afocuses on generating the AM232by performing a simulation on the first through third MRS blocks BL_1A through BL_3A together at the same time, this is merely an example and is not limited thereto, and the AM232suitable for each of the first through third MRS blocks BL_1A through BL_3A may be generated by individually performing the simulation on each of the first through third MRS blocks BL_1A through BL_3A. The AM232generated in this manner may be used for the reinforcement learning for determining the verification vector of each of the first through third MRS blocks BL_1A through BL_3A.

Referring toFIG.8B, the emulator230may store the AM232generated inFIG.8A.
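One possible reading of the FIG.8A/8B shortcut is sketched below; a simple counting table stands in for the policy-gradient-trained AM, and the reward follows the sign of the predicted coverage gain. All names are illustrative assumptions:

```python
# Counting-table stand-in for the AM; not the disclosed learner.
from collections import defaultdict

def build_am(mrs_vectors, run_block):
    counts = defaultdict(int)
    for vec in mrs_vectors:
        for st in run_block(vec):          # observed state transitions of the block
            counts[st] += 1

    def predict_cv_gain(vec):              # AM lookup in place of a full simulation
        return sum(1.0 / counts[st] if counts[st] else 1.0 for st in run_block(vec))

    return predict_cv_gain

def am_reward(predict_cv_gain, vec):
    d_cv_n = predict_cv_gain(vec)          # N-th coverage change amount
    return 1.0 if d_cv_n > 0 else -1.0     # N-th reward RW_N, as in FIG.8B
```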
When performing the reinforcement learning to determine the verification vector of the MRS block, the MRS setter210may provide the Nthtest vector having the NthMRS values MRS_values_N to the emulator230, and by referring to the AM232, the emulator230may generate the Nthreward RW_Ncorresponding to the Nthtest vector without complicated computations. Furthermore, the emulator230may generate an Nthcoverage change amount ΔCV_N corresponding to the Nthtest vector by referring to the AM232and generate the Nthreward RW_N based on the Nthcoverage change amount ΔCV_N.

FIG.9is a diagram for explaining a method of performing sequential reinforcement learning for each circuit block, according to an embodiment.

Referring toFIG.9, a circuit design CD′ to be verified may include an MRS block BL_A and a command block BL_B. The verification vector generator20according to an embodiment may first perform the reinforcement learning for determining a verification vector of the MRS block BL_A, and after determining at least one verification vector of the MRS block BL_A, may perform the reinforcement learning for determining a verification vector of the command block BL_B. Thus, before starting the reinforcement learning for the command block BL_B, the verification vector generator20may set at least one verification vector determined for the MRS block BL_A, and thereafter, may perform the reinforcement learning for the command block BL_B.

FIG.10Ais a flowchart of a reinforcement learning operation for determining a verification vector of a command block, according to an embodiment, andFIG.10Bis a flowchart of an embodiment of operation S220in the flowchart ofFIG.10A.

Referring toFIG.10A, the verification vector generator20may perform reinforcement learning according to at least one episode to determine the verification vector of a command block. Starting from any Mthepisode, the verification vector generator20may generate a test vector including T (T is an integer equal to or greater than 2) commands to determine the verification vector of the command block (S200). The test vector including the T commands may be generated by using various embodiments. For example, the test vector may include a sequential arrangement of T single commands, or a sequential arrangement of K (K is an integer less than T) command sets, or a sequential arrangement of K command sets according to a certain format designated in advance by those of ordinary skill in the art. In operation S210, the verification vector generator20may perform the simulation of inputting the test vector to the command block. The T commands may have a certain pattern, and during the simulation, the T commands may be sequentially input to the command block. The verification vector generator20may determine whether an Mthcoverage CVM corresponding to the test vector is equal to or greater than a reference coverage VTH2(S220). In an example, the Mthcoverage CVM corresponding to the test vector may correspond to a value that is obtained by dividing the number of un-duplicated state transitions ST of the command block generated by the test vector (or T commands) by the number of all possible state transitions ST of the command block. When a result of the determination of operation S220is 'No', rewards RW for the T commands based on the result of the simulation generated in operation S210may be generated (S230).
Embodiments of a reward generation method using simulation results generated by sequentially inputting T commands included in a test vector to a command block are described in detail with reference toFIGS.11A through11C, respectively. Thereafter, it is determined whether the number of commands, or T, included in the test vector has reached a reference number NOTH(S240). When a result of the determination of operation S240is ‘No’, the reward RW generated in operation S230may be applied to the reinforcement learning, and at least one command (or a command corresponding to a certain command addition unit) may be added to the test vector based on the reinforcement learning (S250). When the number of commands added to the test vector in operation S250is one, the number of commands of the test vector may be T+1, and in operation S210, the simulation operation on the command block may be performed by using only the added command. Thereafter, operations S210through S250may be repeated until the Mthcoverage CVM corresponding to the test vector becomes equal to or greater than the reference coverage VTH2, or the number of commands included in the test vector (that is, T) reaches the reference number NOTH. When a result of the determination of operation S220is ‘Yes’, the verification vector generator20may determine the current test vector as the verification vector (S260). When a result of the determination of operation S240is ‘Yes’, the verification vector generator20may increase M by 1 (S270) and may prepare the reinforcement learning corresponding to the next (M+1)thepisode by performing the reinforcement learning by using the rewards RW generated through the Mthepisode. In other words, operations S200through S250corresponding to the (M+1)thepisode may be performed based on a result of the reinforcement learning that has been performed in the Mthepisode. In an embodiment, the reinforcement learning may use a scheme such as a policy gradient. Referring toFIG.10B, whileFIG.10Aillustrates an embodiment of the reinforcement learning for the first command block BL_1B, the verification vector generator20may perform the reinforcement learning for the second command block BL_2B and the third command block BL_3B in parallel. At this time, the verification vector generator20may obtain coverages CV for the first through third command blocks BL_1B through BL_3B (S221). The coverages CV for the first through third command blocks BL_1B through BL_3B may include a first coverage CV corresponding to a certain test vector input to the first command block BL_1B, a second coverage CV corresponding to a certain test vector input to the second command block BL_2B, and a third coverage CV corresponding to a certain test vector input to the third command block BL_3B. The verification vector generator20may compute an average of the first through third coverages CV for the first through third command blocks BL_1B through BL_3B (S222). Thereafter, the computed average may be applied to the Mthcoverage CVM in operation S220inFIG.10Aand compared with the reference coverage VTH2. Since descriptions of operation S220have been given above, repeated detailed descriptions thereof are omitted for conciseness. FIG.11Ais a flowchart of an embodiment of operation S230in the flowchart ofFIG.10A, andFIGS.11B and11Care diagrams for explaining a log back-tracking operation and a reward mapping operation according to embodiments, respectively. Hereinafter, descriptions are given assuming that T inFIG.10Ais 5. 
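Stepping back over the FIG.10A loop (S200-S260) just described, a rough transcription follows, with T = 5 per the assumption above; the candidate pool, the simulate_commands() stub, and the command-selection rule are stand-ins for the real simulator and the learned policy:

```python
# Rough transcription of the S200-S260 loop; stubs are assumptions.
import random

ALL_ST = {(s, t) for s in range(4) for t in range(4) if s != t}   # assumed FSM

def simulate_commands(cmds):
    # Stand-in for the command-block simulation: pretend each command keys
    # one state transition.
    return {(hash(c) % 4, (hash(c) + 1) % 4) for c in cmds}

def grow_command_vector(candidates, v_th2, max_len, t_init=5):
    vector = [random.choice(candidates) for _ in range(t_init)]    # S200
    seen = simulate_commands(vector)                               # S210
    while True:
        cv_m = len(seen & ALL_ST) / len(ALL_ST)    # S220: un-duplicated ST / all ST
        if cv_m >= v_th2:
            return vector                  # S260: vector becomes the verification vector
        if len(vector) >= max_len:         # S240: reference count reached
            return None                    # S270: a new episode would start here
        cmd = random.choice(candidates)    # S250: a learned policy would choose this
        vector.append(cmd)
        seen |= simulate_commands([cmd])   # S210 again, on the added command only
```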
Referring toFIGS.11A and11B, the simulator220may perform the simulation by sequentially inputting first through fifth commands CMD1through CMD5in the form of a trace file Trace file included in a test vector to the command block and may generate a log file Log file as a result of the simulation. The log file Log file may include time information in which each of the first through fifth commands CMD1through CMD5is input, state information of the command block generated in response to each of the first through fifth commands CMD1through CMD5, and transition event information. The CMD generator240inFIG.6may perform log file analysis on the Log file based on the log back-tracking scheme (S242). The command generator240may perform the log back-tracking on the log file Log file, and when the second command CMD2, the third command CMD3, and the fifth command CMD5are input to the command block, the command generator240may recognize an occurrence of the state transition ST. Thereafter, the command generator240inFIG.6may perform the reward mapping on a command-by-command basis by using the transition event information and the time information (S244). Referring further toFIG.11C, by performing the log back-tracking on the log file Log file, the command generator240inFIG.6may perform the reward mapping RW mapping on the second command CMD2, the third command CMD3, and the fifth command CMD5that are input in correspondence with a second time point t2, a third time point t3, and a fifth time point t5, respectively. As an example, the command generator240inFIG.6may map a positive reward RW(+) for the second command CMD2, the third command CMD3, and the fifth command CMD5and may map a negative reward RW(−) for the first and fourth commands CMD1and CMD4. Through these operations, the command generator240inFIG.6may perform the reinforcement learning that maps the positive reward RW(+) for commands that increase the coverage CV corresponding to the test vector, and as a result, at least one command may be added in the direction in which the coverage CV corresponding to the test vector increases in operation S250inFIG.10A. FIG.12is a flowchart of a method of generating a start test vector, according to an embodiment. The start test vector may denote a test vector that is first input to the circuit block during the reinforcement learning to determine the verification vector of the circuit block. The verification vector generator20may collect the verification vectors, which have already been generated based on the professional domain knowledge of verification specialists, to use the verification vectors as a training data set for the supervised learning (S300). The verification vector generator20may perform a pre-training based on the supervised learning scheme by using the collected verification vectors as the training data set (S310). The verification vector generator20may generate the start test vector according to a result of the pre-training (S320). The verification vector generator20may decrease the amount of computations of the reinforcement learning to be performed in the future through the pre-training to generate the start test vector. FIG.13is a flowchart of a circuit design verification method according to an embodiment. Referring toFIG.13, a device for verifying the circuit design according to an embodiment may classify the circuit blocks in the circuit design to be verified according to verification characteristics (S400). 
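Returning to the log back-tracking of operations S242 and S244: the sketch below assumes a hypothetical log-file format (the format, state names, and the +1/-1 reward values are assumptions; only the back-tracked YES/NO transition events come from the description above):

```python
# Hypothetical log format and parser for the back-tracking of S242/S244;
# the +1/-1 mapping mirrors RW(+)/RW(-) in FIG.11C.
import re

LOG = """\
t1 CMD1 state=IDLE   transition=NO
t2 CMD2 state=ACTIVE transition=YES
t3 CMD3 state=READ   transition=YES
t4 CMD4 state=READ   transition=NO
t5 CMD5 state=PRE    transition=YES
"""

def reward_mapping(log_text):
    rewards = {}
    # Back-track the log from the last entry to the first.
    for line in reversed(log_text.strip().splitlines()):
        m = re.match(r"(t\d+)\s+(\S+)\s+state=\S+\s+transition=(YES|NO)", line)
        _time_pt, cmd, event = m.groups()
        rewards[cmd] = 1.0 if event == "YES" else -1.0
    return rewards

print(reward_mapping(LOG))
# {'CMD5': 1.0, 'CMD4': -1.0, 'CMD3': 1.0, 'CMD2': 1.0, 'CMD1': -1.0}
```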
In an example, the verification characteristics may include characteristics related with parameters that cause the state transition ST of the circuit block. In the case of DRAM circuit design, the device may classify the DRAM circuit design into the MRS block and the command block. The device may perform the reinforcement learning for verification vector determination based on the verification characteristics of each circuit block (S410). For example, as described above, the device may perform the reinforcement learning by using the AM232for the MRS block and perform the reinforcement learning by using the log back-tracking scheme and the reward mapping RW mapping for the command block. The device may perform verification for the circuit design by using at least one verification vector that has been determined (S420).

FIG.14is a flowchart of a verification method for a mass-produced semiconductor device according to an embodiment.

Referring toFIG.14, a semiconductor device is mass-produced (S500), and a test apparatus according to embodiments described above may generate at least one verification vector suitable for verification of the semiconductor device (S510). In an embodiment, the test apparatus may receive the test vector for generating at least one verification vector from the outside, and at this time, the test vector may be in a compressed state (for example, a compressed state in a run-length encoding scheme), and the test apparatus may perform an operation for generating at least one verification vector by decompressing the compressed test vector. The test apparatus may verify the mass-produced semiconductor device by using at least one verification vector (S520).

FIG.15is a block diagram for explaining a circuit design system according to an embodiment.

Hereinafter, the term 'module' may refer to a software or hardware component, such as a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC), and the 'module' may perform certain roles. However, a meaning of the 'module' is not limited to software or hardware. The 'module' may be configured to reside in an addressable storage medium, and may be configured to be executed by one or more processors. Accordingly, the 'module' may include, for example, components such as software components, object-oriented software components, class components, and task components, processes, functions, procedures, subroutines, segments of program codes, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables. Functions provided in the components and the 'modules' may be combined into a smaller number of components and 'modules', or may be further separated into additional components and 'modules'.

Referring toFIG.15, a circuit design system1000may include a processor1100, a design module1210, a verification vector generation module1220, a design verification module1230, a memory1300, and a storage1400. Although only one processor1100is illustrated inFIG.15, more processors may be provided. The design module1210may generate a circuit design by using the processor1100. The verification vector generation module1220may use the processor1100to perform the coverage-based reinforcement learning to determine verification vectors of circuit blocks in the circuit design according to various embodiments described above.
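Returning to the compressed test vector mentioned for operation S510 above: a minimal run-length decoding sketch, assuming a (symbol, count) pair encoding that the description does not specify:

```python
# Assumed (symbol, count) run-length encoding; not the disclosed format.
def rle_decompress(pairs):
    out = []
    for symbol, count in pairs:
        out.extend([symbol] * count)
    return out

compressed = [("NOP", 3), ("ACT", 1), ("RD", 2)]      # assumed encoded vector
print(rle_decompress(compressed))
# ['NOP', 'NOP', 'NOP', 'ACT', 'RD', 'RD']
```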
The design verification module1230may use the processor1100to verify the circuit design by inputting at least one verification vector determined through the coverage-based reinforcement learning to the circuit design and analyzing an output of the circuit design. The memory1300may store data that is used by the design module1210, the verification vector generation module1220, and the design verification module1230for generation of the circuit design, determination of the verification vector, and verification of the circuit design by using the processor1100. The verified circuit design may be stored in the storage1400. A neural network device according to the technical idea of the inventive concept may efficiently generate a verification vector suitable for each circuit block in a circuit design by performing reinforcement learning based on a clear evaluation criterion called a coverage of a test vector, and may reduce the amount of computations and improve non-uniformity by applying a data lossless compression scheme to the test vector. In addition, the neural network device according to the technical idea of the inventive concept may reduce simulation cost for determining the verification vector by adaptively performing the reinforcement learning suitable for verification characteristics of the circuit block. As described above, embodiments have been disclosed in the drawings and the specification. While the embodiments have been described herein with reference to specific terms, it should be understood that they have been used only for the purpose of describing the technical idea of the inventive concept and not for limiting the scope of the inventive concept as defined in the claims. Thus, those with ordinary skill in the art will appreciate that various modifications and equivalent embodiments are possible without departing from the scope of the inventive concept. Therefore, the true scope of protection of the inventive concept should be determined by the technical idea of the appended claims. | 60,494 |
11861281 | DETAILED DESCRIPTION As a semiconductor device is miniaturized, the size of patterns included in a layout may decrease gradually, and accordingly, a minute difference between the size of a designed pattern and the size of a pattern implemented by hardware may cause a yield degradation of an integrated circuit. Particularly, due to a process variation of one or more metal layers corresponding to a back-end-of-line (BEOL), a delay through a timing path including wires implemented by the one or more metal layers may increase, and thus, a timing constraint violation may occur in the timing path. Hereinafter, exemplary embodiments will be described in detail with reference to the accompanying drawings.

FIG.1is a flow diagram illustrating an integrated circuit design method according to an exemplary embodiment.

Referring toFIG.1, as an operation of designing a layout of an integrated circuit, the integrated circuit design method may be performed using a tool for designing the integrated circuit. In this case, the tool for designing the integrated circuit may be a program including a plurality of instructions performed by a processor. The processor may be a microprocessor or a central processing unit (CPU), and one or more processors may be used. Accordingly, the integrated circuit design method may be referred to as a computer-implemented method for designing an integrated circuit.

In operation S110, a synthesis operation is performed. The synthesis operation processes an abstract form of a circuit into a hardware implementation. For example, operation S110may be performed by the processor by using a synthesis tool. The synthesis tool may generate a netlist by converting input data about an integrated circuit into a hardware type including logic gates. Thus, "synthesis" may be referred to as "logic synthesis". The "input data" may be an abstract form describing behavior of an integrated circuit, for example, data defined in register transfer level (RTL) code. The "netlist" may be generated from the RTL code by using a standard cell library, and may be a netlist of a gate level. In some exemplary embodiments, the RTL code may be provided as an input file to the synthesis tool, and the netlist may be output as an output file from the synthesis tool.

In operation S120, placement and routing may be performed. That is, standard cells defining the integrated circuit are placed and routed (placement & routing (P&R)). For example, operation S120may be performed by the processor by using an enhanced P&R (EP&R) tool. Specifically, layout data about the integrated circuit may be generated by placing the standard cells defining the integrated circuit according to the netlist and routing nets included in the placed standard cells. For example, the layout data may be data of a graphic design system (GDS) II format. In some exemplary embodiments, the netlist may be provided as an input file to the EP&R tool, and the layout data may be output as an output file from the EP&R tool.

According to some exemplary embodiments, in operation S120, wire data D10including layer information of a wire corresponding to a net included in a layout of the integrated circuit and physical information of the wire may be further generated. Herein, "net" may represent an equipotential in an equivalent circuit diagram of the integrated circuit, and may correspond to an interconnection in the layout of the integrated circuit.
The layer information may represent one or more layers of a wire pattern used to implement the net, that is, a level of a back-end-of-line (BEOL). The physical information may represent a layout size (e.g., a line length, line width, line area, etc.) of a wire pattern used to implement the net, that is, a layout size of the BEOL. In this case, the output file of the EP&R tool may be the layout data and the wire data D10. In other words, the layout data and the wire data D10may be output as separate output files from the EP&R tool. However, the inventive concept is not limited thereto, and according to some exemplary embodiments, in operation S120, layout data about the integrated circuit may include wire data. In this case, the output file of the EP&R tool may be the layout data.

As discussed above, the concept of "net" may represent an equipotential in an equivalent circuit diagram of the integrated circuit, and may correspond to an interconnection in the layout of the integrated circuit. The interconnection may correspond to a wiring structure including at least one via and at least one metal layer that are electrically connected to each other. In conventional layout methods, wire data is produced for each metal layer. However, there is no concept of how the wires are interconnected together to form various nets in the layout. Thus, as used herein, the "wire corresponding to the net" may include a plurality of vias and a plurality of metal layers that are actually used to implement the net. In other words, for example, the wire corresponding to the net may include a wire proceeding from one logic gate on a first layer, running in the first layer and then proceeding through a via to a second layer, running in the second layer and then through another via back to the first layer to connect to another logic gate on the first layer. (See another example inFIGS.3A and3Bdescribed later). It should be noted that in some instances the net may include a wire running on a single metal layer to connect logic components. In such a case, the net may be considered as synonymous with a wire. Herein, "wire" may correspond to a BEOL, and will be used as a concept including a metal layer and a via. Thus, the wire data D10may include metal layer data and via data.

In some exemplary embodiments, the wire data D10may include layer information of a metal layer corresponding to a net included in the layout of the integrated circuit and physical information of the metal layer. For example, the physical information of the metal layer may include length information, width information, space information, or shielding information of the metal layer. The length information of the metal layer may be a first-direction size of a metal layer pattern used to implement the net. The width information of the metal layer may be a second-direction size of the metal layer pattern used to implement the net, and the first direction and second direction may be perpendicular to each other. The space information of the metal layer may be a distance between adjacent metal layer patterns of the same layer as the metal layer pattern used to implement the net. The shielding information of the metal layer may represent whether there is a shielding pattern adjacent to the metal layer pattern used to implement the net. For example, as an adjacent metal layer pattern of the same layer as the metal layer, the shielding pattern may be a pattern to which a first voltage (e.g., a ground voltage) is applied.
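One possible record shape for the per-net wire data D10 is sketched below; the metal-layer fields mirror the physical information just listed, and the via fields anticipate the description that follows. All names are assumptions, not the disclosed file format:

```python
# Assumed in-memory shape for the wire data D10; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class MetalSegment:
    layer: str        # e.g., "M2"
    length: float     # first-direction size of the pattern
    width: float      # second-direction size of the pattern
    spacing: float    # distance to the adjacent same-layer pattern
    shielded: bool    # adjacent ground-voltage shielding pattern present?

@dataclass
class ViaInfo:
    layers: tuple     # e.g., ("M2", "M3")
    via_type: str     # "single", "double", "bar", ...
    count: int

@dataclass
class NetWireData:
    net: str
    segments: list = field(default_factory=list)
    vias: list = field(default_factory=list)
```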
In some exemplary embodiments, the wire data D10may include layer information of a via corresponding to a net included in the layout of the integrated circuit and physical information of the via. For example, the physical information of the via may include a type of the via or the number of vias. The type of the via may be various types such as a double via, a single via, or a bar via used in a process. The number of vias may represent the number of vias placed in different metal layers. As another example, the physical information of the via may include length information, circumference information, or width information of the via.

In operation S130, parasitic components are extracted. The parasitic components may be extracted from the layout data. For example, operation S130may be performed by the processor by using an enhanced parasitic extraction (PEX) tool. Specifically, parasitic components such as a parasitic resistance and a parasitic capacitance of the net included in the layout data may be extracted, and an enhanced standard parasitic extraction format (SPEF) file may be generated. The enhanced SPEF file may be a modified form of a standard SPEF file. For example, the SPEF file may include the resistance and capacitance of each of a plurality of metal layers used in the net. For example, the SPEF file may include the resistance and capacitance of each of a plurality of vias used in the net. In other words, as an example, the parasitic resistance and the parasitic capacitance of the net may include a parasitic resistance and a parasitic capacitance of each of a wire running in a first layer, a via from a first layer to a second layer, a wire running in a second layer, and a via from the second layer back to the first layer. According to the present exemplary embodiment, the layout data may be provided as an input file to the PEX tool, and the SPEF file may be output as an output file from the PEX tool.

In operation S140, a timing analysis of the integrated circuit is performed. For example, operation S140may be performed by the processor by using an enhanced static timing analysis (STA) tool. The "timing analysis" represents an operation of determining whether timing paths included in the layout of the integrated circuit satisfy timing constraints, and selecting a timing critical path of the integrated circuit. For example, the timing critical path may be a timing path in which a total timing delay from an input (i.e., a start point) to an output (i.e., an end point) exceeds timing requirements among the timing paths according to the determination result. The timing constraints may include setup timing constraints and hold timing constraints. According to the present exemplary embodiment, in operation S140, timing analysis data reflecting process variations of the wire may be generated by performing a timing analysis based on the wire data with respect to the timing paths included in the layout data. According to some exemplary embodiments, the layout data including the wire data may be provided as an input file to the STA tool, and the timing analysis data may be output as an output file from the STA tool. Alternatively, in other exemplary embodiments, each of the layout data and the wire data (as separate files) may be provided as input to the STA tool, and the timing analysis data may be output as an output file from the STA tool.
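As a rough illustration of what the extracted per-net parasitics might look like in memory, and how a timing analysis could consume them, the sketch below uses assumed field names and values (this is not the SPEF syntax), together with a conventional lumped-RC delay estimate:

```python
# Assumed per-net parasitic records (not SPEF syntax) and a lumped-RC estimate.
parasitics = {
    "N4": [
        {"element": "M2 wire", "r_ohm": 12.0, "c_ff": 3.0},
        {"element": "V23 via", "r_ohm": 4.0,  "c_ff": 0.2},
        {"element": "M3 wire", "r_ohm": 7.5,  "c_ff": 5.1},
    ],
}

def lumped_rc_delay_ps(segments):
    r_total = sum(s["r_ohm"] for s in segments)   # series resistance
    c_total = sum(s["c_ff"] for s in segments)    # total capacitance
    return 0.69 * r_total * c_total * 1e-3        # 0.69*R*C, ohm*fF -> ps

delay = lumped_rc_delay_ps(parasitics["N4"])
```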
In some exemplary embodiments, the design method may further include an operation of performing engineering change orders (ECO) according to the timing analysis data. In some exemplary embodiments, the design method may further include an operation of performing clock tree synthesis (CTS) by using timing analysis data. In some exemplary embodiments, the design method may further include an operation of performing optimization in a P&R operation by using the timing analysis data. In some exemplary embodiments, the design method may further include an operation of modifying metal routing included in the P&R operation by using the timing analysis data. For example, based on the timing analysis data, the length of one or more wires may be modified and/or the metal layer of a wire may be modified into a wire of another metal layer, in order to improve the timing of the net to which the wire corresponds.

FIG.2illustrates an integrated circuit100according to an exemplary embodiment.

Referring toFIG.2, the integrated circuit100may include a first cell110, a second cell120, a third cell130, a fourth cell140, a fifth cell150, and a sixth cell160. For example, the second cell120may correspond to a launch flip-flop, and the third cell130may correspond to a capture flip-flop. For example, in operation S140ofFIG.1, a timing analysis may be performed on timing paths included in the integrated circuit100. An operation speed of the integrated circuit100may be determined according to a delay through a timing path. A setup timing path or a hold timing path includes a data path DP, a launch clock path LCP, and a capture clock path CCP, as shown inFIG.2.

The data path DP may be defined as a timing path from a clock pin of the second cell120corresponding to the launch flip-flop to a data input pin of the third cell130corresponding to the capture flip-flop. A delay D through the data path DP may be represented as Equation 1 below.

D = D_{cell} + D_{net} = \sum_{i=0}^{n-1} d_{cell,i} + \sum_{i=0}^{n-1} d_{net,i}   (Equation 1)

Herein, "n" denotes the number of cells included in the data path DP. For example, the data path DP may include the second cell120, the fourth cell140, the fifth cell150, and the sixth cell160, and thus in this case, "n" is 4. "Dcell" denotes a total cell delay of the data path DP, and may correspond, for example, to the sum of a delay dcell,0of the second cell120, a delay dcell,1of the fourth cell140, a delay dcell,2of the fifth cell150, and a delay dcell,3of the sixth cell160. "Dnet" denotes a total net delay of the data path DP, and may correspond, for example, to the sum of a delay dnet,0of a net N3connecting the second cell120and the fourth cell140, a delay dnet,1of a net N4connecting the fourth cell140and the fifth cell150, a delay dnet,2of a net N5connecting the fifth cell150and the sixth cell160, and a delay dnet,3of a net N6connecting the sixth cell160and the third cell130. In general, since the data path DP includes a relatively large number of cells, the data path DP may be less sensitive to a total net delay (i.e., a wire delay) than the launch clock path LCP and the capture clock path CCP.

The launch clock path LCP may be defined as a timing path from a common clock pin of the clock tree to a clock input pin of the second cell120corresponding to the launch flip-flop. A delay L through the launch clock path LCP may be represented as Equation 2 below.

L = L_{cell} + L_{net} = \sum_{i=0}^{j-1} l_{cell,i} + \sum_{i=0}^{j-1} l_{net,i}   (Equation 2)

Herein, "j" denotes the number of cells included in the launch clock path LCP.
For example, the launch clock path LCP may include the first cell110, and thus in this case, "j" is 1. "Lcell" denotes a total cell delay of the launch clock path LCP, and may correspond, for example, to a delay lcell,0of the first cell110. "Lnet" denotes a total net delay of the launch clock path LCP, and may correspond, for example, to a delay lnet,0of a net N1connecting the first cell110and the second cell120. In general, since the launch clock path LCP includes a relatively small number of cells, the launch clock path LCP may be more sensitive to a total net delay (i.e., a wire delay) than the data path DP.

The capture clock path CCP may be defined as a timing path from a common clock pin of the clock tree to a clock input pin of the third cell130corresponding to the capture flip-flop. A delay C through the capture clock path CCP may be represented as Equation 3 below.

C = C_{cell} + C_{net} = \sum_{i=0}^{k-1} c_{cell,i} + \sum_{i=0}^{k-1} c_{net,i}   (Equation 3)

Herein, "k" denotes the number of cells included in the capture clock path CCP. For example, the capture clock path CCP may include the first cell110, and thus in this case, "k" is 1. "Ccell" denotes a total cell delay of the capture clock path CCP, and may correspond, for example, to a delay ccell,0of the first cell110. "Cnet" denotes a total net delay of the capture clock path CCP, and may correspond, for example, to a delay cnet,0of a net N2connecting the first cell110and the third cell130. In general, since the capture clock path CCP includes a relatively small number of cells, the capture clock path CCP may be more sensitive to a total net delay (i.e., a wire delay) than the data path DP.

By using Equations 1 to 3, a hold time slack THOLDmay be represented as Equation 4 below.

T_{HOLD} = L + D - C + \alpha
= (L_{cell} + L_{wire}) + (D_{cell} + D_{wire}) - (C_{cell} + C_{wire}) + \alpha
= (L_{cell} + D_{cell} - C_{cell}) + (L_{wire} + D_{wire} - C_{wire}) + \alpha
= S_{cell} + S_{wire} + \alpha   (Equation 4)

Herein, "α" is a constant and denotes the sum of other timing parameters such as a clock uncertainty and a flip-flop hold margin. Herein, "Scell" denotes a hold slack difference due to a cell delay, and "Swire" denotes a hold slack difference due to a wire delay. In Equation 4, "Lwire", "Dwire", and "Cwire" may correspond respectively to "Lnet" of Equation 2, "Dnet" of Equation 1, and "Cnet" of Equation 3.

For example, in a case in which the integrated circuit ofFIG.2is implemented using only a metal layer D1, when the resistance of the metal layer D1used to implement the integrated circuit100is manufactured to be greater by 20% than a target value of a model, the constant "α" and the hold slack difference "Scell" due to a cell delay are not changed and only the hold slack difference "Swire" due to a wire delay is changed in Equation 4. In this case, a hold time slack difference ΔTHOLDmay be represented as Equation 5 below.

\Delta T_{HOLD} = T_{HOLD,D1@20\%} - T_{HOLD}
= (S_{cell} + S_{wire,D1@20\%} + \alpha) - (S_{cell} + S_{wire} + \alpha)
= S_{wire,D1@20\%} - S_{wire} = \Delta S_{wire}   (Equation 5)

The hold slack difference "Swire" due to a wire delay is used to analyze a wire model-to-hardware correlation (MHC) issue. "MHC" represents the consistency between a model on which a design is based and hardware that is actually implemented in silicon. When the model has electrical characteristics that are different from those measured in silicon, the chip performance expected in a design stage may not be realized. In particular, "wire MHC mismatch" may represent a difference between the modeled resistance/capacitance value of a wire and the resistance/capacitance value of a wire that is actually implemented.
For example, a wire MHC mismatch may be caused by process variations of the BEOL, such as metal layer resistance variations, metal layer capacitance variations, or via variations. For example, when an actual resistance of a metal layer is greater than a modeled target resistance, a delay through a timing path including the metal layer may increase, and accordingly, a hold violation may occur as a result of a timing analysis on the timing path. According to a conventional design method, in a timing analysis stage, physical information about a net included in a timing path may not be known. That is, in the timing analysis stage, it may not be known by which metal layer or layers the net is actually implemented. Accordingly, timing analysis data reflecting the process variations of wires may not be generated in the timing analysis stage. However, according to exemplary embodiments, in an operation of generating layout data or an operation of extracting parasitic components, the accuracy of a timing analysis may be improved by generating wire data including layer information of a wire corresponding to a net included in a layout of an integrated circuit and physical information about the wire, and performing a timing analysis by using the generated wire data. Thus, improved mass production may be secured by finding and addressing design vulnerabilities. A timing analysis operation will be described in detail with reference toFIGS.6to12. FIGS.3A and3Billustrate implementation examples (100aand100b) of the clock tree included in the integrated circuit ofFIG.2. Referring toFIG.3A, an integrated circuit100ais an implementation example having a robust clock tree. A net N1aincluded in a launch clock path LCPa may be implemented by a first metal layer D1and a second metal layer D2, and a net N2aincluded in a capture clock path CCPa may also be implemented by the first metal layer D1and the second metal layer D2. For example, a variation may occur only in the first metal layer D1among the first and second metal layers D1and D2and thus the resistance of the first metal layer D1may increase in comparison with a target value. In this case, since both a wire delay through the launch clock path LCPa and a wire delay through the capture clock path CCPa increase simultaneously in Equation 4, a hold time slack difference between LCPa and CCPa is 0 in Equation 5 and a hold violation may not occur. In other words, since the launch clock path LCPa and the capture clock path CCPa include similar wires on similar layers, the hold time slack difference does not occur. Referring toFIG.3B, an integrated circuit100bis an implementation example having a clock tree vulnerable to process variations of wires. A net N1bincluded in a launch clock path LCPb may be implemented by a first metal layer D1and a second metal layer D2, and a net N2bincluded in a capture clock path CCPb may be implemented by a second metal layer D2. For example, a variation may occur only in the first metal layer D1among the first and second metal layers D1and D2and thus the resistance of the first metal layer D1may increase in comparison with a target value. In this case, since a wire delay through the launch clock path LCPb increases and a wire delay through the capture clock path CCPb does not increase, a hold time slack difference may have a value greater than 0 in Equation 5 and a hold violation may occur. 
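As a worked illustration of Equations 1 through 5 in the FIG.3B scenario just described, the sketch below uses made-up delay values and applies the 20% factor to the launch-side wire term only:

```python
# Worked example of Equations 1-5; every number here is an assumption.
def path_delay(cell_delays, net_delays):
    return sum(cell_delays) + sum(net_delays)      # Equations 1-3

D = path_delay([0.12, 0.08, 0.10, 0.09], [0.05, 0.04, 0.06, 0.05])  # data path, n = 4
L = path_delay([0.11], [0.07])                     # launch clock path, j = 1
C = path_delay([0.11], [0.07])                     # capture clock path, k = 1
alpha = -0.50                                      # clock uncertainty + hold margin

t_hold = L + D - C + alpha                         # Equation 4

# D1 manufactured 20% slower inflates only the launch-side wire delay here,
# as in FIG.3B where only the launch clock path runs on D1:
L_d1 = path_delay([0.11], [0.07 * 1.2])
delta_t_hold = (L_d1 + D - C + alpha) - t_hold     # Equation 5: Delta THOLD = Delta Swire
```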
In this manner, when the clock tree does not have a robust structure, a hold violation may occur due to a wire model-to-hardware correlation (MHC) mismatch such as a resistance variation and/or a capacitance variation of a metal layer (e.g., D1gets faster, or D2gets slower) corresponding to a wire, a metal, and/or a via variation corresponding to a net. Thus, the inventive concept proposes a new timing analysis method that analyzes a metal routing structure to remove a timing violation caused by a wire MHC and predicts time slacks by using a wire RC variation specification. FIG.4is a flow diagram illustrating an integrated circuit design method according to an exemplary embodiment. Referring toFIG.4, the integrated circuit design method according to the exemplary embodiment may correspond to an implementation example ofFIG.1as a method of performing a timing analysis of an integrated circuit in consideration of wire variations. In operation S210, layout data and wire data are generated by placing and routing standard cells. For example, layout data of the integrated circuit and wire data D10corresponding to a net included in the layout data of the integrated circuit are generated by placing and routing standard cells defining the integrated circuit. The wire data D10may include layer information of at least one wire corresponding to a net included in the integrated circuit and physical information of the at least one wire. For example, the wire data D10may include length information of a wire. According to some exemplary embodiments, an operation of extracting parasitic components from the layout data may be further included between operation S210and operation S220. For example, the operation of extracting the parasitic components may correspond to operation S130ofFIG.1. In operation S220, timing analysis data is generated considering variation of a wire by performing timing analysis based on the wire data. For example, timing analysis data reflecting process variations of the at least one wire is generated by performing a timing analysis based on the wire data D10with respect to a timing path including the net. In some exemplary embodiments, a timing analysis may be performed on the timing path based on unit delay information representing a delay per unit length of at least one wire and physical information of at least one wire. In some exemplary embodiments, a wire delay skew of the timing path may be calculated based on a time constant scaling factor based on the process variations of at least one wire, unit delay information representing a delay per unit length of at least one wire, and physical information of at least one wire. This timing analysis will be described in more detail with reference toFIGS.9to12. In some exemplary embodiments, the design method may further include an operation of performing an engineering change order (ECO) according to the timing analysis data. FIG.5is a block diagram illustrating an integrated circuit design system200for designing an integrated circuit according to an exemplary embodiment. Referring toFIG.5, the integrated circuit design system200may be a computing system for designing an integrated circuit. The integrated circuit design system may include a processor210, a memory230, an input/output (I/O) device250, a storage device270, and a bus290. The integrated circuit design system200may perform an integrated circuit design operation including operations S110to S140ofFIG.1or operations S210and S220ofFIG.4. 
In the exemplary embodiment shown inFIG.4, the integrated circuit design system200may be implemented as an integrated device, and accordingly, integrated circuit design system200may also be referred to as an integrated circuit design apparatus. The integrated circuit design system200may be provided as a dedicated apparatus for designing an integrated circuit of a semiconductor device, or may be a computer for driving various simulation tools or design tools. The processor210may include one or more microprocessors and may be configured to execute instructions for performing at least one of various operations for designing an integrated circuit. The processor210may communicate with the memory230, the I/O device250, and the storage device270through the bus290. The processor210may execute an integrated circuit design operation by driving a P&R module231, a PEX module233, and an STA module235loaded in the memory230. The memory230may store the P&R module231, the PEX module233, and the STA module235. Also, the memory230may further store a synthesis module. The P&R module231, the PEX module233, and the STA module235may be loaded from the storage device270into the memory230. The memory230may include, for example, a volatile memory such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), or a nonvolatile memory such as PRAM, MRAM, ReRAM, FRAM, or NOR flash memory. The P&R module231may be, for example, a program including a plurality of instructions for performing an enhanced P&R operation according to operation S120ofFIG.1or operation S210ofFIG.4. The PEX module233may be, for example, a program including a plurality of instructions for performing an enhanced parasitic extraction operation according to operation S130ofFIG.1. The STA module235may be, for example, a program including a plurality of instructions for performing an enhanced timing analysis operation according to operation S140ofFIG.1or operation S220ofFIG.4. It is noted that the P&R module231, the PEX module233and the STA module235are shown as separate components inFIG.5. However, this is only an example, and the P&R module231, the PEX module233and the STA module235may be combined together into one module or into two modules. In other words, the modules do not need to be provided as separate modules. The I/O device250may control a user input and output from one or more user interface devices. For example, the I/O device250may include an input device such as a keyboard, a mouse, and/or a touch pad to receive input data defining an integrated circuit. In some exemplary embodiments, the I/O device250may receive various user inputs such as a metal RC variation scaling factor. For example, the I/O device250may include a display device such as a display and/or a speaker to display placement results, routing results, and/or timing analysis results. In some exemplary embodiments, with respect to a wire corresponding to a net included in an integrated circuit, the I/O device250may display a first wire delay based on a target value, a second wire delay based on a process variation, and a wire delay skew generated from the first wire delay and the second wire delay. The storage device270may store various data related to the P&R module231, the PEX module233, and the STA module235. The storage device270may include, for example, a memory card (e.g., MMC, eMMC, SD, or MicroSD), a solid state drive, and/or a hard disk drive. FIG.6is a block diagram illustrating the integrated circuit design system ofFIG.5in more detail. 
Referring toFIGS.5and6, the program stored in the memory230may include a plurality of procedures, and a procedure may designate a series of instructions for performing a particular task. The procedure may also be referred to as a function, a routine, a subroutine, or a subprogram. According to the exemplary embodiment shown inFIG.6, the procedures may include a placer PLC, a router RT, and a timing analyzer TA. Specifically, the P&R module231may include the placer PLC and the router RT, and the STA module235may include the timing analyzer TA. Also, the procedures may further include a parasitic extractor, and the parasitic extractor may be included, for example, in the PEX module233. Herein, performing an operation by executing a procedure (PLC, RT, or TA) by the processor210ofFIG.5may be represented as performing the operation by the procedure (PLC, RT, or TA).

The storage device270may include a cell library database (DB)271, a layout DB273, and a technology (tech) file DB275. The cell library DB271may store information about a standard cell for generating a layout of an integrated circuit, and may be referred to as a standard cell library DB. The layout DB273may store information about a layout generated in procedures, for example, physical information about the layout. The technology file DB275may store a technology file for storing materials and rules used in an integrated circuit manufacturing process. The technology (tech) file DB275may store, for example, layer definitions, device definitions, and/or design tools. In the exemplary embodiment shown inFIG.6, the technology (tech) file DB275may store unit delays corresponding respectively to a plurality of metal layers.

The placer PLC may place standard cells according to a netlist D20, and specifically, the placer PLC may perform a placement operation by accessing the cell library DB271of the storage device270. The router RT may generate layout data by routing the standard cells placed by the placer PLC. The generated layout data may be stored in the layout DB273of the storage device270. Also, the router RT may generate wire data D10including layer information of at least one wire corresponding to each of a plurality of nets included in an integrated circuit and physical information of at least one wire. Hereinafter, the wire data D10will be described in more detail with reference toFIGS.7and8.

FIG.7illustrates the wire data D10ofFIG.6according to an exemplary embodiment.

Referring toFIG.7, the wire data D10may include, for example, layer information (Ma to Mf) of at least one wire corresponding to each of nets N1to N7included in the integrated circuit100of, for example,FIG.2and length information in the metal layer of the at least one wire. For example, the wire data D10may be generated in operation S210ofFIG.4. For example, metal layers corresponding to the net N4may be second metal layer M2, third metal layer M3, fourth metal layer M4and fifth metal layer M5, and the lengths of the second to fifth metal layers M2to M5used to implement the net N4may be respectively λ2to λ5.

FIG.8illustrates metal layers corresponding to the net N4ofFIG.7according to an exemplary embodiment.

Referring toFIGS.7and8, the net N4may be implemented by using the second to fifth metal layers M2to M5. A total wire length λtotalcorresponding to the net N4corresponds to the sum of the respective lengths λ2to λ5of the wires of the net implemented in the second to fifth metal layers M2to M5, respectively.
Thus, the total wire length λtotalcorresponding to the net N4shown inFIGS.7and8may be represented as Equation 6 below.

\lambda_{total} = \lambda_2 + \lambda_3 + \lambda_4 + \lambda_5 = (\lambda_{1,2} + \lambda_{7,2}) + (\lambda_{2,3} + \lambda_{4,3}) + (\lambda_{3,4} + \lambda_{6,4}) + \lambda_{5,5}   (Equation 6)

Herein, λi,mdenotes a metal routing length, "i" denotes a metal routing order, and "m" denotes a metal layer number.

FIG.9is a table illustrating parameters used to calculate a wire delay skew Δmby the timing analyzer TA ofFIG.6according to an exemplary embodiment.

Referring toFIGS.6and9, the timing analyzer TA may search for a timing critical path by performing a timing analysis on an integrated circuit, generate a cell list about the standard cells included in the timing critical path, and transmit the generated cell list to the P&R module231. The P&R module231may find the nets connected to the standard cells included in the cell list and transmit the wire data D10about the nets that are found to the STA module235, that is, the timing analyzer TA. For example, the wire data D10may include a metal wire length λmof an m-th metal wire. Subsequently, the timing analyzer TA may calculate a wire delay skew Δmbased on the wire data D10, a wire model-to-hardware correlation (MHC) specification D30, and a resistance-capacitance (RC) variation scaling factor D40. Specifically, the timing analyzer TA may receive the wire MHC specification D30including a unit delay τmof the m-th metal wire from the technology (tech) file DB275included in the storage device270. For example, the unit delay τmof the m-th metal wire may be generated by a silicon monitoring circuit. Also, the timing analyzer TA may receive a user input including an RC variation scaling factor σmof the m-th metal layer (i.e., D40) representing an RC time constant variation of the m-th metal layer from the I/O device250. The RC variation scaling factor σmwill be described below with reference toFIG.10. The timing analyzer TA may calculate a wire delay δtotalof one or more wires in a particular metal layer based on the metal wire length λmof the one or more wires in the m-th metal layer, the unit delay τmof the m-th metal layer including the one or more wires, and the RC variation scaling factor σmof the m-th metal layer including the one or more wires. The wire delay δtotalof the one or more wires in a particular metal layer may be included in the timing analysis data output from the timing analyzer TA. The timing analyzer TA may calculate the wire delay skew Δmbased on the wire delay δtotal. The wire delay skew may be included in the timing analysis data output from the timing analyzer TA.

FIG.10is a graph illustrating the resistance and capacitance of an m-th metal layer according to an exemplary embodiment.

Referring toFIG.10, a horizontal axis represents the resistance of the m-th metal layer, and a vertical axis represents the capacitance of the m-th metal layer. In the graph ofFIG.10, a box represented by a dotted line represents an allowable range of the modeled RC value of the m-th metal layer. The box may be set experimentally based on measured values of resistance and capacitance for the metal layer. An RC time constant may be set to a first time constant TC1based on a corner value CN of an RC of the m-th metal layer. When the RC of the m-th metal layer has an extra value EV exceeding the allowable range due to the process variation of the m-th metal layer, the RC time constant may be set to a second time constant TC2.
In this case, the ratio of the second time constant TC2to the first time constant TC1may be defined as the RC variation scaling factor σm.

FIG.11is a flow diagram illustrating a timing analysis method according to an exemplary embodiment.

Referring toFIG.11, the timing analysis method according to the exemplary embodiment shown inFIG.11may correspond to an implementation example of operation S220ofFIG.4. For example, the timing analysis method according to the exemplary embodiment shown inFIG.11may be sequentially performed by the integrated circuit design system200ofFIG.5. Hereinafter, the timing analysis method will be described with reference toFIG.11.

In operation S310, timing critical paths are searched for. For example, the STA module235may determine timing critical paths by performing a timing analysis on the timing paths included in the integrated circuit. For example, a timing critical path may be a path for which a hold violation occurs. See, e.g.,FIGS.3A and3Band associated discussion above. The STA module235may determine the timing critical paths. However, the inventive concept is not limited thereto, and in some exemplary embodiments, operation S310may be performed by the P&R module231.

In operation S320, a cell delay and a net delay are collected for each timing critical path that is determined. For example, a cell delay and a net delay may be collected with respect to a path selected from among the determined timing critical paths. For example, the STA module235may acquire a delay through the selected path by collecting the cell delays of a plurality of cells included in the selected path and the net delays of a plurality of nets included in the selected path. However, the inventive concept is not limited thereto, and in some exemplary embodiments, operation S320may be performed by the P&R module231. For example, referring back toFIG.2and its associated description, when the selected path includes a data path, cell delays dcell,0to dcell,n-1of n cells included in the data path and net delays dnet,0to dnet,n-1of n nets included in the data path may be collected as described in Equation 1 above. When the selected path includes a launch path, cell delays lcell,0to lcell,j-1of j cells included in the launch path and net delays lnet,0to lnet,j-1of j nets included in the launch path may be collected as described in Equation 2 above. When the selected path includes a capture path, cell delays ccell,0to ccell,k-1of k cells included in the capture path and net delays cnet,0to cnet,k-1of k nets included in the capture path may be collected as described in Equation 3 above.

In operation S330, physical information about at least one wire corresponding to at least one net is collected with respect to each net. For example, the physical information about at least one wire corresponding to at least one net included in the selected path is collected. In some exemplary embodiments, the selected path may include a plurality of nets, and physical information about at least one wire corresponding to each net may be collected. In some exemplary embodiments, the selected path may include a single net, and physical information about at least one wire corresponding to the single net may be collected. In some exemplary embodiments, the at least one wire may include portions of the at least one wire on a plurality of metal layers, and the physical information may include length information of a portion of the at least one wire on each metal layer. See, e.g.,FIG.7.
Hereinafter, in operation S330, collecting length information of each of a plurality of metal layers corresponding to the net included in the selected path will be mainly described. In some exemplary embodiments, the P&R module231may collect length information of a plurality of metal layers corresponding to the net included in the selected path. In some exemplary embodiments, the PEX module233may collect length information of a plurality of metal layers corresponding to the net included in the selected path. In some exemplary embodiments, the STA module235may receive length information of a plurality of metal layers corresponding to the net included in the selected path from the P&R module231or the PEX module233. In some exemplary embodiments, operation S330may be performed on all the nets included in the selected path. However, the inventive concept is not limited thereto, and in some exemplary embodiments, operation S330may be performed only on a portion of the nets included in the selected path.

In operation S340, a wire delay is calculated based on the physical information. For example, with respect to at least one wire corresponding to the net, a wire delay is calculated based on the physical information of the at least one wire. In some exemplary embodiments, the wire delay may be calculated based on the length information of the portion of the wire in the metal layer and the unit delay information of the metal layer. For example, the STA module235may calculate the wire delay of a portion of the wire in the m-th metal layer based on the length information of the m-th metal layer. However, the inventive concept is not limited thereto, and in some exemplary embodiments, operation S340may be performed by the P&R module231.

In operation S350, the wire delay is updated based on the process variation of the wire. For example, with respect to at least one wire corresponding to the net, the wire delay is updated based on the process variation of the at least one wire. In some exemplary embodiments, the wire delay may be updated based on the RC scaling factor according to the process variation of the metal layer including the portion of the wire. For example, the STA module235may update the wire delay of a portion of the wire in the m-th metal layer based on the RC scaling factor according to the process variation of the m-th metal layer. However, the inventive concept is not limited thereto, and in some exemplary embodiments, operation S350may be performed by the P&R module231.

In operation S360, a timing slack is calculated using the updated wire delay. In some exemplary embodiments, with respect to each of a plurality of metal layers, a wire delay skew according to the difference between the wire delay of the metal layer and the updated wire delay may be calculated, and a timing slack may be calculated by using the wire delay skews of a plurality of metal layers. For example, the timing slack may be a hold slack or a setup slack. For example, the STA module235may calculate the wire delay skew of portions of the wire on the m-th metal layer and calculate the timing slack by using the wire delay skews of the portions of the wire on all the metal layers corresponding to the net. However, the inventive concept is not limited thereto, and in some exemplary embodiments, operation S360may be performed by the P&R module231.

After operation S360, operation S320may be performed on the next path among the timing critical paths that are determined.
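Taken together, operations S330to S360reduce, for one selected path, to apportioning each net delay over metal layers, rescaling each portion by the layer's RC variation, and combining the resulting skews. The sketch below is a loose illustration under stated assumptions (plain dictionaries for the inputs, the Equation 12 form of the update, and the root-sum-square combination of Equation 15, both developed below); it is not the claimed method, and all names and numbers are hypothetical.

```python
import math

def path_hold_slack(nets, unit_delay, rc_scaling):
    """Sketch of operations S330-S360 of FIG. 11 for one selected path.

    nets:        list of (net_delay_fs, {layer m: lambda_m in um}) pairs (S330)
    unit_delay:  {m: tau_m in fs/um}, from the technology file
    rc_scaling:  {m: sigma_m}, per-layer RC variation scaling factors
    """
    skews = []
    for net_delay, lengths in nets:
        # S340: apportion the net delay over layers in proportion to
        # tau_m * lambda_m (the Equation 10 form developed below).
        denom = sum(unit_delay[m] * lam for m, lam in lengths.items())
        for m, lam in lengths.items():
            delta = net_delay * unit_delay[m] * lam / denom
            # S350: update the portion for the layer's process variation.
            delta_updated = delta * rc_scaling[m]
            skews.append(delta - delta_updated)
    # S360: timing slack as the negated root sum square of the skews.
    return -math.sqrt(sum(s * s for s in skews))

# One net with a 500 fs delay, routed on M2 and M3; only M2 varies (+10%).
print(path_hold_slack(
    nets=[(500.0, {2: 10.0, 3: 20.0})],
    unit_delay={2: 3.0, 3: 2.0},
    rc_scaling={2: 1.1, 3: 1.0},
))  # about -21.4 fs
```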
FIG.12is a flow diagram illustrating the operation between the P&R module231and the STA module235ofFIG.6according to an exemplary embodiment. Referring toFIG.12, operations S410to S430and S470to S490may be sequentially performed, for example, by the STA module235ofFIG.6, and operations S440to S460may be sequentially performed, for example, by the P&R module231ofFIG.6. Hereinafter, a timing analysis operation will be described with additional reference toFIGS.6to11.

In operation S410, the STA module235may determine timing critical paths. The STA module235may determine the timing critical paths by performing a timing analysis on the timing paths included in the integrated circuit. In operation S420, the STA module235may select one of the timing critical paths and generate a cell list including the standard cells related to the selected path. In operation S430, the STA module235may transmit the cell list to the P&R module231.

In operation S440, the P&R module231may find the nets connected to the standard cells included in the cell list. In operation S450, the P&R module231may generate wire data including layer information of at least one wire corresponding respectively to the nets and physical information of the at least one wire. In some exemplary embodiments, the P&R module231may generate wire data representing a layer of a wire corresponding to each of a plurality of nets, that is, a type of the wire, in a layout data generating operation, and generate wire data corresponding to a particular net by measuring a length of the wire corresponding to that net according to the request of the STA module235in operation S450. In some exemplary embodiments, the P&R module231may generate total wire data including a type of a wire corresponding to each of a plurality of nets and length information of the wire in a layout data generating operation, and extract wire data corresponding to a particular net according to the request of the STA module235from the total wire data in operation S450. In operation S460, the P&R module231may transmit the generated wire data to the STA module235.

In operation S470, the STA module235may calculate a wire delay skew and a hold slack based on the wire data, the wire model-to-hardware correlation (MHC) specification D30, and the RC variation scaling factor D40. Operation S470will be described in more detail with reference toFIG.13. In operation S480, the STA module235may generate a timing report. For example, the timing report may include a wire MHC slack, a worst metal layer, a wire delay skew for each metal layer, an RC variation scaling factor for each metal layer, and a hold slack difference for each metal layer.

In operation S490, the STA module235may determine whether the path selected in operation S420is the last path among the timing critical paths determined in operation S410. As a result of the determination, when the path is not the last path, operation S420is performed, and when the path is the last path, the timing analysis operation is ended.

FIG.13is a flow diagram illustrating an integrated circuit timing analysis method according to an exemplary embodiment. Referring toFIG.13, the timing analysis method according to the exemplary embodiment shown inFIG.13may correspond to an implementation example of operation S220ofFIG.4. For example, the timing analysis method according to the exemplary embodiment shown inFIG.13may be sequentially performed by the STA module235ofFIG.6.
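Before turning to the per-equation detail ofFIG.13, theFIG.12exchange can be summarized as a request/response loop driven by the STA module. The Python sketch below is purely illustrative: the `sta` and `pnr` objects and every method name are assumptions standing in for the STA module235and the P&R module231, not the API of any actual tool.

```python
def run_timing_analysis(sta, pnr):
    """Loose rendering of the FIG. 12 interaction (operations S410-S490).

    `sta` and `pnr` are assumed duck-typed objects; each call is annotated
    with the operation it stands in for.
    """
    reports = []
    for path in sta.determine_critical_paths():            # S410
        cell_list = sta.make_cell_list(path)               # S420
        nets = pnr.find_nets(cell_list)                    # S430-S440
        wire_data = pnr.generate_wire_data(nets)           # S450-S460
        skews, hold_slack = sta.calc_skew_and_slack(       # S470
            wire_data, sta.mhc_spec_d30, sta.rc_scaling_d40)
        reports.append(sta.make_report(path, skews, hold_slack))  # S480
    return reports  # S490: loop ends after the last critical path
```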
Hereinafter, an operation of calculating a wire delay corresponding to the net N4by the timing analyzer TA will be described with reference toFIGS.6to13.

In operation S510, a first wire delay is calculated based on a target value of a wire. In some exemplary embodiments, a net may correspond to a plurality of wires, and operation S510may be performed on each of the plurality of wires. For example, it may be assumed that a first wire delay δ_m on the m-th metal layer is proportional to a length λ_m of the wire on the m-th metal layer and a unit delay τ_m of the m-th metal layer. Thus, the first wire delay δ_m of the wire on the m-th metal layer may be represented as Equation 7 below.

$\delta_m = \beta \times \tau_m \times \lambda_m$ (Equation 7)

For example, it may be said that a first total wire delay δ_total corresponding to the net N4(ofFIG.7) corresponds to the sum of the first wire delays δ_2 to δ_5 of the portions of the wire on the second to fifth metal layers M2to M5(i.e., δ_total = δ_2 + δ_3 + δ_4 + δ_5). Thus, the first total wire delay δ_total corresponding to the net N4may be represented as Equation 8 below.

$\delta_{total} = \beta \times (\tau_2\lambda_2 + \tau_3\lambda_3 + \tau_4\lambda_4 + \tau_5\lambda_5)$ (Equation 8)

From Equation 8, β may be represented as Equation 9 below.

$\beta = \dfrac{\delta_{total}}{\tau_2\lambda_2 + \tau_3\lambda_3 + \tau_4\lambda_4 + \tau_5\lambda_5} = \dfrac{\delta_{total}}{\sum_{r=1}^{M} \tau_r\lambda_r}$ (Equation 9)

From Equations 8 and 9, the first wire delay δ_m of the wire on the m-th metal layer may be represented as Equation 10 below.

$\delta_m = \dfrac{\delta_{total}}{\sum_{r=1}^{M} \tau_r\lambda_r} \times \tau_m\lambda_m$ (Equation 10)

In operation S520, a second wire delay is calculated based on a process variation of the wire. In some exemplary embodiments, a net may correspond to a plurality of wires, and operation S520may be performed on each of the plurality of wires. Specifically, an operation of calculating the second wire delay may be performed using various equations. In some exemplary embodiments, the second wire delay δ′_m of a portion of the wire on the m-th metal layer may be represented as Equation 11 below.

$\delta'_m = \dfrac{\delta_{total}}{\sum_{r=1}^{M} (\tau_r\sigma_r)\lambda_r} \times (\tau_m\sigma_m)\lambda_m$ (Equation 11)

Herein, σ_m is a metal RC variation scaling factor of the m-th metal layer and may be set by the user. Also, in some exemplary embodiments, Equation 11 may be modified by further considering physical information of a via corresponding to the net, for example, the number of vias or a type of the via, and the second wire delay δ′_m may be calculated by using the modified Equation 11. Also, in some exemplary embodiments, Equation 11 may be modified by further considering an RC variation of a via corresponding to the net, and the second wire delay δ′_m may be calculated by using the modified Equation 11.

In some exemplary embodiments, the second wire delay δ′_m of a portion of the wire on the m-th metal layer may be represented as Equation 12 below.

$\delta'_m = \dfrac{\delta_{total}}{\sum_{r=1}^{M} \tau_r\lambda_r} \times (\tau_m\sigma_m)\lambda_m$ (Equation 12)

Also, in some exemplary embodiments, Equation 12 may be modified by further considering physical information of a via corresponding to the net, for example, a scaling factor according to a type of the via or the number of vias, and the second wire delay δ′_m may be calculated by using the modified Equation 12. Also, in some exemplary embodiments, Equation 12 may be modified by further considering an RC variation of a via corresponding to the net, and the second wire delay δ′_m may be calculated by using the modified Equation 12.

In operation S530, a wire delay skew is calculated based on the first and second wire delays. In some exemplary embodiments, a net may correspond to a plurality of wires, and operation S530may be performed on each of the plurality of wires.
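Equations 10 and 12, together with the skew and slack expressions developed next (Equations 13 to 15), can be checked numerically. The sketch below is illustrative only; the variable names are assumptions, and the input values are those of the worked example described later with reference toFIGS.17A to17C(λ_1 = 40 μm, λ_2 = 60 μm, τ_1 = 19.3 fs/μm, τ_2 = 3.3 fs/μm, σ_1 = σ_2 = 1.15, and a 1 ps net delay).

```python
import math

# Net N1 of FIG. 15 / FIGS. 17A-17C: lengths and unit delays per metal layer.
lam = {1: 40.0, 2: 60.0}     # lambda_m in um (wire data D10')
tau = {1: 19.3, 2: 3.3}      # tau_m in fs/um (wire MHC spec D30)
sigma = {1: 1.15, 2: 1.15}   # sigma_m (RC variation scaling factors D40)
delta_total = 1000.0         # total net delay in fs (1 ps)

denom = sum(tau[r] * lam[r] for r in lam)  # sum_r tau_r * lambda_r = 970

# Equation 10: first (nominal) wire delay per layer.
delta = {m: delta_total * tau[m] * lam[m] / denom for m in lam}

# Equation 12: second wire delay, scaling only the numerator by sigma_m.
delta_p = {m: delta_total * tau[m] * sigma[m] * lam[m] / denom for m in lam}

# Equation 14 (below): wire delay skew per layer.
skew = {m: delta[m] - delta_p[m] for m in lam}

# Equation 15 (below): hold slack as negated root sum square of the skews.
slack = -math.sqrt(sum(s * s for s in skew.values()))

print(delta)    # {1: 795.9..., 2: 204.1...}   (FIG. 17A)
print(delta_p)  # {1: 915.3..., 2: 234.7...}   (FIG. 17B)
print(skew)     # {1: -119.4..., 2: -30.6...}  (FIG. 17C)
print(slack)    # about -123.3 fs
```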
For example, when the second wire delay δ′_m is calculated according to Equation 11, the wire delay skew Δ_m of a portion of the wire on the m-th metal layer may be represented as Equation 13 below based on the first and second wire delays δ_m and δ′_m.

$\Delta_m = \delta_m - \delta'_m = \dfrac{\delta_{total}}{\sum_{r=1}^{M} \tau_r\lambda_r} \times \tau_m\lambda_m - \dfrac{\delta_{total}}{\sum_{r=1}^{M} (\tau_r\sigma_r)\lambda_r} \times (\tau_m\sigma_m)\lambda_m$ (Equation 13)

For example, when the second wire delay δ′_m is calculated according to Equation 12, the wire delay skew Δ_m of a portion of the wire on the m-th metal layer may be represented as Equation 14 below based on the first and second wire delays δ_m and δ′_m.

$\Delta_m = \delta_m - \delta'_m = \dfrac{\delta_{total}}{\sum_{r=1}^{M} \tau_r\lambda_r} \times \tau_m\lambda_m - \dfrac{\delta_{total}}{\sum_{r=1}^{M} \tau_r\lambda_r} \times (\tau_m\sigma_m)\lambda_m$ (Equation 14)

For example, when a process variation occurs only in the m-th metal layer among all the metal layers and a variation does not occur in the other metal layers, the total wire delay skew may be equal to the wire delay skew Δ_m of the portion of the wire on the m-th metal layer.

In operation S540, a timing slack is calculated. For example, the timing slack of the net may be calculated based on the wire delay skews corresponding respectively to a plurality of wires corresponding to the net. In some exemplary embodiments, the timing slack may be calculated by applying a root sum square (RSS) method to the wire delay skews. For example, the timing slack may be calculated as Equation 15 below.

$\Delta = -1 \times \sqrt{\sum_{m=1}^{n} \Delta_m^2}$ (Equation 15)

In operation S550, it is determined whether a timing violation occurs according to the timing slack. In some exemplary embodiments, as a result of the determination, when a timing violation occurs, an action may be taken to remove the timing violation (S560). For example, in some exemplary embodiments, an engineering change order (ECO) may be performed to remove the timing violation. In some exemplary embodiments, when a timing violation occurs, a timing margin of the timing path may be additionally secured by using a timing engine such as clock tree synthesis (CTS). In some exemplary embodiments, when a timing violation occurs, a timing margin of the timing path may be additionally secured by using optimization in a P&R tool. In some exemplary embodiments, when a timing violation occurs, the metal routing may be modified. For example, the length of the wire on a metal layer may be modified, or a portion of the wire in one metal layer may be moved to another metal layer. As a result of the determination, when a timing violation does not occur, the timing analysis operation may be ended (S570).

FIG.14is a block diagram illustrating a computing system300for designing an integrated circuit according to an exemplary embodiment. Referring toFIG.14, an integrated circuit design system300may include a user device310, an integrated circuit design platform330, and a storage device350. For example, the integrated circuit design system300may perform an integrated circuit design operation including operations S110to S140ofFIG.1or operations S210and S220ofFIG.4. In the exemplary embodiment shown inFIG.14, at least one of the user device310, the integrated circuit design platform330, and the storage device350may be a separate device, and the user device310, the integrated circuit design platform330, and the storage device350may be connected through wireless/wired communication or a network. In some exemplary embodiments, at least one of the user device310, the integrated circuit design platform330, and the storage device350may be spaced apart from the others. The user device310may include a processor311and a user interface (UI)313.
The processor311may include one or more microprocessors or central processing units (CPUs) and may drive the integrated circuit design platform330according to a user input received through the user interface313. The integrated circuit design platform330may include a P&R module331, a PEX module333, and an STA module335as a set of computer-readable instructions for designing an integrated circuit. The P&R module331, the PEX module333, and the STA module335may correspond respectively to the P&R module231, the PEX module233, and the STA module235ofFIG.5. The storage device350may include a cell library DB351, a layout DB353, and a technology file DB355. The cell library DB351, the layout DB353, and the technology file DB355may correspond respectively to the cell library DB271, the layout DB273, and the technology (tech) file DB275ofFIG.6.

FIG.15illustrates an integrated circuit400according to an exemplary embodiment. Referring toFIG.15, the integrated circuit400may include a first cell410, a second cell420, a third cell430, a fourth cell440, a fifth cell450, and a sixth cell460, and a first net N1, a second net N2, a third net N3, a fourth net N4, and a fifth net N5. The first net N1includes a wire on a first metal layer M1and a wire on a second metal layer M2. The second net N2includes a wire on a third metal layer M3. The third net N3includes a wire on the third metal layer M3, and the fourth and fifth nets N4and N5each include a wire on a fourth metal layer M4.

A capture clock path CCP may include the first cell410and the second cell420, and the first net N1and the second net N2. For example, in the capture clock path CCP, a cell delay may be 3 ps, a net delay may be 3 ps, and a total delay may be 6 ps. A launch clock path LCP may include the third cell430and the third net N3. For example, in the launch clock path LCP, a cell delay may be 1 ps, a net delay may be 1 ps, and a total delay may be 2 ps. A data path DP may include the fourth cell440and the fifth cell450, and the fourth net N4and the fifth net N5. For example, in the data path DP, a cell delay may be 2 ps, a net delay may be 2 ps, and a total delay may be 4 ps. Hereinafter, a timing analysis operation on the integrated circuit400will be described with reference toFIGS.14to17C.

FIG.16illustrates wire data D10′ for the integrated circuit400ofFIG.15. Referring toFIG.16, the first net N1may correspond to the first and second metal layers M1and M2, the length of the wire on the first metal layer M1used to implement the first net N1may be λ_1, and the length of the wire on the second metal layer M2used to implement the first net N1may be λ_2. The second and third nets N2and N3may correspond to the third metal layer M3, the length of the wire on the third metal layer M3used to implement the second net N2may be λ_3a, and the length of the wire on the third metal layer M3used to implement the third net N3may be λ_3b. The fourth and fifth nets N4and N5may correspond to the fourth metal layer M4, the length of the wire on the fourth metal layer M4used to implement the fourth net N4may be λ_4a, and the length of the wire on the fourth metal layer M4used to implement the fifth net N5may be λ_4b.

FIGS.17A to17Cillustrate an example of a timing analysis for the integrated circuit400ofFIG.15according to an exemplary embodiment.
For example, a process variation may occur only in the first and second metal layers M1and M2among the first to fourth metal layers M1to M4corresponding to the first to fifth nets N1to N5included in the integrated circuit400, and a process variation may not occur in the third and fourth metal layers M3and M4. In this case, a wire delay may vary only in the first net N1implemented by the first and second metal layers M1and M2. For example, the resistance of each of the first and second metal layers M1and M2may increase by 15% in comparison with a modeled target value. Hereinafter, the timing analysis operation will be described with reference toFIGS.6and15to17C.

FIG.17Aillustrates an operation of calculating the first wire delay corresponding to the first net N1. The STA module235may receive the wire data D10′ including the length λ_1 of the wire on the first metal layer M1and the length λ_2 of the wire on the second metal layer M2from the P&R module231. For example, λ_1 may be 40 μm, and λ_2 may be 60 μm. Also, the STA module235may receive first unit delay information τ_1 and second unit delay information τ_2 included in a technology file. The first unit delay information τ_1 represents a delay per unit length of the first metal layer M1, and the second unit delay information τ_2 represents a delay per unit length of the second metal layer M2. For example, τ_1 may be 19.3 fs/μm, and τ_2 may be 3.3 fs/μm. As illustrated inFIG.15, when a delay of the first net N1is 1 ps, the first wire delay δ_1 of the first metal layer M1may be calculated as 795.9 fs (=1000×19.3×40/(19.3×40+3.3×60)) and the first wire delay δ_2 of the second metal layer M2may be calculated as 204.1 fs (=1000×3.3×60/(19.3×40+3.3×60)) from Equation 10 above.

FIG.17Billustrates an operation of calculating a second wire delay corresponding to the first net N1. The STA module235may receive a first RC variation scaling factor σ_1 and a second RC variation scaling factor σ_2 as a user input. When the resistance of each of the first and second metal layers M1and M2increases by 15% in comparison with a modeled target value, both the first and second RC variation scaling factors σ_1 and σ_2 may be 1.15. For example, the second wire delay may be calculated by using Equation 11 or 12 above. In this case, using Equation 12, the second wire delay δ′_1 of the wire of the first metal layer M1may be calculated as 915.3 fs, and the second wire delay δ′_2 of the wire of the second metal layer M2may be calculated as 234.7 fs. However, the inventive concept is not limited thereto, and an equation for calculating the second wire delay may vary according to various exemplary embodiments.

FIG.17Cillustrates an operation of calculating a wire delay skew corresponding to the first net N1. Referring toFIG.17C, a wire delay skew Δ_1 corresponding to the first metal layer M1is −119.4 fs, and a wire delay skew Δ_2 corresponding to the second metal layer M2is −30.6 fs. For example, a hold slack for the first net N1may be calculated by using Equation 15 above. Accordingly, the hold slack may be calculated as approximately −123.3 fs (= −√((−119.4)² + (−30.6)²)).

FIG.18is a flow diagram illustrating an integrated circuit design method according to an exemplary embodiment. Referring toFIG.18, the integrated circuit design method may correspond to an implementation example ofFIG.1as a method of performing a timing analysis of an integrated circuit in consideration of wire variations. The exemplary embodiment shown inFIG.18may correspond to a modified exemplary embodiment of the method illustrated inFIG.4.
Thus, the descriptions made above with reference toFIGS.4to17Cmay also be applied to the exemplary embodiment shown inFIG.18, and redundant descriptions thereof will be omitted for conciseness.

In operation S610, layout data of an integrated circuit is generated by placing and routing standard cells defining the integrated circuit. In some exemplary embodiments, in operation S610, wire data D10acorresponding to a net included in the integrated circuit may be further generated. The wire data D10amay include layer information of at least one wire corresponding to a net included in the layout of the integrated circuit and physical information of the at least one wire. For example, the wire data may include length information of a wire.

In operation S620, parasitic components are extracted from the layout data. In some exemplary embodiments, in operation S620, wire data D10bcorresponding to a net included in the integrated circuit may be further generated. The wire data D10bmay include layer information of at least one wire corresponding to a net included in the layout of the integrated circuit and physical information of the at least one wire. For example, the wire data may include length information of a wire. The wire data D10agenerated in operation S610and the wire data D10bgenerated in operation S620may be substantially equal to each other. Thus, in some exemplary embodiments, when the wire data D10ais generated in operation S610, the wire data D10bmay not be generated in operation S620. Also, in some exemplary embodiments, when the wire data D10bis generated in operation S620, the wire data D10amay not be generated in operation S610. In this manner, the wire data D10aand the wire data D10bmay be selectively generated.

In operation S630, timing analysis data reflecting process variations of the wire is generated by performing a timing analysis based on the physical information of the wire. In some exemplary embodiments, the physical information of the wire may be included in the wire data D10agenerated in operation S610. Thus, in operation S630, the physical information may be acquired from the wire data D10a. In some exemplary embodiments, the physical information of the wire may be included in the wire data D10bgenerated in operation S620. Thus, in operation S630, the physical information may be acquired from the wire data D10b. In some exemplary embodiments, a wire delay skew of the timing path may be calculated based on a time constant scaling factor according to the process variations of the wire, unit delay information representing a delay per unit length of the wire, and physical information thereof.

In some exemplary embodiments, the integrated circuit design method may further include an operation of performing an engineering change order (ECO) according to the timing analysis data. In some exemplary embodiments, in the design method, the CTS or optimization in the P&R tool may be performed again according to the timing analysis data.

FIG.19is a flow diagram illustrating a semiconductor device manufacturing method according to an exemplary embodiment. Referring toFIG.19, the semiconductor device manufacturing method may be divided into an integrated circuit design process and an integrated circuit manufacturing process.
The integrated circuit design process may include operations S710to S740, the integrated circuit manufacturing process may include operations S750and S760, and the integrated circuit manufacturing process may be performed in a semiconductor process module as an operation of manufacturing a semiconductor device according to an integrated circuit based on layout data. The semiconductor device manufacturing method according to the exemplary embodiment shown inFIG.19may manufacture a semiconductor device by performing the integrated circuit design method described above with reference toFIGS.1to18. Specifically, operations S710to S740may correspond respectively to operations S110to S140ofFIG.1, and redundant descriptions thereof will be omitted for conciseness.

In operation S750, a mask is generated. The mask may be generated based on the layout data. Specifically, optical proximity correction (OPC) may be first performed based on the layout data, and the OPC may refer to a process of modifying the layout by reflecting an error according to an optical proximity effect. Subsequently, a mask may be manufactured according to the layout modified based on the OPC results. In this case, a mask may be manufactured by using the layout reflecting the OPC, for example, the graphic data system (GDS) II data reflecting the OPC.

In operation S760, a semiconductor device including the integrated circuit is manufactured. The semiconductor device may be manufactured by using the mask. Specifically, a semiconductor device including the integrated circuit is formed by performing various semiconductor processes on a semiconductor substrate such as a wafer by using a plurality of masks. For example, a process using a mask may represent a patterning process based on a lithography process. By the patterning process, a desired pattern may be formed on a semiconductor substrate or a material layer. The semiconductor processes may include a deposition process, an etching process, an ion implantation process, and a cleaning process. Also, the semiconductor process may include a packaging process of mounting a semiconductor device on a PCB and sealing the same with a sealant, and may include a test process of testing a semiconductor device or a package.

FIG.20illustrates a computer-readable storage medium1000according to an exemplary embodiment. Referring toFIG.20, the storage medium1000may store a P&R program1100, an STA program1200, layout data1300, and wire data1400. The storage medium1000may be a computer-readable storage medium, and may include a storage medium that may be read by a computer while being used to provide instructions and/or data to the computer. For example, the computer-readable storage medium1000may include a magnetic or optical medium such as disk, tape, CD-ROM, DVD-ROM, CD-R, CD-RW, DVD-R, or DVD-RW, a volatile or nonvolatile memory such as RAM, ROM, or flash memory, a nonvolatile memory accessible through a USB interface, and a microelectromechanical system (MEMS). The computer-readable storage medium may be inserted into the computer, may be integrated into the computer, or may be connected with the computer through a communication medium such as a network and/or a wireless link.

The P&R program1100may include a plurality of instructions for performing a method of generating layout data of an integrated circuit by using a standard cell library according to the exemplary embodiments described above.
For example, the P&R program1100may be used to perform operation S120ofFIG.1, operation S210ofFIG.4, operations S440to S460ofFIG.12, operation S610ofFIG.18, or operation S720ofFIG.19. The STA program1200may include a plurality of instructions for performing a timing analysis method according to the exemplary embodiments described above. For example, the STA program1200may be used to perform operation S140ofFIG.1, operation S220ofFIG.4, operations S410to S430and S470to S490ofFIG.12, operations S310, S320, and S340to S360ofFIG.11, operations S510to S540ofFIG.13, operation S630ofFIG.18, or operation S740ofFIG.19.

The layout data1300may include physical information about the layout generated by the P&R operation. For example, the layout data1300may include the space values and the width values of conductive patterns constituting a signal net. The wire data1400may include layer information of at least one wire corresponding to each of the nets included in the integrated circuit and physical information of the at least one wire. Also, the wire data1400may include layer information of at least one via corresponding to each of the nets included in the integrated circuit and physical information of the at least one via. For example, the wire data1400may be generated by the P&R program1100. However, the inventive concept is not limited thereto, and the wire data1400may be generated by a parasitic extraction program. AlthoughFIG.20illustrates the layout data1300and the wire data1400separately, the inventive concept is not limited thereto. In some exemplary embodiments, the layout data1300may include the wire data1400.

The exemplary embodiments of the inventive concept have been described above with reference to the drawings. Although particular terms are used herein to describe the exemplary embodiments, they are merely used to describe the technical idea of the inventive concept and are not intended to limit the scope of the inventive concept as described in the following claims. Therefore, those of ordinary skill in the art will understand that various modifications and other equivalent embodiments may be derived therefrom. Thus, the spirit and scope of the inventive concept should be defined by the appended claims. While the inventive concept has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood that various changes in form and details may be made therein without departing from the spirit and scope of the following claims. | 68,625 |
11861282 | DETAILED DESCRIPTION The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components, values, operations, materials, arrangements, or the like, are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. Other components, values, operations, materials, arrangements, or the like, are contemplated. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.

Further, spatially relative terms, such as "beneath," "below," "lower," "above," "upper" and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly.

In various embodiments, a method, system, and structure correspond to an IC layout diagram of a cell including fin field-effect transistors (FinFETs) having differing numbers of fins. For a given cell height, including at least one FinFET having a greater number of fins than at least one other FinFET increases a driving ability of the associated IC device compared to approaches in which each FinFET of a cell includes a same number of fins. In some embodiments, the IC layout diagram includes a fin track arrangement configured to support placement of a variety of cells, including those with FinFETs having differing numbers of fins, thereby enabling the increased driving ability of multiple cells compared to approaches that do not include fin track arrangements configured to support placement of cells with FinFETs having differing numbers of fins.

FIG.1is a flowchart of a method100of operating an IC manufacturing system, in accordance with some embodiments. In some embodiments, operating the IC manufacturing system includes generating an IC layout diagram, e.g., an IC layout diagram200or300discussed below with respect toFIGS.2and3, corresponding to an IC structure, e.g., an IC structure900discussed below with respect toFIG.9, manufactured based on the generated IC layout diagram as part of an IC device. Non-limiting examples of IC devices include memory circuits, logic devices, processing devices, signal processing circuits, or the like. In some embodiments, some or all of method100is executed by a processor of a computer. In some embodiments, some or all of method100is executed by a processor1102of an IC layout diagram generation system1100, discussed below with respect toFIG.11.
Some or all of the operations of method100are capable of being performed as part of a design procedure performed in a design house, e.g., a design house1220discussed below with respect toFIG.12. In some embodiments, the operations of method100are performed in the order depicted inFIG.1. In some embodiments, the operations of method100are performed simultaneously and/or in an order other than the order depicted inFIG.1. In some embodiments, one or more operations are performed before, between, during, and/or after performing one or more operations of method100.

FIGS.2and3are depictions of non-limiting examples of respective IC layout diagrams200and300generated by executing one or more operations of method100as discussed below, in some embodiments. IC layout diagrams200and300are simplified for the purpose of illustration. In various embodiments, one or more of IC layout diagrams200and300includes features in addition to those depicted inFIGS.2and3, e.g., one or more transistor elements, power rails, isolation structures, wells, conductive elements, or the like.

Each ofFIGS.2and3further depicts an X direction and a Y direction perpendicular to the X direction. The X direction being depicted as horizontal with respect to the page and the Y direction being depicted as vertical are a non-limiting example for the purpose of illustration. In various embodiments, the X and Y directions are perpendicular to each other and have orientations other than those depicted inFIGS.2and3. The X direction includes a positive X direction depicted inFIGS.2and3and a negative X direction (not labeled) opposite to the positive X direction. The Y direction includes a positive Y direction depicted inFIGS.2and3and a negative Y direction (not labeled) opposite to the positive Y direction.

At operation110, in some embodiments, an IC layout diagram of a cell is received. In some embodiments, receiving the IC layout diagram of the cell is referred to as receiving the cell. In some embodiments, receiving the IC layout diagram of the cell is part of receiving one or more IC layout diagrams of a plurality of cells. In various embodiments, receiving the IC layout diagram of the cell includes receiving the IC layout diagram of a standard cell, a custom cell, an engineering change order (ECO) cell, a logic gate cell, a memory cell, or another type of cell or combination of cells capable of being defined in an IC layout diagram. In various embodiments, a logic gate cell includes one or more of an AND, OR, NAND, NOR, XOR, INV, AND-OR-Invert (AOI), OR-AND-Invert (OAI), MUX, flip-flop, BUFF, latch, delay, or clock device. In various embodiments, a memory cell includes one or more of a static random access memory (SRAM), a dynamic RAM (DRAM), a resistive RAM (RRAM), a magnetoresistive RAM (MRAM), or read only memory (ROM) cell, or another device capable of having multiple states representative of logical values.

Receiving the IC layout diagram of the cell includes receiving the cell including a pair of active regions. An active region, e.g., an active region AR1or AR2discussed below with respect toFIGS.2and3, is a region in an IC layout diagram included in a manufacturing process as part of defining an active area, also referred to as an oxide diffusion or definition (OD) in some embodiments, in a semiconductor substrate. An active area is a continuous section of the semiconductor substrate having either n-type or p-type doping that includes various semiconductor structures, including one or more fins of a FinFET in some embodiments.
In various embodiments, an active area is located within a well, i.e., either an n-well or a p-well, within the semiconductor substrate and/or is electrically isolated from other elements in the semiconductor substrate by one or more isolation structures, e.g., one or more shallow trench isolation (STI) structures. A fin is a raised, elongated portion of an active area extending in a first direction including one or more of an elementary semiconductor, e.g., silicon (Si) or germanium (Ge), a compound semiconductor, e.g., silicon germanium (SiGe), silicon carbide (SiC), gallium arsenide (GaAs), gallium phosphide (GaP), indium phosphide (InP), indium arsenide (InAs), or indium antimonide (InSb), or an alloy semiconductor, e.g., GaAsP, AlInAs, AlGaAs, GaInAs, GaInP, GaInAsP, or the like.

In some embodiments, an active area includes one or more source/drain (S/D) structures corresponding to one or more S/D regions within the active region used to define the active area. An S/D structure is a semiconductor structure within an active area, adjacent to or including portions of the one or more fins, and configured to have a doping type opposite to that of other portions of the active area. In some embodiments, an S/D structure is configured to have lower resistivity than other portions of the active area, e.g., by including one or more portions having doping concentrations greater than one or more doping concentrations otherwise present throughout the active area. In various embodiments, S/D structures include epitaxial regions of a semiconductor material, e.g., Si, SiGe, and/or SiC.

In some embodiments, receiving the IC layout diagram of the cell includes receiving the IC layout diagram including one or more of a gate region, a metal-like defined (MD) region, a conductive region, or a via region. A gate region, e.g., one of gate regions GR0-GR3depicted inFIGS.2and/or3, is a region in an IC layout diagram included in a manufacturing process as part of defining a gate structure overlying the semiconductor substrate. In the non-limiting examples depicted inFIGS.2and3, gate regions GR0-GR3have an orientation along the Y direction. As indicated inFIG.3, in some cases, a location at which a gate region intersects an active region in an IC layout diagram corresponds to a transistor, e.g., one of transistors P1, P2, N1, or N2, in the corresponding IC structure that includes the portion of the corresponding gate structure overlying the corresponding active area, portions of the active area below and partially surrounded by the gate structure, and S/D structures adjacent to the gate structure. In other cases, a gate region, e.g., one of gate regions GR0or GR3, intersects an active region, e.g., one of active regions AR1or AR2, at a location that does not correspond to a transistor, and the corresponding gate structure is referred to as a dummy gate structure in some embodiments.

A gate structure is a volume including one or more conductive segments including one or more conductive materials, e.g., polysilicon, one or more metals, and/or one or more other suitable materials, substantially surrounded by one or more insulating materials, e.g., silicon dioxide and/or one or more other suitable materials, the one or more conductive segments thereby being configured to control a voltage provided to underlying and adjacent dielectric layers.
In various embodiments, a dielectric layer includes one or more of silicon dioxide and/or a high-k dielectric material, e.g., a dielectric material having a k value higher than 3.8 or 7.0. In some embodiments, a high-k dielectric material includes aluminum oxide, hafnium oxide, lanthanum oxide, or another suitable material.

An MD region, e.g., one of MD regions MDR1-MDR5depicted inFIG.3, is a conductive region in an IC layout diagram included in a manufacturing process as part of defining an MD segment in and/or on a semiconductor substrate. In the non-limiting examples depicted inFIG.3, MD regions MDR1-MDR5have an orientation along the Y direction. In some embodiments, an MD segment includes a portion of at least one metal layer, e.g., a contact layer, overlying and contacting the substrate and having a thickness sufficiently small to enable formation of an insulation layer between the MD segment and an overlying metal layer, e.g., a metal zero layer. In various embodiments, an MD segment includes one or more of copper (Cu), silver (Ag), tungsten (W), titanium (Ti), nickel (Ni), tin (Sn), aluminum (Al) or another metal or material suitable for providing a low resistance electrical connection between IC structure elements, i.e., a resistance level below a predetermined threshold corresponding to one or more tolerance levels of a resistance-based effect on circuit performance.

In various embodiments, an MD segment includes a section of the semiconductor substrate and/or an epitaxial layer having a doping level, e.g., based on an implantation process, sufficient to cause the segment to have the low resistance level. In various embodiments, a doped MD segment includes one or more of silicon (Si), silicon-germanium (SiGe), silicon-carbide (SiC), boron (B), phosphorous (P), arsenic (As), gallium (Ga), a metal as discussed above, or another material suitable for providing the low resistance level. In some embodiments, an MD segment includes a dopant having a doping concentration of about 1×10¹⁶ per cubic centimeter (cm⁻³) or greater. In various embodiments, one or more MD regions, e.g., one or more of MD regions MDR1-MDR5, overlaps one or more active regions, e.g., one or both of active regions AR1or AR2, and the corresponding one or more MD segments includes at least a portion within the corresponding one or more active areas. In various embodiments, one or more MD segments abuts or includes some or all of one or more S/D structures in the corresponding one or more active areas.

A conductive region, e.g., one of conductive regions M0R or M1R depicted inFIG.3, is a conductive region in an IC layout diagram included in a manufacturing process as part of defining a segment of a conductive layer of the manufacturing process. A conductive segment, e.g., a polysilicon, metal zero, metal one, or metal two segment, is a portion of a corresponding polysilicon or metal layer, e.g., a metal zero, metal one, or metal two layer, that includes one or more of polysilicon, copper (Cu), silver (Ag), tungsten (W), titanium (Ti), nickel (Ni), tin (Sn), aluminum (Al) or another metal or material suitable for providing a low resistance electrical connection between IC structure elements.

A via region, e.g., one of via regions VR1-VR5depicted inFIG.3, is a region in an IC layout diagram included in a manufacturing process as part of defining a via structure configured to provide a low resistance electrical connection between conductive segments in two or more levels and/or layers of the manufacturing process.
Via structures include one or more of copper (Cu), silver (Ag), tungsten (W), titanium (Ti), nickel (Ni), tin (Sn), aluminum (Al) or another metal or material suitable for providing low resistance electrical connections between IC structure layers. Receiving the IC layout diagram of the cell includes receiving the active regions of the pair of active regions corresponding to different ones of the n-type or p-type doping. In some embodiments, receiving the active regions includes receiving each of the pair of active regions configured to define a same number of fins of one or more FinFETs extending in the first direction. In various embodiments, receiving the IC layout diagram of the cell includes receiving each of the pair of active regions configured to define one, two, or three fins of one or more FinFETs. In some embodiments, receiving each of the pair of active regions configured to define a same number of fins includes receiving each of the pair of active regions having a same height in a cell height direction perpendicular to the first direction. In some embodiments, receiving each of the pair of active regions having the same height includes receiving each of the pair of active regions having a height AH2discussed below with respect toFIGS.2and3. In some embodiments, receiving the IC layout diagram of the cell includes receiving the IC layout diagram of the cell from a cell library, i.e., a database or collection of electronic files configured to store and provide access to a plurality of IC layout diagrams of various cells. In some embodiments, receiving the IC layout diagram of the cell includes receiving the IC layout diagram of the cell from a cell library1120of IC layout generation system1100, discussed below with respect toFIG.11. In some embodiments, receiving the IC layout diagram of the cell includes receiving one or more electronic files containing data usable by an IC manufacturing system as part of an IC manufacturing flow, e.g., IC manufacturing system1200discussed below with respect toFIG.12. At operation120, in some embodiments, the n-type or p-type active region of the cell is determined to be the first active region, discussed below with respect to operation130. Determining whether the n-type or p-type active region is the first active region is based on a timing critical path of the cell. In some embodiments, the n-type active region is determined to be the first active region if the timing critical path includes one or more n-type transistors having a significant effect on timing-related cell performance, or the p-type active region is determined to be the first active region if the timing critical path includes one or more p-type transistors having a significant effect on timing-related cell performance. The significance of an effect on timing-related cell performance is based on one or more predetermined criteria, e.g., rise time, fall time, switching speed, circuit bandwidth, or the like. In various embodiments, determining whether the n-type or p-type active region is the first active region is performed by receiving user input and/or by executing one or more algorithms, e.g., one or more circuit simulations, based on a layout design corresponding to the IC layout diagram of the cell. 
In various embodiments, determining whether the n-type or p-type active region is the first active region is based on one or more manufacturing recipe parameters, one or more circuit performance specifications, and/or one or more circuit configuration criteria, e.g., parallel or series transistor arrangements. At operation130, the first active region is positioned along the cell height direction in the IC layout diagram, the first active region being one of the n-type or the p-type and including a first total number of fins. In some embodiments, positioning the first active region in the IC layout diagram is performed in conjunction with positioning the second active region in the IC layout diagram as discussed below with respect to operation140. In some embodiments, positioning the first active region in the IC layout diagram includes positioning the first active region in the IC layout diagram of the cell received in operation110. In some embodiments, positioning the first active region in the IC layout diagram includes creating a new IC layout diagram of a cell and positioning a newly created first active region in the newly created IC layout diagram of the cell. In some embodiments, positioning the first active region in the IC layout diagram includes positioning the first active region determined by performing operation120. In some embodiments, positioning the first active region in the IC layout diagram includes positioning an active region otherwise designated as the first active region, e.g., based on a user input. The first active region including the first total number of fins includes the first active region having a predetermined total number of fins. The predetermined total number of fins included in a given active region is based on various manufacturing design criteria, e.g., a combination of IC feature sizes and circuit performance specifications. In various embodiments, the predetermined total number of fins included in the first active region is equal to two, three, or four fins. Positioning the first active region includes positioning the first active region having a first height in the cell height direction. In some embodiments, positioning the first active region having the first height includes the first height corresponding to the first total number of fins. In some embodiments, positioning the first active region having the first height includes increasing a height of an active region of the IC layout diagram received in operation110along the cell height direction. In some embodiments, positioning the first active region having the first height includes defining the first height of a newly created first active region in a newly created IC layout diagram of the cell in the cell height direction. In some embodiments, positioning the first active region includes positioning the first active region a first distance from a first cell border segment along the cell height direction. In some embodiments, positioning the first active region the first distance from the first cell border segment includes the first distance being greater than or equal to a first minimum spacing rule. In some embodiments, the first minimum spacing rule defines a minimum separation distance between an active region and a cell border in a given manufacturing recipe. Positioning the first active region the first distance from the first cell border segment is further discussed below with respect to operation140. 
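Operations120and130(and operation140, described next) lend themselves to a compact illustration. The Python sketch below is a loose rendering under stated assumptions, not the claimed method: the `ActiveRegion` fields, the fin-count-to-height mapping, and all numeric values in the usage example are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ActiveRegion:
    doping: str       # "n" or "p"
    fins: int         # predetermined total number of fins
    height: float     # nm, along the cell height direction
    y_offset: float   # nm, measured from the first cell border segment

def place_regions(cell_height, fin_heights, min_border_sp,
                  first_doping, first_fins, second_fins):
    """Sketch of operations 130/140: position the two active regions.

    first_doping reflects the outcome of operation 120 (the doping type on
    the timing critical path); fin_heights maps a fin count to a region
    height, an assumed stand-in for the manufacturing design criteria.
    """
    second_doping = "p" if first_doping == "n" else "n"
    first = ActiveRegion(first_doping, first_fins,
                         fin_heights[first_fins], min_border_sp)
    # The second region sits the same minimum distance from the opposite border.
    h2 = fin_heights[second_fins]
    second = ActiveRegion(second_doping, second_fins, h2,
                          cell_height - min_border_sp - h2)
    d2 = second.y_offset - (first.y_offset + first.height)  # region separation
    return first, second, d2

# Hypothetical values: 240 nm cell, 3-fin first region, 2-fin second region.
first, second, d2 = place_regions(240.0, {2: 42.0, 3: 64.0}, 30.0, "p", 3, 2)
print(first.y_offset, second.y_offset, d2)  # 30.0 168.0 74.0
```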
FIG.2depicts IC layout diagram200of a cell200C including a boundary BR, a cell height CH in the Y direction, active region AR1including fins F1-F3extending in the X direction, active region AR2including fins F4and F5extending in the X direction, and gate regions GR1and GR2extending in the Y direction and intersecting each of active regions AR1and AR2, thereby defining, at least in part, one or more transistors (not labeled) of cell200C. In various embodiments, active region AR1is a p-type active region and active region AR2is an n-type active region, or active region AR1is an n-type active region and active region AR2is a p-type active region. In various embodiments, cell200C has a configuration, e.g., a logic gate, that includes one or more features, e.g., MD, via, and/or conductive regions, in addition to those depicted inFIG.2that are not shown for the purpose of illustration. In some embodiments, positioning the first active region in the IC layout diagram includes positioning active region AR1, including three fins F1-F3and having a height AH1in the Y direction, a distance D1along the Y direction from a border segment B1of border BR of cell200C, as further discussed below with respect to operation140. FIG.3depicts IC layout diagram300of a cell300C including boundary BR, p-type active region AR1including fins F1-F3(not shown inFIG.3), n-type active region AR2including fins F4and F5(not shown inFIG.3), gate regions GR0-GR3intersecting each of active regions AR1and AR2, MD regions MDR1-MDR5, via regions VR1-VR5, and conductive regions M0R and M1R. P-type transistor P1includes gate region GR1, the portion of active region AR1overlapped by gate region GR1, and S/D regions (not labeled) of active region AR1adjacent to gate region GR1; p-type transistor P2includes gate region GR2, the portion of active region AR1overlapped by gate region GR2, and S/D regions (not labeled) of active region AR1adjacent to gate region GR2; n-type transistor N1includes gate region GR1, the portion of active region AR2overlapped by gate region GR1, and S/D regions (not labeled) of active region AR2adjacent to gate region GR1; and n-type transistor N2includes gate region GR2, the portion of active region AR2overlapped by gate region GR2, and S/D regions (not labeled) of active region AR2adjacent to gate region GR2. Gate region GR1overlaps the portion of active region AR1corresponding to transistor P1, the portion of active region AR2corresponding to transistor N1, and via region VR2, thereby partially defining an input node (not otherwise shown) configured to be electrically connected through a via defined by via region VR2. Gate region GR2overlaps the portion of active region AR1corresponding to transistor P2, the portion of active region AR2corresponding to transistor N2, and via region VR3, thereby partially defining an input node (not otherwise shown) configured to be electrically connected through a via defined by via region VR3. MD region MDR1overlaps an S/D region of active region AR1between gate regions GR0and GR1and corresponding to transistor P1, thereby partially defining a conductive path (not otherwise shown) between transistor P1and a power supply voltage source (not shown). The S/D region of active region AR1between gate regions GR1and GR2is shared by transistors P1and P2, thereby defining a series connection between transistors P1and P2. 
The S/D region of active region AR1between gate regions GR2and GR3and corresponding to transistor P2is overlapped by MD region MDR2, via region VR1, and conductive region M1R. Conductive region M1R also overlaps via region VR5and conductive region M0R, which overlaps via region VR4, MD region MDR4, and the S/D region of active region AR2between gate regions GR1and GR2shared by transistors N1and N2. MD regions MDR2and MDR4, via regions VR1, VR4, and VR5, and conductive regions M0R and M1R thereby partially define an output node (not otherwise shown) including transistors P2, N1, and N2and configured to be electrically connected through a metal segment defined by conductive region M1R.

MD region MDR3overlaps an S/D region of active region AR2between gate regions GR0and GR1and corresponding to transistor N1, thereby partially defining a conductive path (not otherwise shown) between transistor N1and a power supply voltage, or ground, reference (not shown). MD region MDR5overlaps an S/D region of active region AR2between gate regions GR2and GR3and corresponding to transistor N2, thereby partially defining a conductive path (not otherwise shown) between transistor N2and the power supply voltage reference.

By the configuration depicted inFIG.3and discussed above, IC layout diagram300of cell300C corresponds to a NOR gate including gates of transistors P1and N1arranged as a first input, gates of transistors P2and N2arranged as a second input, transistors P1and P2connected in series between the power supply voltage and the output node, and transistors N1and N2connected in parallel between the output node and the power supply voltage reference. In some embodiments, positioning the first active region in the IC layout diagram includes positioning active region AR1having height AH1in cell300C, as further discussed below with respect to operation140.

At operation140, a second active region is positioned along the cell height direction in the IC layout diagram, the second active region being the other of the n-type or the p-type and including a second total number of fins less than the first total number of fins. In some embodiments, positioning the second active region in the IC layout diagram is performed in conjunction with positioning the first active region in the IC layout diagram as discussed above with respect to operation130. In some embodiments, positioning the second active region in the IC layout diagram includes positioning the second active region in the IC layout diagram of the cell received in operation110. In some embodiments, positioning the second active region in the IC layout diagram includes positioning a newly created second active region in the IC layout diagram of the cell newly created in operation130. In some embodiments, positioning the second active region in the IC layout diagram includes positioning the second active region determined along with determining the first active region by performing operation120. In some embodiments, positioning the second active region in the IC layout diagram includes positioning an active region otherwise designated as the second active region, e.g., based on a user input.

The second active region including the second total number of fins includes the second active region having a predetermined total number of fins. The predetermined total number of fins included in the second active region is less than the predetermined total number of fins included in the first active region.
In some embodiments, a difference between the predetermined numbers of fins in the first and second active regions is equal to one. In various embodiments, the predetermined total number of fins included in the second active region is equal to one, two, or three fins. Positioning the second active region includes positioning the second active region having a second height in the cell height direction smaller than the first height of the first active region. In some embodiments, positioning the second active region having the second height includes the second height corresponding to the second total number of fins. In some embodiments, positioning the second active region having the second height includes maintaining a height of an active region of the IC layout diagram received in operation 110. In some embodiments, positioning the second active region having the second height includes defining the second height in the cell height direction of a newly created second active region in the IC layout diagram of the cell newly created in operation 130.

In some embodiments, positioning the second active region includes positioning the second active region a second distance along the cell height direction from a second cell border segment opposite the first cell border segment. In some embodiments, positioning the second active region the second distance from the second cell border segment includes the second distance being greater than or equal to the first minimum spacing rule. In some embodiments, positioning the second active region the second distance from the second cell border segment includes the second distance being equal to the first distance between the first active region and the first cell border segment discussed above with respect to operation 130. In some embodiments, one or both of positioning the first active region as discussed in operation 130 or positioning the second active region includes separating the first and second active regions by a third distance along the cell height direction. In some embodiments, separating the first and second active regions by the third distance includes the third distance being greater than or equal to a second minimum spacing rule. In some embodiments, the second minimum spacing rule defines a minimum separation distance between adjacent active regions in a given manufacturing recipe.

In some embodiments, a combination of the first and second active regions having the respective first and second heights, positioning the first active region the first distance from the first cell border segment, positioning the second active region the second distance from the second cell border segment, and separating the first and second active regions by the third distance includes a sum of the first and second heights and the first through third distances being equal to a height of the cell. In some embodiments, positioning the second active region in the IC layout diagram includes positioning active region AR2, having height AH2 in the Y direction corresponding to two fins F4 and F5, distance D1 along the Y direction from a border segment B2 of border BR in IC layout diagram 200 of cell 200C depicted in FIG. 2. In some embodiments, positioning one or both of the first or second active regions includes positioning one or both of active regions AR1 or AR2 separated by a distance D2 along the Y direction such that a sum of heights AH1 and AH2 and distances D1 (2×) and D2 is equal to cell height CH as depicted in FIG. 2.
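As a concrete illustration of this relation, the following Python sketch (hypothetical helper and variable names, assuming the second distance equals D1 as in the embodiment of FIG. 2) checks that the two active-region heights and the three spacings exactly fill a given cell height while satisfying the minimum spacing rules:

    def placement_fits(ah1, ah2, d1, d2, cell_height,
                       min_border_spacing, min_region_spacing):
        # Border spacings (D1, taken equal on both border segments) and the
        # inter-region spacing (D2) must satisfy their minimum spacing rules,
        # and the heights plus spacings must fill the cell height exactly.
        return (d1 >= min_border_spacing
                and d2 >= min_region_spacing
                and ah1 + ah2 + 2 * d1 + d2 == cell_height)

    # Representative values in nm, drawn from the ranges given below.
    assert placement_fits(ah1=75, ah2=45, d1=30, d2=60, cell_height=240,
                          min_border_spacing=30, min_region_spacing=50)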
In some embodiments, distance D1 and/or the first minimum spacing rule have one or more values ranging from 10 nanometers (nm) to 50 nm. In some embodiments, distance D1 and/or the first minimum spacing rule have one or more values ranging from 25 nm to 40 nm. In some embodiments, distance D2 and/or the second minimum spacing rule have one or more values ranging from 20 nm to 120 nm. In some embodiments, distance D2 and/or the second minimum spacing rule have one or more values ranging from 50 nm to 100 nm. In some embodiments, height AH1 has a value ranging from 30 nm to 100 nm. In some embodiments, height AH1 has a value ranging from 45 nm to 85 nm. In some embodiments, height AH2 has a value ranging from 20 nm to 65 nm. In some embodiments, height AH2 has a value ranging from 35 nm to 50 nm. In some embodiments, height CH has a value ranging from 100 nm to 400 nm. In some embodiments, height CH has a value ranging from 200 nm to 300 nm.

In the embodiment depicted in FIG. 2, distance D1 is equal to the first minimum spacing rule, distance D2 is greater than or equal to the second minimum spacing rule, and height AH1 is greater than height AH2 by a height difference DAH. In some embodiments, height difference DAH has a value ranging from 5 nm to 50 nm. In some embodiments, height difference DAH has a value ranging from 10 nm to 35 nm. Height difference DAH thereby represents a difference between distance D2 and a larger distance D2+DAH that would otherwise separate active regions AR1 and AR2 if each of active regions AR1 and AR2 were to have height AH2 corresponding to two fins. Conversely, if each of active regions AR1 and AR2 were to have height AH1 corresponding to three fins, height difference DAH would represent the difference between distance D2 and a shorter distance D2−DAH that would otherwise separate active regions AR1 and AR2. In the embodiment depicted in FIG. 2, the shorter distance D2−DAH is less than the second minimum spacing rule, such that positioning each of active regions AR1 and AR2 having height AH1 in cell 200C is not possible without violating the first or second minimum spacing rule and/or increasing cell height CH.

In the embodiment depicted in FIG. 2, the first and second minimum spacing rules, heights AH1 and AH2, cell height CH, and distances D1 and D2 are thereby related such that, for the given cell height CH, the total number of five fins (fins F1-F3 in active region AR1 plus fins F4 and F5 in active region AR2) is a maximum total number of fins capable of being included in regions AR1 and AR2 positioned in IC layout diagram 200 of cell 200C in operations 130 and 140. In various embodiments, cells other than cell 200C are similarly based on minimum spacing rules and include heights and distances configured such that, for a given cell height, maximum total numbers of three, five, or seven fins are capable of being included in first and second active regions positioned in IC layout diagrams of the cells in operations 130 and 140.

In some embodiments, positioning the second active region in the IC layout diagram includes positioning active region AR2 having height AH2 in IC layout diagram 300 of cell 300C depicted in FIG. 3. Positioning active region AR2 having height AH2 in IC layout diagram 300 of cell 300C corresponds to each of n-type transistors N1 and N2 including a total of two fins, and positioning active region AR1 having height AH1 in IC layout diagram 300 of cell 300C in operation 130 corresponds to each of p-type transistors P1 and P2 including a total of three fins.
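The maximum-fin relation described above for cell 200C can also be illustrated with a short sketch. The Python function below is a simplified model (hypothetical names and values; it assumes an active-region height that grows linearly with fin count, which is one possible model rather than the disclosed recipe) that searches fin-count pairs for the two active regions and reports the largest total fitting a given cell height under the spacing rules:

    def max_total_fins(cell_height, d1, min_d2, fin_width, fin_pitch,
                       max_per_region=4):
        # Hypothetical model: a region with n fins has height
        # fin_width + fin_pitch * (n - 1).
        def height(n):
            return fin_width + fin_pitch * (n - 1)
        best = 0
        for n1 in range(1, max_per_region + 1):
            for n2 in range(1, max_per_region + 1):
                d2 = cell_height - 2 * d1 - height(n1) - height(n2)
                if d2 >= min_d2:           # inter-region spacing rule satisfied
                    best = max(best, n1 + n2)
        return best

    # With these hypothetical values, a 3-fin plus 2-fin split fits but a
    # 3-fin plus 3-fin split does not, matching the five-fin maximum above.
    assert max_total_fins(cell_height=240, d1=30, min_d2=50,
                          fin_width=10, fin_pitch=30) == 5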
In the embodiment depicted in FIG. 3, p-type transistors P1 and P2 are part of a timing critical path of the NOR gate corresponding to IC layout diagram 300 of cell 300C. Transistors P1 and P2 are thereby capable of having an increased driving current compared to approaches in which p-type transistors of a NOR gate include fewer fins. In some embodiments, by including three fins in transistors P1 and P2, the NOR gate corresponding to IC layout diagram 300 of cell 300C has a switching speed increase of 10-12% compared to an approach in which similarly arranged p-type transistors include two fins. In various embodiments, IC layout diagrams of cells other than cell 300C corresponding to a NOR gate, e.g., cells corresponding to other NOR gate arrangements or NAND, OAI, AOI, or other logic gates, are otherwise configured such that one or more transistors in a timing critical path are capable of having increased driving current compared to approaches in which the one or more transistors have relatively fewer fins.

At operation 150, in some embodiments, third and fourth active regions are positioned in the cell along the cell height direction. Positioning the third active region includes positioning the third active region being the same type as the second active region and including a total number of fins the same as the total number of fins in the first active region. Positioning the fourth active region includes positioning the fourth active region being the same type as the first active region and including a total number of fins the same as the total number of fins in the second active region. Positioning the third and fourth active regions includes positioning the third active region between the second and fourth active regions. Because the first and fourth active regions are a same type, the second and third active regions are a same type, the first and third active regions include a same total number of fins, and the second and fourth active regions include a same total number of fins, positioning the third and fourth active regions causes the IC layout diagram of the cell to have equal total numbers of fins of each type, in some embodiments. In various embodiments, positioning the third and fourth active regions causes the IC layout diagram of the cell to have the total number of fins of each type equal to three, five, or seven. In some embodiments, the IC layout diagram of the cell including the third and fourth active regions positioned as discussed above is capable of being placed in an IC layout diagram including fin tracks corresponding to the first through fourth active regions, e.g., an IC layout diagram 700 discussed below with respect to method 400 and FIGS. 4 and 7.

At operation 160, in some embodiments, the IC layout diagram is generated and stored in a storage device. Generating the IC layout diagram is performed by a processor, e.g., processor 1102 of IC layout diagram generation system 1100 discussed below with respect to FIG. 11. In some embodiments, generating the IC layout diagram includes generating some or all of IC design layout diagram 1222 discussed below with respect to FIG. 12. In various embodiments, storing the IC layout diagram in the storage device includes storing the IC layout diagram in a non-volatile, computer-readable memory or a cell library, e.g., a database, and/or includes storing the IC layout diagram over a network.
In various embodiments, storing the IC layout diagram in the storage device includes storing the IC layout diagram in cell library 1120 or over network 1114 of IC layout diagram generation system 1100, discussed below with respect to FIG. 11. In various embodiments, generating and storing the IC layout diagram includes generating and storing one or more of IC layout diagrams 200 or 300 discussed above with respect to FIGS. 2 and 3 or IC layout diagrams 500-800 discussed below with respect to FIGS. 4-8.

At operation 170, in some embodiments, at least one of one or more semiconductor masks, or at least one component in a layer of a semiconductor IC, is fabricated based on the IC layout diagram. Fabricating one or more semiconductor masks or at least one component in a layer of a semiconductor IC is discussed below with respect to IC manufacturing system 1200 and FIG. 12. In various embodiments, fabricating one or more semiconductor masks or at least one component in the layer of the semiconductor IC is based on one or more of IC layout diagrams 200 or 300 discussed above with respect to FIGS. 2 and 3 or IC layout diagrams 500-800 discussed below with respect to FIGS. 4-8. In some embodiments, fabricating one or more semiconductor masks or at least one component in the layer of the semiconductor IC is part of a method 1000 of manufacturing an IC structure discussed below with respect to FIG. 10.

At operation 180, in some embodiments, one or more manufacturing operations are performed based on the IC layout diagram. In some embodiments, performing one or more manufacturing operations includes performing one or more lithographic exposures based on the IC layout diagram. Performing one or more manufacturing operations, e.g., one or more lithographic exposures, based on the IC layout diagram is discussed below with respect to FIG. 12. In various embodiments, performing one or more manufacturing operations is based on one or more of IC layout diagrams 200 or 300 discussed above with respect to FIGS. 2 and 3 or IC layout diagrams 500-800 discussed below with respect to FIGS. 4-8. In some embodiments, performing the one or more manufacturing operations is part of method 1000 of manufacturing an IC structure discussed below with respect to FIG. 10.

By executing some or all of the operations of method 100, an IC layout diagram, e.g., one of IC layout diagrams 200 or 300, is generated in which a cell includes at least one FinFET having a greater number of fins than at least one other FinFET in the cell. For a given cell height, the differing number of fins enables an increased driving ability of an associated IC device compared to approaches in which each FinFET of a cell includes a same number of fins. Further, the relative increase in the total number of fins, and thereby driving ability, is achieved without increasing cell area compared to approaches in which each FinFET of a cell includes a same number of fins.

FIG. 4 is a flowchart of a method 400 of operating an IC manufacturing system, in accordance with some embodiments. In some embodiments, operating the IC manufacturing system includes generating an IC layout diagram, e.g., one of IC layout diagrams 500-800 discussed below with respect to FIGS. 5-8, corresponding to an IC structure, e.g., IC structure 900 discussed below with respect to FIG. 9, manufactured based on the generated IC layout diagram as part of an IC device. In some embodiments, some or all of method 400 is executed by a processor of a computer.
In some embodiments, some or all of method 400 is executed by processor 1102 of an IC layout diagram generation system 1100, discussed below with respect to FIG. 11. Some or all of the operations of method 400 are capable of being performed as part of a design procedure performed in a design house, e.g., design house 1220 discussed below with respect to FIG. 12. In some embodiments, the operations of method 400 are performed in the order depicted in FIG. 4. In some embodiments, the operations of method 400 are performed simultaneously and/or in an order other than the order depicted in FIG. 4. In some embodiments, one or more operations are performed before, between, during, and/or after performing one or more operations of method 400.

FIGS. 5-8 are depictions of non-limiting examples of corresponding IC layout diagrams 500-800 generated by executing one or more operations of method 400 as discussed below, in some embodiments. IC layout diagrams 500-800 are simplified for the purpose of clarity. In various embodiments, one or more of IC layout diagrams 500-800 includes features in addition to those depicted in FIGS. 5-8, e.g., one or more transistor elements, power rails, isolation structures, wells, conductive elements, or the like. Each of FIGS. 5-8 further depicts the X and Y directions discussed above with respect to FIGS. 2 and 3.

At operation 410, a first plurality of fin tracks is arranged into a first subset having a first number of fin tracks corresponding to a first type, and a second subset having a second number of fin tracks corresponding to a second type, the first number being greater than the second number. Arranging the first plurality of fin tracks includes arranging the first plurality of fin tracks extending in a first direction in an IC layout diagram. Fin tracks are lines in the IC layout diagram that define, at least in part, potential locations of FinFET fins and correspond to active regions usable to define p-type or n-type active areas as discussed above with respect to method 100 and FIGS. 1-3. In various embodiments, arranging the first plurality of fin tracks includes the first subset having fin tracks corresponding to the first type being p-type fins and the second subset having fin tracks corresponding to the second type being n-type fins, or includes the first subset having fin tracks corresponding to the first type being n-type fins and the second subset having fin tracks corresponding to the second type being p-type fins.

In some embodiments, arranging the first plurality of fin tracks includes the first number of fin tracks being greater than the second number of fin tracks by one. In various embodiments, arranging the first plurality of fin tracks includes the first subset having two, three, or four fin tracks. In various embodiments, arranging the first plurality of fin tracks includes the second subset having one, two, or three fin tracks. In some embodiments, arranging the first plurality of fin tracks includes arranging the first plurality of fin tracks corresponding to a first row of cells in the IC layout diagram.

In some embodiments, arranging the first plurality of fin tracks includes arranging fin tracks FT1-FT5 extending in the X direction in IC layout diagram 500 depicted in FIG. 5 and/or IC layout diagram 700 depicted in FIG. 7. Arranging fin tracks FT1-FT5 includes arranging fin tracks FT1-FT5 into first and second subsets corresponding to a subset S11 having the first number equal to three fin tracks FT1-FT3 and a subset S12 having the second number equal to two fin tracks FT4 and FT5.
In various embodiments, subset S11 corresponds to the first type being p-type fins and subset S12 corresponds to the second type being n-type fins, or subset S11 corresponds to the first type being n-type fins and subset S12 corresponds to the second type being p-type fins. Arranging fin tracks FT1-FT5 includes arranging fin tracks FT1-FT5 corresponding to a row R1 having cell height CH discussed above with respect to FIGS. 1-3.

At operation 420, a second plurality of fin tracks extending in the first direction is arranged into a first subset having the first number of fin tracks corresponding to the second type, and a second subset having the second number of fin tracks corresponding to the first type. Arranging the second plurality of fin tracks includes arranging the second plurality of fin tracks in the IC layout diagram. In some embodiments, arranging the second plurality of fin tracks includes arranging the second plurality of fin tracks corresponding to a second row of cells in the IC layout diagram.

In some embodiments, arranging the second plurality of fin tracks includes arranging fin tracks FT6-FT10 extending in the X direction in IC layout diagram 500 depicted in FIG. 5 and/or IC layout diagram 700 depicted in FIG. 7. Arranging fin tracks FT6-FT10 includes arranging fin tracks FT6-FT10 into first and second subsets corresponding to a subset S21 having the first number equal to three fin tracks FT6-FT8, and a subset S22 having the second number equal to two fin tracks FT9 and FT10. Subset S21 corresponds to the second type fins of subset S12, and subset S22 corresponds to the first type fins of subset S11. Arranging fin tracks FT6-FT10 includes arranging fin tracks FT6-FT10 corresponding to a row R2 having cell height CH.

At operation 430, the second subset of the first plurality of fin tracks is abutted with the first subset of the second plurality of fin tracks. Abutting the second subset of the first plurality of fin tracks with the first subset of the second plurality of fin tracks includes positioning a fin track of the second subset of the first plurality of fin tracks adjacent to a fin track of the first subset of the second plurality of fin tracks along a second direction perpendicular to the first direction. In some embodiments, abutting the second subset of the first plurality of fin tracks with the first subset of the second plurality of fin tracks includes abutting the first row with the second row. Abutting the second subset of the first plurality of fin tracks with the first subset of the second plurality of fin tracks includes an area in the IC layout diagram between the first and second pluralities of fin tracks being free from including a fin track. In various embodiments, abutting the second subset of the first plurality of fin tracks with the first subset of the second plurality of fin tracks includes the area in the IC layout diagram between the first and second pluralities of fin tracks including one or more features, e.g., a conductive region corresponding to a power rail or an MD region, other than a fin track.

In some embodiments, abutting the second subset of the first plurality of fin tracks with the first subset of the second plurality of fin tracks includes abutting subset S12 with subset S21 and abutting row R1 with row R2 by positioning fin track FT5 adjacent to fin track FT6 along the Y direction as depicted in FIGS. 5 and 7.
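The row arrangement and abutment of operations 410-430 can be sketched as follows; the tuple encoding below is hypothetical and chosen only for illustration:

    # Each row is a list of (track name, fin type) tuples ordered along the
    # Y direction. Row R2 mirrors the type order of row R1 so that abutting
    # the rows places n-type subsets S12 and S21 adjacent to one another.
    ROW_R1 = [("FT1", "p"), ("FT2", "p"), ("FT3", "p"),   # subset S11 (first number = 3)
              ("FT4", "n"), ("FT5", "n")]                 # subset S12 (second number = 2)
    ROW_R2 = [("FT6", "n"), ("FT7", "n"), ("FT8", "n"),   # subset S21 (first number = 3)
              ("FT9", "p"), ("FT10", "p")]                # subset S22 (second number = 2)

    def abut(row_a, row_b):
        # Abutment along the Y direction: no intervening fin track, and the
        # boundary tracks (here FT5 and FT6) are of the same type.
        assert row_a[-1][1] == row_b[0][1]
        return row_a + row_b

    tracks = abut(ROW_R1, ROW_R2)
    assert [t for _, t in tracks] == ["p", "p", "p", "n", "n",
                                      "n", "n", "n", "p", "p"]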
In some embodiments, abutting the second subset of the first plurality of fin tracks with the first subset of the second plurality of fin tracks includes abutting the second subset of the second plurality of fin tracks with a first subset of an additional plurality of fin tracks along the second direction, the additional plurality of fin tracks having a same configuration as the first plurality of fin tracks. In some embodiments, abutting the second subset of the first plurality of fin tracks with the first subset of the second plurality of fin tracks includes abutting a second subset of an additional plurality of fin tracks with the first subset of the first plurality of fin tracks along the second direction, the additional plurality of fin tracks having a same configuration as the second plurality of fin tracks. In some embodiments, abutting the second subset of the first plurality of fin tracks with the first subset of the second plurality of fin tracks includes abutting the first and second pluralities of fin tracks with one or more additional first and second pluralities of fin tracks along the second direction, thereby arranging the first and second pluralities of fin tracks within a pattern of repeated pluralities of first and second fin tracks.

At operation 440, in some embodiments, a third plurality of fin tracks is aligned with one of the first plurality of fin tracks or the second plurality of fin tracks along the first direction. Aligning the third plurality of fin tracks includes aligning the third plurality of fin tracks having a first subset having the second number of fin tracks and being the same type as the first subset of the one of the first or second pluralities of fin tracks, and a second subset having the second number of fin tracks and being the same type as the second subset of the one of the first or second pluralities of fin tracks. Aligning the third plurality of fin tracks includes aligning each fin track of the first subset of the third plurality of fin tracks with a corresponding fin track of the first subset of the one of the first or second pluralities of fin tracks along the first direction, and aligning each fin track of the second subset of the third plurality of fin tracks with a corresponding fin track of the second subset of the one of the first or second pluralities of fin tracks along the first direction.

Because the first subset of the third plurality of fin tracks has the second number of fin tracks less than the first number of fin tracks of the first subset of the one of the first or second pluralities of fin tracks, at least one fin track of the first subset of the one of the first or second pluralities of fin tracks does not align with a fin track of the first subset of the third plurality of fin tracks along the first direction. Aligning the first subset of the third plurality of fin tracks thereby includes generating a fin track discontinuity between the first subsets of the third plurality of fin tracks and the one of the first or second pluralities of fin tracks. In embodiments in which the first number of fin tracks is greater than the second number of fin tracks by one, generating the fin track discontinuity includes generating the fin track discontinuity based on a single fin track of the first subset of the one of the first or second pluralities of fin tracks not aligning with a fin track of the first subset of the third plurality of fin tracks along the first direction.
In some embodiments, generating the fin track discontinuity includes generating the fin track discontinuity based on more than one fin track of the first subset of the one of the first or second pluralities of fin tracks not aligning with a fin track of the first subset of the third plurality of fin tracks along the first direction. In some embodiments, aligning the third plurality of fin tracks includes separating the third plurality of fin tracks from the one of the first or second pluralities of fin tracks by a gap. Separating the third plurality of fin tracks from the one of the first or second pluralities of fin tracks by the gap corresponds to the fin track discontinuity between the first subsets of the one of the first or second pluralities of fin tracks and the third plurality of fin tracks. In some embodiments, separating the third plurality of fin tracks from the one of the first or second pluralities of fin tracks by the gap is part of conforming to one or more manufacturing recipe rules based on the fin track discontinuity. In some embodiments, the gap has a value ranging from 20 nm to 150 nm. In some embodiments, the gap has a value ranging from 50 nm to 100 nm.

In some embodiments, aligning the third plurality of fin tracks includes aligning fin tracks FT11-FT14 with fin tracks FT1-FT5 along the X direction in IC layout diagram 500 depicted in FIG. 5 and/or IC layout diagram 700 depicted in FIG. 7. Aligning fin tracks FT11-FT14 includes aligning fin tracks FT11-FT14 having first and second subsets corresponding to a subset S31 having the second number of two fin tracks FT11 and FT12 of the same type as subset S11, and a subset S32 having the second number of two fin tracks FT13 and FT14 of the same type as subset S12. Aligning subset S31 with subset S11 includes aligning fin track FT11 with fin track FT1 along the X direction and aligning fin track FT12 with fin track FT2 along the X direction. Aligning subset S32 with subset S12 includes aligning fin track FT13 with fin track FT4 along the X direction and aligning fin track FT14 with fin track FT5 along the X direction.

Because subset S11 has the first number of three fin tracks greater than the second number of two fin tracks of subset S31, fin track FT3 does not align with a fin track of subset S31 along the X direction, and aligning fin tracks FT11-FT14 with fin tracks FT1-FT5 generates a fin track discontinuity between subsets S11 and S31 at a gap G1. In the non-limiting example depicted in FIGS. 5 and 7, fin track FT3 not aligning with a fin track of subset S31 corresponds to fin track FT3 aligning along the X direction with a space between subsets S31 and S32. In various embodiments, a given fin track of the first subset of the one of the first or second pluralities of fin tracks not aligning along the first direction with a fin track of the first subset of the third plurality of fin tracks, e.g., fin track FT3 not aligning with a fin track of subset S31 along the X direction, corresponds to the given fin track aligning along the first direction with a space other than a space between the first and second subsets of the third plurality of fin tracks, i.e., a space between adjacent fin tracks of the first subset of the third plurality of fin tracks or a space outside a space occupied by the first and second subsets of the third plurality of fin tracks.
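The discontinuity just described can be illustrated with a minimal sketch, assuming a hypothetical encoding in which each plurality of fin tracks maps track names to Y positions in arbitrary units:

    def has_fin_track_discontinuity(left_tracks, right_tracks):
        # A discontinuity exists when some track in one plurality has no
        # track in the other plurality at the same Y position.
        return set(left_tracks.values()) != set(right_tracks.values())

    # FT1-FT5 (subsets S11 and S12) versus FT11-FT14 (subsets S31 and S32):
    # FT3 at y = 2 has no continuation, so the pluralities are separated by gap G1.
    S1 = {"FT1": 0, "FT2": 1, "FT3": 2, "FT4": 3, "FT5": 4}
    S3 = {"FT11": 0, "FT12": 1, "FT13": 3, "FT14": 4}
    assert has_fin_track_discontinuity(S1, S3)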
In some embodiments, aligning the third plurality of fin tracks with one of the first plurality of fin tracks or the second plurality of fin tracks along the first direction includes aligning the third plurality of fin tracks with the first plurality of fin tracks along the first direction and aligning a fourth plurality of fin tracks with the second plurality of fin tracks along the first direction, each of the third and fourth pluralities of fin tracks having a same configuration as discussed above. In some embodiments, the first and second pluralities of fin tracks are arranged within a pattern of repeated pluralities of first and second fin tracks, the third plurality of fin tracks is one of multiple third pluralities of fin tracks configured as discussed above, and aligning the third plurality of fin tracks includes aligning each of the multiple third pluralities of fin tracks with various ones of the pluralities of first and second fin tracks along the first direction.

At operation 450, in some embodiments, a cell is aligned with one of the first plurality of fin tracks, the second plurality of fin tracks, or the third plurality of fin tracks based on a cell type. Aligning the cell with the one of the first, second, or third pluralities of fin tracks includes aligning a fin configuration of the cell with the fin track arrangement of the one of the first, second, or third pluralities of fin tracks along the first direction. In some embodiments, the cell includes a fin configuration free from including a fin, and aligning the cell with the one of the first, second, or third pluralities of fin tracks includes aligning the cell in the gap between the third plurality of fin tracks and the one of the first or second pluralities of fin tracks. In some embodiments, aligning the cell with one of the first plurality of fin tracks, the second plurality of fin tracks, or the third plurality of fin tracks based on the cell type includes placing the cell in the IC layout diagram including the first, second, and/or third plurality of fin tracks. In some embodiments, placing the cell in the IC layout diagram is part of an automated placement and routing (APR) method as discussed below.

In some embodiments, aligning the cell with the one of the first, second, or third pluralities of fin tracks includes receiving an IC layout diagram of the cell. In some embodiments, receiving the IC layout diagram of the cell includes receiving the IC layout diagram of the cell from a cell library as discussed above with respect to method 100 and FIG. 1. In some embodiments, aligning the cell with the one of the first, second, or third pluralities of fin tracks includes receiving one or more of IC layout diagrams 200 or 300 and aligning one or more of cells 200C or 300C, each discussed above with respect to FIGS. 1-3. In some embodiments, aligning the cell with the one of the first, second, or third pluralities of fin tracks includes generating one or more cells, e.g., one or more of cells 200C or 300C. In some embodiments, aligning the cell with the one of the first, second, or third pluralities of fin tracks includes aligning one or more of cells 600A-600D of an IC layout diagram 600 depicted in FIG. 6. In the embodiment depicted in FIG. 6, IC layout diagram 600 includes each of cells 600A-600D. In various embodiments, IC layout diagram 600 includes a subset of cells 600A-600D and/or one or more cells (not shown) in addition to cells 600A-600D.
As depicted in FIG. 6, each of cells 600A-600D has cell height CH discussed above with respect to FIGS. 2 and 5. Cell 600A is free from including a fin configuration and accordingly has a total number of fins equal to zero. Cell 600B has a fin configuration that includes a first subset of two p-type fins FP1 and FP2 and a second subset of two n-type fins FN1 and FN2. Cell 600C has a fin configuration that includes a first subset of three p-type fins FP1-FP3 and a second subset of two n-type fins FN1 and FN2. Cell 600D has a fin configuration that includes a first subset of two p-type fins FP1 and FP2 and a second subset of three n-type fins FN1-FN3.

In some embodiments, aligning the cell with the one of the first, second, or third pluralities of fin tracks includes aligning cells 600A-600D with pluralities of fin tracks FT1-FT5, FT6-FT10, and FT11-FT14 corresponding to rows R1 and R2 discussed above with respect to FIG. 5, in IC layout diagram 700 depicted in FIG. 7. In the embodiment depicted in FIG. 7, each of subsets S11, S22, and S31 corresponds to the p-type and each of subsets S12, S21, and S32 corresponds to the n-type.

In the embodiment depicted in FIG. 7, aligning cell 600A is based on cell 600A having zero fins and includes placing cell 600A in gap G1 between fin tracks FT1-FT5 and fin tracks FT11-FT14, thereby placing cell 600A in row R1. Aligning cell 600B includes aligning p-type fins FP1 and FP2 with respective p-type fin tracks FT11 and FT12 and aligning n-type fins FN1 and FN2 with respective n-type fin tracks FT13 and FT14, thereby placing cell 600B in row R1. Aligning cell 600C includes aligning p-type fins FP1-FP3 with respective p-type fin tracks FT1-FT3 and aligning n-type fins FN1 and FN2 with respective n-type fin tracks FT4 and FT5, thereby placing cell 600C in row R1. Based on the configuration of cell 600D, aligning cell 600D includes inverting cell 600D with respect to the Y direction, thereby aligning n-type fins FN3-FN1 with respective n-type fin tracks FT6-FT8 and aligning p-type fins FP2 and FP1 with respective p-type fin tracks FT9 and FT10, thereby placing cell 600D in row R2.

In the embodiment depicted in FIG. 7, aligning the cell with the one of the first, second, or third pluralities of fin tracks further includes aligning a cell 710 with fin tracks FT1-FT10 corresponding to first and second pluralities of fin tracks. Cell 710 has a configuration that includes a height (not labeled) equal to twice cell height CH, a first subset of three p-type fins FP1-FP3, a second subset of two n-type fins FN1 and FN2, a third subset of three n-type fins FN3-FN5, and a fourth subset of two p-type fins FP4 and FP5. Aligning cell 710 with the first and second pluralities of fin tracks includes aligning p-type fins FP1-FP3 with respective p-type fin tracks FT1-FT3, aligning n-type fins FN1 and FN2 with respective n-type fin tracks FT4 and FT5, aligning n-type fins FN3-FN5 with respective n-type fin tracks FT6-FT8, and aligning p-type fins FP4 and FP5 with respective p-type fin tracks FT9 and FT10, thereby placing cell 710 in rows R1 and R2. Aligning cell 710 with the first and second pluralities of fin tracks thereby includes aligning cell 710 having a same total number (five) of p-type fins and n-type fins.

As illustrated by the non-limiting example depicted in FIG. 7, in some embodiments, the cell is one cell of a plurality of cells, and aligning the cell with the one of the first, second, or third pluralities of fin tracks includes aligning one or more cells of the plurality of cells with corresponding one or more of the first, second, or third pluralities of fin tracks.
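The alignment step, including the inversion applied to cell 600D, can be sketched as follows; the list encoding of fin types along the Y direction is hypothetical, and the prefix/suffix test is a simplification of the type-for-type matching described above:

    def place_cell(cell_fins, row_types):
        # Try the cell upright; if its fin types do not align with the row,
        # invert the cell with respect to the Y direction and retry, as with
        # cell 600D above. Returns the orientation used, or None if no fit.
        def fits(fins):
            n = len(fins)
            return fins == row_types[:n] or fins == row_types[-n:]
        if fits(cell_fins):
            return "upright"
        if fits(list(reversed(cell_fins))):
            return "inverted"
        return None

    R1_TYPES = ["p", "p", "p", "n", "n"]   # subsets S11 and S12
    R2_TYPES = ["n", "n", "n", "p", "p"]   # subsets S21 and S22

    assert place_cell(["p", "p", "p", "n", "n"], R1_TYPES) == "upright"   # cell 600C
    assert place_cell(["p", "p", "n", "n", "n"], R2_TYPES) == "inverted"  # cell 600D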
In various embodiments, the plurality of cells includes subsets corresponding to each of one or more fin configurations having zero or one or more fins, and aligning the one or more cells of the plurality of cells includes aligning each subset with a corresponding one of the one or more pluralities of fin tracks arranged as discussed above, thereby placing the plurality of cells in the IC layout diagram. In some embodiments, the fin configuration of a given cell, e.g., one of cells 600B-600D or 710, corresponds to a cell type based on timing criteria as discussed above with respect to method 100 and FIGS. 1-3, and aligning the given cell with the one of the first, second, or third pluralities of fin tracks is thereby based on the cell type and the timing criteria.

In some embodiments, placing the plurality of cells in the IC layout diagram includes placing the plurality of cells in IC layout diagram 800 depicted in FIG. 8. IC layout diagram 800 includes continuous areas 810, 830, 840, and 860, and gaps 820 and 850. Continuous areas 810 and 830 have differing fin track arrangements such that a fin track discontinuity is generated at gap 820 as discussed above, and continuous areas 840 and 860 have differing fin track arrangements such that a fin track discontinuity is generated at gap 850. In the embodiment depicted in FIG. 8, placing the plurality of cells in IC layout diagram 800 includes placing a first subset of the plurality of cells in continuous area 810 based on the fin configuration of the first subset matching the fin track arrangement of continuous area 810, placing a second subset of the plurality of cells in continuous area 830 based on the fin configuration of the second subset matching the fin track arrangement of continuous area 830, placing a third subset of the plurality of cells in continuous area 840 based on the fin configuration of the third subset matching the fin track arrangement of continuous area 840, placing a fourth subset of the plurality of cells in continuous area 860 based on the fin configuration of the fourth subset matching the fin track arrangement of continuous area 860, and placing a fifth subset of the plurality of cells in gaps 820 and 850 based on the fifth subset having a fin configuration including zero fins.

In some embodiments, some or all of aligning the cell with the one of the first plurality of fin tracks, the second plurality of fin tracks, or the third plurality of fin tracks based on a cell type, including placing the plurality of cells in the IC layout diagram, is part of an APR method performed by an APR system. In some embodiments, the APR method further includes some or all of operations 410 through 430. In various embodiments, the APR method includes one or a combination of a constructive algorithm, an iterative algorithm, or an integrated algorithm. In a constructive algorithm, operations of placing and routing are performed on a cell-by-cell basis. After an IC layout diagram has been updated to include placement of a given cell and its associated routing connections, an additional layout diagram revision includes placement of an additional cell and its associated routing connections. In an iterative algorithm, an initial IC layout diagram including multiple cells and associated routing connections is iteratively analyzed and revised based on circuit performance and trade-off criteria. In an integrated algorithm, circuit performance and trade-off criteria are applied as an IC layout diagram is being revised to include placement of a given cell and/or its routing connections.
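The subset-based placement described above for IC layout diagram 800 can be sketched as a simple bucketing step; the signature encoding and names below are hypothetical, and the sketch is not an APR implementation:

    def assign_cells_to_areas(cells, areas, gaps):
        # cells: cell name -> fin-configuration signature (tuple of fin types).
        # areas: continuous-area name -> fin track arrangement it provides.
        # Each cell is placed in a matching continuous area; zero-fin cells
        # are placed in the gaps.
        placement = {}
        for name, signature in cells.items():
            if not signature:              # zero-fin cell, e.g., cell 600A
                placement[name] = gaps[0]
                continue
            for area, arrangement in areas.items():
                if signature == arrangement:
                    placement[name] = area
                    break
        return placement

    areas = {"area810": ("p", "p", "p", "n", "n"), "area830": ("p", "p", "n", "n")}
    cells = {"cellX": ("p", "p", "p", "n", "n"),
             "cellY": ("p", "p", "n", "n"),
             "cellZ": ()}
    assert assign_cells_to_areas(cells, areas, ["gap820", "gap850"]) == {
        "cellX": "area810", "cellY": "area830", "cellZ": "gap820"}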
In various embodiments, method 400 includes one or more of operations 160-180, each discussed above with respect to method 100 and FIG. 1. By executing some or all of the operations of method 400, an IC layout diagram, e.g., one of IC layout diagrams 500-800, is generated in which pluralities of fin tracks have an arrangement capable of supporting placement of a variety of cells, including those with FinFETs having differing numbers of fins. IC layout diagrams including the pluralities of fin tracks thereby enable the manufacture of IC devices including the increased driving ability of the cells with FinFETs having differing numbers of fins compared to approaches that do not include fin track arrangements configured to support placement of cells with FinFETs having differing numbers of fins. Further, in the various embodiments, by executing some or all of the operations of method 400, IC layout diagrams are generated in which the pluralities of fin tracks support placement of cells in addition to those with FinFETs having differing numbers of fins, e.g., cells 600B and/or 710. The corresponding cell placement operations and resultant IC layout diagrams thereby efficiently integrate the cells including FinFETs having differing numbers of fins with those including FinFETs having the same number of fins.

FIG. 9 is a diagram of an IC structure 900, in accordance with some embodiments. IC structure 900 is formed by executing some or all of the operations of methods 100 and/or 400 and is configured in accordance with one or more of IC layout diagrams 200, 300, or 500-800, discussed above with respect to FIGS. 1-8. In some embodiments, IC structure 900 is formed in accordance with method 1000 of manufacturing an IC structure discussed below with respect to FIG. 10. The depiction of IC structure 900 in FIG. 9 is simplified for the purpose of clarity. FIG. 9 depicts a plan view of IC structure 900 with various features included and excluded to facilitate the discussion below. FIG. 9 further depicts the X and Y directions, discussed above with respect to FIGS. 2 and 3.

As depicted in FIG. 9, IC structure 900 includes pluralities of fins PF1-PF6 located on a substrate 900S, and an IC device 900D including pluralities of fins PF1-PF6. In some embodiments, IC structure 900 does not include IC device 900D. Each of pluralities of fins PF1-PF6 includes one or more of a p-type or n-type fin extending in the X direction in an active area (not shown) in substrate 900S and configured in accordance with a FinFET manufacturing process as discussed above with respect to method 100 and FIGS. 1-3.

Plurality of fins PF1 is a first plurality of fins of a first type of the n-type or the p-type and corresponds to the first subset of the first plurality of fin tracks, e.g., subset S11 including fin tracks FT1-FT3, discussed above with respect to method 400 and FIGS. 4-7. Plurality of fins PF2 is a second plurality of fins of a second type of the n-type or the p-type, is parallel to and adjacent to plurality of fins PF1, and corresponds to the second subset of the first plurality of fin tracks, e.g., subset S12 including fin tracks FT4 and FT5 discussed above. Plurality of fins PF3 is a third plurality of fins of the second type, is parallel to and adjacent to the second plurality of fins, and corresponds to the first subset of the second plurality of fin tracks, e.g., subset S21 including fin tracks FT6-FT8 discussed above.
Plurality of fins PF4 is a fourth plurality of fins of the first type, is parallel to and adjacent to the third plurality of fins, and corresponds to the second subset of the second plurality of fin tracks, e.g., subset S22 including fin tracks FT9 and FT10 discussed above. Plurality of fins PF1 and plurality of fins PF3 have a same first number of fins, plurality of fins PF2 and plurality of fins PF4 have a same second number of fins, and the first number is greater than the second number. In the embodiment depicted in FIG. 9, the first number of fins is equal to three and the second number of fins is equal to two. In various embodiments, one or both of the first or second numbers of fins have respective values other than three and two in accordance with the embodiments discussed above with respect to method 400.

In the embodiment depicted in FIG. 9, IC structure 900 includes pluralities of fins PF5 and PF6. In some embodiments, IC structure 900 does not include one or both of pluralities of fins PF5 or PF6. Plurality of fins PF5 is a fifth plurality of fins of the first type and has the second number of fins. Plurality of fins PF5 corresponds to the first subset of the third plurality of fin tracks, e.g., subset S31 including fin tracks FT11 and FT12, discussed above and is accordingly aligned with a subset of the first plurality of fins PF1 and separated from plurality of fins PF1 by a fin discontinuity region 900G corresponding to gap G1 discussed above with respect to FIGS. 5 and 7. Plurality of fins PF6 is a sixth plurality of fins of the second type, is parallel to and adjacent to the fifth plurality of fins, and corresponds to the second subset of the third plurality of fin tracks, e.g., subset S32 including fin tracks FT13 and FT14 discussed above.

In some embodiments, IC structure 900 includes one or more pluralities of fins (not shown) in addition to pluralities of fins PF1-PF4 and, in some embodiments, pluralities of fins PF5 and PF6. In some embodiments, pluralities of fins PF1-PF4 are included in a repeating pattern of pluralities of fins in accordance with the discussion above with respect to method 400 and FIGS. 4-8.

IC device 900D is an IC device including IC structure 900 and one or more IC features, e.g., one or more FinFETs including one or more gates, configured in accordance with one or both of method 100 and IC layout diagrams 200 and 300 discussed above with respect to FIGS. 1-3, or method 400 and IC layout diagrams 500-800 discussed above with respect to FIGS. 4-8. Details of IC device 900D are not depicted in FIG. 9 for the purpose of illustration.

FIG. 10 is a flowchart of a method 1000 of manufacturing an IC structure, in accordance with some embodiments. Method 1000 is operable to form an IC structure, e.g., IC structure 900 discussed above with respect to FIG. 9. In some embodiments, method 1000 is usable by an IC manufacturing system as part of an IC manufacturing flow, e.g., IC manufacturing system 1200 discussed below with respect to FIG. 12. The sequence in which the operations of method 1000 are depicted in FIG. 10 is for illustration only; the operations of method 1000 are capable of being executed simultaneously and/or in sequences that differ from that depicted in FIG. 10. In some embodiments, operations in addition to those depicted in FIG. 10 are performed before, between, during, and/or after the operations depicted in FIG. 10.

At operation 1010, first through fourth parallel and adjacent pluralities of fins are formed.
In some embodiments, forming the first through fourth parallel and adjacent pluralities of fins corresponds to forming pluralities of fins PF1-PF4 discussed above with respect to FIG. 9. Forming a plurality of fins, e.g., one or more of pluralities of fins PF1-PF4, includes using one or more suitable processes, e.g., photolithography and/or etch processes. In some embodiments, the photolithography process includes forming a photoresist layer overlying a substrate, e.g., substrate 900S, exposing the photoresist layer to a pattern, performing a post-exposure bake process, and developing the photoresist layer to form a masking element including the photoresist layer. In some embodiments, the masking element is used to protect predetermined regions of the substrate while an etch process, e.g., a reactive ion etch, is used to form recesses in the substrate, leaving an extending fin.

At operation 1020, in some embodiments, fifth and sixth pluralities of fins are formed aligned with the first and second or third and fourth pluralities of fins. In some embodiments, forming the fifth and sixth pluralities of fins corresponds to forming pluralities of fins PF5 and PF6 discussed above with respect to FIG. 9.

At operation 1030, in some embodiments, an IC device is constructed including the first through fourth pluralities of fins. In some embodiments, constructing the IC device includes constructing IC device 900D discussed above with respect to FIG. 9.

The operations of method 1000 are usable to form an IC structure, e.g., IC structure 900, that includes first through fourth pluralities of fins arranged in accordance with method 400, and is thereby configured to have the properties, and thus the benefits, discussed above with respect to methods 100 and 400.

FIG. 11 is a block diagram of IC layout diagram generation system 1100, in accordance with some embodiments. In some embodiments, IC layout diagram generation system 1100 includes an electronic design automation (EDA) system. In some embodiments, IC layout diagram generation system 1100 includes or is part of an APR system. Methods described herein of designing IC layout diagrams representing fin arrangements, in accordance with one or more embodiments, are implementable, for example, using IC layout diagram generation system 1100, in accordance with some embodiments.

In some embodiments, IC layout diagram generation system 1100 is a general purpose computing device including processor 1102 and a non-transitory, computer-readable storage medium 1104. Computer-readable storage medium 1104, amongst other things, is encoded with, i.e., stores, computer program code 1106, i.e., a set of executable instructions. Execution of instructions 1106 by processor 1102 represents (at least in part) an IC layout diagram generation tool which implements a portion or all of, e.g., method 100 discussed above with respect to FIG. 1 and/or method 400 discussed above with respect to FIG. 4 (hereinafter, the noted processes and/or methods).

Processor 1102 is electrically coupled to computer-readable storage medium 1104 via a bus 1108. Processor 1102 is also electrically coupled to an I/O interface 1110 by bus 1108. A network interface 1112 is also electrically connected to processor 1102 via bus 1108. Network interface 1112 is connected to a network 1114, so that processor 1102 and computer-readable storage medium 1104 are capable of connecting to external elements via network 1114.
Processor 1102 is configured to execute computer program code 1106 encoded in computer-readable storage medium 1104 in order to cause IC layout diagram generation system 1100 to be usable for performing a portion or all of the noted processes and/or methods. In one or more embodiments, processor 1102 is a central processing unit (CPU), a multi-processor, a distributed processing system, an application specific integrated circuit (ASIC), and/or a suitable processing unit.

In one or more embodiments, computer-readable storage medium 1104 is an electronic, magnetic, optical, electromagnetic, infrared, and/or a semiconductor system (or apparatus or device). For example, computer-readable storage medium 1104 includes a semiconductor or solid-state memory, a magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and/or an optical disk. In one or more embodiments using optical disks, computer-readable storage medium 1104 includes a compact disk-read only memory (CD-ROM), a compact disk-read/write (CD-R/W), and/or a digital video disc (DVD).

In one or more embodiments, computer-readable storage medium 1104 stores computer program code 1106 configured to cause IC layout diagram generation system 1100 (where such execution represents (at least in part) the IC layout diagram generation tool) to be usable for performing a portion or all of the noted processes and/or methods. In one or more embodiments, computer-readable storage medium 1104 also stores information which facilitates performing a portion or all of the noted processes and/or methods. In one or more embodiments, computer-readable storage medium 1104 stores library 1120 of standard cells including IC layout diagrams as disclosed herein, e.g., one or more of IC layout diagrams 200, 300, or 500-800 discussed above with respect to FIGS. 1-8. In one or more embodiments, computer-readable storage medium 1104 stores one or more fin track arrangements 1122 as disclosed herein, e.g., as discussed above with respect to method 400 and FIGS. 4-8.

IC layout diagram generation system 1100 includes I/O interface 1110. I/O interface 1110 is coupled to external circuitry. In one or more embodiments, I/O interface 1110 includes a keyboard, keypad, mouse, trackball, trackpad, touchscreen, and/or cursor direction keys for communicating information and commands to processor 1102.

IC layout diagram generation system 1100 also includes network interface 1112 coupled to processor 1102. Network interface 1112 allows IC layout diagram generation system 1100 to communicate with network 1114, to which one or more other computer systems are connected. Network interface 1112 includes wireless network interfaces such as BLUETOOTH, WIFI, WIMAX, GPRS, or WCDMA; or wired network interfaces such as ETHERNET, USB, or IEEE-1394. In one or more embodiments, a portion or all of the noted processes and/or methods is implemented in two or more IC layout diagram generation systems 1100.

IC layout diagram generation system 1100 is configured to receive information through I/O interface 1110. The information received through I/O interface 1110 includes one or more of instructions, data, design rules, libraries of standard cells, and/or other parameters for processing by processor 1102. The information is transferred to processor 1102 via bus 1108. IC layout diagram generation system 1100 is configured to receive information related to a user interface (UI) through I/O interface 1110.
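As one possible illustration of how library 1120 and fin track arrangements 1122 might be organized for retrieval during placement (the structures and names below are hypothetical, not the disclosed storage format), a standard-cell library can be keyed by cell name, with each entry recording the cell's fin configuration:

    CELL_LIBRARY = {
        "NOR2_3P2N": {"fins": ("p", "p", "p", "n", "n")},  # e.g., cell 300C above
        "FILLER_0F": {"fins": ()},                         # zero-fin cell, e.g., 600A
    }
    FIN_TRACK_ARRANGEMENTS = {"rowR1": ("p", "p", "p", "n", "n")}

    def cells_for_arrangement(arrangement_name):
        # Return library cells whose fin configuration matches the named fin
        # track arrangement; zero-fin cells are compatible with any arrangement.
        target = FIN_TRACK_ARRANGEMENTS[arrangement_name]
        return [name for name, entry in CELL_LIBRARY.items()
                if entry["fins"] == target or not entry["fins"]]

    assert cells_for_arrangement("rowR1") == ["NOR2_3P2N", "FILLER_0F"]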
In some embodiments, a portion or all of the noted processes and/or methods is implemented as a standalone software application for execution by a processor. In some embodiments, a portion or all of the noted processes and/or methods is implemented as a software application that is a part of an additional software application. In some embodiments, a portion or all of the noted processes and/or methods is implemented as a plug-in to a software application. In some embodiments, at least one of the noted processes and/or methods is implemented as a software application that is a portion of an EDA tool. In some embodiments, a portion or all of the noted processes and/or methods is implemented as a software application that is used by IC layout diagram generation system 1100. In some embodiments, a layout diagram which includes standard cells is generated using a tool such as VIRTUOSO® available from CADENCE DESIGN SYSTEMS, Inc., or another suitable layout generating tool.

In some embodiments, the processes are realized as functions of a program stored in a non-transitory computer readable recording medium. Examples of a non-transitory computer readable recording medium include, but are not limited to, external/removable and/or internal/built-in storage or memory units, e.g., one or more of an optical disk, such as a DVD, a magnetic disk, such as a hard disk, a semiconductor memory, such as a ROM, a RAM, a memory card, and the like.

FIG. 12 is a block diagram of IC manufacturing system 1200, and an IC manufacturing flow associated therewith, in accordance with some embodiments. In some embodiments, based on a layout diagram, at least one of (A) one or more semiconductor masks or (B) at least one component in a layer of a semiconductor integrated circuit is fabricated using manufacturing system 1200.

In FIG. 12, IC manufacturing system 1200 includes entities, such as a design house 1220, a mask house 1230, and an IC manufacturer/fabricator ("fab") 1250, that interact with one another in the design, development, and manufacturing cycles and/or services related to manufacturing an IC device 1260. The entities in system 1200 are connected by a communications network. In some embodiments, the communications network is a single network. In some embodiments, the communications network is a variety of different networks, such as an intranet and the Internet. The communications network includes wired and/or wireless communication channels. Each entity interacts with one or more of the other entities and provides services to and/or receives services from one or more of the other entities. In some embodiments, two or more of design house 1220, mask house 1230, and IC fab 1250 are owned by a single larger company. In some embodiments, two or more of design house 1220, mask house 1230, and IC fab 1250 coexist in a common facility and use common resources.

Design house (or design team) 1220 generates an IC design layout diagram 1222. IC design layout diagram 1222 includes various geometrical patterns, e.g., one or more of IC layout diagrams 200, 300, or 500-800 discussed above with respect to FIGS. 1-8, designed for an IC device 1260, e.g., an IC device including IC structure 900 discussed above with respect to FIGS. 9 and 10. The geometrical patterns correspond to patterns of metal, oxide, or semiconductor layers that make up the various components of IC device 1260 to be fabricated. The various layers combine to form various IC features.
For example, a portion of IC design layout diagram 1222 includes various IC features, such as an active region, gate electrode, source and drain, metal lines or vias of an interlayer interconnection, and openings for bonding pads, to be formed in a semiconductor substrate (such as a silicon wafer) and various material layers disposed on the semiconductor substrate. Design house 1220 implements a proper design procedure to form IC design layout diagram 1222. The design procedure includes one or more of logic design, physical design, or place and route. IC design layout diagram 1222 is presented in one or more data files having information of the geometrical patterns. For example, IC design layout diagram 1222 can be expressed in a GDSII file format or DFII file format.

Mask house 1230 includes data preparation 1232 and mask fabrication 1244. Mask house 1230 uses IC design layout diagram 1222 to manufacture one or more masks 1245 to be used for fabricating the various layers of IC device 1260 according to IC design layout diagram 1222. Mask house 1230 performs mask data preparation 1232, where IC design layout diagram 1222 is translated into a representative data file ("RDF"). Mask data preparation 1232 provides the RDF to mask fabrication 1244. Mask fabrication 1244 includes a mask writer. A mask writer converts the RDF to an image on a substrate, such as a mask (reticle) 1245 or a semiconductor wafer 1253. The design layout diagram 1222 is manipulated by mask data preparation 1232 to comply with particular characteristics of the mask writer and/or requirements of IC fab 1250. In FIG. 12, mask data preparation 1232 and mask fabrication 1244 are illustrated as separate elements. In some embodiments, mask data preparation 1232 and mask fabrication 1244 can be collectively referred to as mask data preparation.

In some embodiments, mask data preparation 1232 includes optical proximity correction (OPC) which uses lithography enhancement techniques to compensate for image errors, such as those that can arise from diffraction, interference, other process effects and the like. OPC adjusts IC design layout diagram 1222. In some embodiments, mask data preparation 1232 includes further resolution enhancement techniques (RET), such as off-axis illumination, sub-resolution assist features, phase-shifting masks, other suitable techniques, and the like or combinations thereof. In some embodiments, inverse lithography technology (ILT) is also used, which treats OPC as an inverse imaging problem.

In some embodiments, mask data preparation 1232 includes a mask rule checker (MRC) that checks the IC design layout diagram 1222 that has undergone processes in OPC with a set of mask creation rules which contain certain geometric and/or connectivity restrictions to ensure sufficient margins, to account for variability in semiconductor manufacturing processes, and the like. In some embodiments, the MRC modifies the IC design layout diagram 1222 to compensate for limitations during mask fabrication 1244, which may undo part of the modifications performed by OPC in order to meet mask creation rules.

In some embodiments, mask data preparation 1232 includes lithography process checking (LPC) that simulates processing that will be implemented by IC fab 1250 to fabricate IC device 1260. LPC simulates this processing based on IC design layout diagram 1222 to create a simulated manufactured device, such as IC device 1260.
The processing parameters in LPC simulation can include parameters associated with various processes of the IC manufacturing cycle, parameters associated with tools used for manufacturing the IC, and/or other aspects of the manufacturing process. LPC takes into account various factors, such as aerial image contrast, depth of focus ("DOF"), mask error enhancement factor ("MEEF"), other suitable factors, and the like or combinations thereof. In some embodiments, after a simulated manufactured device has been created by LPC, if the simulated device is not close enough in shape to satisfy design rules, OPC and/or MRC are repeated to further refine IC design layout diagram1222. It should be understood that the above description of mask data preparation1232has been simplified for the purposes of clarity. In some embodiments, data preparation1232includes additional features such as a logic operation (LOP) to modify the IC design layout diagram1222according to manufacturing rules. Additionally, the processes applied to IC design layout diagram1222during data preparation1232may be executed in a variety of different orders. After mask data preparation1232and during mask fabrication1244, a mask1245or a group of masks1245are fabricated based on the modified IC design layout diagram1222. In some embodiments, mask fabrication1244includes performing one or more lithographic exposures based on IC design layout diagram1222. In some embodiments, an electron-beam (e-beam) or a mechanism of multiple e-beams is used to form a pattern on a mask (photomask or reticle)1245based on the modified IC design layout diagram1222. Mask1245can be formed in various technologies. In some embodiments, mask1245is formed using binary technology. In some embodiments, a mask pattern includes opaque regions and transparent regions. A radiation beam, such as an ultraviolet (UV) beam, used to expose the image sensitive material layer (e.g., photoresist) which has been coated on a wafer, is blocked by the opaque regions and transmits through the transparent regions. In one example, a binary mask version of mask1245includes a transparent substrate (e.g., fused quartz) and an opaque material (e.g., chromium) coated in the opaque regions of the binary mask. In another example, mask1245is formed using a phase shift technology. In a phase shift mask (PSM) version of mask1245, various features in the pattern formed on the phase shift mask are configured to have proper phase difference to enhance the resolution and imaging quality. In various examples, the phase shift mask can be attenuated PSM or alternating PSM. The mask(s) generated by mask fabrication1244is used in a variety of processes. For example, such a mask(s) is used in an ion implantation process to form various doped regions in semiconductor wafer1253, in an etching process to form various etching regions in semiconductor wafer1253, and/or in other suitable processes. IC fab1250includes wafer fabrication1252. IC fab1250is an IC fabrication business that includes one or more manufacturing facilities for the fabrication of a variety of different IC products. In some embodiments, IC Fab1250is a semiconductor foundry.
For example, there may be a manufacturing facility for the front end fabrication of a plurality of IC products (front-end-of-line (FEOL) fabrication), while a second manufacturing facility may provide the back end fabrication for the interconnection and packaging of the IC products (back-end-of-line (BEOL) fabrication), and a third manufacturing facility may provide other services for the foundry business. IC fab1250uses mask(s)1245fabricated by mask house1230to fabricate IC device1260. Thus, IC fab1250at least indirectly uses IC design layout diagram1222to fabricate IC device1260. In some embodiments, semiconductor wafer1253is fabricated by IC fab1250using mask(s)1245to form IC device1260. In some embodiments, the IC fabrication includes performing one or more lithographic exposures based at least indirectly on IC design layout diagram1222. Semiconductor wafer1253includes a silicon substrate or other proper substrate having material layers formed thereon. Semiconductor wafer1253further includes one or more of various doped regions, dielectric features, multilevel interconnects, and the like (formed at subsequent manufacturing steps). Details regarding an integrated circuit (IC) manufacturing system (e.g., system1200ofFIG.12), and an IC manufacturing flow associated therewith are found, e.g., in U.S. Pat. No. 9,256,709, granted Feb. 9, 2016, U.S. Pre-Grant Publication No. 20150278429, published Oct. 1, 2015, U.S. Pre-Grant Publication No. 20140040838, published Feb. 6, 2014, and U.S. Pat. No. 7,260,442, granted Aug. 21, 2007, the entireties of each of which are hereby incorporated by reference. In some embodiments, a method of manufacturing an IC structure includes forming a first plurality of fins extending in a first direction on a substrate, forming a second plurality of fins extending in the first direction on the substrate adjacent to the first plurality of fins, forming a third plurality of fins extending in the first direction on the substrate adjacent to the second plurality of fins, and forming a fourth plurality of fins extending in the first direction on the substrate adjacent to the third plurality of fins. Forming each fin of each of the first and fourth pluralities of fins includes forming one of an n-type fin or a p-type fin, forming each fin of each of the second and third pluralities of fins includes forming the other of the n-type fin or the p-type fin, forming each of the first and third pluralities of fins includes forming a first total number of fins, and forming each of the second and fourth pluralities of fins includes forming a second total number of fins fewer than the first total number of fins. In some embodiments, forming the second total number of fins fewer than the first total number of fins includes forming the second total number of fins one fewer than the first total number of fins. In some embodiments, forming the first through fourth pluralities of fins includes forming the first through fourth pluralities of fins in corresponding first through fourth active areas, the first and third active areas have a same first height, and the second and fourth active areas have a same second height less than the first height. 
In some embodiments, the first and second heights are first and second active area heights of a manufacturing process, a distance between a first fin of the first plurality of fins and a first fin of the third plurality of fins corresponds to a cell height of the manufacturing process, and a sum of the first and second total numbers of fins is a maximum total number of fins capable of being included in the cell height in accordance with first and second minimum spacing rules of the first and second active area heights. In some embodiments, the cell height has a value ranging from 200 nm to 300 nm. In some embodiments, forming the first through fourth pluralities of fins corresponds to forming a pattern of the first through fourth pluralities of fins, and forming the pattern is part of forming a plurality of patterns of the first through fourth pluralities of fins. In some embodiments, the method includes forming first through fourth transistors corresponding to the first through fourth pluralities of fins, wherein forming each of the first through fourth transistors includes forming S/D structures in each fin of the corresponding one of the first through fourth pluralities of fins. In some embodiments, a method of manufacturing an IC structure includes forming a first plurality of fins extending in a first direction on a substrate, forming a second plurality of fins extending in the first direction on the substrate adjacent to the first plurality of fins, forming a third plurality of fins extending in the first direction on the substrate adjacent to the second plurality of fins, and forming a fourth plurality of fins extending in the first direction on the substrate adjacent to the third plurality of fins. Forming each fin of each of the first and fourth pluralities of fins includes forming one of an n-type fin or a p-type fin, forming each fin of each of the second and third pluralities of fins includes forming the other of the n-type fin or the p-type fin, forming each of the first and third pluralities of fins includes forming a total of three fins, and forming each of the second and fourth pluralities of fins includes forming a total of two fins. In some embodiments, forming the total of three fins includes forming the total of three fins within a first active area height of a manufacturing process, and forming the total of two fins includes forming the total of two fins within a second active area height less than the first active area height of the manufacturing process. In some embodiments, a distance between a first fin of the first plurality of fins and a first fin of the third plurality of fins corresponds to a cell height of the manufacturing process, and a maximum total number of fins capable of being included in the cell height in accordance with first and second minimum spacing rules of the first and second active area heights is equal to five. In some embodiments, forming the first through fourth pluralities of fins corresponds to forming a pattern of the first through fourth pluralities of fins, and forming the pattern is part of forming a plurality of patterns of the first through fourth pluralities of fins. In some embodiments, the pattern has a height ranging from 200 nanometers (nm) to 300 nm. 
In some embodiments, the method includes forming first through fourth transistors corresponding to the first through fourth pluralities of fins, wherein forming each of the first through fourth transistors includes forming S/D structures in each fin of the corresponding one of the first through fourth pluralities of fins. In some embodiments, a method of manufacturing an IC structure includes forming a first plurality of fins extending in a first direction on a substrate, forming a second plurality of fins extending in the first direction on the substrate adjacent to the first plurality of fins, forming a third plurality of fins extending in the first direction on the substrate adjacent to the second plurality of fins, forming a fourth plurality of fins extending in the first direction on the substrate adjacent to the third plurality of fins, forming a fifth plurality of fins on the substrate aligned with a subset of the first plurality of fins and separated from the first plurality of fins by a fin discontinuity region, and forming a sixth plurality of fins on the substrate aligned with the second plurality of fins and separated from the second plurality of fins by the fin discontinuity region. Forming each fin of each of the first, fourth, and fifth pluralities of fins includes forming one of an n-type fin or a p-type fin, forming each fin of each of the second, third, and sixth pluralities of fins includes forming the other of the n-type fin or the p-type fin, forming each of the first and third pluralities of fins includes forming a first total number of fins, and forming each of the second and fourth through sixth pluralities of fins includes forming a second total number of fins fewer than the first total number of fins. In some embodiments, forming the second total number of fins fewer than the first total number of fins includes forming the second total number of fins one fewer than the first total number of fins. In some embodiments, the forming the fifth plurality of fins aligned with the subset of the first plurality of fins includes aligning a fin of the first plurality of fins with a space between the fifth and sixth pluralities of fins. In some embodiments, forming the fifth plurality of fins aligned with the subset of the first plurality of fins includes aligning a fin of the first plurality of fins with a space, and the fifth plurality of fins is positioned between the space and the sixth plurality of fins. In some embodiments, each of forming the fifth plurality of fins separated from the first plurality of fins by the fin discontinuity region and forming the sixth plurality of fins separated from the second plurality of fins by the fin discontinuity region includes forming the corresponding pluralities of fins separated by a distance ranging from 50 nanometers (nm) to 100 nm. In some embodiments, a distance between a first fin of the first plurality of fins and a first fin of the third plurality of fins corresponds to a cell height of a manufacturing process, and a sum of the first and second total numbers of fins is a maximum total number of fins capable of being included in the cell height in accordance with minimum spacing rules of the manufacturing process. 
In some embodiments, the method includes forming first through sixth transistors corresponding to the first through sixth pluralities of fins, wherein forming each of the first through sixth transistors includes forming S/D structures in each fin of the corresponding one of the first through sixth pluralities of fins. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure. | 100,231 |
11861283 | DETAILED DESCRIPTION Specific embodiments of the present invention are further described in detail below with reference to the accompanying drawings; however, the embodiments described are not intended to limit the present invention, and the description of operation is not intended to limit the order of implementation. Moreover, any device with equivalent functions that is produced from a structure formed by a recombination of elements shall fall within the scope of the present invention. Additionally, the drawings are only illustrative and are not drawn to actual size. The use of "first", "second", "third", etc. in the specification should be understood as identifying units or data described by the same terminology, and does not refer to a particular order or sequence. FIG.2is a diagram illustrating cells and nets in accordance with an embodiment. Four cells201-204are illustrated inFIG.2. The cells201-204form nets211-214which are configured to connect pins of the cells201-204. Each net connects at least two cells. For example, the net211connects one input pin of the cell201and an output pin of the cell202, and so on. The number of branches of a net is defined as the number of the connected pins. For example, the number of branches of the net211is two, and the number of branches of the net214is three. In the embodiment, the cells201-204are logic gates, but they may be flip flops or any other suitable circuits in other embodiments. A placement method is provided for spreading the cells in a placement region. FIG.3is a flow chart of a placement method in accordance with an embodiment. Referring toFIG.3, the provided placement method is based on a multilevel framework that includes three steps301-303. The step301is also referred to as coarsening, in which all cells are taken as clusters and the clusters are merged in each level from bottom to top. In the step302, an initial placement is performed. Any conventional approach such as quadratic placement may be adopted in the step302. The step303is also referred to as uncoarsening or refinement, in which the clusters are disassembled in each level from top to bottom and the location of each cluster is determined. Detailed embodiments are provided below. FIG.4is a diagram illustrating the procedures of the step301in accordance with an embodiment. In step401, a Design Hierarchy Tree (DHT) is established by reference to Lin, Jai-Ming, Szu-Ting Li, and Yi-Ting Wang, "Routability-driven Mixed-size Placement Prototyping Approach Considering Design Hierarchy and Indirect Connectivity Between Macros," Proceedings of the 56th Annual Design Automation Conference 2019. Let c_i^l denote a cluster in a level l, where l is initialized to zero. In the level 0, each cell is taken as a cluster c_i^0. In the step402, the two clusters c_j^l and c_k^l having the largest score S(c_j^l, c_k^l) are selected, and a new cluster c_i^l is generated to contain the cluster c_j^l and the cluster c_k^l. The score is calculated by the following Equation 1.

$$S(c_j^l, c_k^l) = e^{-\frac{A_j^l + A_k^l}{\mu \times A_{avg}}} \times \sum_{e_m \in E_{j,k}} \frac{1}{d(e_m) - 1} \quad \text{[Equation 1]}$$

A_j^l denotes the area of the cluster c_j^l. A_k^l denotes the area of the cluster c_k^l. A_avg denotes the average area of all clusters. μ is a predetermined real number (e.g. determined by the user). E_{j,k} denotes a set of the nets connecting the cluster c_j^l and the cluster c_k^l. e_m denotes one of the nets connecting the cluster c_j^l and the cluster c_k^l. d(e_m) denotes the number of branches of the net e_m.
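For illustration only, the following is a minimal Python sketch of the Equation 1 score. The `Net` class and its field are an assumed representation, not a structure prescribed by the patent.

```python
import math

# Hypothetical minimal net representation; the patent does not prescribe one.
class Net:
    def __init__(self, num_pins):
        self.num_pins = num_pins      # d(e_m): the number of branches (connected pins)

def merge_score(area_j, area_k, avg_area, mu, connecting_nets):
    """Equation 1: the area term favors merging small clusters, and the
    connectivity term favors cluster pairs joined by nets with few branches
    (each net connects at least two pins, so num_pins - 1 >= 1)."""
    area_term = math.exp(-(area_j + area_k) / (mu * avg_area))
    connectivity = sum(1.0 / (net.num_pins - 1) for net in connecting_nets)
    return area_term * connectivity

# Example: two small clusters joined by a 2-pin net score higher than by a 3-pin net.
print(merge_score(1.0, 1.0, 2.0, 1.0, [Net(2)]))   # ~0.3679
print(merge_score(1.0, 1.0, 2.0, 1.0, [Net(3)]))   # ~0.1839
```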
A larger score S(c_j^l, c_k^l) represents that the clusters c_j^l, c_k^l have smaller areas and stronger connectivity intensity (i.e. the second term on the right side of the equal sign in Equation 1). In the embodiment, the score S(c_j^l, c_k^l) is negatively related to the areas of the clusters c_j^l, c_k^l. In particular, the score S(c_j^l, c_k^l) is negatively related to the number of branches d(e_m), because fewer branches of a net represent that the corresponding two clusters connect with each other through fewer routes (i.e. stronger connectivity intensity). In contrast, if the number of branches is large, it means the corresponding two clusters can be connected by multiple routes and therefore the connectivity intensity is relatively weak. In step403, an internal connectivity intensity (ICI) of the new cluster is calculated by the following Equation 2.

$$\eta(c_i^l) = S(c_j^l, c_k^l) + \eta(c_j^l) + \eta(c_k^l) \quad \text{[Equation 2]}$$

η(c_i^l) denotes the internal connectivity intensity of the new cluster c_i^l. η(c_j^l) denotes the internal connectivity intensity of the cluster c_j^l. η(c_k^l) denotes the internal connectivity intensity of the cluster c_k^l. In other words, the internal connectivity intensity of the new cluster is calculated by summing up the internal connectivity intensity of the cluster c_j^l, the internal connectivity intensity of the cluster c_k^l, and the score S(c_j^l, c_k^l) between these two clusters. When a cluster belongs to the lowest level (i.e. l=0), the internal connectivity intensity of this cluster is equal to zero, that is, η(c_i^0) = 0. Note that a larger score S(c_j^l, c_k^l) results in a larger internal connectivity intensity η(c_i^l). In step404, it is determined whether to stop merging. In some embodiments, it is determined whether to stop merging based on the number of the clusters, because this number is getting smaller. For example, it is determined whether the inequality N_l/N′_l > α holds, where N_l denotes the initial number of the clusters in the level l, N′_l denotes the current number of the clusters, and α denotes a predetermined real number (e.g. 1.7). If the inequality holds, the merging stops and step408is performed; otherwise the merging continues (step405is performed). In the step405, it is determined if there are suitable clusters to be merged. Based on the algorithm of the design hierarchy tree, only the clusters in the same layer are selected. If the result of the step405is "No", then the design hierarchy tree is amended in the step406, and step407is performed to process a new layer in the design hierarchy tree. The detail of the steps405to407may be referred to the reference of the Design Hierarchy Tree mentioned above. In the step408, it is determined whether to stop establishing new levels. In some embodiments, this is also based on the number of the clusters. For example, it is determined whether the inequality N′_l < β holds, where β denotes a predetermined real number (e.g. β = 0.05 × N_0). If the inequality holds, then the procedure ends; otherwise step409is performed to process the next higher level (i.e. l = l + 1). In the embodiments, the clusters are merged based on the Design Hierarchy Tree algorithm, but the clusters may be merged based on any other suitable algorithm as long as the internal connectivity intensity of each cluster is calculated in each level. In other words, another calculation of the score different from the Equation 1 may be adopted when merging the clusters. In addition, the score used for merging the clusters may be different from that used for calculating the internal connectivity intensity.
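A minimal sketch of the per-level merging loop of blocks 402-404 follows, assuming simple list-based bookkeeping; the `score_fn` argument stands in for Equation 1, and representing a merged cluster as a tuple is purely illustrative.

```python
from itertools import combinations

def coarsen_level(clusters, ici, alpha, score_fn):
    """Sketch of blocks 402-404 for one level: repeatedly merge the cluster
    pair with the largest score until N_l / N'_l > alpha, accumulating the
    internal connectivity intensity (ICI) per Equation 2. `ici` maps cluster
    ids to eta values and starts at 0.0 for every level-0 cluster."""
    n_initial = len(clusters)
    while len(clusters) > 1 and n_initial / len(clusters) <= alpha:
        # block 402: select the pair with the largest score
        cj, ck = max(combinations(clusters, 2), key=lambda p: score_fn(*p))
        merged = (cj, ck)                     # new cluster containing cj and ck
        # block 403, Equation 2: ICI of the new cluster
        ici[id(merged)] = score_fn(cj, ck) + ici[id(cj)] + ici[id(ck)]
        clusters = [c for c in clusters if c is not cj and c is not ck] + [merged]
    return clusters, ici
```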
Any score S(c_j^l, c_k^l) that is positively related to the connectivity intensity between the two clusters and negatively related to the number of branches of the nets remains within the spirit of the disclosure when calculating the internal connectivity intensity. People skilled in the art should be able to devise another score S(c_j^l, c_k^l) based on the disclosure. Next, refinement is performed. In some conventional approaches, inflation technology is used to increase a size of a cell so that the cell may be removed from a congested region to avoid local routing congestion. However, this approach relies on the congestion map, which is composed of overflow values of the grids of the placement region, and the congestion map in high levels may not be accurate. FIG.5is a diagram illustrating congestion maps and layouts in a high level and a low level in accordance with an embodiment. Referring toFIG.5, a placement region510shows the layout result in the high level, and a congestion map520is also calculated in the high level. A placement region530is the layout result in the low level, and a congestion map540is calculated in the low level. It is shown inFIG.5that there is a significant difference between the congestion map520and the congestion map540because the congestion inside the clusters is unknown. In the embodiment, the inflation is performed according to the congestion map in addition to the internal connectivity intensities of the clusters to reduce local routing congestion. FIG.6is a diagram illustrating the procedures of the step303in accordance with an embodiment. In step601, the congestion map is updated. The congestion map includes overflow values of all the grids. The overflow value of a grid is defined as the amount of routing that the grid needs minus the amount of routing that the grid provides. The larger the overflow value is, the more the grid is congested. The overflow value of each grid is updated based on the current layout. People in the art should be able to appreciate the congestion map, and therefore the detail thereof is not described herein. In step602, the clusters in the current level l are disassembled. In step603, an inflation ratio of each cluster is calculated according to the congestion map and its internal connectivity intensity. To be specific, the inflation ratio may be calculated based on the following Equations 3-5.

$$\gamma(c_i^l) = \min\left\{\gamma_{min} + \hat{o}(c_i^l) + \frac{l}{N_L} \times \hat{\eta}(c_i^l),\ \gamma_{max}\right\} \quad \text{[Equation 3]}$$

$$\hat{o}(c_i^l) = \max\left\{\frac{o(c_i^l) - o_T}{o_{max}},\ 0\right\} \quad \text{[Equation 4]}$$

$$\hat{\eta}(c_i^l) = \max\left\{\frac{\eta(c_i^l) - \eta_T}{\eta_{max}},\ 0\right\} \quad \text{[Equation 5]}$$

γ(c_i^l) denotes the inflation ratio of the cluster c_i^l. Since the cluster c_i^l may overlap with multiple grids, the grid g having the greatest overlapping area with the cluster c_i^l is considered. o(c_i^l) denotes the overflow value of the grid g. ô(c_i^l) denotes a normalized overflow value. o_max denotes the greatest overflow value among all the grids. η(c_i^l) denotes the internal connectivity intensity of the cluster c_i^l. η̂(c_i^l) denotes a normalized internal connectivity intensity. η_max denotes the greatest internal connectivity intensity among all the clusters. o_T, η_T, γ_min, and γ_max are predetermined real numbers (e.g. determined by the user). N_L denotes the number of all the levels. In some embodiments, the inflation ratio is limited in the range from 1 to 2, and therefore γ_min = 1 and γ_max = 2. In some embodiments, η_T is twice the average of the internal connectivity intensities of all clusters in the current level l. Note that l/N_L in the Equation 3 is a weight which is positively related to the current level l.
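The following Python sketch evaluates Equations 3-5 for one cluster, including the level-dependent weight just noted; the default γ_min = 1 and γ_max = 2 follow the range mentioned in the text, and all parameter names are illustrative assumptions.

```python
def inflation_ratio(o_ci, eta_ci, level, n_levels,
                    o_T, o_max, eta_T, eta_max,
                    gamma_min=1.0, gamma_max=2.0):
    """Sketch of Equations 3-5: inflate a cluster according to the overflow
    value of its most-overlapped grid and its internal connectivity intensity
    (ICI). The weight level/n_levels trusts the ICI more at high levels,
    where the congestion map is less accurate; o_max and eta_max are assumed
    positive."""
    o_hat = max((o_ci - o_T) / o_max, 0.0)          # Equation 4
    eta_hat = max((eta_ci - eta_T) / eta_max, 0.0)  # Equation 5
    ratio = gamma_min + o_hat + (level / n_levels) * eta_hat
    return min(ratio, gamma_max)                    # Equation 3
```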
The current level l is decreased during refinement, and therefore the weight is decreased from 1 at the highest level to 0 at the lowest level (each cluster only contains one cell in the lowest level). The weight l/N_L is decreased because there exists more internal connectivity intensity in a cluster in a higher level. However, the overflow values in the high level are less accurate. On the contrary, there exists less internal connectivity intensity in a cluster in a lower level, with more accurate overflow values. After the inflation ratio γ(c_i^l) is calculated, the size of the cluster c_i^l (i.e. the area the cluster c_i^l occupies in the placement region) is adjusted according to the inflation ratio. In step604, a wirelength-driven distribution is performed. In detail, an objective function is established to determine locations of the clusters so that the objective function outputs an optimal value. The objective function is written in the following Equation 6.

$$\min\ \lambda_1 W(x, y) + \lambda_2 \sum_b \left(U_b(x, y) - M_b\right)^2 \quad \text{[Equation 6]}$$

In the embodiments, the placement region is divided into multiple bins. The size of the bins may be larger than, equal to, or smaller than that of the grids. W(x, y) denotes a total length of the wires. U_b(x, y) denotes the total area of the cells in the bin b. M_b denotes the maximum allowable area of the cells in the bin b. λ_1 and λ_2 are user-specified real numbers. The Equation 6 may be referred to the reference of Chen, Tung-Chieh, et al., "NTUplace3: An analytical placer for large-scale mixed-size designs with preplaced blocks and density constraints," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 27.7 (2008): 1228-1240, and the detail thereof is not described herein. In step605, it is determined if the spreading is enough. If the result of the step605is "yes", step606is performed; otherwise the step604is performed. In some embodiments, an overflow area of each bin is calculated. The overflow area is defined as the total area of the cells (e.g. clusters) in the bin minus the area of the bin. The less the overflow area is, the less the bin is congested. The total overflow area of all bins is divided by the total area of all movable clusters to obtain an overflow area ratio. If the overflow area ratio is less than a threshold (e.g. 0), then it is determined that the spreading is enough. In the step606, the overflow value of each grid is updated. In step607, for each cluster, an inflation ratio is calculated according to the overflow value of the grid and the internal connectivity intensity of the corresponding cluster, and a size of the cluster is adjusted according to the inflation ratio. The step607may be referred to the step603, and therefore the detail is not repeated herein. In step608, a routability-driven distribution is performed. The global routing congestion cannot be solved by considering only the wirelength. In the embodiment, each net is considered as a movable soft bounding box. FIG.7is a diagram illustrating nets and bounding boxes in accordance with an embodiment. Referring toFIG.7, a net710connects pins721-724, and a bounding box730is a minimum rectangle to cover all the pins721-724. In the embodiment, a penalty is given to the region that each net occupies. When the bounding box730covers larger or more congested regions, the corresponding net receives a larger penalty. The shape and the location of the bounding box730will change to move the net to a less congested region. FIG.8is a diagram illustrating nets and congested regions in accordance with an embodiment.
Referring toFIG.8, a scenario810illustrates a layout based on a conventional approach. The scenario810includes three congested regions821-823and three nets831-833. In the embodiment, the shape and location of the nets can change, and accordingly, as shown in a scenario820, the shapes of the nets831-833change, and the nets831-833are moved to less congested regions. To be specific, let n_i denote the i-th net. B_i denotes the bounding box corresponding to the net n_i. Given a placement, a routing congestion penalty of the net n_i is calculated as the following Equation 7.

$$C_i = \sum_{g_k \in G \cap B_i} \rho_k \times \pi_k \quad \text{[Equation 7]}$$

C_i denotes the routing congestion penalty of the net n_i. G denotes a set including all the grids. g_k denotes a grid covered by the bounding box B_i. π_k denotes the overflow value of the grid g_k. ρ_k denotes a ratio of the overlapping area between the grid g_k and the bounding box B_i to the area of the bounding box B_i. For example,FIG.9is a diagram illustrating overlap between the bounding box and the grids in accordance with an embodiment. Referring toFIG.9, four grids901-904are shown, in which the overflow value of the grid901is equal to 1.41, the overflow value of the grid902is equal to 1.02, the overflow value of the grid903is equal to 0.9, and the overflow value of the grid904is equal to 0.89. The bounding box910is overlapped with the grids901-904, in which 15% of the area of the bounding box910is overlapped with the grid901, 22% of the area of the bounding box910is overlapped with the grid902, 28% of the area of the bounding box910is overlapped with the grid903, and 35% of the area of the bounding box910is overlapped with the grid904. Therefore, the routing congestion penalty of the net n_i corresponding to the bounding box910is calculated as 0.15×1.41+0.22×1.02+0.28×0.9+0.35×0.89=0.9994. When the net n_i covers more congested regions, it receives a larger penalty. In other words, the routing congestion penalty serves as a measure of routability: the larger the routing congestion penalty is, the lower the routability is. In order to express Equation 7 in terms of the coordinates of the cells, we transform Equation 7 into the following Equation 8.

$$C_i(x, y) = \sum_{g_k \in G} \Phi(B_i, g_k) \quad \text{[Equation 8]}$$

Φ(B_i, g_k) denotes the penalty contributed by any grid g_k. Φ(B_i, g_k) can be calculated as the following Equation 9.

$$\Phi(B_i, g_k) = \frac{P_x(B_i, g_k) \times P_y(B_i, g_k)}{\sum_{g_k \in G} P_x(B_i, g_k) \times P_y(B_i, g_k)} \times \pi_k \quad \text{[Equation 9]}$$

P_x(B_i, g_k) denotes the width of the overlapping area between the bounding box B_i and the grid g_k. P_y(B_i, g_k) denotes the height of the overlapping area between the bounding box B_i and the grid g_k. Since P_x(B_i, g_k) and P_y(B_i, g_k) cannot be differentiated, they are replaced by P̂_x(B_i, g_k) and P̂_y(B_i, g_k) according to the bell-shaped potential function as the following Equations 10 and 11. Without loss of generality, only P̂_x(B_i, g_k) is illustrated; P̂_y(B_i, g_k) can be done similarly.

$$\hat{P}_x(B_i, g_k) = \begin{cases} 1 - a\,d_x^2, & 0 \le d_x \le w_{B_i}/2 + w_{g_k} \\ b\left(d_x - w_{B_i}/2 - 2w_{g_k}\right)^2, & w_{B_i}/2 + w_{g_k} \le d_x \le w_{B_i}/2 + 2w_{g_k} \\ 0, & w_{B_i}/2 + 2w_{g_k} \le d_x \end{cases} \quad \text{[Equation 10]}$$

$$a = \frac{4}{(w_{B_i} + 2w_{g_k})(w_{B_i} + 4w_{g_k})}, \qquad b = \frac{2}{w_{g_k}(w_{B_i} + 4w_{g_k})} \quad \text{[Equation 11]}$$

d_x denotes the distance between the center of the bounding box B_i and the center of the grid g_k along the x-axis. w_{B_i} denotes the width of the bounding box B_i. w_{g_k} denotes the width of the grid g_k. To compute the distance d_x, we have to obtain the center coordinate of the bounding box B_i, which is denoted by (x_{B_i}, y_{B_i}). x_{B_i} is computed by the following Equation 12, and the width w_{B_i} is computed by the following Equation 13.
$$x_{B_i} = \frac{\max_{v_k \in n_i} x_k + \min_{v_k \in n_i} x_k}{2} \quad \text{[Equation 12]}$$

$$w_{B_i} = \max_{v_k \in n_i} x_k - \min_{v_k \in n_i} x_k \quad \text{[Equation 13]}$$

v_k denotes a cell of the net n_i. x_k denotes the x-coordinate of the center of the cell v_k. Because the functions max_{v_k∈n_i} x_k and min_{v_k∈n_i} x_k are not differentiable, we apply the log-sum-exp function to compute the value. According to the aforementioned disclosure, the objective function established in the step608is written in the following Equation 14.

$$\min\ \lambda_1 W(x, y) + \lambda_2 \sum_b \left(U_b(x, y) - M_b\right)^2 + \lambda_3 C(x, y) \quad \text{[Equation 14]}$$

The difference between the Equation 14 and the Equation 6 is the routing congestion penalty C(x, y), which is defined in the following Equation 15.

$$C(x, y) = \sum_{n_i \in \Psi} C_i(x, y) \quad \text{[Equation 15]}$$

Ψ denotes a list (i.e. set) including all the nets. In some embodiments, λ_1 and λ_2 are set according to the following Equations 16 and 17. λ_3 is a predetermined real number (e.g. determined by the user), such as 1 in some embodiments.

$$\lambda_1 = \frac{\sum \left|\partial C(x, y)\right|}{\sum \left|\partial W(x, y)\right|} \quad \text{[Equation 16]}$$

$$\lambda_2 = \frac{\sum \left|\partial C(x, y)\right|}{\sum \left|\partial U_b(x, y)\right|} \quad \text{[Equation 17]}$$

In some embodiments, to make the cells evenly distributed over the placement region, λ_1 and λ_3 are fixed, and λ_2 is increased (e.g. doubled) every iteration. In some embodiments, the Equation 14 may be solved for computing the coordinates x and y of every cluster by a conjugate gradient (CG) approach so that the objective function outputs an optimal value (e.g. a minimum in the embodiment), but how the Equation 14 is solved is not limited in the disclosure. In step609, it is determined whether the current level is the lowest level; the procedure ends if the result is "yes", otherwise step610is performed to process the next lower level (i.e. l = l − 1). The steps603-605may be collectively referred to as a second placement procedure620, and the steps606-608may be collectively referred to as a first placement procedure630. The objective function of the step608includes the routing congestion penalty C(x, y), but the objective function of the step604does not include the routing congestion penalty. This is because the clusters are located nearby in the initial stage and thus the routing congestion penalty cannot be considered. The wirelength-driven distribution is performed first to spread the clusters in the initial stage. The disclosure is not limited to the flow chart ofFIG.6; any suitable objective function may be adopted in the step604, or the second placement procedure620may be omitted. In addition, the disclosure is not limited to the objective function of Equation 14; other objective functions may be adopted as long as they include the routing congestion penalty C(x, y). From another aspect, a non-transitory computer readable storage medium is provided. The medium may be a random access memory, a read-only memory, a flash memory, floppy disks, hard disks, CD-ROMs, pen drives, tapes, or databases accessible via the Internet, for storing instructions which are configured to be executed to perform the placement method. The characteristics of the disclosure at least include: 1) We target global and local routing congestion and use different strategies to resolve the problems in the multilevel framework; 2) We propose a congestion-aware net penalty model to reduce global congestion; 3) We propose a novel inflation technique by considering the internal connectivity intensity of a cluster as well as the congestion value occupied by the cluster to alleviate local congestion.
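To make the preceding formulation concrete, here is a minimal Python sketch of the Equation 7 penalty and of the log-sum-exp smoothing applied to the max/min of Equations 12 and 13. The function names and the smoothing parameter gamma are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def congestion_penalty(overlaps):
    """Equation 7: C_i = sum_k rho_k * pi_k, where rho_k is the fraction of
    the net's bounding box overlapping grid g_k and pi_k is the overflow
    value of that grid."""
    return sum(rho * pi for rho, pi in overlaps)

# Worked example of FIG. 9: four grids as (overlap ratio, overflow value) pairs.
ci = congestion_penalty([(0.15, 1.41), (0.22, 1.02), (0.28, 0.90), (0.35, 0.89)])
assert abs(ci - 0.9994) < 1e-9   # matches the value computed in the text

def smooth_max(xs, gamma=1.0):
    """Differentiable log-sum-exp surrogate for max(xs); the approximation
    tightens as gamma decreases."""
    xs = np.asarray(xs, dtype=float)
    return gamma * np.log(np.sum(np.exp(xs / gamma)))

def bounding_box_x(cell_xs, gamma=1.0):
    """Equations 12 and 13 with smoothed max/min: returns the center x_Bi and
    the width w_Bi of the bounding box of a net's cell x-coordinates."""
    hi = smooth_max(cell_xs, gamma)
    lo = -smooth_max([-x for x in cell_xs], gamma)   # smooth min
    return (hi + lo) / 2.0, hi - lo
```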
Although the present invention has been described in considerable detail with reference to certain embodiments thereof, other embodiments are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the embodiments contained herein. It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the present invention cover modifications and variations of this invention provided they fall within the scope of the following claims. | 21,835 |
11861284 | DETAILED DESCRIPTION The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “over,” “under”, “upper,” “top,” “bottom,” “front,” “back,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the Figure(s). The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. Because components in various embodiments can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration only and is in no way limiting. When used in conjunction with layers of an integrated circuit, semiconductor device, or electronic device, the directional terminology is intended to be construed broadly, and therefore should not be interpreted to preclude the presence of one or more intervening layers or other intervening features or elements. Thus, a given layer that is described herein as being formed on, over, or under, or disposed on, over, or under another layer may be separated from the latter layer by one or more additional layers. Integrated circuits are commonly used in various electronic devices. Integrated circuits include circuits that provide or contribute to the functionality or functionalities of the integrated circuit. Non-limiting example circuits are logic components such as a flip flop, latch, inverter, NAND, OR, AND, and NOR circuits, as well as amplifiers, buffers, and transistors. Conductive interconnects, such as conductors made of one or more conductive materials, are commonly used to route signals and voltage sources to and from the circuits (or contact pads associated with the circuits). Conventional routing schemes for the conductors, known as Manhattan routing, route the conductors orthogonally with respect to a design boundary. In a non-limiting example, the design boundary is the edges of a chip or die of the integrated circuit. However, in some instances, the orthogonal routing is not the shortest distance between two components. Embodiments disclosed herein provide various techniques for selecting a conductor scheme (e.g., a metal scheme) and planning the tracks for the conductor scheme in mixed-diagonal-Manhattan routing. A track represents a path or a route for a conductor in an integrated circuit, such as a route for a metal line. These and other embodiments are discussed below with reference toFIGS.1-28. 
However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these Figures is for explanatory purposes only and should not be construed as limiting. FIG.1depicts a cross-sectional view of an example integrated circuit in accordance with some embodiments. The integrated circuit100includes a substrate102, a circuit104, and an interconnect structure106. The substrate102is implemented with any suitable substrate. For example, the substrate102can be a semiconductor substrate, a gallium nitride substrate, or a silicon carbide substrate. The circuit104is disposed in, on and/or above the substrate102and can include passive and/or active components. Example circuits include, but are not limited to, a NAND circuit, a NOR circuit, an inverter, a flip flop, a latch, an amplifier, a resistor, a capacitor, a transistor, a diode, or combinations thereof. The interconnect structure106includes conductor layers108,110,112,114(e.g., M0-M3metal layers) that are arranged sequentially above the circuit104. Each conductor layer108,110,112,114includes conductors that interconnect a component of the circuit104to another component of the circuit104and/or to one or more power sources (e.g., VDD and VSS). The conductors can be made of any suitable conductive material or materials, such as metal. In one embodiment, the conductors in at least one conductor layer are implemented as metal lines. Additionally or alternatively, the conductors in at least one conductor layer are configured as metal pillars. AlthoughFIG.1presents four conductor layers108,110,112,114and one circuit104, other embodiments can include any number of conductor layers and/or any number of circuits. In some embodiments, octilinear Steiner trees are generated for all of the nets in the integrated circuit and used to select a conductor scheme for the integrated circuit. An octilinear Steiner tree is composed of horizontal, vertical, and/or diagonal lines that represent the connections between the input pin(s) and the output pin(s) of each net. An octilinear Steiner tree depicts a route (e.g., a minimum route) that can be used to connect the input pin(s) and the output pin(s) of a net. However, other embodiments are not limited to the use of an octilinear Steiner tree. Any suitable type of tree or other representation of a net may be used to diagram the nets in an integrated circuit. In one non-limiting example, minimum spanning trees may be used. FIG.2illustrates example octilinear Steiner trees for three nets in an integrated circuit in accordance with some embodiments. Although three octilinear Steiner trees are shown, any number of octilinear Steiner trees may be produced for an integrated circuit. The octilinear Steiner tree200includes three edges202,204,206and three pins208,210,212. Each edge202,204,206represents a connection between two pins208,210,212, and a pin portrays an input pin or an output pin of the net. In the illustrated embodiment, edge202is a horizontal edge and edges204,206are diagonal edges. The octilinear Steiner tree214includes two edges216,218and two pins220,222. Edge216is a vertical edge and edge218is a diagonal edge. The octilinear Steiner tree224includes four edges226,228,230,232and three pins234,236,238. In the illustrated embodiment, edge228is a vertical edge, edges226,232are horizontal edges, and edge230is a diagonal edge.
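Because the methods that follow classify every tree edge by orientation, a minimal Python sketch of that classification is given here; the endpoint-tuple representation of an edge is an assumption for illustration, not a structure prescribed by the patent.

```python
def classify_edge(p1, p2):
    """Classify an octilinear edge by its endpoints (x, y) as horizontal,
    vertical, 45-degree, or 135-degree; octilinear Steiner trees contain
    only these four orientations."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if dy == 0:
        return "horizontal"
    if dx == 0:
        return "vertical"
    if abs(dx) == abs(dy):
        return "45-degree" if dx * dy > 0 else "135-degree"
    raise ValueError("not an octilinear edge")
```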
As will be described in more detail in conjunction withFIGS.3and4, the orientation of the edges and the number of edges or the length of the edges are considered when selecting a conductor scheme for an integrated circuit. FIG.3depicts a flowchart of an example first method of determining a number of conductor layers for diagonal and Manhattan routing in accordance with some embodiments. In general, the number of diagonal layers to be used in an integrated circuit is based on a diagonal edge length ratio, and the number of Manhattan layers is based on a Manhattan edge length ratio. In one embodiment, the diagonal edges include the edges oriented at forty-five (45) degrees and at one hundred and thirty-five (135) degrees with respect to a design boundary, and the Manhattan edges include the vertical and the horizontal edges (edges oriented at ninety (90) degrees and zero (0) degrees, respectively, with respect to the design boundary). Initially, as shown in block300, a total edge length for the diagonal edges (a "total diagonal edge length") is determined. For example, inFIG.2, the lengths of the diagonal edges204,206,218,230are summed to produce the total edge length for the diagonal edges. Next, as shown in block302, a total edge length is determined. The total edge length is calculated by summing the lengths of all of the Manhattan and diagonal edges in the trees (e.g., the octilinear Steiner trees). With respect toFIG.2, the lengths of the edges202,204,206,216,218,226,228,230,232are summed to determine the total edge length at block302. The diagonal edge length ratio is then calculated at block304. In a non-limiting embodiment, Equation 1 is used to determine the diagonal edge length ratio:

$$\text{Diagonal Edge Length Ratio}\ (d\%) = \frac{\text{Total Diagonal Edge Length}}{\text{Total Edge Length}} \quad \text{[Equation 1]}$$

The process continues at block306where the total edge length for the Manhattan edges (a "total Manhattan edge length") is calculated. As noted earlier, the Manhattan edges are the vertical and the horizontal edges in the trees (e.g., the octilinear Steiner trees). For example, inFIG.2, the lengths of the Manhattan edges202,216,226,228,232are summed to produce the total edge length for the Manhattan edges. The Manhattan edge length ratio is then calculated at block308. In a non-limiting embodiment, Equation 2 is used to determine the Manhattan edge length ratio:

$$\text{Manhattan Edge Length Ratio}\ (m\%) = \frac{\text{Total Manhattan Edge Length}}{\text{Total Edge Length}} \quad \text{[Equation 2]}$$

The number of diagonal layers and the number of Manhattan layers are determined at block310based on the diagonal edge length ratio and the Manhattan edge length ratio. In one embodiment, assuming an integrated circuit will include n conductor layers, where n is a number greater than one, the number of the n conductor layers that is assigned to Manhattan routing (e.g., the Manhattan layer number or MLN) is MLN = n × m%. The MLN includes a number of conductor layers assigned to vertical routings and a number of conductor layers assigned to horizontal routings. The number of the n conductor layers that is assigned to diagonal routing (e.g., the diagonal layer number or DLN) is DLN = n × d%. The DLN includes the number of conductor layers assigned to forty-five (45) degree routing and the number of conductor layers assigned to one hundred and thirty-five (135) degree routings. In one embodiment, the MLN is divided by two to provide a number of conductor layers assigned to horizontal routings and a number of conductor layers assigned to vertical routings.
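As a concrete illustration of blocks 300-310, the following Python sketch computes the two ratios of Equations 1 and 2 and splits n conductor layers accordingly. The rounding of n × d% to a whole layer count is an assumption; the patent states only the products MLN = n × m% and DLN = n × d%.

```python
def layer_counts(diag_len, manh_len, n_layers):
    """Sketch of FIG. 3: split n conductor layers into Manhattan and diagonal
    layers by edge-length ratios. Rounding to whole layers is assumed."""
    total = diag_len + manh_len
    d_ratio = diag_len / total          # Equation 1
    m_ratio = manh_len / total          # Equation 2 (equals 1 - d_ratio)
    dln = round(n_layers * d_ratio)     # diagonal layer number
    mln = n_layers - dln                # Manhattan layer number
    return mln, dln

# Example: 40% of total edge length is diagonal, 10 conductor layers planned.
mln, dln = layer_counts(diag_len=40.0, manh_len=60.0, n_layers=10)  # -> (6, 4)
```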
Additionally or alternatively, the DLN is divided by two to calculate the number of conductor layers that is assigned to forty-five (45) degree routing and the number of conductor layers assigned to one hundred and thirty-five (135) degree routings. FIG.4illustrates a flowchart of an example second method of determining a number of conductor layers for diagonal and Manhattan routing in accordance with some embodiments. The process shown inFIG.4is similar to the method ofFIG.3except that a total number of edges is used in place of a sum of edge lengths. Initially, as shown in block400, a count (e.g., a total number) of diagonal edges is determined (a "total diagonal edges"). For example, inFIG.2, the total number of diagonal edges204,206,218,230is four (4). Next, as shown in block402, a total number of edges is determined (a "total edges"). The total number of edges is calculated by counting the number of Manhattan edges and the number of diagonal edges in the trees (e.g., the octilinear Steiner trees). With respect toFIG.2, the total number of edges202,204,206,216,218,226,228,230,232is nine (9). The diagonal edge ratio is then calculated at block404. In a non-limiting embodiment, Equation 3 is used to determine the diagonal edge ratio:

$$\text{Diagonal Edge Ratio}\ (d\%) = \frac{\text{Total Diagonal Edges}}{\text{Total Edges}} \quad \text{[Equation 3]}$$

The process continues at block406where the total number of Manhattan edges is calculated (a "total Manhattan edges"). For example, inFIG.2, the total number of Manhattan edges202,216,226,228,232is five (5). The Manhattan edge ratio is then calculated at block408. In a non-limiting embodiment, Equation 4 is used to determine the Manhattan edge ratio:

$$\text{Manhattan Edge Ratio}\ (m\%) = \frac{\text{Total Manhattan Edges}}{\text{Total Edges}} \quad \text{[Equation 4]}$$

The number of diagonal layers and the number of Manhattan layers are determined at block410based on the diagonal edge ratio and the Manhattan edge ratio. In one embodiment, assuming an integrated circuit will include n conductor layers, where n is a number greater than one, the number of the n conductor layers assigned to Manhattan routing (e.g., the Manhattan layer number or MLN) is MLN = n × m%. The MLN includes a number of conductor layers that is assigned to vertical routings and a number of conductor layers that is assigned to horizontal routings. The number of the n conductor layers that is assigned to diagonal routing (e.g., the diagonal layer number or DLN) is DLN = n × d%. In one embodiment, the DLN includes the number of conductor layers assigned to forty-five (45) degree routing and the number of conductor layers assigned to one hundred and thirty-five (135) degree routings. In one embodiment, the MLN and/or the DLN is divided by two to provide a number of conductor layers assigned to each type of routing (e.g., horizontal, vertical, forty-five (45) degree, 135 degree). In other embodiments, the blocks shown inFIGS.3and4can be arranged in a different order and/or one or more blocks may be omitted or added. For example, block302inFIG.3can be performed before block300. Additionally or alternatively, blocks406and408shown inFIG.4can occur before block400. In some embodiments, a tree other than an octilinear Steiner tree is used in the DLN and MLN calculations. FIG.5Adepicts a minimum spanning tree for a net in accordance with some embodiments. The minimum spanning tree500includes two edges502,504and three pins506,508,510.
In the illustrated embodiment, edge502is neither a forty-five (45) degree diagonal edge nor a one hundred and thirty-five (135) degree diagonal edge, and edge504is a forty-five (45) degree diagonal edge. To calculate the Manhattan edge ratio and the diagonal edge ratio, each non-Manhattan, non-45-degree, or non-135-degree edge is decomposed into Manhattan and diagonal segments. As shown inFIG.5B, the edge502(FIG.5A) is decomposed into a horizontal (Manhattan) segment512and a shorter diagonal segment514. The converted tree516can be used when determining the number of conductor layers to assign to diagonal and Manhattan routings. For example, the converted tree516may be used in the processes shown inFIGS.3,4,7,9, and10. The types of edges and the lengths of the edges or the number of the edges are considered when estimating a demand for resources (e.g., a demand for routing and layers) and when estimating the numbers of diagonal and Manhattan layers. In some instances, shorter diagonal edges raise issues in a design, particularly when the diagonal layers are at the higher conductor layers. To contact the pins of the shorter diagonal edges, the vias to the diagonal layers are typically stacked. Issues can arise with stacked vias, including via misalignment and violations of design rules regarding a minimum area and a maximum number of stacked vias. FIGS.6and8illustrate example methods of processing the shorter diagonal edges to reduce the impact of the shorter diagonal edges in an integrated circuit design. FIG.6depicts a flowchart of a first example method of handling shorter diagonal edges in accordance with some embodiments. Initially, the length of a diagonal edge is calculated at block600. A determination is made at block602as to whether the determined length is less than a threshold length. If the determined length is greater than the threshold length, the method passes to block604where the diagonal edge is maintained in the design (e.g., in a tree). When the determined length is less than the threshold length, the process continues at block606where a Manhattan edge (e.g., horizontal or vertical edge) is used in the design instead of the diagonal edge; a sketch of this test follows below. A determination is made at block608as to whether there is another diagonal edge to be processed. If so, the process returns to block600and blocks600,602,604or606, and608repeat until all of the diagonal edges have been processed. When a determination is made at block608that all of the diagonal edges have been handled, the method passes to block610where the numbers of diagonal and Manhattan layers are determined. For example, the processes shown inFIG.3or4can be performed to identify the n conductor layers as either a diagonal layer or a Manhattan layer. FIG.7illustrates example octilinear Steiner trees for three nets in an integrated circuit in accordance with some embodiments. The octilinear Steiner trees214,224are the same as the octilinear Steiner trees214,224shown inFIG.2. The octilinear Steiner tree700is a modified version of the octilinear Steiner tree200depicted inFIG.2. Essentially, the octilinear Steiner trees214,224,700are the octilinear Steiner trees200,214,224inFIG.2after the process ofFIG.6is performed on the diagonal edges204,206,218,230(seeFIG.2). The diagonal edges218,230are maintained but the diagonal edges204,206(FIG.2) are replaced with Manhattan edges702,704. In particular, vertical Manhattan edges702,704are used instead of the diagonal edges204,206.
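The following is a minimal Python sketch of the FIG.6 threshold test referenced above. The tuple-based edge representation and the L-shaped (vertical plus horizontal) replacement are assumptions for illustration; the patent states only that a horizontal or vertical Manhattan edge is substituted for a short diagonal edge.

```python
import math

def handle_short_diagonal(edge, threshold):
    """Blocks 600-606 of FIG. 6: keep a diagonal edge whose length is at
    least the threshold; otherwise substitute Manhattan routing. Here a
    short diagonal is replaced by a vertical-then-horizontal detour through
    an assumed corner point."""
    (x1, y1), (x2, y2) = edge
    if math.hypot(x2 - x1, y2 - y1) >= threshold:
        return [edge]                                  # block 604: keep the diagonal
    corner = (x1, y2)                                  # block 606: Manhattan substitute
    return [((x1, y1), corner), (corner, (x2, y2))]
```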
The use of the vertical Manhattan edges702,704reduces the total diagonal edge length (block300inFIG.3) while increasing the total Manhattan edge length (block306). The decreased total diagonal edge length causes the diagonal edge length ratio (block304) to be smaller. The increased total Manhattan edge length results in a larger Manhattan edge length ratio (block308). The decreased diagonal edge length ratio and the increased Manhattan edge length ratio can cause fewer conductor layers in an integrated circuit design to be assigned as diagonal layers and a greater number of conductor layers to be identified as Manhattan layers. Similarly, the vertical Manhattan edges702,704reduce the total number of diagonal edges (block400inFIG.4) and increase the total number of Manhattan edges (block406). The decreased diagonal edge ratio and the increased Manhattan edge ratio can cause fewer conductor layers in an integrated circuit design to be assigned as diagonal layers and a greater number of conductor layers to be identified as Manhattan layers. FIG.8depicts a flowchart of a second example method of handling shorter diagonal edges in accordance with some embodiments. FIG.8is similar toFIG.6except for blocks800and802. As such, blocks600,602,608are not described in detail again for brevity. When a determination is made at block602that the determined length of a diagonal edge is less than the threshold length, a weight or a scale is applied to the determined length. For example, the weight can be a number less than one (1) that, when applied to the determined length, reduces or scales the length. Non-limiting examples of a weight include, but are not limited to, 0.5 or 0.2. When a determination is made at block608that another diagonal edge will not be processed (e.g., all of the diagonal edges have been processed), the method passes to block802where the numbers of Manhattan and diagonal layers are determined. In one embodiment, Equation 5 is used to determine the diagonal edge length ratio while Equation 6 is used to calculate the Manhattan edge length ratio.

$$\text{Diagonal Edge Length Ratio}\ (d\%) = \frac{\text{Total Weighted Diagonal Edge Length}}{\text{Total Manhattan Edge Length} + \text{Total Weighted Diagonal Edge Length}} \quad \text{[Equation 5]}$$

$$\text{Manhattan Edge Length Ratio}\ (m\%) = \frac{\text{Total Manhattan Edge Length}}{\text{Total Manhattan Edge Length} + \text{Total Weighted Diagonal Edge Length}} \quad \text{[Equation 6]}$$

As described earlier, the number of diagonal layers and the number of Manhattan layers are determined based on the diagonal edge length ratio and the Manhattan edge length ratio. Assuming an integrated circuit will include n conductor layers, where n is a number greater than one, the number of the n conductor layers assigned to Manhattan routing (e.g., the Manhattan layer number or MLN) is MLN = n × m%. The MLN includes a number of conductor layers assigned to vertical routings and a number of conductor layers assigned to horizontal routings. The number of the n conductor layers that is assigned to diagonal routing (e.g., the diagonal layer number or DLN) is DLN = n × d%. In one embodiment, the DLN includes the number of conductor layers assigned to forty-five (45) degree routing and the number of conductor layers assigned to one hundred and thirty-five (135) degree routings. In one embodiment, the MLN and/or the DLN is divided by two to provide a number of conductor layers assigned to each type of routing (horizontal, vertical, forty-five (45) degree, 135 degree).
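A minimal sketch of the weighted-length variant of FIG.8 (Equations 5 and 6) follows, assuming a per-edge threshold test and one of the example weights from the text; all names are illustrative.

```python
def weighted_ratios(diag_lengths, manh_total, threshold, weight=0.5):
    """Sketch of FIG. 8 (blocks 800-802): short diagonal edges contribute a
    down-weighted length, shifting the layer split toward Manhattan routing.
    weight=0.5 mirrors an example value mentioned in the text."""
    weighted_diag = sum(L if L >= threshold else weight * L for L in diag_lengths)
    total = manh_total + weighted_diag
    return weighted_diag / total, manh_total / total   # (d%, m%) per Equations 5-6
```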
FIG.9illustrates a flowchart of a third example method of handling shorter diagonal edges in accordance with some embodiments. Initially, a count (e.g., a total number) of diagonal edges is determined at block400. For example, inFIG.2, the total number of diagonal edges204,206,218,230is four (4). A determination is then made at block900as to whether the determined edge count is less than a threshold count. If so, the process continues at block902where a weight is applied to the determined edge count. In one embodiment, the weight is a number less than one (1) that, when applied to the determined count, reduces or scales the count. Non-limiting examples of a weight include, but are not limited to, 0.5 or 0.2. After block902, or when a determination is made at block900that the edge count is greater than the threshold count, the method passes to block904where a determination is made as to whether another diagonal edge is to be processed. If so, the process returns to block400and blocks400,900, and902and/or904repeat until all of the diagonal edges have been processed. When a determination is made at block904that another diagonal edge will not be processed (e.g., all of the diagonal edges have been processed), the method continues at block906where the numbers of Manhattan and diagonal layers are determined. In one embodiment, Equation 7 is used to determine the diagonal edge ratio while Equation 8 is used to calculate the Manhattan edge ratio.

$$\text{Diagonal Edge Ratio}\ (d\%) = \frac{\text{Total Weighted Diagonal Edges}}{\text{Total Manhattan Edges} + \text{Total Weighted Diagonal Edges}} \quad \text{[Equation 7]}$$

$$\text{Manhattan Edge Ratio}\ (m\%) = \frac{\text{Total Manhattan Edges}}{\text{Total Manhattan Edges} + \text{Total Weighted Diagonal Edges}} \quad \text{[Equation 8]}$$

The number of diagonal layers and the number of Manhattan layers are determined based on the diagonal edge ratio and the Manhattan edge ratio. Assuming an integrated circuit will include n conductor layers, where n is a number greater than one, the number of the n conductor layers assigned to Manhattan routing (e.g., the Manhattan layer number or MLN) is MLN = n × m%. The MLN includes a number of conductor layers assigned to vertical routings and a number of conductor layers assigned to horizontal routings. The number of the n conductor layers that is assigned to diagonal routing (e.g., the diagonal layer number or DLN) is DLN = n × d%. The DLN includes the number of conductor layers assigned to forty-five (45) degree routing and the number of conductor layers assigned to one hundred and thirty-five (135) degree routings. In one embodiment, the MLN and/or the DLN is divided by two to provide a number of conductor layers assigned to each type of routing (horizontal, vertical, forty-five (45) degree, 135 degree). FIG.10depicts an example first method of selecting a conductor scheme for an integrated circuit in accordance with some embodiments. In a non-limiting example, the process ofFIG.10is used to select a metal scheme for an integrated circuit. Initially, placement for the integrated circuit is determined at block1000. Placement determines the location of each component (e.g., active elements) in the integrated circuit. The trees or representations of the nets are then constructed at block1002. As noted earlier, a net represents the connection between an input (or inputs) and an output (or outputs) in a circuit or between components of a circuit in the integrated circuit. Next, as shown in block1004, the diagonal and the Manhattan ratios are calculated based on the trees generated at block1002.
The diagonal and the Manhattan ratios can be the diagonal and the Manhattan edge length ratios (e.g.,FIG.3,6, or8) or the diagonal and the Manhattan edge ratios (e.g.,FIG.4or9). In some embodiments, multiple diagonal and Manhattan ratios are determined (e.g., diagonal and Manhattan edge length ratios as well as diagonal and Manhattan edge ratios), and the best or optimum ratios are used. Based on the diagonal and the Manhattan ratios, a conductor scheme for the integrated circuit is selected and used during fabrication of the integrated circuit (block1006). FIG.11illustrates an example second method of selecting a conductor scheme for an integrated circuit in accordance with some embodiments. Initially, an initial conductor scheme is selected at block1100. The initial conductor scheme may be selected based on the type of integrated circuit being designed and/or on a previously used conductor scheme that was selected for a design that had similar components, trees, and/or numbers of diagonal and Manhattan layers. Placement for the integrated circuit is then determined using the initial conductor scheme (block1102). The trees for the nets are created, and based on the trees, the diagonal and the Manhattan ratios are determined (blocks1002,1004). A determination is then made at block1104as to whether the current conductor scheme is to be used for the integrated circuit. For example, when the method is first performed, the current conductor scheme is the initial conductor scheme. In one embodiment, the initial conductor scheme and additional conductor schemes are compared against the diagonal and the Manhattan ratios to determine if the initial conductor scheme is the best or optimum conductor scheme. The initial and the additional conductor schemes can be stored in a storage device and accessed for the comparison operation. For example, the initial and the additional conductor schemes can be templates or previously designed and/or used conductor schemes that are stored in a database in the storage device. If a determination is made at block1104that the current conductor scheme is to be used, the process passes to block1106where the current conductor scheme is selected and used during fabrication of the integrated circuit. When a determination is made at block1104that the current scheme will not be used, the method continues at block1108where a new conductor scheme is selected and the process returns to block1102. Blocks1102,1002,1004,1104repeat until a determination is made at block1104that the current conductor scheme is to be used. Once the conductor scheme is selected for an integrated circuit, and prior to fabrication of the integrated circuit, the resources for the integrated circuit are planned and the diagonal and Manhattan routings are determined. In some embodiments, a software tool, such as an electronic design application (EDA), is used to generate a global routing for the integrated circuit. The global routing can be determined for the entire integrated circuit (e.g., 2D global routing) or on a layer-by-layer process for all of the conductor layers in the integrated circuit (e.g., 3D global routing). The EDA adds the conductors (e.g., metal lines and/or metal pillars) needed to properly connect the placed components while obeying the design rules for the integrated circuit. As discussed earlier, Manhattan routing places the conductors along vertical and horizontal tracks.
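Returning to the FIG.11 flow, the comparison at block1104can be illustrated with a small selection routine. This sketch flattens the flow by omitting the re-placement and tree reconstruction performed for each candidate scheme, and the scheme records, tolerance, and acceptance rule are assumptions for illustration only.

```python
def choose_conductor_scheme(d_ratio, candidate_schemes, tol=0.10):
    # Accept the first stored scheme whose diagonal-layer fraction is
    # within tolerance of the measured diagonal ratio (block 1104);
    # otherwise fall back to the closest match (a new scheme, block 1108).
    for scheme in candidate_schemes:
        if abs(scheme["diag_fraction"] - d_ratio) <= tol:
            return scheme
    return min(candidate_schemes,
               key=lambda s: abs(s["diag_fraction"] - d_ratio))

# Candidate schemes, e.g., templates stored in a database.
schemes = [{"name": "manhattan_heavy", "diag_fraction": 0.2},
           {"name": "balanced", "diag_fraction": 0.4}]
print(choose_conductor_scheme(0.38, schemes)["name"])   # -> balanced
```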
For global routing, the Manhattan routing for the integrated circuit (or for each conductor layer) is typically divided into bins, also known as G-Cells. Embodiments disclosed herein provide techniques for dividing a mixed-diagonal-Manhattan routing into bins. FIG.12depicts an octagon-shaped bin for use with mixed-diagonal-Manhattan routing in accordance with some embodiments. The sides of the octagon-shaped bin1200are used to determine the supply and/or the demand for the Manhattan edges (vertical and horizontal edges) and of the diagonal edges. The supply provides a maximum limit of the number of edges that can pass through (e.g., intersect) a side. The demand is a count of the number of edges that will intersect the side. InFIG.12, the sides labeled “V” are used to determine the supply and/or the demand of the vertical edges, and the sides labeled “H” are used for the horizontal edges. The sides labeled “S” are used to determine the supply and/or the demand of the diagonal edges that are oriented at forty-five (45) degrees, and the sides labeled “B” are used for the diagonal edges oriented at one hundred and thirty-five (135) degrees. FIG.13illustrates a layout of the octagon-shaped bins in accordance with some embodiments. Although nine octagon-shaped bins are shown inFIG.13, other embodiments can include any number of octagon-shaped bins. In the layout1300, some of the sides of the octagon-shaped bins are shared between two or more bordering bins. For example, side1302is shared by bin1304and bin1306. During global routing, a side of an octagon-shaped bin is annotated with a supply and a demand. As noted earlier, the supply provides a maximum limit of the number of edges that can pass through (e.g., intersect) a side, and the demand is a count of the number of edges that will intersect the side. An example process of global routing and annotation is described in more detail in conjunction withFIG.16. FIG.14depicts a flowchart of an example method of global routing with mixed-diagonal-Manhattan routing in accordance with some embodiments. The method is described in conjunction withFIGS.15A-15D. Initially, the design of the integrated circuit is partitioned into bins at block1400. In one embodiment, partitioning the design into bins includes determining the dimensions of the octagon-shaped bins and the locations of the octagon-shaped bins to produce a layout for the octagon-shaped bins.FIG.15Aillustrates an example layout1500for nine octagon-shaped bins1501. Next, as shown in block1402, rectangles are applied to the layout such that the V and the H sides of the octagon-shaped bins align with the sides of the rectangles. A rectangle is associated with at least one octagon-shaped bin. In one embodiment, the rectangles are applied in two layers (e.g., first rectangles and second rectangles). The first rectangles are used with one set of the Manhattan edges and the second rectangles are used with the other set of the Manhattan edges. For example, the first rectangles can be used to determine the supply and/or the demand for the vertical edges and the second rectangles may be used to ascertain the supply and/or the demand for the horizontal edges. In one embodiment, the first rectangles are overlaid on the layout.FIG.15Bdepicts the first rectangles1502and the second rectangles1504applied to the layout1500. Since the first and the second rectangles1502,1504have the same shape and the same orientation, the second rectangles1504align with the first rectangles1502. 
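Before continuing with the FIG.14 flow, the per-side bookkeeping of the octagon-shaped bins inFIGS.12and13can be illustrated with a small data structure. The class and field names, the side labels used as dictionary keys, and the example capacities are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class OctagonBin:
    # supply: maximum number of edges allowed to cross each side type;
    # demand: number of edges routed across each side type so far.
    # Side labels follow FIG. 12: V/H for Manhattan edges, S/B for the
    # 45-degree and 135-degree diagonal edges.
    supply: dict = field(default_factory=lambda: {"V": 0, "H": 0, "S": 0, "B": 0})
    demand: dict = field(default_factory=lambda: {"V": 0, "H": 0, "S": 0, "B": 0})

    def overflow(self, side):
        # Positive when more edges cross the side than it can carry.
        return max(0, self.demand[side] - self.supply[side])

g_cell = OctagonBin(supply={"V": 8, "H": 8, "S": 4, "B": 4})
g_cell.demand["S"] += 1        # a 45-degree edge crosses an S side
print(g_cell.overflow("S"))    # -> 0, capacity not yet exceeded
```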
Rectangles for the diagonal edges are applied to the layout at block1404. For example, the rectangles may be applied in two layers (e.g., third rectangles and fourth rectangles). The third and the fourth rectangles are used with the diagonal edges. As such, the third and the fourth rectangles are rotated such that the S and the B sides of the octagon-shaped bins align with, or are parallel to, some of the sides of the third and the fourth rectangles. In one embodiment, the third and the fourth rectangles are overlaid on the layout. FIG.15Cillustrates the third rectangles1506and the fourth rectangles1508applied to the layout1500. The third rectangles1506have sides that align with, or are parallel to, the S sides of the octagon-shaped bins1501. The fourth rectangles1508have sides that align with, or are parallel to, the B sides of the octagon-shaped bins1501. For clarity, only one fourth rectangle1508is shown inFIG.15C. After the first, the second, the third, and the fourth rectangles are applied to the layout, the supply and/or the demand of the vertical edges, the diagonal edges (e.g., forty-five (45) degrees and 135 degrees), and the horizontal edges are determined at block1406using the layout with the first, the second, the third, and the fourth rectangles.FIG.15Ddepicts the layout with the applied first, second, third, and fourth rectangles in accordance with some embodiments. For simplicity and clarity, only the first, the second, the third, and the fourth rectangles1502,1504,1506,1508along the top and bottom edges are identified inFIG.15Dwith the reference numbers1502,1504,1506,1508. In an example embodiment, the first rectangles1502can be used to determine the supply and/or the demand of the vertical edges in the design and the second rectangles1504may be used to ascertain the supply and/or the demand of the horizontal edges. The third rectangles1506may be used to determine the supply and/or the demand of the diagonal edges that are oriented at forty-five (45) degrees and the fourth rectangles1508can be used to determine the supply and/or the demand of the diagonal edges oriented at one hundred and thirty-five (135) degrees in the design. AlthoughFIG.14is described in conjunction with a specific order for the first, the second, the third, and the fourth rectangles in blocks1402,1404, other embodiments can apply the first, the second, the third, and the fourth rectangles for the horizontal, vertical, and diagonal edges in any order. FIG.16illustrates a diagonal track intersecting two octagon-shaped bins with applied first rectangles and second rectangles in accordance with some embodiments.FIG.16is used to describe an example determination of the demand of the diagonal edge1600for the two octagon-shaped bins1501a,1501b.In one embodiment, the supply for the sides of the octagon-shaped bins1501a,1501bis determined, for example by an EDA, prior to calculating the demand. The diagonal edge1600crosses the sides1602,1604in the octagon-shaped bin1501aand the sides1606,1608in the octagon-shaped bin1501b.The side1602is parallel to, and associated with (indicated by arrow S), the side1610of the second rectangle1612. The sides1604,1606are parallel to, and associated with (indicated by arrows T and U), the shared side1614of the second rectangles1612,1616. The side1608is parallel to, and associated with (indicated by arrow V), the side1618of the second rectangle1616.
Because the diagonal edge1600crosses the sides1602,1604,1606,1608, the demand (D1) for side1602is one (1), the demand (D2) for sides1604,1606is one (1), and the demand (D3) for side1608is one (1) in the illustrated embodiment. In 2D global routing, the conductor layers above a given conductor layer are collapsed into the given conductor layer. Thus,FIG.15Drepresents all conductor layers and the supply and/or the demand for all of the conductor layers are determined based on the illustrated octagon-shaped bins1501and the first, the second, the third, and the fourth rectangles. In 3D routing, the supply and/or the demand are determined for each conductor layer.FIG.17depicts a flowchart of an example method of determining supply and demand in a 3D global routing process in accordance with some embodiments. Initially, the design of a conductor layer is partitioned into bins at block1700. In one embodiment, partitioning the design into bins includes determining the dimensions of the octagon-shaped bins and the locations of the octagon-shaped bins to produce a layout for the octagon-shaped bins. Next, as shown in block1702, first rectangles are applied to the layout. Generally, a first rectangle is associated with one or more of the octagon-shaped bins. In the illustrated embodiment, the first rectangles are used with one set of the Manhattan edges. For example, the first rectangles can be used to determine the supply and/or the demand for the vertical edges in the conductor layer. Second rectangles for diagonal edges are applied to the layout at block1704. A second rectangle is associated with one or more of the octagon-shaped bins. The second rectangles are used with one set of diagonal edges. For example, the second rectangles can be used to determine the supply and/or the demand for the diagonal edges in the conductor layer that are oriented at forty-five (45) degrees. At block1706, third rectangles for diagonal edges are applied to the layout. Generally, a third rectangle is associated with one or more of the octagon-shaped bins. The third rectangles are used with the other set of diagonal edges. For example, the third rectangles can be used to determine the supply and/or the demand for the diagonal edges in the conductor layer that are oriented at one hundred and thirty-five (135) degrees. Next, as shown in block1708, fourth rectangles are applied to the layout. A fourth rectangle is associated with one or more of the octagon-shaped bins. The fourth rectangles are used with the other set of the Manhattan edges. For example, the fourth rectangles can be used to determine the supply and/or the demand for the horizontal edges in the conductor layer. After the first, the second, the third, and the fourth rectangles are applied to the layout, the supply and/or the demand of the vertical, diagonal (45 degrees and 135 degrees), and horizontal edges are determined at block1710. A determination is then made at block1712as to whether global routing is to be performed for another conductor layer. If so, the process continues at block1714where the next conductor layer is selected. The process then returns to block1700and blocks1700,1702,1704,1706,1708,1710,1712,1714repeat until a determination is made at block1712that all of the conductor layers have been processed and the method ends (block1716).
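The demand annotation described forFIG.16reduces to incrementing a counter for each bin side an edge crosses. A minimal sketch follows; the keying scheme and the crossing list are illustrative assumptions.

```python
from collections import defaultdict

# Demand per (bin, side) pair, keyed by the reference labels of FIG. 16.
demand = defaultdict(int)

# The diagonal edge 1600 crosses sides 1602 and 1604 in bin 1501a and
# sides 1606 and 1608 in bin 1501b.
crossings = [("1501a", "1602"), ("1501a", "1604"),
             ("1501b", "1606"), ("1501b", "1608")]

for bin_id, side in crossings:
    demand[(bin_id, side)] += 1

# D1, D2, D3 from the text: one crossing per annotated side.
print(demand[("1501a", "1602")], demand[("1501a", "1604")],
      demand[("1501b", "1608")])   # -> 1 1 1
```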
AlthoughFIG.17is described in conjunction with specific edges in blocks1702,1704,1706,1708, other embodiments can apply the first, the second, the third, and the fourth rectangles for the horizontal, vertical, and diagonal edges in any order. FIG.18illustrates example first, second, third, and fourth rectangles for an octagon-shaped bin associated with a conductor layer in accordance with some embodiments. The first rectangle1800is used for one of the Manhattan edges (e.g., the vertical edges). The second rectangle1802is used for one of the diagonal edges (e.g., diagonal edges oriented at 135 degrees). The third rectangle1804is used for the other diagonal edges (e.g., diagonal edges oriented at forty-five (45) degrees). The fourth rectangle1806is used for the other Manhattan edges (e.g., horizontal edges). As described earlier, the first, the second, the third, and the fourth rectangles1800,1802,1804,1806assist in determining the demands for the horizontal, vertical, and diagonal sides of the octagon-shaped bin1808. FIG.19depicts an example layout of first rectangles that can be used in 3D global routing in accordance with some embodiments. The illustrated layout1900includes nine (9) first rectangles1902, although other embodiments are not limited to this number. After a conductor layer is divided into bins (not shown), a first rectangle1902can be associated with each bin in the conductor layer. As described earlier, the first rectangles1902are used to determine the supply and/or the demand for one of the Manhattan edges (e.g., the vertical edges). FIG.20illustrates an example layout of second rectangles that can be used in 3D global routing in accordance with some embodiments. The illustrated layout2000includes nine (9) second rectangles2002, although other embodiments are not limited to this number. In one embodiment, a second rectangle2002is associated with each bin in the conductor layer. The second rectangles2002are used to determine the supply and/or the demand for one of the diagonal edges (e.g., the forty-five (45) degree diagonal edges). FIG.21depicts an example layout of third rectangles that can be used in 3D global routing in accordance with some embodiments. The illustrated layout2100includes nine (9) third rectangles2102, although other embodiments are not limited to this number. In one embodiment, a third rectangle2102is associated with each bin in the conductor layer. The third rectangles2102are used to determine the supply and/or the demand for the other diagonal edges (e.g., the 135 degree diagonal edges). FIG.22illustrates an example layout of fourth rectangles that can be used in 3D global routing in accordance with some embodiments. The illustrated layout2200includes nine (9) fourth rectangles2202, although other embodiments are not limited to this number. In one embodiment, a fourth rectangle2202is associated with each bin in the conductor layer. The fourth rectangles2202are used to determine the supply and/or the demand for the other Manhattan edges (e.g., the horizontal edges). FIG.23depicts an example first pitch for Manhattan and diagonal tracks in accordance with some embodiments. The horizontal tracks2300represent paths or routes for the conductors in the horizontal direction. The vertical tracks2302represent the routes for the conductors in the vertical direction. The diagonal tracks2304represent the routes for the conductors in the diagonal direction. Pitch2306represents the minimum pitch for the tracks2300,2302,2304.
In one embodiment, the minimum pitch is defined by one or more design rules for the integrated circuit, and the minimum pitch is the same for the spacing between the horizontal tracks2300, the vertical tracks2302, and the diagonal tracks2304. However, as shown inFIG.23, in some instances the pitch2306results in the diagonal tracks2304not crossing or intersecting the same point that the horizontal and vertical tracks2300,2302cross (see highlighted area2308). The diagonal tracks2304intersect the vertical tracks2302at one point and cross the horizontal tracks2300at a different point. Thus, the diagonal tracks2304are misaligned with respect to the intersection points. In some implementations, diagonal track misalignment can cause issues with stacked vias. FIG.24illustrates an example pitch for the Manhattan tracks and an example pitch for the diagonal tracks in accordance with some embodiments. When the horizontal tracks2300, the vertical tracks2302, and the diagonal tracks2304are aligned to intersect at the same point (see highlighted area2400), the pitch2402between the diagonal tracks2304is reduced and does not match the pitch2306. For example, the pitch2402can be defined by (minimum pitch (2306)÷1.414), where 1.414 is the square root of two (2). In some instances, the reduced pitch2402can cause the conductors2404that are placed along adjacent diagonal tracks2304to be too close to each other. The pitch2402can violate the minimum pitch design rule for the conductors2404. FIG.25depicts an example pitch between conductors disposed along the diagonal tracks in accordance with some embodiments. InFIG.25, the horizontal tracks2300, the vertical tracks2302, and the diagonal tracks2304are aligned to intersect at the same point (see highlighted area2500) and the pitch2402is less than the pitch2306. To compensate for the reduced pitch2402, conductors are positioned at different locations along the diagonal tracks2304. For example, the conductors2502,2504are disposed on the diagonal tracks2304such that one diagonal track is between the conductors2502,2504. Similarly, the conductors2506,2508are disposed on the diagonal tracks2304such that one diagonal track is between the conductors2506,2508. Additionally, the conductors2506,2508are positioned at different locations on the diagonal tracks2304compared to the locations of the conductors2502,2504. In the illustrated embodiment, the locations of the conductors2506,2508do not overlap with (e.g., are not adjacent to) the locations of the conductors2502,2504. Accordingly, the pitch2510between the conductors2502,2504,2506,2508is greater than the pitch2306. In a non-limiting example, the pitch2402is defined by (minimum pitch (2306)÷1.414) and the pitch2510is determined by (minimum pitch (2306)×1.414) (see the sketch following this passage). In some embodiments, a design for an IC is provided by a computer system such as an Electronic Computer-Aided Design (ECAD) system. ECAD tools and methods facilitate the design, partition, and placement of circuits and/or components in an IC on a semiconductor substrate. The ECAD process typically includes turning a behavioral description of an IC into a functional description, which is then decomposed into logic functions and mapped into cells that implement the logic or other electronic functions. Such cells may be defined and stored in a cell library. Once mapped, a synthesis is performed to turn the structural design into a physical layout. In some instances, the design may be optimized post layout.
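A short numeric sketch of the pitch relationships inFIGS.23-25, using an illustrative minimum pitch value:

```python
import math

min_pitch = 40.0   # pitch 2306; the units and value are illustrative

# Aligning the diagonal tracks with the Manhattan intersection points
# reduces the diagonal track pitch (pitch 2402) by a factor of sqrt(2),
# placing it below the minimum pitch.
diag_track_pitch = min_pitch / math.sqrt(2)    # ~28.3

# Placing conductors on every other diagonal track (FIG. 25) yields a
# conductor-to-conductor pitch (pitch 2510) above the minimum pitch.
staggered_pitch = min_pitch * math.sqrt(2)     # ~56.6

print(round(diag_track_pitch, 1), round(staggered_pitch, 1))  # 28.3 56.6
```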
FIG.26illustrates a block diagram of an example system that is suitable for designing an integrated circuit in accordance with some embodiments. The design process may be implemented by a computer system, such as an ECAD system. Some or all of the operations for design (e.g., layout) methods disclosed herein are capable of being performed as part of a design procedure performed in a design house, such as the design house2702discussed below in conjunction withFIG.27. In some embodiments, the system2600includes an automated place and route (APR) system. In some embodiments, the system2600includes a processing device2602and a non-transitory, computer-readable storage medium2604(“storage device”). The processing device2602is any suitable processing device or processing devices. Example processing devices include, but are not limited to, a central processing unit, a microprocessor, a distributed processing system, an application specific integrated circuit, a graphics processing unit, a field programmable gate array, or combinations thereof. The storage device2604may be encoded with or store, for example, computer program code (e.g., a set of executable instructions2606). Execution of the executable instructions2606by the processing device2602represents (at least in part) an ECAD tool that implements a portion or all of the methods described herein to produce the designs for the structures and the ICs disclosed herein. Further, the fabrication tools2608may be included for layout and physical implementation of the ICs. In one or more embodiments, the storage device2604is an electronic, magnetic, optical, electromagnetic, infrared, and/or a semiconductor system (or apparatus or device). For example, the storage device2604includes a semiconductor or solid-state memory, a magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and/or an optical disk. In one or more embodiments using optical disks, the storage device2604includes a compact disk-read only memory (CD-ROM), a compact disk-read/write (CD-R/W), and/or a digital video disc (DVD). The processing device2602is operably connected to the storage device2604via a bus2610. The processing device2602is also operably connected to an input/output (I/O) interface2612and a network interface2614by the bus2610. The network interface2614is operably connected to a network2616so that the processing device2602and the storage device2604are capable of connecting to external elements via the network2616. In one or more embodiments, the network2616is illustrative of any type of wired and/or wireless network, such as an intranet and/or a distributed computing network (e.g., the Internet). The network interface2614allows the system2600to communicate with other computing or electronic devices (not shown) via the network2616. The network interface2614includes wireless network interfaces and/or wired network interfaces. Example wireless network interfaces include BLUETOOTH, WIFI, WIMAX, GPRS, or WCDMA. Example wired network interfaces include ETHERNET, USB, or IEEE-1394. In one or more embodiments, some or all of the processes and/or methods disclosed herein are implemented in a distributed system via the network2616. The processing device2602is configured to execute the executable instructions2606encoded in the storage device2604to cause the system2600to be usable for performing some or all of the processes and/or methods.
For example, an electronic design application (e.g., in an ECAD system or as a standalone application) can be configured to perform the methods and techniques shown inFIGS.2-25. Given the complexity of integrated circuits, and since integrated circuits include thousands, millions, or billions of components, the human mind is unable to perform the methods and techniques depicted inFIGS.2-25. Unlike the human mind, an electronic design application is able to perform the operations associated withFIGS.2-25. In one or more embodiments, the storage device2604stores the executable instructions2606configured to cause the system2600to be usable for performing some or all of the processes and/or methods. In one or more embodiments, the storage device2604also stores information that facilitates execution of a portion of or all of the processes and/or methods. In one or more embodiments, the storage device2604stores a cell library2618that includes (at least in part) standard and/or previously designed cells. The I/O interface2612is operably connected to I/O devices2620. In one or more embodiments, the I/O devices2620include one or more of an image capture device, a microphone, a scanner, a keyboard, a keypad, a mouse, a trackpad, a touchscreen, and/or cursor direction keys for communicating information and commands to the processing device2602. The I/O devices2620may also include one or more displays, one or more speakers, a printer, headphones, a haptic or tactile feedback device, and the like. The system2600is configured to receive information through the I/O interface2612. The information received through the I/O interface2612includes one or more of instructions, data, design rules, cell libraries, and/or other parameters for processing by the processing device2602. The information is transferred to the processing device2602via the bus2610. The system2600is configured to receive information related to a user interface (UI) through the I/O interface2612. The information is stored in the storage device2604as a UI2622or for presentation in the UI2622. In some embodiments, a portion or all of the processes and/or methods is implemented as a standalone software application (e.g., an EDA) for execution by a processing device (e.g., processing device2602). In some embodiments, a portion or all of the processes and/or methods is implemented as a software application that is a part of an additional software application. In some embodiments, a portion or all of the processes and/or methods is implemented as a plug-in to a software application. In some embodiments, at least one of the processes and/or methods is implemented as a software application that is a portion of an EDA tool. In some embodiments, a portion or all of the processes and/or methods is implemented as a software application that is used by the system2600. In some embodiments, a layout diagram which includes standard and/or previously designed cells is generated using a tool such as VIRTUOSO available from CADENCE DESIGN SYSTEMS, Inc., or another suitable layout generating tool. In some embodiments, the processes are realized as functions of a program stored in a non-transitory computer readable recording medium (e.g., the storage device2604). 
Examples of a non-transitory computer readable recording medium include, but are not limited to, external/removable and/or internal/built-in storage or memory units, e.g., one or more of an optical disk, such as a DVD, a magnetic disk, such as a hard disk, a semiconductor memory, such as a ROM, a RAM, a memory card, and the like. As noted above, embodiments of the system2600may include the fabrication tools2608for implementing the processes and/or methods stored in the storage device2604. For instance, a synthesis may be performed on a design in which the behavior and/or functions desired from the design are transformed to a functionally equivalent logic gate-level circuit description by matching the design to cells selected from the cell library2618. The synthesis results in a functionally equivalent logic gate-level circuit description, such as a gate-level netlist. Based on the gate-level netlist, a photolithographic mask may be generated that is used to fabricate the IC by the fabrication tools2608. Further aspects of device fabrication are disclosed in conjunction withFIG.27, which is a block diagram of an integrated circuit manufacturing system, and a manufacturing flow associated therewith, in accordance with some embodiments. In some embodiments, based on a layout diagram, at least one of: (a) one or more semiconductor masks; or (b) at least one component in a layer of a semiconductor IC is fabricated using the manufacturing system2700. In the illustrated embodiment, the IC manufacturing system2700includes entities, such as a design house2702, a mask house2704, and an IC manufacturer/fabricator (“fab”)2706, that interact with one another in the design, development, and manufacturing cycles and/or services related to manufacturing an IC2708, such as the ICs disclosed herein. The entities in the system2700are operably connected by a communication network (not shown). In some embodiments, the communication network is a single network. In some embodiments, the communication network is a variety of different networks, such as an intranet and the Internet. The communication network includes wired and/or wireless communication channels. Each entity interacts with one or more of the other entities and provides services to and/or receives services from one or more of the other entities. In some embodiments, two or more of the design house2702, the mask house2704, and the IC fab2706are owned by a single company. In some embodiments, two or more of the design house2702, the mask house2704, and the IC fab2706coexist in a common facility and use common resources. The design house (or design team)2702generates an IC design layout diagram2710. The IC design layout diagram2710includes various geometrical patterns, or IC layout diagrams designed for the IC2708to be fabricated. The geometrical patterns correspond to patterns of metal, oxide, or semiconductor layers that make up the various components of the IC2708to be fabricated. The various layers combine to form various IC features. For example, a portion of the IC design layout diagram2710includes various IC features, such as active regions, gate electrodes, source and drain, metal lines or local vias, and openings for bonding pads, to be formed in a semiconductor substrate (such as a silicon wafer) and various material layers disposed on the semiconductor substrate. The design house2702implements a design procedure to form the IC design layout diagram2710.
The design procedure includes one or more of logic design, physical design or place and route. The IC design layout diagram2710is presented in one or more data files having information of the geometrical patterns. For example, the IC design layout diagram2710can be expressed in a GDS file format, a GDSII file format, or a DFII file format. The mask house2704includes mask data preparation2712and mask fabrication2714. The mask house2704uses the IC design layout diagram2710to manufacture one or more masks2716to be used for fabricating the various layers of the IC2708according to the IC design layout diagram2710. The mask house2704performs mask data preparation2712, where the IC design layout diagram2710is translated into a representative data file (“RDF”). The mask data preparation2712provides the RDF to the mask fabrication2714. The mask fabrication2714includes a mask writer (not shown) that converts the RDF to an image on a substrate, such as a mask (reticle)2716on a semiconductor wafer. The IC design layout diagram2710is manipulated by the mask data preparation2712to comply with particular characteristics of the mask writer and/or requirements of the IC fab2706. InFIG.27, the mask data preparation2712and the mask fabrication2714are illustrated as separate elements. In some embodiments, the mask data preparation2712and the mask fabrication2714can be collectively referred to as a mask data preparation. In some embodiments, the mask data preparation2712includes an optical proximity correction (OPC) that uses lithography enhancement techniques to compensate for image errors, such as those that can arise from diffraction, interference, other process effects and the like. The OPC adjusts the IC design layout diagram2710. In some embodiments, the mask data preparation2712includes further resolution enhancement techniques (RET), such as off-axis illumination, sub-resolution assist features, phase-shifting masks, other suitable techniques, and the like or combinations thereof. In some embodiments, inverse lithography technology (ILT) is also used, which treats OPC as an inverse imaging problem. In some embodiments, the mask data preparation2712includes a mask rule checker (MRC) (not shown) that checks the IC design layout diagram2710that has undergone processes in OPC with a set of mask creation rules that contain certain geometric and/or connectivity restrictions to ensure sufficient margins, to account for variability in semiconductor manufacturing processes, and the like. In some embodiments, the MRC modifies the IC design layout diagram2710to compensate for limitations during the mask fabrication, which may undo part of the modifications performed by OPC in order to meet mask creation rules. In some embodiments, the mask data preparation2712includes lithography process checking (LPC) (not shown) that simulates processing that will be implemented by the IC fab2706to fabricate the IC2708. LPC simulates this processing based on the IC design layout diagram2710to create a simulated manufactured device, such as the IC2708. The processing parameters in LPC simulation can include parameters associated with various processes of the IC manufacturing cycle, parameters associated with tools used for manufacturing the IC, and/or other aspects of the manufacturing process. LPC takes into account various factors, such as aerial image contrast, depth of focus (“DOF”), mask error enhancement factor (“MEEF”), other suitable factors, and the like or combinations thereof. 
In some embodiments, after a simulated manufactured device has been created by LPC, and if the simulated device is not sufficiently close in shape to satisfy design rules, OPC and/or MRC are repeated to further refine the IC design layout diagram2710. It should be understood that the above description of the mask data preparation2712has been simplified for the purposes of clarity. In some embodiments, the mask data preparation2712includes additional features such as a logic operation (LOP) to modify the IC design layout diagram2710according to manufacturing rules. Additionally, the processes applied to the IC design layout diagram2710during the mask data preparation2712may be executed in a variety of different orders. After the mask data preparation2712and during the mask fabrication2714, a mask2716or a group of masks2716are fabricated based on the IC design layout diagram2710. In some embodiments, the mask fabrication2714includes performing one or more lithographic exposures based on the IC design layout diagram2710. In some embodiments, an electron-beam (e-beam) or a mechanism of multiple e-beams is used to form a pattern on a mask(s)2716(photomask or reticle) based on the IC design layout diagram2710. The mask(s)2716can be formed in various technologies. For example, in some embodiments, the mask(s)2716is formed using binary technology. In some embodiments, a mask pattern includes opaque regions and transparent regions. A radiation beam, such as an ultraviolet (UV) beam, that is used to expose the image sensitive material layer (e.g., photoresist) coated on a wafer is blocked by the opaque regions and transmitted through the transparent regions. In one example, a binary mask version of the mask(s)2716includes a transparent substrate (e.g., fused quartz) and an opaque material (e.g., chromium) coated in the opaque regions of the binary mask. In another example, the mask(s)2716is formed using a phase shift technology. In a phase shift mask (PSM) version of the mask(s)2716, various features in the pattern formed on the phase shift mask are configured to have a proper phase difference to enhance the resolution and imaging quality. In various examples, the phase shift mask can be attenuated PSM or alternating PSM. The mask(s)2716generated by the mask fabrication2714is used in a variety of processes. For example, a mask(s)2716is used in an ion implantation process to form various doped regions in the semiconductor wafer, in an etching process to form various etching regions in the semiconductor wafer, and/or in other suitable processes. The IC fab2706includes wafer fabrication2718. The IC fab2706is an IC fabrication business that includes one or more manufacturing facilities for the fabrication of a variety of different IC products. In some embodiments, the IC fab2706is a semiconductor foundry. For example, there may be a manufacturing facility for the front end fabrication of a plurality of IC products (FEOL fabrication), while a second manufacturing facility may provide the back end fabrication for the interconnection and packaging of the IC products (BEOL fabrication), and a third manufacturing facility may provide other services for the foundry business. The IC fab2706uses the mask(s)2716fabricated by the mask house2704to fabricate the IC2708. Thus, the IC fab2706at least indirectly uses the IC design layout diagram2710to fabricate the IC2708. In some embodiments, a semiconductor wafer2720is fabricated by the IC fab2706using the mask(s)2716to form the IC2708.
In some embodiments, the IC fab2706includes performing one or more lithographic exposures based at least indirectly on the IC design layout diagram2710. The semiconductor wafer2720includes a silicon substrate or other proper substrate having material layers formed thereon. The semiconductor wafer2720further includes one or more of various doped regions, dielectric features, multilevel interconnects, and the like (formed at subsequent manufacturing steps). FIG.28illustrates an example flowchart of a method of providing an integrated circuit in accordance with some embodiments. Initially, as shown in block2800, a placement of the circuit(s) and/or the components in the integrated circuit is determined. At block2802, the number of diagonal layers and the number of Manhattan layers for an integrated circuit are determined. One or more of the embodiments described in conjunction withFIGS.3-9can be used in block2802. The supply and the demand of the vertical edges, the diagonal edges, and the horizontal edges for each conductor layer (e.g., 3D routing) or for all conductor layers (e.g., 2D global routing) are determined (block2804). One or more of the embodiments described in conjunction withFIGS.12-22can be used in block2804. A minimum pitch for the horizontal and the vertical tracks and the minimum pitch for the diagonal tracks are determined at block2806. One or more of the embodiments described in conjunction withFIGS.23and25can be used in block2806. Next, a conductor scheme is selected at block2808. One of the embodiments described in conjunction withFIGS.10-11can be used in block2808. Other information on the integrated circuit is received at block2810. The other information is any other suitable information that is used in the design and/or the manufacture of the integrated circuit. The other information can include, but is not limited to, design check rules and the types, number, and placement of the various components and/or circuits in the integrated circuit. A layout of the integrated circuit is then generated at block2812. In one embodiment, the layout is produced by a design house (e.g., design house2702inFIG.27). The layout can be represented or defined by an IC design layout diagram (e.g., IC design layout diagram2710inFIG.27). The integrated circuit is then fabricated at block2814. In one embodiment, the integrated circuit is manufactured using at least the mask house2704and an IC fab2706as shown and described in conjunction withFIG.27. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure. In one aspect, a method of designing an integrated circuit includes a processing device determining a placement of components in the integrated circuit and determining a representation for each respective net in the integrated circuit based on the placement. In one embodiment, the representation is a tree representation. 
A number of conductor layers to be used for Manhattan routing of conductors is determined, where the determining includes calculating a first ratio for the Manhattan edges in the trees based on a total length of the Manhattan edges or on a total count of the Manhattan edges. A number of conductor layers to be used for diagonal routing of conductors is determined, where the determining comprises calculating a second ratio for the diagonal edges in the trees based on a total length of the diagonal edges or on a total count of the diagonal edges. Based on the first and the second ratios, a conductor scheme for the integrated circuit is selected. A layout of the integrated circuit is then generated. In another aspect, a system includes a processing device and a memory operably connected to the processing device. The memory stores instructions that, when executed by the processing device, cause operations to be performed. The operations include determining a placement of components in the integrated circuit and determining a representation for each respective net in the integrated circuit based on the placement. In one embodiment, the representation is a tree representation. A number of conductor layers to be used for Manhattan routing of conductors is determined, where the determining includes calculating a first ratio for the Manhattan edges in the trees based on a total length of the Manhattan edges or on a total count of the Manhattan edges. A number of conductor layers to be used for diagonal routing of conductors is determined, where the determining comprises calculating a second ratio for the diagonal edges in the trees based on a total length of the diagonal edges or on a total count of the diagonal edges. Based on the first and the second ratios, a conductor scheme for the integrated circuit is selected. A layout of the integrated circuit is then generated. In yet another aspect, a method includes a processing device determining a placement of components in the integrated circuit and determining a representation for each respective net in the integrated circuit based on the placement. In one embodiment, the representation is a tree representation. A diagonal edge length is determined for one or more diagonal edges in the representations. A respective diagonal edge is replaced with a Manhattan edge based on a determination that the diagonal edge length of the respective diagonal edge is less than a threshold length. A number of conductor layers to be used for Manhattan routing of conductors is determined, where the determining includes calculating a first ratio for the Manhattan edges in the trees based on a total length of the Manhattan edges or on a total count of the Manhattan edges. A number of conductor layers to be used for diagonal routing of conductors is determined, where the determining comprises calculating a second ratio for the diagonal edges in the trees based on a total length of the diagonal edges or on a total count of the diagonal edges. Based on the first and the second ratios, a conductor scheme for the integrated circuit is selected. A layout of the integrated circuit is then generated. The description and illustration of one or more aspects provided in this application are not intended to limit or restrict the scope of the disclosure as claimed in any way. The aspects, examples, and details provided in this application are considered sufficient to convey possession and enable others to make and use the best mode of the claimed disclosure.
The claimed disclosure should not be construed as being limited to any aspect, example, or detail provided in this application. Regardless of whether shown and described in combination or separately, the various features (both structural and methodological) are intended to be selectively included or omitted to produce an embodiment with a particular set of features. Having been provided with the description and illustration of the present application, one skilled in the art may envision variations, modifications, and alternate aspects falling within the spirit of the broader aspects of the general inventive concept embodied in this application that do not depart from the broader scope of the claimed disclosure. | 70,185 |
11861285 | DETAILED DESCRIPTION The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. Embodiments, or examples, illustrated in the drawings are disclosed as follows using specific language. It will nevertheless be understood that the embodiments and examples are not intended to be limiting. Any alterations or modifications in the disclosed embodiments, and any further applications of the principles disclosed in this document are contemplated as would normally occur to one of ordinary skill in the pertinent art. Further, it is understood that several processing steps and/or features of a device may be only briefly described. Also, additional processing steps and/or features can be added, and certain of the following processing steps and/or features can be removed or changed while still implementing the claims. Thus, the following descriptions should be understood to represent examples only, and are not intended to suggest that one or more steps or features are required. FIG.1is a block diagram of a semiconductor device1in accordance with some embodiments of the present disclosure. The semiconductor device1includes heat generating structures (which may also be referred to as heat sources)10and11, and a conductive line segment12. Each of the heat generating structures10and11may be or may include any device or element present on the semiconductor device1that may contribute to heating the conductive line segment12during the operation of the semiconductor device1. In some embodiments, each of the heat generating structures10and11may be a chip or a die including a semiconductor substrate, one or more integrated circuit devices and one or more overlying interconnection structures therein.
The integrated circuit devices may include active devices such as transistors and/or passive devices such as resistors, capacitors, inductors, or a combination thereof. In some embodiments, the heat generating structure10may be or may include a metal-oxide-semiconductor field-effect transistor (MOSFET), such as a complementary MOS (CMOS), a fin field effect transistor (FinFET), an n-channel MOSFET, a p-channel MOSFET, or a combination thereof. In some embodiments, the heat generating structure11may be or may include a high-resistance (Hi-R) element. In some embodiments, the Hi-R element may be non-metallic. In some embodiments, the conductive line segment12may include a conductive line including a plurality of metal atoms, selected from a group of metals including, e.g., aluminum (Al), copper (Cu), titanium (Ti), tantalum (Ta), tungsten (W), platinum (Pt), cobalt (Co) and, in some embodiments, one or more alloying metals or other elements including nickel (Ni), nitrogen (N), and silicon (Si). The conductive line segment12forms a conductive path for electrons moving between a cathode and an anode. EM occurs when electrical current runs through a conductive line (such as the conductive line segment12) and the electrons transfer a portion of their momentum to the metal atoms of the conductive line, thereby impelling the metal atoms in the direction of the electron flow. Repeated transfers of momentum from the electrons to the metal atoms during operation of a semiconductor device (such as the semiconductor device1) will gradually shift the metal atoms from their original positions, thereby increasing the non-uniformity of the conductive line. In those regions of the conductive line in which the movement of the metal atoms reduces the cross-section of the conductive line, the current density will increase and further exacerbate both the self-heating effect and EM in the thinned region(s). Conductive lines incorporating such thinned regions will exhibit increased resistance and will typically lead to reduced performance and, eventually, a void or an open circuit. Conversely, in those regions of the conductive line in which the movement of the metal atoms increases the cross-section of the conductive line, the thickened regions, e.g., hillocks, will tend to stress the surrounding materials and eventually compromise the structural integrity of the surrounding materials and/or create a short circuit to an adjacent conductive line or other conductor. Over time, EM increases the non-uniformity of the conductive line and causes the formation of hillocks (accumulation of excess metal) and/or voids (depletion of initial metal) in the conductive line which, in turn, tends to result in short circuits (in the presence of hillocks) or open circuits (in the presence of voids). To avoid EM-related failures of the semiconductor device, EM evaluation, analysis, and signoff methodologies may be applied to estimate a mean time to failure (MTTF) of a conductive line due to EM. A simulated integrated circuit design that passes the applicable EM requirements may be approved for EM signoff and tape-out for use in manufacturing a semiconductor device. In some embodiments, an EM evaluation considers various thermal effects, such as self-heating of heat generating structure(s) (which may experience some degree of self-heating during operation) and thermal coupling on heat sensitive structure(s).
For example, the heat generating structures10and11, and the conductive line segment12may experience some degree of self-heating during the operation of the semiconductor device1, and thus may be considered heat generating structures in an EM evaluation. Therefore, the self-heating effects of the heat generating structures10and11, and the conductive line segment12may be taken into consideration in an EM evaluation. In addition, a portion of the heat generated from the heat generating structures10and11may be transferred to conductive line segment12and coupled with inherent current/resistance (IR) heating (also referred to as ohmic or joule heating) of the conductive line segment12, increasing the risk of EM-related failure. For example, at least a portion of the heat generated from the FinFET semiconductor device may, in turn, be transferred to the conductive lines through direct contact with the transistor and via conduction through intervening materials, e.g., layers of interlayer dielectric (ILD) material(s). Therefore, an EM evaluation also considers the increased operating temperature of the conductive line segment12resulting from or induced by thermal coupling from heat generating structures10and11. FIG.2is a flowchart2showing an EM evaluation method in accordance with some embodiments of the present disclosure.FIG.3is a simulated integrated circuit design layout3in accordance with some embodiments of the present disclosure. The EM evaluation method ofFIG.2is detailed here with respect to the simulated integrated circuit design layout3ofFIG.3. However, the present disclosure is not limited thereto. In some embodiments, the EM sign-off methodology of the present disclosure may be applied on any suitable simulated integrated circuit design layout. The EM evaluation method may begin in operation 21, generating a simulated integrated circuit design layout. For example, the simulated integrated circuit design layout3as shown inFIG.3may be generated by a data storage device for storing design data corresponding to an integrated circuit layout. In some embodiments, during a structural and/or operational review of a simulated integrated circuit design layout that is under evaluation, one or more temperature sensitive structures and one or more heat generating structures may be identified. For example, structures for which an increased operating temperature will degrade performance and/or lifetime, may be identified as temperature sensitive structures, such as transistors and conductive lines. For example, proximate structures surrounding and/or adjacent to the temperature sensitive structure may be evaluated for identification as heat generating structures. In order to be identified as a heat generating structure, a proximate structure can exhibit at least one of the following properties: (1) an operating temperature that meets or exceeds a predetermined temperature level above the anticipated operating temperature of the temperature sensitive structure; and (2) a location within the impact area defined by the temperature sensitive structure (or by the heat generating structure) that allows thermal coupling between the heat generating structure and the temperature sensitive structure. Proximate structures having or exhibiting an operating temperature and location sufficient to meet the noted properties are then identified as heat generating structures. 
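The two identification properties can be expressed as a simple predicate. In this sketch the two checks are combined with a logical AND, following the closing sentence above ("an operating temperature and location sufficient to meet the noted properties"); the field names and example values are assumptions.

```python
def is_heat_generating(candidate, sensitive, delta_t_min, impact_radius):
    # Property (1): operating temperature meets or exceeds a predetermined
    # level above the sensitive structure's anticipated operating temperature.
    hot_enough = candidate["op_temp"] >= sensitive["op_temp"] + delta_t_min
    # Property (2): located within the impact area, allowing thermal
    # coupling with the temperature sensitive structure.
    close_enough = candidate["distance"] <= impact_radius
    # Combining with AND is an assumption; the disclosure also phrases
    # this as "at least one of" the properties.
    return hot_enough and close_enough

print(is_heat_generating({"op_temp": 95.0, "distance": 2.0},
                         {"op_temp": 80.0},
                         delta_t_min=10.0, impact_radius=5.0))   # -> True
```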
In some embodiments, an electronic design automation (EDA) (also referred to as electronic computer-aided design (ECAD)) tool may be used to identify potential heat generating structures. The simulated integrated circuit design layout3may be a simulated integrated circuit design layout of a FinFET semiconductor device, and may be identified as a heat generating structure. In some embodiments, the simulated integrated circuit design layout3may further include a layout for a heat sensitive structure (such as the conductive line segment12ofFIG.1, not shown inFIG.3). In some embodiments, a FinFET semiconductor device includes a substrate in which is formed an active region in which a source and drain are formed, a guard ring, a plurality of conductive line layers separated by layers of ILD material(s), and vias formed through the ILD materials to establish electrical connections to and between the conductive line layers. Depending on the particular integrated circuit design, heat generated within the active regions of the FinFET semiconductor device will reach portions of the conductive line layers that are within the active region impact range and, to some extent, through the vias connecting the conductive lines to the active region. In some embodiments, the FinFET semiconductor device ofFIG.3includes an active area or oxide definition (OD) area30, polysilicon gate(s) (PO)32,33,34, and35, polysilicon gate(s) over diffusion edge (PODE)31,36, source(s) S, and drain(s) D. In some embodiments, there may be any number of OD areas, PO, PODE, sources, and drains in the FinFET semiconductor device ofFIG.3based on design requirements. In operation 22, a current distribution of the simulated integrated circuit design layout3may be determined. For example, a current distribution among currents I1, I2, and I3may be determined. In some embodiments, total current Itotalmay flow into the simulated integrated circuit design layout3through the drain D. In some embodiments, an individual current path of the simulated integrated circuit design layout3may be from a drain D to a source S controlled by a PO. For example, the simulated integrated circuit design layout3can include a current path from the current I1through PO32to the current I4, a current path from the current I2through PO33to the current I4, a current path from the current I2through PO34to the current I5, and a current path from the current I3through PO35to the current I5. In the simulated integrated circuit design layout3, the current I1flows along one current path and the current I3flows along another current path. The current I2contributes to two current paths. In some embodiments, the current distribution among the currents I1, I2, and I3may be determined to be one third (⅓) of the total current Itotaleach. However, since the current I2contributes to two current paths, the practical current I2is twice the current I1and the current I3, and thus the current distribution among the currents I1, I2, and I3may not be equitable. In this embodiment, the current I2may be underestimated by about 33 percent (%) of the practical current I2.
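The current-distribution example above can be reproduced with a few lines of arithmetic. The path counts are read off the layout ofFIG.3; the normalization of Itotal is an assumption for illustration.

```python
I_total = 1.0   # normalized total current into the drain

# Paths per current, read off the layout: I1 and I3 each feed one
# drain-to-source path, while I2 feeds two (through PO 33 and PO 34).
path_contributions = {"I1": 1, "I2": 2, "I3": 1}
n_paths = sum(path_contributions.values())   # four paths in total

practical = {name: I_total * n / n_paths
             for name, n in path_contributions.items()}
naive_I2 = I_total / 3   # equal split among I1, I2, and I3

underestimate = (practical["I2"] - naive_I2) / practical["I2"]
print(practical)                    # I2 = 0.5, I1 = I3 = 0.25
print(round(underestimate * 100))   # -> 33 percent, matching the text
```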
EM evaluation methods that do not identify and compensate for the increased operating temperature of the heat generating structure in the simulated integrated circuit design layout resulting from or induced by the current distribution present an increased risk of underestimating the self-heating effects (and also the thermal coupling) of the simulated integrated circuit design layout3and the conductive line operating temperature. In some embodiments, underestimating the self-heating effects and the conductive line operating temperature may, in turn, produce an EM evaluation result that is overly optimistic, tending to overestimate the average lifetime of semiconductor devices manufactured to that design, and resulting in premature field failures of the affected semiconductor devices. In order to provide a reasonable and prudent estimate regarding the self-heating effects and the thermal coupling from the heat generating structures, the EM evaluation method takes the practical current distribution into consideration. In some embodiments, a current of an individual current path in the simulated integrated circuit design layout3may be calculated based on the practical current distribution. In some embodiments, a practical current for an individual current path of the simulated integrated circuit design layout3may be represented by the following formula (or equation) Eq. 1:

I_MD(i) = f(I_total, #PO, #PODE, #SOURCE, #DRAIN, i)    [Eq. 1]

In some embodiments, the relevant values and/or parameters included in the formula Eq. 1 are provided by the foundry, incorporated in the applicable design rules, or extracted from the integrated circuit layout and include:
I_total: total current running through the simulated integrated circuit design layout3;
#PO: number of PO;
#PODE: number of PODE;
#SOURCE: number of sources;
#DRAIN: number of drains;
i: location of an individual current path.
In some embodiments, the formula Eq. 1 may further include other values and/or parameters corresponding to the practical current distribution. By recognizing and determining the practical current distribution of heat generating structures, the EM evaluation according to some embodiments of the present disclosure provides a more grounded and accurate estimate of the anticipated performance of the semiconductor device, thereby increasing the likelihood that semiconductor devices can meet or exceed customer expectations. In operation 23, an individual self-heating temperature of the simulated integrated circuit design layout3can be calculated. For example, the individual self-heating temperature ΔT_i for an individual current path of the simulated integrated circuit design layout3may be calculated using the device temperature formula Eq. 2:

ΔT_i = R_THC × finger_effect × fin_effect × power_per_fin/per_finger    [Eq. 2]

In some embodiments, the relevant values and/or parameters included in the formula Eq. 2 are provided by the foundry, incorporated in the applicable design rules, or extracted from the integrated circuit layout and include:
R_THC: thermal resistance value (may be provided by the foundry);
finger_effect: a function of gate finger number, cross-coupling of gate fingers, etc.;
fin_effect: a function of fin number, fin width, etc.;
power_per_fin/per_finger: a function of I_MD (watts).
In some embodiments, as mentioned, the I_MD takes the current distribution into consideration. Therefore, the individual self-heating temperature ΔT_i for an individual current path of the simulated integrated circuit design layout3takes the current distribution into consideration.
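A minimal Python sketch of Eq. 2 follows; R_THC, the finger/fin effect factors, and the effective path resistance are foundry-specific, so the numbers below are placeholders rather than real process data.

```python
# Hedged sketch of the device temperature formula Eq. 2; all values are
# illustrative assumptions, not actual design-rule inputs.

def self_heating_temperature(r_thc: float,
                             finger_effect: float,
                             fin_effect: float,
                             power_per_fin_per_finger: float) -> float:
    """Return the individual self-heating temperature rise ΔT_i (Eq. 2)."""
    return r_thc * finger_effect * fin_effect * power_per_fin_per_finger

# Example with made-up numbers: power derived from I_MD(i) and an assumed
# effective resistance of the current path.
i_md = 0.5e-3            # A, practical current of one path (from Eq. 1)
r_path = 2.0e3           # ohm, assumed effective path resistance
power = i_md ** 2 * r_path
delta_t_i = self_heating_temperature(r_thc=1.5e4, finger_effect=1.2,
                                     fin_effect=1.1,
                                     power_per_fin_per_finger=power)
print(f"ΔT_i ≈ {delta_t_i:.2f} K")
```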
In some embodiments, the device temperature formula Eq. 2 may be provided as part of the design tool set supplied by a semiconductor device foundry including, for example, a Simulation Program with Integrated Circuit Emphasis (SPICE) model corresponding to a particular manufacturing process. In operation 24, a cumulative self-heating temperature ΔT_OD of the simulated integrated circuit design layout3can be calculated. In some embodiments, the cumulative self-heating temperature ΔT_OD may be calculated based on the individual self-heating temperature ΔT_i obtained in operation 23. In some embodiments, the cumulative self-heating temperature ΔT_OD is a function of the individual self-heating temperature ΔT_i calculations for each of the structures (such as PO32,33,34, and35, and the PODE31,36) incorporated within the OD area30. In some embodiments, the cumulative self-heating temperature ΔT_OD may include an operating temperature of the heat generating structure. In some embodiments, this cumulative self-heating temperature ΔT_OD may then be used in subsequent calculations for evaluating the magnitude of thermal coupling between the OD area30and heat sensitive structures proximate to the simulated integrated circuit design layout3. In operation 25, an anticipated temperature increase ΔT_Con for a heat sensitive structure proximate to the simulated integrated circuit design layout3may be calculated. In some embodiments, the anticipated temperature increase ΔT_Con for a heat sensitive structure, e.g., a conductive line, may be a function of both self-heating of the heat sensitive structure and the thermal contribution(s) (or thermal coupling(s)) from other heat generating structures proximate to the heat sensitive structure. In some embodiments, the anticipated temperature increase ΔT_Con for a heat sensitive structure may be calculated according to the formula Eq. 3:

ΔT_Con = ΔT_rms + f(a, b, ΔT_OD, c, d, ΔT_Hi-R, ΔT_other_devices)    [Eq. 3]

In some embodiments, the relevant values and/or parameters included in the formula Eq. 3 are provided by the foundry, incorporated in the applicable design rules, or extracted from the integrated circuit layout and may include:
ΔT_rms: current-induced metal heating temperature of a heat sensitive structure;
ΔT_OD: cumulative self-heating temperature of a FinFET semiconductor device;
ΔT_Hi-R: self-heating temperature of a Hi-R device;
ΔT_other_devices: self-heating temperature from other devices.
For the purposes of the anticipated temperature increase ΔT_Con calculation according to formula Eq. 3, other devices can include, for example, bipolar junction transistors (BJT), diodes, and resistors that are thermally coupled to the heat sensitive structure under analysis. The thermal coefficients a, b, c, and d reflect:
a = a derating coefficient (or de-rating coefficient) value reflecting operation at less than maximum capacity;
b = a function of ΔT_rms and ΔT_OD [f(ΔT_rms, ΔT_OD)];
c = a layer effect associated with the layer/material;
d = a temperature profile associated with the layer/material.
In some embodiments, the thermal coefficients a, b, c, and d may be specific to each of the materials and/or layers incorporated in the simulated integrated circuit design layout and to the particular manufacturing process used to produce semiconductor devices according to the simulated integrated circuit design layout.
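Because the coupling function f(...) of Eq. 3 is foundry-specific, the sketch below models it as a simple weighted sum; the coefficients and temperatures are placeholders, not process data.

```python
# Hedged sketch of Eq. 3: f(...) is approximated as a weighted sum of the
# coupled contributions. All numbers are illustrative assumptions.

def anticipated_temperature_increase(delta_t_rms: float,
                                     delta_t_od: float,
                                     delta_t_hi_r: float,
                                     delta_t_other: float,
                                     a: float, b: float,
                                     c: float, d: float) -> float:
    """ΔT_Con = ΔT_rms + f(a, b, ΔT_OD, c, d, ΔT_Hi-R, ΔT_other_devices)."""
    coupled = a * b * delta_t_od + c * d * (delta_t_hi_r + delta_t_other)
    return delta_t_rms + coupled

delta_t_con = anticipated_temperature_increase(
    delta_t_rms=5.0, delta_t_od=9.9, delta_t_hi_r=2.0, delta_t_other=1.0,
    a=0.8, b=0.5, c=0.6, d=0.4)
print(f"ΔT_Con ≈ {delta_t_con:.2f} K")  # evaluation temperature for EM sign-off
```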
In some embodiments, adjustment of the anticipated temperature increase ΔT_Con for the heat sensitive structure by the thermal contribution(s) may provide a more accurate analysis of the anticipated performance of the semiconductor device. In some embodiments, the range over which thermal coupling is expected to occur between a heat generating structure and the heat sensitive structure (which may also be referred to as the impact range or thermal coupling range) is defined by a horizontal distance from the associated heat generating structure. For example, the formula Eq. 3 may include the thermal contribution(s) based on the location of the heat sensitive structure with respect to the heat generating structure (such as the Hi-R impact area and the active region impact area). For example, the formula Eq. 3 may include the thermal contribution(s) from the impact areas associated with two or more heat generating structures overlapping at least in part (i.e., a combined impact area). In some embodiments, the anticipated temperature increase ΔT_Con may be used to evaluate the heat sensitive structure at an evaluation temperature. In operation 26, a tape-out data file corresponding to an integrated circuit layout that passes the EM analysis may be generated. In some embodiments, the EM methodologies detailed herein can be applied to any integrated circuit design layout and/or semiconductor manufacturing process in which self-heating effects are anticipated. In some embodiments, the integrated circuit design layouts can include FinFET devices and/or other planar or more complex structural semiconductor manufacturing processes. In some embodiments, the self-heating aware EM evaluation identifies those regions, if any, of the integrated circuit design layout in which the self-heating effects result in localized heating, e.g., a “hotspot,” that will reduce overall EM performance and/or lifetime of semiconductor devices manufactured according to the integrated circuit design. In some embodiments, the initial self-heating aware EM evaluation is coupled with a heat sink-aware EM evaluation in order to determine if one or more surrounding structures is capable of mitigating the self-heating effects and/or thermal coupling effects previously identified and thereby improving the EM performance of the integrated circuit design layout. FIG.4is a simulated integrated circuit design layout4in accordance with some embodiments of the present disclosure. The simulated integrated circuit design layout4ofFIG.4is similar to the simulated integrated circuit design layout ofFIG.3, with differences therebetween as follows. In some embodiments, the simulated integrated circuit design layout4includes two FinFET semiconductor devices ofFIG.3in parallel. The FinFET semiconductor device ofFIG.4includes an OD area40and an OD area41spaced apart from the OD area40. When applying the EM evaluation method ofFIG.2to the simulated integrated circuit design layout4, the number of the OD areas may be taken into consideration. For example, a practical current for an individual current path of the simulated integrated circuit design layout4may be represented by the following formula Eq. 4:

I_MD(i) = f(I_total, #PO, #PODE, #SOURCE, #DRAIN, #OD, i)    [Eq. 4]

In some embodiments, the relevant values and/or parameters included in the formula Eq. 4 are provided by the foundry, incorporated in the applicable design rules, or extracted from the integrated circuit layout and include:
I_total: total current running through the simulated integrated circuit design layout4;
#PO: number of PO;
#PODE: number of PODE;
#SOURCE: number of sources;
#DRAIN: number of drains;
#OD: number of OD areas;
i: location of an individual current path.
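An illustrative sketch of the Eq. 4 interface is shown below, assuming an equal per-path share of the total current across all OD areas; the real f(...) is defined by the foundry's design rules.

```python
# Hypothetical model of Eq. 4: the per-path current depends on how many
# drain-to-source paths all OD areas contribute in total. The equal-share
# assumption and the path bookkeeping are illustrative only.

def i_md(total_current: float, paths_per_od: list[int]) -> float:
    """Per-path current, assuming every path carries an equal share."""
    return total_current / sum(paths_per_od)

# FIG. 4: two OD areas in parallel, each assumed to contribute four paths.
print(f"I_MD = {i_md(1.0, [4, 4]):.3f} x I_total")  # 0.125 x I_total per path
```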
FIG.5is a simulated integrated circuit design layout5in accordance with some embodiments of the present disclosure. The simulated integrated circuit design layout5ofFIG.5is similar to the simulated integrated circuit design layout ofFIG.3, with differences therebetween as follows. In some embodiments, the simulated integrated circuit design layout5includes an OD area50having a width W1and an OD area51having a width W2. The width W2is different from the width W1. The width may be measured in a direction along the length side of the PO. When applying the EM evaluation method ofFIG.2to the simulated integrated circuit design layout5, the width of the OD area may be taken into consideration. For example, a practical current for an individual current path of the simulated integrated circuit design layout5may be represented by the following formula Eq. 5:

I_MD(i) = f(I_total, #PO, #PODE, #SOURCE, #DRAIN, W_FIN, i)    [Eq. 5]

In some embodiments, the relevant values and/or parameters included in the formula Eq. 5 are provided by the foundry, incorporated in the applicable design rules, or extracted from the integrated circuit layout and include:
I_total: total current running through the simulated integrated circuit design layout5;
#PO: number of PO;
#PODE: number of PODE;
#SOURCE: number of sources;
#DRAIN: number of drains;
W_FIN: width of OD area (or fin structure);
i: location of an individual current path.
FIG.6is a block diagram of an electronic process control (EPC) system6in accordance with some embodiments of the present disclosure. EM evaluation, analysis, and signoff methodologies (such as the EM evaluation method ofFIG.2) described herein are implementable, for example, using the EPC system6, in accordance with some embodiments. In some embodiments, the EPC system6may be a general purpose computing device including an I/O interface60, a hardware processor61, a network interface62, a memory64, and a bus68. In some embodiments, the I/O interface60may be coupled to external circuitry. In some embodiments, the EPC system6may be configured to receive information through the I/O interface60. The information received through the I/O interface60may include one or more of instructions, data, design rules, process performance histories, target ranges, set points, and/or other parameters for processing by the hardware processor61. The information may be transferred to the hardware processor61via the bus68. The EPC system6may be configured to receive information related to a user interface (UI) through the I/O interface60. The information may be stored in the memory64as user interface (UI)67. In one or more embodiments, the I/O interface60may include a keyboard, keypad, mouse, trackball, trackpad, touchscreen, and/or cursor direction keys for communicating information and commands to the hardware processor61.
In some embodiments, the hardware processor61may be configured to execute instructions (which may be referred to as computer program code)65encoded in the memory64in order to cause the EPC system6to be usable for performing a portion or all of the EM evaluation, analysis, and signoff methodologies (such as the EM evaluation method ofFIG.2) described herein. In some embodiments, the hardware processor61may be a central processing unit (CPU), a multiprocessor, a distributed processing system, an application specific integrated circuit (ASIC), and/or a suitable processing unit. In some embodiments, the network interface62may be coupled to the hardware processor61through the bus68. The network interface62may allow the EPC system6to communicate with network63, to which one or more other computer systems are connected. The network interface62may include wireless network interfaces such as BLUETOOTH, WIFI, WIMAX, GPRS, or WCDMA; or wired network interfaces such as ETHERNET, USB, or IEEE-1394. In some embodiments, the memory (which may be referred to as a non-transitory, computer-readable storage medium)64, amongst other things, may be encoded with, i.e., stores, instructions (or computer program code)65, such as a set of executable instructions. Execution of the computer program code65by the hardware processor61implements a portion or all of the EM evaluation, analysis, and signoff methodologies (such as the EM evaluation method ofFIG.2) described herein. In some embodiments, the memory64, amongst other things, may store formulas (such as the formulas Eqs. 1-5), design data corresponding to a simulated integrated circuit design layout, and models for calculating a simulated operating temperature (such as a pseudo-3-D thermal model or other suitable model). In some embodiments, the design data may utilize Open Artwork System Interchange Standard (OASIS) or another language for representing the integrated circuit design layout. In some embodiments, the memory64, amongst other things, may also store information which facilitates performing a portion or all of the EM evaluation, analysis, and signoff methodologies (such as the EM evaluation method ofFIG.2) described herein. In some embodiments, the memory64may store process control data66including, in some embodiments, control algorithms, process variables and constants, target ranges, set points, and code for enabling statistical process control (SPC) and/or model predictive control (MPC) based control of the various processes. In some embodiments, the memory64may be an electronic, magnetic, optical, electromagnetic, infrared, and/or a semiconductor system (or apparatus or device). For example, the memory64may include a semiconductor or solid-state memory, a magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and/or an optical disk. In one or more embodiments using optical disks, the memory64may include a compact disk-read only memory (CD-ROM), a compact disk-read/write (CD-R/W), and/or a digital video disc (DVD). In some embodiments, a portion or all of the EM evaluation, analysis, and signoff methodologies (such as the EM evaluation method ofFIG.2) described herein may be implemented as a standalone software application for execution by a processor.
In some embodiments, a portion or all of the EM evaluation, analysis, and signoff methodologies (such as the EM evaluation method ofFIG.2) described herein may be implemented as a software application that is a part of an additional software application. In some embodiments, a portion or all of the EM evaluation, analysis, and signoff methodologies (such as the EM evaluation method ofFIG.2) described herein may be implemented as a plugin for a software application. In some embodiments, at least one of the EM evaluation, analysis, and signoff methodologies (such as the EM evaluation method ofFIG.2) described herein may be implemented as a software application that is a portion of an EPC tool. In some embodiments, a portion or all of the EM evaluation, analysis, and signoff methodologies (such as the EM evaluation method ofFIG.2) described herein may be implemented as a software application that is used by the EPC system6. In some embodiments, the processes of the EM evaluation, analysis, and signoff methodologies (such as the EM evaluation method ofFIG.2) described herein are realized as functions of a program stored in a non-transitory computer readable recording medium. Examples of a non-transitory computer readable recording medium include, but are not limited to, external/removable and/or internal/built-in storage or memory unit, e.g., one or more of an optical disk, such as a DVD, a magnetic disk, such as a hard disk, a semiconductor memory, such as a ROM, a RAM, a memory card, and the like. Some embodiments of the present disclosure provide a method for evaluating a heat sensitive structure. The method includes identifying a heat sensitive structure in an integrated circuit design layout and identifying a heat generating structure in the integrated circuit design layout. The method also includes calculating an operating temperature of the heat generating structure by taking a practical current distribution into consideration. The method also includes calculating an anticipated temperature increase for the heat sensitive structure induced by thermal coupling of the heat generating structure at the operating temperature. Some embodiments of the present disclosure provide a method for evaluating a heat sensitive structure. The method includes identifying a FinFET structure in an integrated circuit design layout. The FinFET structure includes a first OD area and a second OD area spaced apart from the first OD area. The method also includes determining a practical current distribution of the FinFET structure by taking an OD area number of the FinFET structure into consideration. The method also includes calculating an operating temperature of the FinFET structure based on the practical current distribution. Some embodiments of the present disclosure provide a method for evaluating a heat sensitive structure. The method includes identifying a FinFET structure in an integrated circuit design layout. The FinFET structure includes a first fin structure having a first width and a second fin structure having a second width different from the first width of the first fin structure. The method also includes determining a practical current distribution of the FinFET structure by taking the first width and the second width of the FinFET structure into consideration. The method also includes calculating an operating temperature of the FinFET structure based on the practical current distribution. 
The methods and features of the present disclosure have been sufficiently described in the above examples and descriptions. It should be understood that any modifications or changes without departing from the spirit of the present disclosure are intended to be covered in the protection scope of the present disclosure. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, and composition of matter, means, methods and steps described in the specification. As those skilled in the art will readily appreciate from the present disclosure, processes, machines, manufacture, compositions of matter, means, methods or steps presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein, may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps. In addition, each claim constitutes a separate embodiment, and combinations of various claims and embodiments are within the scope of the present disclosure. | 35,047 |
11861286 | DETAILED DESCRIPTION During semiconductor manufacturing, a wafer may progress through multiple processing stages. The results after one or more processing stages may be inspected using wafer inspection tools. Wafer inspection tools may capture an image of the wafer, and then use image processing techniques to detect defects. In this disclosure, the term “defect” may refer to a deviation (which is greater than a tolerance amount) between a printed pattern on a wafer and a design intent, and which is detected by a wafer inspection tool. For example, if a space is supposed to exist between two layout shapes in the design intent, and the wafer inspection tool detects that no space exists between the corresponding printed shapes, then the wafer inspection tool may indicate that a defect exists at the location where the space was expected. Defects reported by a wafer inspection tool may be categorized as real defects or false positives. A real defect is a reported defect that causes the IC to malfunction or not meet the desired performance goals, whereas a false positive is a reported defect that can be ignored because it does not impact the functionality or performance of the IC in any meaningful way. At small technology nodes, a large number of the defects reported by a wafer inspection tool can be false positives. Typically, the inspection step is followed by a review step that uses tools like a scanning electron microscope (SEM) to verify if a reported defect is a real defect, and to help uncover possible causes of real defects. Only a small percentage of reported defects can be sent for SEM review due to time and resource constraints (SEM equipment is very costly, and SEM review is a time-consuming step). If most of the reported defects are false positives, then the process of identifying the real defects and uncovering the root causes of the real defects can become difficult and slow. Approaches that use polygonal pattern matching can be too slow and inefficient for binning millions of incoming defects. Additionally, to gather enough information to perform a defect-to-layout correlation, an extremely large number of samples may need to be reviewed. This can lead to a low throughput from the review process or to a low yield in the absence of a comprehensive review. Embodiments described herein provide systems and techniques for an effective and efficient process for inline defect segregation and sampling so that the identified defects can be considered for review. Benefits of embodiments described herein include, but are not limited to, (1) combating the problems of the high false alarm rate of inspection defects and the difficulty in uncovering underlying root causes of real defects, (2) reducing the amount of resources used for determining root causes for defects, and (3) improving the manufacturing yield. Optical inspection tools can overlay the physical layout structures from the design data with the silicon image. This information can help one understand whether the defect under review overlaps with a critical structure in the physical layout. If so, then the tool can mark the critical structure for review. Specifically, after the inspection step in semiconductor manufacturing, defects are typically reported by using inspection tools. The defects are typically reported with their location on the wafer and their size. Coordinates on the wafer are usually translated to corresponding coordinates in the CAD coordinate system.
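As a sketch of that wafer-to-CAD coordinate translation, the offset-plus-scale (affine) model below is an assumption made for illustration; production inspection tools apply calibrated per-die alignment data.

```python
# Illustrative translation of a reported defect location from wafer
# coordinates to the CAD coordinate system of one die. The affine model and
# all values are assumptions, not tool behavior from the disclosure.

def wafer_to_cad(x_um: float, y_um: float,
                 die_origin: tuple[float, float],
                 scale: float = 1.0) -> tuple[float, float]:
    """Map a wafer-space point into the die's CAD layout space."""
    return ((x_um - die_origin[0]) * scale, (y_um - die_origin[1]) * scale)

# Defect reported at (105250, 98400) um on the wafer; die origin at
# (100000, 95000) um -> CAD coordinates (5250, 3400).
print(wafer_to_cad(105_250.0, 98_400.0, die_origin=(100_000.0, 95_000.0)))
```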
An IC's physical layout may be represented by a hierarchical set of layout cells. Each layout cell may include polygons and/or one or more child cells. Defects in semiconductor manufacturing may be caused by specific geometric positioning of polygons. The specific geometric positioning of polygons can, in turn, be due to relative instantiation of layout cells. A set of defects may be assigned the same attribute identifier if the set of defects overlap the same set of layout cells. The attribute identifiers may then be used to group similar defects. FIG.1illustrates associating a defect with an attribute that is based on one or more cell identifiers in accordance with some embodiments described herein. IC design100may include layout cells102,104, and106. Each layout cell may include multiple polygons, e.g., polygon108. Each polygon may refer to a physical structure in an IC chip, and the squares at the junction of two polygons may refer to a contact or a via between the two polygons. Each defect may be associated with a defect area. For example, defects D1, D2, and D3may be associated with defect areas110,112, and114, respectively, which are shown as dashed ovals inFIG.1. In some embodiments described herein, overlap may be detected between the defect area associated with a defect and polygons belonging to one or more cells. Next, the defect may be associated with an attribute that is constructed based on identifiers of a set of cells that contain the overlapping polygons. For example, defect D2is associated with defect area112, which overlaps with polygon108. Polygon108is in layout cell106. Thus, defect D2may be associated with an attribute that is based on an identifier for layout cell106. Defect D3is associated with defect area114, which overlaps with polygons belonging to cells104and106. Thus, defect D3may be associated with an attribute that is based on identifiers for layout cells104and106. Finally, defect D1does not overlap with any polygons. Thus, defect D1may be associated with a null identifier and ignored because it is not expected to cause any manufacturing problems in the IC chip. Layout cell areas may be substantially larger than a typical defect area. For example, if cell hierarchies are merged into a parent cell, then the parent cell may have a large size when compared to the defect areas. If attributes are constructed based on cell identifiers as described in reference toFIG.1, then multiple unrelated defects may be associated with the same attribute when layout cells are much larger than typical defect areas. Some embodiments described herein can partition large cells into multiple regions and assign each region a unique identifier. Next, for a given defect, an overlap may be detected between the defect area and one or more polygons of one or more cells. The regions within the cells that contain the overlapping polygons may then be identified. The defect may then be associated with an attribute that is based on the cell identifiers and the region identifiers that contain the overlapping polygons. In some embodiments described herein, a layout cell size threshold may be used to determine whether to partition the cell into regions. Specifically, if the area of a layout cell is greater than the layout cell size threshold, then the layout cell may be partitioned into multiple regions so that each region is less than or equal to the layout cell size threshold.
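The following Python sketch (illustrative, not part of the disclosure) makes the FIG.1-style cell-overlap association concrete; axis-aligned rectangles stand in for the real polygons and the oval defect areas.

```python
# Minimal sketch: associate a defect with an attribute built from the
# identifiers of the cells whose polygons overlap the defect area.

def overlaps(a, b):
    """Axis-aligned bounding-box intersection test; boxes are (x0, y0, x1, y1)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def defect_attribute(defect_box, cells):
    """cells: mapping of cell id -> list of polygon bounding boxes."""
    hit = sorted(cid for cid, polys in cells.items()
                 if any(overlaps(defect_box, p) for p in polys))
    return "{" + ", ".join(hit) + "}" if hit else None  # None ~ null identifier

cells = {"CELL_104": [(0, 0, 4, 2)], "CELL_106": [(3, 1, 7, 3)]}
print(defect_attribute((3.5, 1.5, 4.5, 2.5), cells))  # {CELL_104, CELL_106}
print(defect_attribute((10, 10, 11, 11), cells))      # None -> defect ignored
```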
In some embodiments described herein, a threshold number of polygons in a layout cell may be used to determine whether to partition the layout cell. Specifically, if the number of polygons in a layout cell is greater than the threshold number of polygons, then the layout cell may be partitioned into multiple regions. Otherwise, if the number of polygons in the layout cell is less than or equal to the threshold number of polygons, then the layout cell may not be partitioned into multiple regions. Once a cell has been partitioned into multiple regions, each region may be assigned a unique identifier. Next, a particular region within a particular layout cell may be uniquely referenced using a combination of the layout cell identifier and the region identifier. For example, a unique identifier for a region within a cell may be formed by concatenating the layout cell identifier and the region identifier (with optionally a separator character in between). FIG.2illustrates associating a defect with an attribute that is based on one or more cell identifiers and one or more region identifiers in accordance with some embodiments described herein. Layout cell200may be partitioned into regions202,204,206, and208. A defect with defect area210overlaps with polygons in regions202and204of layout cell200. This defect may be associated with an attribute that is based on the cell identifier for layout cell200and the region identifiers for regions202and204. Likewise, a defect with defect area212overlaps with a polygon in region206of layout cell200. This defect may be associated with an attribute that is based on the cell identifier for layout cell200and region identifier for region206. The extracted CAD based attribute may not be limited to the attributes derived from the physical layout. In some embodiments, attributes that define high level circuit design constructs associated with a defect may also be used. For example, attributes of a netlist that are associated with a defect may also be used. Specifically, in some embodiments, internal layout nets may be used to segment the layout cells into logically identifiable groups. Internal layout nets may also be used to represent relative positioning of polygons within a layout cell. In an IC design layout, polygons from multiple layers may overlap with each other through vias to form electrical connections. A set of electrically connected polygons may be referred to as a net. An internal layout net may refer to a set of electrically connected polygons within a layout cell. When a layout cell is instantiated in different areas of an IC chip, these internal layout nets may be printed in the same fashion. Thus, the set of internal layout nets that overlaps with a defect area may be used to construct a defect attribute. Internal layout nets may be identified using multiple techniques. In one technique, if a layout versus schematic (LVS) mapping is available, then labels for layout nets may be directly used from the LVS mapping. In another technique, if an LVS mapping is not available, then the net may be traced by using layer connectivity information, which is sometimes referred to as an online trace. Next, internal layout nets formed by the online trace may be assigned a unique identifier, e.g., based on the position of lower left corner of the trace with respect to the layout cell extent, and then using actual coordinates of polygons of the net to handle collisions with other layout nets having the same lower left corner. 
FIG.3illustrates associating a defect with an attribute that is based on one or more net identifiers in accordance with some embodiments described herein. Layout cell300may include nets302,304, and306. As shown inFIG.3, each net may include multiple electrically connected polygons. A defect with defect area308overlaps with polygons in nets302and304of layout cell300. This defect may be associated with an attribute that is based on the net identifiers for nets302and304. Likewise, a defect with defect area310overlaps with a polygon in net306of layout cell300. This defect may be associated with an attribute that is based on the net identifier for net306. FIG.4illustrates associating a defect with an attribute that is based on one or more net identifiers and one or more layout cell region identifiers in accordance with some embodiments described herein. If the internal layout nets are larger than the defect area, some embodiments may use layout cell region identifiers along with net identifiers to construct an attribute for the defect. For example, layout cell400may include nets402,404, and406. Further, layout cell400may be partitioned into regions412,414,416, and418, and each region may be assigned a unique identifier. A defect with defect area408overlaps with polygons in net402in region412of layout cell400. Thus, this defect may be associated with an attribute that is based on the net identifier for net402and the region identifier for region412. On the other hand, a defect with defect area410overlaps with polygons in net402in region418of layout cell400. Thus, this defect may be associated with an attribute that is based on the net identifier for net402and the region identifier for region418. In some embodiments described herein, a bounding box for a net may be partitioned into multiple regions, and each region may be assigned a unique identifier. Next, a combination of the net identifier and the region identifier may be used to construct an attribute for a defect. FIG.5illustrates associating a defect with an attribute that is based on one or more net identifiers and one or more net region identifiers in accordance with some embodiments described herein. The bounding box around net500may be partitioned into net regions502,504,506, and508. Next, some embodiments may use net region identifiers along with net identifiers to construct an attribute for the defect. For example, a defect with defect area510overlaps with polygons in net500in net regions502and504. Thus, this defect may be associated with an attribute that is based on the net identifier for net500and region identifiers for regions502and504. On the other hand, a defect with defect area512overlaps with polygons in net500in region508. Thus, this defect may be associated with an attribute that is based on the net identifier for net500and the region identifier for region508. Embodiments described herein may generally extract any CAD identifiers corresponding to semiconductor manufacturing defects. Defects arising from common root causes may have matching or similar values for one or more extracted CAD identifiers. The CAD identifiers may be obtained from cell information, instance hierarchy information, layout polygon connectivity, netlist information, and moment of polygons. Examples of CAD identifiers include, but are not limited to, a cell identifier (e.g., CELL_A), a net identifier (e.g., NET_A), a region identifier (e.g., REG_1). 
In some embodiments, the CAD identifiers for the defects may be used to construct defect attributes, and the defect attributes may be used to partition the defects into groups by using machine learning (ML) based clustering techniques. Examples of defect attributes include, but are not limited to, a set of cell identifiers, a set of net identifiers, a set of region identifiers, or a combination thereof (e.g., “{CELL_A, CELL_B, CELL_C},” “{CELL_A(REG_1), CELL_B(REG_1,REG_2)}”). Some embodiments described herein may maintain a database of defect attributes. If a database entry does not exist for an incoming defect, then a new attribute may be created for the defect and stored in the database. FIG.6Aillustrates a process for creating an attribute for a defect in accordance with some embodiments described herein. A set of polygons in an IC design that overlap with a defect area associated with the defect may be detected (at602). Next, a set of CAD identifiers may be extracted from the IC design based on the overlapping polygons (at604). A defect attribute may be constructed based on the set of CAD identifiers (at606). Next, the defect attribute may be searched in a defect attribute database (at608). If the defect attribute is found in the database, then a database record identifier corresponding to the defect attribute may be returned (at610). On the other hand, if the defect attribute is not found in the database, a new database record for the defect attribute may be created, and a database record identifier corresponding to the new database record may be returned (at612). In some embodiments, the database search may perform an exact match. In some embodiments, the database search may match for a threshold percentage of identifiers. A percentage-based match may be helpful to catch common critical areas across multiple defects that have failing polygon constructs. Specifically, matching a percentage of identifiers may allow the non-critical polygon constructs to be ignored while still matching critical recurring arrangements of polygons, by virtue of matching a critical subset of layout cells. For example, assume that the percentage match in a configuration is set at 80%. Additionally, in this example, the following defect attributes may be present in the database:
Attr #1={CELL_A, CELL_B, CELL_C, CELL_D, CELL_E}
Attr #2={CELL_A, CELL_B, CELL_E, CELL_F, CELL_G}
Next, a defect may be received with identifiers {CELL_A, CELL_C, CELL_D, CELL_E}. In this case, some embodiments may match the defect with attribute “Attr #1” because 80% of the identifiers match (i.e., four out of five identifiers match). This approximation can help the embodiment ignore polygon structures that do not correspond to a root cause. In some embodiments, the percentage threshold for matching defect attributes may be set by a user. FIG.6Billustrates a process for segregating defects based on CAD identifiers associated with the defects in accordance with some embodiments described herein. For each defect in a set of defects, the defect may be associated with a defect attribute constructed from a set of CAD identifiers associated with polygons in an IC design that overlap with a defect area of the defect (at652). For example, embodiments shown inFIGS.1-6Amay be used to construct a defect attribute and maintain a database of defect attributes. Specifically, the set of CAD identifiers may include a set of cell identifiers that include polygons that overlap with the defect area.
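Returning to the percentage-based database lookup ofFIG.6A, a minimal sketch follows; the in-memory dict stands in for a real attribute database.

```python
# Sketch of the percentage-based attribute lookup. Matching the example in
# the text: 4 of Attr #1's 5 identifiers are present, i.e. 80%.

def match_attribute(db: dict, identifiers: set, threshold: float = 0.80):
    """Return the first record whose identifier set matches >= threshold."""
    for record_id, attr in db.items():
        # Fraction of the stored attribute's identifiers found in the defect.
        frac = len(attr & identifiers) / len(attr)
        if frac >= threshold:
            return record_id
    return None  # caller creates a new record for a miss

db = {"Attr #1": {"CELL_A", "CELL_B", "CELL_C", "CELL_D", "CELL_E"},
      "Attr #2": {"CELL_A", "CELL_B", "CELL_E", "CELL_F", "CELL_G"}}
print(match_attribute(db, {"CELL_A", "CELL_C", "CELL_D", "CELL_E"}))  # Attr #1
```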
In some embodiments, the set of CAD identifiers may further include one or more cell identifiers in conjunction with one or more region identifiers that include polygons that overlap with the defect area. In some embodiments, the set of CAD identifiers may include a set of net identifiers that include polygons that overlap with the defect area. In some embodiments, the set of CAD identifiers may include one or more net identifiers in conjunction with one or more region identifiers that include polygons that overlap with the defect area. Next, the set of defects may be segregated into defect groups based on the associated defect attributes (at654). In some embodiments, an ML clustering technique may be used to segregate the set of inspection defects into the defect groups. The defect groups may then be used to perform additional processing on the set of defects (at656). Examples of additional processing on the set of defects may include, but are not limited to, (1) sampling defects for further review based on defect groups associated with the inspection defects, (2) analyzing defects in a given defect group to identify a root cause for the defects in the given defect group, (3) identifying a hotspot pattern associated with a defect group, (4) using the hotspot pattern to identify additional hotspot locations in the IC chip, and (5) using the hotspot pattern to filter defects. FIG.7illustrates a flow for segregating defects into defect groups and using the defect groups to perform further processing in accordance with some embodiments described herein. A set of defects can be identified on a wafer (702). The CAD design704can be used to compute CAD identifiers (706) for the defects. In some embodiments, ML clustering708can be used to group the defects into different groups. Next, the groups can be used for various applications including, but not limited to, sampling for further review710, root cause analysis712, and identifying hotspots714. Each defect may be represented as a point in a multidimensional space, where each type of CAD identifier associated with the defect may correspond to a distinct dimension. An ML clustering technique may determine clusters of points that are near each other and farther away from other points. Some ML clustering techniques are shown below:

Method Name | Parameters | Use Case | Geometry (Metric Used)
K-Means | number of clusters | General-purpose, even cluster size, flat geometry, not too many clusters | Distances between points
Affinity propagation | damping, sample preference | Many clusters, uneven cluster size, non-flat geometry | Graph distance (e.g., nearest-neighbor graph)
Mean-shift | bandwidth | Many clusters, uneven cluster size, non-flat geometry | Distances between points
Spectral clustering | number of clusters | Few clusters, even cluster size, non-flat geometry | Graph distance (e.g., nearest-neighbor graph)
Ward hierarchical clustering | number of clusters or distance threshold | Many clusters, possibly connectivity constraints | Distances between points
Agglomerative clustering | number of clusters or distance threshold, linkage type, distance | Many clusters, possibly connectivity constraints, non-Euclidean distances | Any pairwise distance
DBSCAN | neighborhood size | Non-flat geometry, uneven cluster sizes | Distances between nearest points
OPTICS | minimum cluster membership | Non-flat geometry, uneven cluster sizes, variable cluster density | Distances between points
Gaussian mixtures | many | Flat geometry, good for density estimation | Mahalanobis distances to centers
Birch | branching factor, threshold, optional global clusterer | Large dataset, outlier removal, data reduction | Euclidean distance between points

The hotspot identification714can be used to identify hotspots throughout the IC chip (716) and can also be used to filter defects. For example, defect filtering capability718can be used to obtain a substantially smaller set of defects720. Some embodiments may be used to find similar defects, which may help to sample defects for review effectively. Specifically, defect groups with a larger number of defects may be sampled with higher priority as they are likely to be more systematic. After a given defect group with a high defect population is sampled and found to have a real/false alarm defect ratio that is greater than a threshold value, no more defects may be sampled from the given defect group because it is unlikely that any additional real defects that are identified in the defect group will provide any new information for determining the root cause of the real defects. Sampling priority may be assigned to defect groups, and the number of defects sampled from a given defect group may be based on the defect group's priority. Specifically, more defects may be sampled from higher priority defect groups. Sampling priority may be increased for defect groups with the next higher defect counts until a real defect threshold or a false alarm threshold is reached. In this manner, embodiments described herein can make the review process more efficient and can improve the overall yield of chip manufacturing. Some embodiments described herein may be used for filtering incoming inspection defects based on their associated CAD identifiers. Specifically, defect attributes corresponding to real defects may be referred to as critical defect attributes. Critical defect attributes may be used to search an IC chip area for locations that are likely to include real defects. These areas may be referred to as hotspots and may be used to predict the classification of future incoming inspection defects. Some embodiments described herein may be used for assisting in uncovering new defect root causes. Critical defect attributes can be used to infer underlying root causes of the defects during failure analysis. Some embodiments described herein enable effective SEM review. Defects under the same group and having the same or similar CAD identifiers would exhibit similar root causes. So, if a sufficient number of defects within a defect group, say beyond a customizable threshold value, turn out to be real or nuisance defects, then the rest of the defects may be predicted as real or nuisance defects, respectively. This helps in selecting the samples to be sent for review.
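A sketch of the grouping step using one of the clustering techniques tabulated above follows; the one-hot encoding of identifiers and the choice of DBSCAN with these parameters are illustrative assumptions, and any listed method could be substituted.

```python
# Illustrative grouping of defects by their CAD-identifier attributes.
# Each distinct identifier becomes one dimension of the feature space.

from sklearn.cluster import DBSCAN
from sklearn.preprocessing import MultiLabelBinarizer

defect_attrs = [
    {"CELL_A", "NET_1"}, {"CELL_A", "NET_1"}, {"CELL_A", "NET_2"},
    {"CELL_B", "NET_7"}, {"CELL_B", "NET_7"},
]
X = MultiLabelBinarizer().fit_transform(defect_attrs)  # one-hot per identifier
labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)
print(labels)  # defects with matching/similar identifiers share a group label
```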
Defects from distinct defect groups that are related by a set of CAD identifier values may be used to uncover potential root causes, leading to more effective review cycles and improving the overall yield of the manufacturing process. Some embodiments may aid the discovery of “systematic-ness” of defects and help determine the root causes of defects. CAD identifier values related to systematic defects have valuable insights about possible root causes behind the defects. For example, consider the identifier associated with proximity of metal lines. If a certain group of defects is found to be consistently failing, and most of the defects have a similar value for the “proximity of metal lines” identifier, then one can infer that the defects might be caused by the proximity of metal lines, and this root cause may be confirmed by further investigation. Some embodiments may aid the search for probable hotspots and the filtering of incoming inspection defects. If defects from a given group turn out to be mostly real defects or mostly nuisance defects (with customizable thresholds), then the entire chip may be searched for the presence of identifier values corresponding to the group. These chip areas matching the specific identifier values may be referred to as hotspots. Some embodiments may aid the search for areas in the chip layout that match specific CAD identifier values. After the defect review, defects can be categorized either as real defects or false alarms. If specific identifier values are found to be recurring across real defects, then the entire chip can be searched for the presence of areas with these CAD identifier values. These areas can be referred to as hotspots and can be used for quick filtering or classification of defects coming out of inspection. FIG.8illustrates an example set of processes800used during the design, verification, and fabrication of an article of manufacture such as an integrated circuit to transform and verify design data and instructions that represent the integrated circuit. Each of these processes can be structured and enabled as multiple modules or operations. The term ‘EDA’ signifies the term ‘Electronic Design Automation.’ EDA can use one or more CAD tools. These processes start with the creation of a product idea810with information supplied by a designer, information which is transformed to create an article of manufacture that uses a set of EDA processes812. When the design is finalized, the design is taped-out834, which is when artwork (e.g., geometric patterns) for the integrated circuit is sent to a fabrication facility to manufacture the mask set, which is then used to manufacture the integrated circuit. After tape-out, a semiconductor die is fabricated836and packaging and assembly processes838are performed to produce the finished integrated circuit840. Specifications for a circuit or electronic structure may range from low-level transistor material layouts to high-level description languages. A high level of representation may be used to design circuits and systems, using a hardware description language (‘HDL’) such as VHDL, Verilog, SystemVerilog, SystemC, MyHDL or OpenVera. The HDL description can be transformed to a logic-level register transfer level (‘RTL’) description, a gate-level description, a layout-level description, or a mask-level description. Each lower representation level that is a more concrete description adds more useful detail into the design description, for example, more details for the modules that include the description.
The lower levels of representation that are more concrete descriptions can be generated by a computer, derived from a design library, or created by another design automation process. An example of a specification language at a lower level of representation for specifying more detailed descriptions is SPICE, which is used for detailed descriptions of circuits with many analog components. Descriptions at each level of representation are enabled for use by the corresponding tools of that layer (e.g., a formal verification tool). A design process may use a sequence depicted inFIG.8. The processes described may be enabled by EDA products (or tools). During system design814, functionality of an integrated circuit to be manufactured is specified. The design may be optimized for desired characteristics such as power consumption, performance, area (physical and/or lines of code), and reduction of costs, etc. Partitioning of the design into different types of modules or components can occur at this stage. During logic design and functional verification816, modules or components in the circuit are specified in one or more description languages and the specification is checked for functional accuracy. For example, the components of the circuit may be verified to generate outputs that match the requirements of the specification of the circuit or system being designed. Functional verification may use simulators and other programs such as testbench generators, static HDL checkers, and formal verifiers. In some embodiments, special systems of components referred to as ‘emulators’ or ‘prototyping systems’ are used to speed up the functional verification. During synthesis and design for test818, HDL code is transformed to a netlist. In some embodiments, a netlist may be a graph structure where edges of the graph structure represent components of a circuit and where the nodes of the graph structure represent how the components are interconnected. Both the HDL code and the netlist are hierarchical articles of manufacture that can be used by an EDA product to verify that the integrated circuit, when manufactured, performs according to the specified design. The netlist can be optimized for a target semiconductor manufacturing technology. Additionally, the finished integrated circuit may be tested to verify that the integrated circuit satisfies the requirements of the specification. During netlist verification820, the netlist is checked for compliance with timing constraints and for correspondence with the HDL code. During design planning822, an overall floor plan for the integrated circuit is constructed and analyzed for timing and top-level routing. During layout or physical implementation824, physical placement (positioning of circuit components such as transistors or capacitors) and routing (connection of the circuit components by multiple conductors) occurs, and the selection of cells from a library to enable specific logic functions can be performed. As used herein, the term ‘cell’ may specify a set of transistors, other components, and interconnections that provides a Boolean logic function (e.g., AND, OR, NOT, XOR) or a storage function (such as a flipflop or latch). As used herein, a circuit ‘block’ may refer to two or more cells. Both a cell and a circuit block can be referred to as a module or component and are enabled as both physical structures and in simulations.
Parameters are specified for selected cells (based on ‘standard cells’) such as size and made accessible in a database for use by EDA products. During analysis and extraction826, the circuit function is verified at the layout level, which permits refinement of the layout design. During physical verification828, the layout design is checked to ensure that manufacturing constraints are correct, such as DRC constraints, electrical constraints, lithographic constraints, and that circuitry function matches the HDL design specification. During resolution enhancement830, the geometry of the layout is transformed to improve how the circuit design is manufactured. During tape-out, data is created to be used (after lithographic enhancements are applied if appropriate) for production of lithography masks. During mask data preparation832, the ‘tape-out’ data is used to produce lithography masks that are used to produce finished integrated circuits. A storage subsystem of a computer system (such as computer system900ofFIG.9) may be used to store the programs and data structures that are used by some or all of the EDA products described herein, and products used for development of cells for the library and for physical and logical design that use the library. FIG.9illustrates an example machine of a computer system900within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system900includes a processing device902, a main memory904(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory906(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device918, which communicate with each other via a bus930. Processing device902represents one or more processors such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device902may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like.
The processing device902may be configured to execute instructions926for performing the operations and steps described herein. The computer system900may further include a network interface device908to communicate over the network920. The computer system900may also include a video display unit910(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device912(e.g., a keyboard), a cursor control device914(e.g., a mouse), a graphics processing unit922, a signal generation device916(e.g., a speaker), a video processing unit928, and an audio processing unit932. The data storage device918may include a machine-readable storage medium924(also known as a non-transitory computer-readable medium) on which is stored one or more sets of instructions926or software embodying any one or more of the methodologies or functions described herein. The instructions926may also reside, completely or at least partially, within the main memory904and/or within the processing device902during execution thereof by the computer system900, the main memory904and the processing device902also constituting machine-readable storage media. In some implementations, the instructions926include instructions to implement functionality corresponding to the present disclosure. While the machine-readable storage medium924is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine and the processing device902to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm may be a sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Such quantities may take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. Such signals may be referred to as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the present disclosure, it is appreciated that throughout the description, certain terms refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various other systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc. In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. Where the disclosure refers to some elements in the singular tense, more than one element can be depicted in the figures and like elements are labeled with like numerals. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 40,256 |
11861287 | The diagrams depicted herein are illustrative. There can be many variations to the diagrams or the operations described therein without departing from the spirit of the invention. For instance, the actions can be performed in a differing order or actions can be added, deleted or modified. Also, the term “coupled” and variations thereof describe having a communications path between two elements and do not imply a direct connection between the elements with no intervening elements/connections between them. All of these variations are considered a part of the specification. DETAILED DESCRIPTION As previously noted, integrated circuit development may involve several stages from design through fabrication. As also noted, at one or more stages, the integrated circuit design may be subdivided hierarchically for design or testing tasks. For example, the integrated circuit may be partitioned into macros that are each made up of cells. The final design must meet design rules in order to be fabricated. The design rules may be applied to portions of the integrated circuit that are defined by tiles arranged in a grid pattern. The tiles do not necessarily align with hierarchical divisions. For example, a tile that defines an area that is checked for compliance with design rules may encompass portions of multiple cells or macros. One of the aspects of the final design that may be checked is post-fill density. On the one hand, a minimum limit on density may be imposed to ensure that metal (or other fill material) is confined to trenches following the chemical mechanical planarization (CMP) process. On the other hand, a maximum limit on density may be imposed to prevent interconnections that are shorter than process specifications following the CMP process. Thus, density within a tile must be within a range defined by design rules. Because partitions used in the design (e.g., macros, cells) and tiles do not necessarily match up, considering density at a cell or macro level may not result in compliance with post-fill density-related design rules considered for a tile. One prior approach involves performing fill at the chip level rather than at lower levels (e.g., macro level). However, this approach may delay delivery of the chip to the foundry. This is because the macros may be developed by different entities, and it may take at least 24 hours after receiving the last macro to complete the fill and design rule checking (DRC) process, even if multiple additional time-consuming iterations are not needed to address density violations. Another prior approach involves prefilling at the lower levels (e.g., macro level) and performing manual fixes, as needed, at the chip level or creating extra space between blocks (e.g., macros) to avoid interaction and the potential for a density violation. However, these approaches require extra time to make the manual fixes or extra area, which is costly and may limit chip performance. Embodiments of the invention relate to integrated circuit development using density-aware border fill. While the lower hierarchical level discussed for explanatory purposes is the macro level, the processes discussed may be performed at any hierarchical level below the complete integrated circuit level. Further, while the border region of a macro is discussed in particular to detail embodiments of the invention, a fill process may be performed for other areas of a macro before, after, or during the fill of the border region.
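As an illustration of the tile-based density check described above, the following sketch computes the filled fraction of one tile and compares it against the design-rule window. It is a simplified Python example, not the patented method: the rectangle representation, the assumption that fill shapes do not overlap one another, and the MIN_DENSITY/MAX_DENSITY bounds are all hypothetical.

MIN_DENSITY = 0.20  # hypothetical lower design-rule limit
MAX_DENSITY = 0.80  # hypothetical upper design-rule limit

def tile_density(tile, shapes):
    # tile and each shape are (x1, y1, x2, y2) rectangles; shapes are
    # assumed to be non-overlapping fill/metal rectangles.
    tx1, ty1, tx2, ty2 = tile
    filled = 0.0
    for sx1, sy1, sx2, sy2 in shapes:
        # Clip each shape to the tile window before accumulating area.
        w = min(tx2, sx2) - max(tx1, sx1)
        h = min(ty2, sy2) - max(ty1, sy1)
        if w > 0 and h > 0:
            filled += w * h
    return filled / ((tx2 - tx1) * (ty2 - ty1))

def passes_density_drc(tile, shapes):
    # The tile passes when its post-fill density lies within the window.
    return MIN_DENSITY <= tile_density(tile, shapes) <= MAX_DENSITY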
The fill is performed at the lower level (e.g., macro level) in consideration of the density of bordering macros. This awareness of the density of nearby macros is facilitated by floorplanning. Floorplanning refers to the initial tentative placement of major functional blocks of the integrated circuit. By controlling fill at the border region of each macro based on awareness of the density of surrounding macros, density within a given tile may be controlled to comply with design rules (i.e., pass the design rule check (DRC)), as detailed herein. FIG.1is a block diagram of a system100to perform the development of an integrated circuit120using density-aware border fill according to one or more embodiments of the invention. Exemplary macros125are indicated as being part of the integrated circuit120. As the expanded view of a macro125indicates, each macro125has layers130that are not visible in the view shown for the integrated circuit120. The system100includes a processing system110used to generate the design that is ultimately fabricated into the integrated circuit120. The steps involved in the fabrication of the integrated circuit120are well-known and briefly described herein. Once the physical layout is finalized, based, in part, on using density-aware border fill according to one or more embodiments of the invention, the finalized physical layout is provided to a foundry. Masks are generated for each layer of the integrated circuit based on the finalized physical layout. Then, the wafer is processed in the sequence of the mask order. The processing includes photolithography and etch. This is further discussed with reference toFIG.5. The processing system110has one or more central processing units (processors)21a,21b,21c, etc. (collectively or generically referred to as processor(s)21and/or as processing device(s)). According to one or more embodiments of the present invention, each processor21can include a reduced instruction set computer (RISC) microprocessor. Processors21are coupled to system memory (e.g., random access memory (RAM)24) and various other components via a system bus33. Read only memory (ROM)22is coupled to system bus33and can include a basic input/output system (BIOS), which controls certain basic functions of processing system110. Further illustrated are an input/output (I/O) adapter27and a communications adapter26coupled to system bus33. I/O adapter27can be a small computer system interface (SCSI) adapter that communicates with a hard disk23and/or a tape storage drive25or any other similar component. I/O adapter27, hard disk23, and tape storage device25are collectively referred to herein as mass storage34. Operating system40for execution on processing system110can be stored in mass storage34. The RAM24, ROM22, and mass storage34are examples of memory19of the processing system110. A network adapter26interconnects system bus33with an outside network36enabling the processing system110to communicate with other such systems. A display (e.g., a display monitor)35is connected to system bus33by display adapter32, which can include a graphics adapter to improve the performance of graphics intensive applications and a video controller. According to one or more embodiments of the present invention, adapters26,27, and/or32can be connected to one or more I/O busses that are connected to system bus33via an intermediate bus bridge (not shown).
Suitable I/O buses for connecting peripheral devices such as hard disk controllers, network adapters, and graphics adapters typically include common protocols, such as the Peripheral Component Interconnect (PCI). Additional input/output devices are shown as connected to system bus33via user interface adapter28and display adapter32. A keyboard29, mouse30, and speaker31can be interconnected to system bus33via user interface adapter28, which can include, for example, a Super I/O chip integrating multiple device adapters into a single integrated circuit. According to one or more embodiments of the present invention, processing system110includes a graphics processing unit37. Graphics processing unit37is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. In general, graphics processing unit37is very efficient at manipulating computer graphics and image processing and has a highly parallel structure that makes it more effective than general-purpose CPUs for algorithms where processing of large blocks of data is done in parallel. Thus, as configured herein, processing system110includes processing capability in the form of processors21, storage capability including system memory (e.g., RAM24), and mass storage34, input means such as keyboard29and mouse30, and output capability including speaker31and display35. According to one or more embodiments of the present invention, a portion of system memory (e.g., RAM24) and mass storage34collectively store an operating system such as the AIX® operating system from IBM Corporation to coordinate the functions of the various components shown in processing system110. FIG.2illustrates aspects of the development of an integrated circuit120that are affected by using density-aware border fill according to one or more embodiments of the invention. Two exemplary macros125a,125bare shown. Each macro125is shown with a grid210of tiles215overlaid on it. The size d1-by-d2 of each tile is indicated. In the exemplary orientation shown inFIG.2, the width is d1 and the length is d2. As previously noted, one or more DRCs (e.g., density check) may be performed for the chip components within each tile215. That is, minimum and/or maximum density limit requirements must be met within each tile215. Thus, the area encompassed by each tile215is indicated for each of the macros125a,125bindividually. Generally, the corner of the grid210of tiles215may be initially placed to align with a macro corner, as indicated for each macro125a,125b, and then moved in stages to cover the entire macro125, for example. At each stage and each corresponding position of the grid210, the DRCs may be performed for the area of the macro125encompassed by each tile215. The DRC relevant to one or more embodiments of the invention relates to density (e.g., whether density within a tile215is between predefined minimum and maximum densities). As indicated, the macros125a,125bare put together to create a higher-level device220(e.g., the chip if the higher hierarchical level is the chip level) according to a result of floorplanning. As shown, the area of the macro125athat corresponds with each tile215remains the same in the higher-level device220. However, the area of the macro125bthat corresponds with each tile215is different in the higher-level device220as compared with the area encompassed by each tile215when the macro125bis considered individually. 
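The staged movement of the grid210can be sketched in the same vein. The example below steps a d1-by-d2 grid across a macro in x- and y-offsets and evaluates the density check at every tile position; the number of stages and the helpers tile_density and passes_density_drc from the earlier sketch are illustrative assumptions.

def check_macro(macro, shapes, d1, d2, x_steps=4, y_steps=4):
    # macro is an (x1, y1, x2, y2) bounding box; the grid corner starts
    # aligned with the macro corner and is shifted in stages, so that
    # many possible tile windows are checked against the density rule.
    mx1, my1, mx2, my2 = macro
    violations = []
    for ix in range(x_steps):
        for iy in range(y_steps):
            x0 = mx1 - ix * d1 / x_steps
            y0 = my1 - iy * d2 / y_steps
            y = y0
            while y < my2:
                x = x0
                while x < mx2:
                    tile = (x, y, x + d1, y + d2)
                    if not passes_density_drc(tile, shapes):
                        violations.append(tile)
                    x += d1
                y += d2
    return violations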
AsFIG.2illustrates, if the fill of the macro125bis controlled at the macro level to pass the post-fill density DRC, those DRC results may not be valid in the higher-level device220. In the exemplary case shown inFIG.2, a portion of the top border region305of macro125aand a portion of the bottom border region305of macro125bcombine to make up the area within a set of tiles215overlaid on the higher-level device220. Thus, at those border regions305, passing the post-fill density DRC requires considering density around both the top border region305of the macro125aand the bottom border region305of the macro125b. Density-aware border fill in consideration of this scenario, according to one or more embodiments of the invention, is further discussed with reference toFIGS.3and4. FIG.3shows the process flow of a method300of setting a mode to affect density-aware border fill in the development of an integrated circuit120according to one or more embodiments of the invention. The processes are performed for each layer130of one or more macros125. In addition, each border region305is considered individually. As detailed, portions of a border region305may be considered separately. An exemplary macro125xis shown for explanatory purposes. The four border regions305of the macro125xare indicated and are referenced according to the exemplary orientation shown inFIG.3. The depth of each border region305corresponds to the associated dimension d1 or d2 of a tile215and may be a multiple (e.g., k*d1 or i*d2) of that dimension, as indicated. The values of k and i may be the same or different, and both k and i may be 1. In the exemplary illustration, the depth of the left and right border regions305is k*d1, a multiple of the exemplary width of a tile215(perFIG.2), and the depth of the top and bottom border regions305is i*d2, a multiple of the exemplary length of a tile215. Based on a result of the floorplanning, which tentatively places the macros125relative to each other, macros125mand125nare adjacent to the left border region305of the macro125x, and macro125pis adjacent to the bottom border region305of the macro125x. The processes of the method300are discussed with reference to macro125xfor explanatory purposes. At block310, a check is done of whether density information is available for surrounding macros125. In the exemplary case, when the processes of the method300are performed for macro125x, the check would determine if density information is available for macros125m,125n, and125p. If the check at block310indicates that density information is not available for any surrounding macros125, then the mode is set as the default mode at block320. If the check at block310indicates that density information is available for one or more surrounding macros125, then a check is done, at block330, of whether density information is available as a percentage of area for the one or more surrounding macros125. For example, the fill density of macro125mmay be known to be 70 percent (i.e., 70 percent of the area of macro125mis filled). If the check at block330indicates that density information is available as a percentage of the area for one or more of the surrounding macros125, then the mode is set for an associated border region305as a percentage value, at block340.
In the exemplary case of macro125x, a portion of the left border region305borders macro125m, another portion of the left border region305borders macro125n, and the bottom left portion of the macro125x, where the left and bottom border regions305overlap, borders both macro125nand macro125p. Thus, for example, if the fill density of macro125mis available as 70 percent (based on the check at block330), then the mode for the portion of the left border region305of macro125xthat borders macro125mis set, at block340, as a complementary percentage value (e.g., 30 percent). The relationship between the percentage for the surrounding macro125(from block330) and the percentage value set as the mode (at block340) is predefined (e.g., a predefined computation) and is not limited by the example. If the check at block330indicates that density information is not available as a percentage for one or more of the surrounding macros125, then another check is done at block350. The check at block350determines if an estimate (e.g., low density, high density) is available instead for the one or more surrounding macros125. If the check at block350indicates that an estimate of the fill density is available, then the mode for the associated border region305(or portion of a border region305) is set as high or low accordingly. That is, the mode is set as a level based on a predefined correspondence with the density of an adjacent macro125. For example, if the check at block350determines that the fill density for macro125pis available as an estimate (e.g., high), then the mode for the bottom border region305of the macro125xmay be set, at block360, as low. The setting at block360(e.g., low) may be the opposite of the estimate obtained at block350(e.g., high) to ensure that the combination (e.g., of the bottom border region305of macro125xand macro125p) passes the density fill DRC even if a tile215spans part of the bottom border region305of the macro125xand part of the macro125p. For any remaining surrounding macros125, for which fill density is not available as a percentage (per the check at block330) or as an estimate (per the check at block350), the corresponding border region305of the subject macro125is set as a default mode. Border regions305with no adjacent macro125(e.g., top and right border regions305of macro125x) may be set as a default mode, as well. As previously noted, the processes may be repeated for different layers130of different macros125. As discussed with reference toFIG.4, the mode information may then be used to perform density-aware border fill according to one or more embodiments of the invention.
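The mode-setting decisions of blocks310through360can be summarized in code form. The sketch below is one possible reading, with hypothetical data structures: each neighboring macro record may carry a density percentage, a coarse estimate, or nothing, and the complementary-percentage rule (100 minus the neighbor's percentage) merely stands in for the predefined computation mentioned above.

def set_border_mode(neighbor):
    # neighbor is None or a dict describing the adjacent macro's density.
    if not neighbor:
        return "default"                      # blocks 310/320: no information
    if neighbor.get("density_pct") is not None:
        # Blocks 330/340: a complementary percentage keeps the combined
        # tile density inside the design-rule window.
        return 100 - neighbor["density_pct"]
    if neighbor.get("estimate") in ("high", "low"):
        # Blocks 350/360: set the opposite level of the neighbor's estimate.
        return "low" if neighbor["estimate"] == "high" else "high"
    return "default"                          # fallback to the default mode

# For example, a neighbor known to be 70 percent filled yields a mode of 30,
# and a neighbor estimated as high-density yields a mode of low.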
For a given border region305or portion of the border region305of a given layer130of a given macro125, at block410, a check is done of whether the mode is set as low (at block360). As the discussion with reference toFIG.3indicates, the mode being set as low indicates that the neighboring macro125has an estimated density that is high. In this case, the fill process is performed for the neighboring macro125first and then performed for the given border region305of the macro125, at block420. For example, if the bottom border region305of the macro125xhas a mode setting of low (according to the check at block410), then the neighboring macro125pmust have an estimated density that is high (per the check at block350). In this case, at block420, the fill is completed for the macro125pand, more particularly, for the adjacent border region305of the macro125p, prior to the fill for the bottom border region305of the macro125x. If the check at block410indicates that the mode for the given border region305or portion of the border region305of the given layer130of the given macro125is not low, then a check is performed at block430. At block430, the check determines if the mode set for the neighboring macro125or, more particularly, the mode set for the adjacent border region305of the neighboring macro125, is low. If so, then, at block440, the fill is first performed for the given border region305or portion of the border region305of the given layer130of the given macro125. For example, the given portion of the given border region305may be the top left border region305of the macro125x, and the check at block430may determine that the mode set for the macro125mor the right border region305of the macro125mis low. In this case, the fill is first performed for the top left border region305of the macro125xat block440, prior to the fill for the border region305of the macro125m. If the check at block430indicates that the mode for the neighboring macro125is not low, then the processes at block450are performed. The processes at block450would be reached when neither the given border region305or portion of the border region305of the given layer130of the given macro125nor the neighboring macro125has a mode setting of low. The mode settings may be high (per block360), a percentage value of fill (per block340), or a default mode (per block320), for example. At block450, the border region305of the macro125and neighboring macro125are filled in either order. Following the fill at any of the blocks420,440,450, a check is performed at block460. At block460, the check determines if the post-fill density DRC is passed. If it is, then, at block470, the fill is deemed completed for the given border region305or portion of the border region305of the given layer130of the given macro125. If the check at block460indicates that performing the fill does not result in passing the DRC related to post-fill density, then the floorplanning phase must be iterated again, at block480. The floorplanning iteration at block480considers the given border region305or portion of the border region305of the given layer130of the given macro125and the neighboring macro125prior to the fill process. FIG.5is a process flow of a method500of fabricating the integrated circuit according to exemplary embodiments of the invention. Once the physical design data is obtained, based, in part, on the processes discussed with reference toFIGS.3and4, the integrated circuit120can be fabricated according to known processes that are generally described with reference toFIG.5.
Generally, a wafer with multiple copies of the final design is fabricated and cut (i.e., diced) such that each die is one copy of the integrated circuit120. At block510, the processes include fabricating masks for lithography based on the finalized physical layout. At block520, fabricating the wafer includes using the masks to perform photolithography and etching. Once the wafer is diced, testing and sorting each die is performed, at block530, to filter out any faulty die. Various embodiments of the invention are described herein with reference to the related drawings. Alternative embodiments of the invention can be devised without departing from the scope of this invention. Various connections and positional relationships (e.g., over, below, adjacent, etc.) are set forth between elements in the following description and in the drawings. These connections and/or positional relationships, unless specified otherwise, can be direct or indirect, and the present invention is not intended to be limiting in this respect. Accordingly, a coupling of entities can refer to either a direct or an indirect coupling, and a positional relationship between entities can be a direct or indirect positional relationship. Moreover, the various tasks and process steps described herein can be incorporated into a more comprehensive procedure or process having additional steps or functionality not described in detail herein. One or more of the methods described herein can be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc. For the sake of brevity, conventional techniques related to making and using aspects of the invention may or may not be described in detail herein. In particular, various aspects of computing systems and specific computer programs to implement the various technical features described herein are well known. Accordingly, in the interest of brevity, many conventional implementation details are only mentioned briefly herein or are omitted entirely without providing the well-known system and/or process details. In some embodiments, various functions or acts can take place at a given location and/or in connection with the operation of one or more apparatuses or systems. In some embodiments, a portion of a given function or act can be performed at a first device or location, and the remainder of the function or act can be performed at one or more additional devices or locations. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, element components, and/or groups thereof. 
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiments were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated. The diagrams depicted herein are illustrative. There can be many variations to the diagram or the steps (or operations) described therein without departing from the spirit of the disclosure. For instance, the actions can be performed in a differing order or actions can be added, deleted or modified. Also, the term “coupled” describes having a signal path between two elements and does not imply a direct connection between the elements with no intervening elements/connections therebetween. All of these variations are considered a part of the present disclosure. The following definitions and abbreviations are to be used for the interpretation of the claims and the specification. As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” “contains” or “containing,” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a composition, a mixture, process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but can include other elements not expressly listed or inherent to such composition, mixture, process, method, article, or apparatus. Additionally, the term “exemplary” is used herein to mean “serving as an example, instance or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. The terms “at least one” and “one or more” are understood to include any integer number greater than or equal to one, i.e., one, two, three, four, etc. The term “a plurality” is understood to include any integer number greater than or equal to two, i.e., two, three, four, five, etc. The term “connection” can include both an indirect “connection” and a direct “connection.” The terms “about,” “substantially,” “approximately,” and variations thereof, are intended to include the degree of error associated with measurement of the particular quantity based upon the equipment available at the time of filing the application. For example, “about” can include a range of ±8%, ±5%, or ±2% of a given value. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments described herein. | 34,986 |
11861288 | DETAILED DESCRIPTION The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. In accordance with some embodiments, a system, device, and method for performing layout verification on slanted layout components are disclosed. In one aspect, a slanted layout component is a layout component having a side slanted from a base axis. A layout component may indicate a size and a location of a polygon corresponding to a structure (e.g., structure for forming a transistor, metal rail, via contact, etc.) for forming an integrated circuit. In one aspect, an offset angle of the side of the slanted layout component with respect to the base axis is determined. In one aspect, the slanted layout component is rotated according to the offset angle to obtain a rotated layout component having a side in parallel with the base axis. In one aspect, layout verification can be performed on the rotated layout component with respect to the base axis. Advantageously, the disclosed system, device, and method enable layout verification on slanted layout components based on the base axis. Example layout verification includes design rule check (DRC) verification to verify a spacing between two layout components, verify a pitch between two layout components, verify width or length of a side of a layout component, etc. In one aspect, layout verification is performed by measuring a distance or spacing between two layout components along a direction in parallel with the base axis. By determining the offset angle and rotating the slanted layout components according to the offset angle, the rotated layout components may have sides in parallel with or perpendicular to the base axis. Hence, layout verification can be performed on the rotated layout components by measuring a distance or spacing between sides of rotated layout components. By allowing verification on slanted layout components, layout components can be placed and routed within a smaller area and with increased flexibility. 
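Before turning to the figures, the offset-angle idea can be illustrated with a short sketch. One plausible way to recover the offset angle of a slanted edge relative to a base axis (here the Y-axis) is shown below; the vertex representation and the use of math.atan2 are illustrative assumptions rather than details taken from the disclosure.

import math

def offset_angle_from_y_axis(p0, p1):
    # Returns the angle, in radians within [0, pi/2], between the edge
    # p0->p1 and the Y-axis; 0 means the edge is already vertical.
    dx = p1[0] - p0[0]
    dy = p1[1] - p0[1]
    return math.atan2(abs(dx), abs(dy))

# An edge rising 2 units in y per 1 unit in x is slanted roughly
# 26.57 degrees from the Y-axis:
print(math.degrees(offset_angle_from_y_axis((0, 0), (1, 2))))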
FIG.1is a diagram of a system100for generating an integrated circuit, in accordance with one embodiment. In some embodiments, the system100includes a device110that provides an IC layout design130(also referred to as “a layout design130” herein) to a fabrication facility190. The device110may be a computing device operated by a user (or a circuit designer). The layout design130may indicate locations and sizes of a set of polygons corresponding to various structures of the IC. The layout design130may be in a GDSII format. The fabrication facility190may receive the layout design130and fabricate multiple ICs according to the layout design130. In some embodiments, the device110includes one or more processors115and a non-transitory computer readable medium120storing instructions that, when executed by the one or more processors115, cause the one or more processors115to perform various processes or operations for generating the layout design130. In some embodiments, the non-transitory computer readable medium120stores software applications including a simulator150, a schematic editor160, a synthesis tool170, a layout editor175, and a layout verifier180. These applications may assist a user of the device110to generate the layout design130. In some embodiments, the non-transitory computer readable medium120stores more, fewer, or different applications than shown inFIG.1. In some embodiments, the schematic editor160is a software application enabling a user to generate a gate level design of circuit components. The gate level design may indicate schematic relationships of circuit components. For example, the schematic editor160allows a user to provide, through a graphical user interface, input to create or define schematic connections of circuit components such as transistors, resistors, capacitors, inductors, etc. Based on the user input provided through the graphical user interface, the schematic editor160may automatically generate a netlist data indicating the schematic relationships of the circuit components. In some embodiments, the synthesis tool170is a software application that generates a gate level design of circuit components based on register transfer level (RTL) design of an integrated circuit. For example, the synthesis tool170receives a text or a code (e.g., Verilog or VHDL) indicating logic level design or RTL design of the integrated circuit, and automatically generates a netlist data indicating schematic relationships of circuit components to perform logic operations or functions as indicated by the RTL design. In some embodiments, the simulator150is a software application to simulate or predict a performance of a circuit design. The simulator150may simulate the performance of the circuit design in response to various conditions applied. The simulator150may perform simulation on a gate level design, a logic level design, or a combination of them. Based on the simulation result, the user of the device110may adjust or modify the gate level design or the logic level design. In some embodiments, the layout editor175is a software application for generating a layout design. In one aspect, the layout editor175provides a graphical user interface that allows a user to draw or define locations and sizes of polygons corresponding to various layout components. In one aspect, the layout editor175can automatically generate a layout design based on the logic level design or the gate level design. The layout editor175may generate the layout design in a GDSII format.
In some embodiments, the layout verifier180is a software application to confirm or verify the layout design from the layout editor175. Example layout verification includes DRC verification to verify a spacing between two layout components, verify a pitch between two layout components, verify width or length of a side of a layout component, etc. Additional layout verifications include layout versus schematic (LVS) verification, electrical rule check (ERC) verification, etc. For example, LVS verification can be performed to confirm whether connections of layout components are consistent with the schematic connections indicated by the netlist data. FIG.2is a flowchart showing a method200of generating a layout design, in accordance with some embodiments. The method200may be performed by the device110ofFIG.1. In some embodiments, the method200is performed by other entities. In some embodiments, the method200includes more, fewer, or different operations than shown inFIG.2. In an operation210, the device110generates the gate level design. In one approach, a user can provide, through the schematic editor160, input to create or define schematic connections of circuit components such as transistors, resistors, capacitors, inductors, etc. Based on the user input, the schematic editor160may automatically generate a netlist data indicating the schematic relationships of the circuit components. In one approach, the synthesis tool170receives a text or a code (e.g., Verilog or VHDL) indicating logic level design or RTL design of the integrated circuit, and automatically generates a netlist data indicating schematic relationships of circuit components to perform logic operations or functions as indicated by the RTL design. In an operation220, the device110performs a pre-layout simulation. In one approach, the simulator150may simulate or predict the performance of the circuit design in response to various conditions applied. The simulator150may perform simulation on a gate level design, a logic level design, or a combination of them. Based on the simulation result, the user may adjust or modify the gate level design or the logic level design. In an operation230, the device110generates a layout design130. In one approach, the user can draw or define, through the layout editor175, locations and sizes of polygons corresponding to various layout components. In one approach, the layout editor175can automatically generate a layout design130based on the logic level design or the gate level design. The layout editor175may generate the layout design130in a GDSII format. In an operation240, the device110performs layout verification on the layout design130. In one approach, the layout verifier180can check or verify a spacing between two layout components, verify a pitch between two layout components, verify width or length of a side of a layout component, etc. Additional layout verifications include LVS verification, ERC verification, etc. For example, LVS verification can be performed to confirm whether connections of layout components are consistent with the schematic connections indicated by the netlist data. In an operation245, the device110determines whether the layout is verified or not and generates a report indicating the verification result. For example, the report may indicate one or more layout components violating spacing rules, and/or indicate one or more layout components satisfying the spacing rules. 
If the layout design130does not pass any of the DRC, LVS, and ERC verifications, the layout design130can be modified based on the report through the layout editor175in an operation248, and additional layout verification can be performed until the layout design130passes the DRC, LVS, and ERC verifications. If the layout design130passes the DRC, LVS, and ERC verifications, the device110can proceed to an operation250and perform a post-layout simulation. For example, the simulator150may simulate the performance of the circuit design with parasitic capacitances or resistances extracted based on the layout design130. Based on the post-layout simulation result, the logic level design, the gate level design, the layout design130, or any combination of them can be modified. If the post-layout simulation satisfies target performances, the device110can output the layout design130to the fabrication facility190for fabrication. FIG.3is a diagram of the layout verifier180, in accordance with some embodiments. In some embodiments, the layout verifier180includes an offset angle identifier310, a layout rotator320, a base layout verifier330, and a contact verifier340. These components may operate together to perform layout verification on slanted layout components. In some embodiments, the layout verifier180includes more, fewer, or different components than shown inFIG.3. In some embodiments, the offset angle identifier310is a component that detects an offset angle of one or more layout components with respect to a base axis. A set of layout components may have parallel sides elongated along a direction traversing a direction of a base axis at an offset angle. The offset angle may be a non-perpendicular angle between 0 and 90 degrees. The offset angle identifier310may detect, from a plurality of layout components, the set of layout components extending along a parallel direction slanted from the base axis by the offset angle and determine the offset angle for the set of layout components. The offset angle identifier310may also detect, from the plurality of layout components, a different set of layout components extending along another parallel direction slanted from the base axis at another offset angle, and determine the other offset angle for the different set of layout components. In some embodiments, the layout rotator320is a component that automatically rotates slanted layout components to obtain rotated layout components. In one aspect, the layout rotator320automatically rotates the slanted layout components such that sides of rotated layout components can be in parallel with or perpendicular to the base axis. The layout rotator320may identify, for a layout component, locations of a set of vertexes, and transform a location for each vertex to a new location according to the offset angle. Assuming, for example, that a vertex is located at (X,Y), the layout rotator320may obtain a new location (X′, Y′) according to the following equation: X′=X cos θ+Y sin θ, Y′=−X sin θ+Y cos θ Eq. (1), where θ is the offset angle. For example, if a layout component has vertexes (X1, Y1), (X2, Y2), (X3, Y3), (X4, Y4), the layout rotator320may generate new vertexes (X1′, Y1′), (X2′, Y2′), (X3′, Y3′), (X4′, Y4′) based on Eq. (1) according to the offset angle. In some embodiments, the base layout verifier330is a component that performs layout verification on the layout design.
Example layout verification includes DRC verification to verify a spacing between two layout components, verify a pitch between two layout components, verify width or length of a side of a layout component, etc. In one aspect, the layout verification is performed with respect to two base axes (e.g., X-axis and Y-axis) in a Cartesian coordinate system. In one aspect, layout verification is performed by measuring a distance or spacing between two layout components along a direction in parallel with or perpendicular to a base axis. By rotating slanted layout components by the layout rotator320to obtain rotated layout components in parallel with or perpendicular to a base axis, the base layout verifier330may perform layout verification on rotated layout components. In some embodiments, the contact verifier340is a component that verifies a layout component for a via contact coupled to a slanted layout component. In one aspect, the contact verifier340verifies the layout component for the via contact based on a point location of the layout component, rather than a rectangular area allocated for the via contact. In general, a rectangular area may indicate or specify a location and a size of a via contact between two overlapping layout components. However, an overlapping area between the slanted layout component and another layout component may not be sufficient to encompass a rectangular area for the via contact. In one aspect, the contact verifier340may verify whether a slanted layout component has sufficient enclosure to cover the layout component for the via contact. For example, the contact verifier340verifies whether a distance between i) a side of the slanted layout component, and ii) a point location, at which the layout component for the via contact is located, exceeds a threshold value. The contact verifier340may also verify whether enough spacing is provided between two nearby layout components for via contacts coupled to the slanted layout component. For example, the contact verifier340verifies whether a distance between i) a first point location, at which a first layout component for a first via contact is located, and ii) a second point location, at which a second layout component for a second via contact is located, exceeds a threshold value. Accordingly, the contact verifier340may verify whether via contacts can be securely formed for slanted layout components. FIG.4is a diagram showing a process400of performing a layout verification on slanted layout components410A,410B, in accordance with some embodiments. The slanted layout components410A,410B may indicate sizes and locations of structures for forming a transistor, metal rail, via contact, etc. In one approach, the offset angle identifier310detects the slanted layout components410A,410B having sides slanted from a base axis in a Cartesian coordinate system. The offset angle identifier310may detect an offset angle between i) an elongated direction of sides of the slanted layout components410A,410B, and ii) a direction of the base axis. The layout rotator320may rotate the slanted layout components410A,410B according to the offset angle to obtain rotated layout components420A,420B. For example, the layout rotator320applies, for each vertex of the slanted layout components, a location of the vertex to the equation Eq. (1) utilizing the offset angle to obtain a location of a transformed vertex. In one aspect, the rotated layout components420A,420B have sides in parallel with the base axis.
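A minimal sketch of this rotation step, applying Eq. (1) to every vertex of a polygon, might look as follows. The list-of-tuples polygon representation and the sign convention chosen for θ are assumptions made for illustration.

import math

def rotate_component(vertices, theta):
    # Apply Eq. (1) to each vertex (X, Y); theta is the offset angle in
    # radians. The rotated sides become parallel with (or perpendicular
    # to) the base axis.
    c, s = math.cos(theta), math.sin(theta)
    return [(x * c + y * s, -x * s + y * c) for x, y in vertices]

# Under this sign convention, a unit edge leaning 30 degrees from the
# Y-axis maps onto the Y-axis itself:
theta = math.radians(30)
print(rotate_component([(0.0, 0.0), (-math.sin(theta), math.cos(theta))], theta))
# -> [(0.0, 0.0), (0.0, 1.0)] up to floating-point rounding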
The base layout verifier330may perform layout verification (e.g., DRC verification) on the rotated layout components420A,420B. FIG.5is a flowchart showing a method500of rotating a slanted layout component and performing a layout verification on the rotated layout component, in accordance with some embodiments. The method500may be performed by the device110ofFIG.1. In some embodiments, the method500is performed by other entities. In some embodiments, the method500includes more, fewer, or different operations than shown inFIG.5. In an operation510, the device110detects a set of slanted layout components. The device110may detect, from a plurality of layout components, the set of slanted layout components extending along a parallel direction slanted from the base axis by the offset angle. The device110may also detect, from the plurality of layout components, a different set of slanted layout components extending along another parallel direction slanted from the base axis at another offset angle. In an operation520, the device110determines an offset angle of the set of layout components with respect to the base axis. The device110may compare an elongated direction of parallel sides of the set of slanted layout components with the base axis and determine the offset angle according to the comparison. The device110may determine the offset angle of the set of slanted layout components, and determine the other offset angle of the different set of slanted layout components. In an operation530, the device110rotates the set of layout components according to the offset angle to obtain a set of rotated layout components. For example, the device110applies, for each vertex of the slanted layout components, a location of the vertex to the equation Eq. (1) according to the offset angle to obtain a location of a transformed vertex. The transformed vertexes may be vertexes of the set of rotated layout components. The device110may rotate the different set of layout components according to the other offset angle to obtain another set of rotated layout components. In an operation540, the device110performs layout verification on the set of rotated layout components with respect to the base axis. For example, the device110may perform DRC verification on the set of rotated layout components. The device110may verify a spacing between two rotated layout components, verify a pitch between two rotated layout components, verify width or length of a side of the rotated layout component, etc. FIG.6Ais a diagram showing a layout design600A of an integrated circuit having a set of slanted layout components610A,610B,610C, in accordance with some embodiments.FIG.6Bis a diagram showing a rotated layout design600B including rotated layout components610A′,610B′,610C′ obtained by rotating the set of slanted layout components610A,610B,610C inFIG.6A, in accordance with some embodiments. The slanted layout components610A-610C may indicate sizes and locations of structures for forming a transistor, metal rail, via contact, etc. In one approach, the offset angle identifier310detects the slanted layout components610A-610C having sides slanted from the Y-axis in a Cartesian coordinate system. The offset angle identifier310may detect an offset angle θ between the slanted layout components610A-610C and the Y-axis. For example, the offset angle identifier310may compare an elongated direction of sides of the layout components610A-610C with the direction of the Y-axis to determine the offset angle θ.
The layout rotator320may rotate the slanted layout components610A-610C according to the offset angle θ to obtain rotated layout components610A′-610C′. In one aspect, the rotated layout components610A′-610C′ are in parallel with the Y-axis. The base layout verifier330may perform layout verification (e.g., DRC verification) on the rotated layout components610A′-610C′ with respect to the X-axis, the Y-axis or both. For example, the base layout verifier330may verify a width W of each of the rotated layout components610A′-610C′ along the X-direction or a spacing S between two rotated layout components610A′,610B′ along the X-direction. FIG.7Ais a diagram showing a layout design700A of a slanted layout component710, in accordance with some embodiments.FIG.7Bis a diagram showing a rotated layout design700B obtained by rotating the slanted layout component710inFIG.7A, in accordance with some embodiments. In one approach, the offset angle identifier310may determine the offset angle θ of the slanted layout component710. For example, the offset angle identifier310may compare an elongated direction of a side of the slanted layout component710with a direction of the Y-axis to determine the offset angle θ. The layout rotator320may apply, for each vertex of the slanted layout component, a location of the vertex to the equation Eq. (1), according to the offset angle θ to obtain a location of a transformed vertex. For example, for the vertex A of the slanted layout component710, the layout rotator320may apply a location (X,Y) of the vertex A to the equation Eq. (1) to obtain a location (X′,Y′) of a transformed vertex A′. The layout rotator320may transform the remaining vertexes of the slanted layout component710to obtain transformed vertexes of the rotated layout component710′. The rotated layout component710′ may have sides in parallel with or perpendicular to the Y-axis. FIG.8Ais a diagram showing a layout design800A of two layout components810,820overlapping in a non-perpendicular angle, in accordance with some embodiments. The layout components810,820may overlap with each other in a non-perpendicular angle. The layout component810may correspond to a metal rail in a first layer Li, and the layout component820may correspond to a metal rail in a second layer Li+1. For example, the layout component810is a slanted layout component slanted with respect to a base axis. In one aspect, the contact verifier340verifies a layout component for a via contact coupled to the slanted layout component810. In one aspect, the contact verifier340verifies the layout component for the via contact based on a point location815, at which the layout components810,820overlap. In one aspect, the contact verifier340may verify whether the slanted layout component810has sufficient enclosure to cover the layout component for the via contact at the point location815. For example, the contact verifier340determines a distance E between i) a side of the slanted layout component810, and ii) the point location815. The contact verifier340may verify whether the distance E exceeds a threshold value to determine whether sufficient enclosure is provided. FIG.8Bis a diagram showing a layout design800B of three layout components810,820,830overlapping in a non-perpendicular angle, in accordance with some embodiments. The layout components810,820may overlap with each other in a non-perpendicular angle.
The layout components810,830may overlap with each other in a perpendicular angle or a non-perpendicular angle. The layout component810may correspond to a metal rail in a first layer Li, the layout component820may correspond to a metal rail in a second layer Li+1, and the layout component830may correspond to a metal rail in a third layer Li−1. For example, the layout components810,830may be slanted layout components slanted with respect to a base axis. The contact verifier340may also verify whether enough spacing is provided between two nearby layout components for via contacts connected to the slanted layout component810. For example, the contact verifier340determines a distance D between i) the first point location815, at which a first layout component for a first via contact between the layout components810,820is located, and ii) a second point location835, at which a second layout component for a second via contact between the layout components810,830is located. The contact verifier340may verify whether the distance D exceeds a threshold value to determine whether sufficient spacing is provided between two via contacts at the point locations815,835. Accordingly, the contact verifier340may verify whether via contacts can be securely formed for slanted layout components. Referring now toFIG.9, an example block diagram of a computing system900is shown, in accordance with some embodiments of the disclosure. The computing system900may be used by a circuit or layout designer for integrated circuit design. A “circuit” as used herein is an interconnection of electrical components such as resistors, transistors, switches, batteries, inductors, or other types of semiconductor devices configured for implementing a desired functionality. The computing system900includes a host device905associated with a memory device910. The host device905may be configured to receive input from one or more input devices915and provide output to one or more output devices920. The host device905may be configured to communicate with the memory device910, the input devices915, and the output devices920via appropriate interfaces925A,925B, and925C, respectively. The computing system900may be implemented in a variety of computing devices such as computers (e.g., desktop, laptop, servers, data centers, etc.), tablets, personal digital assistants, mobile devices, other handheld or portable devices, or any other computing unit suitable for performing schematic design and/or layout design using the host device905. The input devices915may include any of a variety of input technologies such as a keyboard, stylus, touch screen, mouse, track ball, keypad, microphone, voice recognition, motion recognition, remote controllers, input ports, one or more buttons, dials, joysticks, and any other input peripheral that is associated with the host device905and that allows an external source, such as a user (e.g., a circuit or layout designer), to enter information (e.g., data) into the host device and send instructions to the host device. Similarly, the output devices920may include a variety of output technologies such as external memories, printers, speakers, displays, microphones, light emitting diodes, headphones, video devices, and any other output peripherals that are configured to receive information (e.g., data) from the host device905. 
The “data” that is either input into the host device905and/or output from the host device may include any of a variety of textual data, circuit data, signal data, semiconductor device data, graphical data, combinations thereof, or other types of analog and/or digital data that is suitable for processing using the computing system900. The host device905includes or is associated with one or more processing units/processors, such as Central Processing Unit (“CPU”) cores930A-930N. The CPU cores930A-930N may be implemented as an Application Specific Integrated Circuit (“ASIC”), Field Programmable Gate Array (“FPGA”), or any other type of processing unit. Each of the CPU cores930A-930N may be configured to execute instructions for running one or more applications of the host device905. In some embodiments, the instructions and data to run the one or more applications may be stored within the memory device910. The host device905may also be configured to store the results of running the one or more applications within the memory device910. Thus, the host device905may be configured to request the memory device910to perform a variety of operations. For example, the host device905may request the memory device910to read data, write data, update or delete data, and/or perform management or other operations. One such application that the host device905may be configured to run may be a layout verification application935. The layout verification application935may be part of a computer aided design or electronic design automation software suite that may be used by a user of the host device905to verify a layout design including slanted layout components. In some embodiments, the instructions to execute or run the layout verification application935may be stored within the memory device910. The layout verification application935may be executed by one or more of the CPU cores930A-930N using instructions from the memory device910. After the layout design of the integrated circuit is verified, multiples of the integrated circuit can be fabricated according to the layout design by a fabrication facility. Referring still toFIG.9, the memory device910includes a memory controller940that is configured to read data from or write data to a memory array945. The memory array945may include a variety of volatile and/or non-volatile memories. For example, in some embodiments, the memory array945may include NAND flash memory cores. In other embodiments, the memory array945may include NOR flash memory cores, Static Random Access Memory (SRAM) cores, Dynamic Random Access Memory (DRAM) cores, Magnetoresistive Random Access Memory (MRAM) cores, Phase Change Memory (PCM) cores, Resistive Random Access Memory (ReRAM) cores, 3D XPoint memory cores, ferroelectric random-access memory (FeRAM) cores, and other types of memory cores that are suitable for use within the memory array. The memories within the memory array945may be individually and independently controlled by the memory controller940. In other words, the memory controller940may be configured to communicate with each memory within the memory array945individually and independently. By communicating with the memory array945, the memory controller940may be configured to read data from or write data to the memory array in response to instructions received from the host device905. 
Although shown as being part of the memory device910, in some embodiments, the memory controller940may be part of the host device905or part of another component of the computing system900and associated with the memory device. The memory controller940may be implemented as a logic circuit in software, hardware, firmware, or a combination thereof to perform the functions described herein. For example, in some embodiments, the memory controller940may be configured to retrieve the instructions associated with the layout verification application935stored in the memory array945of the memory device910upon receiving a request from the host device905. It is to be understood that only some components of the computing system900are shown and described inFIG.9. However, the computing system900may include other components such as various batteries and power sources, networking interfaces, routers, switches, external memory systems, controllers, etc. Generally speaking, the computing system900may include any of a variety of hardware, software, and/or firmware components that are needed or considered desirable in performing the functions described herein. Similarly, the host device905, the input devices915, the output devices920, and the memory device910including the memory controller940and the memory array945may include other hardware, software, and/or firmware components that are considered necessary or desirable in performing the functions described herein. One aspect of this description relates to a device for verifying a layout design of an integrated circuit. In some embodiments, the device includes one or more processors, and a non-transitory computer readable medium storing instructions. The instructions when executed by the one or more processors may cause the one or more processors to detect a slanted layout component having a side slanted from a base axis. The instructions when executed by the one or more processors may cause the one or more processors to determine an offset angle of the side of the slanted layout component with respect to the base axis. The instructions when executed by the one or more processors may cause the one or more processors to rotate the slanted layout component according to the offset angle to obtain a rotated layout component, wherein the rotated layout component has a rotated side in parallel with the base axis. The instructions when executed by the one or more processors may cause the one or more processors to perform layout verification on the rotated layout component with respect to the base axis. One aspect of this description relates to a device for verifying a layout design of an integrated circuit. In some embodiments, the device includes one or more processors, and a non-transitory computer readable medium that stores instructions. The instructions when executed by the one or more processors may cause the one or more processors to detect a slanted layout component having a side slanted from a base axis by an offset angle. The instructions when executed by the one or more processors may cause the one or more processors to transform a first location of a vertex of the slanted layout component according to the offset angle to obtain a second location of a rotated vertex of a rotated layout component. The instructions when executed by the one or more processors may cause the one or more processors to perform layout verification on the rotated layout component with respect to the base axis.
One aspect of this description relates to a method of verifying a layout design of an integrated circuit. In some embodiments, the method includes detecting a first layout component in a first layer extending along a first direction. In some embodiments, the method includes detecting a second layout component in a second layer extending along a second direction. In some embodiments, the method includes verifying a third layout component corresponding to a via contact at a point location between the first layout component and the second layout component. The first direction and the second direction may be non-perpendicular with each other. The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure. | 35,973 |
11861289 | DETAILED DESCRIPTION Embodiments of the present invention provide a virtual fabrication environment for semiconductor device fabrication that includes an analytics module for identifying key parameters and for performing process model calibration and variability analysis. However, prior to discussing the key parameter identification, process model calibration, optimization, variability analysis and other features provided by embodiments, an exemplary 3D design environment/virtual fabrication environment into which an analytics module of the present invention may be integrated is first described. Exemplary Virtual Fabrication Environment FIG.1depicts an exemplary virtual fabrication environment1suitable for practicing an embodiment of the present invention. Virtual fabrication environment1includes a computing device10accessed by a user2. Computing device10is in communication with a display120. Display120may be a display screen that is part of computing device10or may be a separate display device or display surface in communication with computing device10. Computing device10may be a PC, laptop computer, tablet computing device, server, or some other type of computing device equipped with one or more processors11and able to support the operations of virtual fabrication application70, 3D modeling engine75and analytics module79(described further below). The processor(s) may have one or more cores. The computing device10may also include volatile and non-volatile storage such as, but not limited to, Random Access Memory (RAM)12, Read Only Memory (ROM)13and hard drive14. Computing device10may also be equipped with a network interface15so as to enable communication with other computing devices. It will be appreciated that computing device10, rather than being a solitary computing device, may also be implemented as a computing system with multiple computing devices working in parallel or in some other combination. Computing device10may store and execute virtual fabrication application70including 3D modeling engine75. 3D modeling engine75may include one or more algorithms such as algorithm1(76), algorithm2(77), and algorithm3(78) used in virtually fabricating semiconductor device structures. 3D modeling engine75may accept input data20in order to perform virtual fabrication "runs" that produce semiconductor device structural model data90. Virtual fabrication application70and 3D modeling engine75may generate a number of user interfaces and views used to create and display the results of virtual fabrication runs. For example, virtual fabrication application70and 3D modeling engine75may display layout editor121, process editor122and virtual fabrication console123used to create virtual fabrication runs. Virtual fabrication application70and 3D modeling engine75may also display a tabular and graphical metrology results view124and 3D view125for respectively displaying results of virtual fabrication runs and 3D structural models generated by the 3D modeling engine75during virtual fabrication of semiconductor device structures. Virtual fabrication application70may also include analytics module79for performing analysis of 3D models as discussed further below. Input data20includes both 2D design data30and process sequence40. Process sequence40may be composed of multiple process steps43,44,47,48and49. As described further herein, process sequence40may also include one or more virtual metrology measurement process steps45.
Process sequence40may further include one or more subsequences which include one or more of the process steps or virtual metrology measurement process steps. 2D design data30includes one or more layers such as layer1(32), layer2(34) and layer3(36), typically provided in an industry-standard layout format such as GDS II (Graphical Design System version 2) or OASIS (Open Artwork System Interchange Standard). Input data20may also include a materials database60including records of material types such as material type1(62) and material type2(64) and specific materials for each material type. Many of the process steps in a process sequence may refer to one or more materials in the materials database. Each material has a name and some attributes such as a rendering color. The materials database may be stored in a separate data structure. The materials database may have hierarchy, where materials may be grouped by types and sub-types. Individual steps in the process sequence may refer to an individual material or a parent material type. The hierarchy in the materials database enables a process sequence referencing the materials database to be modified more easily. For example, in virtual fabrication of a semiconductor device structure, multiple types of oxide material may be added to the structural model during the course of a process sequence. After a particular oxide is added, subsequent steps may alter that material. If there is no hierarchy in the materials database and a step that adds a new type of oxide material is inserted in an existing process sequence, all subsequent steps that may affect oxide materials must also be modified to include the new type of oxide material. With a materials database that supports hierarchy, steps that operate on a certain class of materials such as oxides may refer only to the parent type rather than a list of materials of the same type. Then, if a step that adds a new type of oxide material is inserted in a process sequence, there is no need to modify subsequent steps that refer only to the oxide parent type. Thus hierarchical materials make the process sequence more resilient to modifications. A further benefit of hierarchical materials is that stock process steps and sequences that refer only to parent material types can be created and re-used. 3D modeling engine75uses input data20to perform the sequence of operations/steps specified by process sequence40. As explained further below, process sequence40may include one or more virtual metrology steps45,49that indicate a point in the process sequence during a virtual fabrication run at which a measurement of a structural component should be taken. The measurement may be taken using a locator shape previously added to a layer in the 2D design data30. Alternatively, the measurement location may be specified by alternate means such as (x, y) coordinates in the 2D design data or some other means of specifying a location in the 2D design data30instead of through the use of a locator shape. The performance of the process sequence40during a virtual fabrication run generates virtual metrology data80and 3D structural model data90. 3D structural model data90may be used to generate a 3D view of the structural model of the semiconductor device structure which may be displayed in the 3D viewer125. Virtual metrology data80may be processed and presented to a user2in the tabular and graphical metrology results view124.
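A minimal sketch of such a hierarchical materials database is shown below; the dictionary layout and material names are illustrative assumptions, not the environment's actual data structure.

```python
# Parent material types map to the specific materials grouped under them.
MATERIALS_DB = {
    "oxide":   ["thermal_oxide", "TEOS_oxide", "HDP_oxide"],
    "nitride": ["silicon_nitride"],
}

def resolve_material(reference):
    # A process step may reference either a specific material or a parent
    # material type; a parent type expands to all of its member materials.
    if reference in MATERIALS_DB:
        return list(MATERIALS_DB[reference])
    return [reference]

# An etch step written against the parent type "oxide" keeps working when
# a new oxide is later added to the database, with no edits to the step.
MATERIALS_DB["oxide"].append("flowable_oxide")
print(resolve_material("oxide"))            # includes the new oxide
print(resolve_material("silicon_nitride"))  # a specific material passes through
```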
Because of the large number of structural dimensions that are critical to the success of an integrated technology such as semiconductor devices, finding the relationship between the many inter-related process steps used to fabricate a device structure and the created structure is critical. As structural modifications produced by a step in the process sequence may be affected by previous and subsequent steps in the sequence, a particular step may affect a structural dimension in ways that are not obvious. A virtual fabrication environment enables automatic extraction of structural measurements from the device being created. The automatic extraction of a measurement is accomplished by specifying a virtual metrology measurement step in the process sequence at a point in the process when the measurement is critical. A locator shape for this virtual metrology measurement can be added to a layer in the design data and specified by the virtual metrology measurement step. The output data from this virtual metrology measurement can be used to provide quantitative comparison to other modeling results or to physical metrology measurements. This virtual metrology measurement capability is provided during the processing sequence to extract a critical physical dimension at the correct point in the integrated process flow. The ability to provide virtual metrology measurement data at specified locations in the device structure provides a significant improvement over conventional physical fab measuring techniques. Typically, physical in-fab measurements are done on specific characterization structures fabricated in the scribe lines or saw kerfs, adjacent to the product dice. In most cases, these characterization structures need to be designed to accommodate limitations of the measurement technique, such as optical spot size. Therefore, the characterization structures are not entirely representative of the actual structures on the product dice. Because of these differences, users of in-fab measurements usually face the challenge of inferring the result on the product structure from a measurement on a characterization structure. In the virtual fabrication environment, measurements can be added to any design layout at specified points in the process sequence thus providing greater insight into the effect of the inter-related process steps on the virtual structural model being constructed. As such, the in-fab challenge of measuring a characterization structure and inferring the result on a product structure is eliminated. FIG.2depicts an exemplary virtual fabrication console123used to set up a virtual fabrication run in the virtual fabrication environment. The virtual fabrication console123allows the user to specify a process sequence202and the layout (2D design data)204for the semiconductor device structure that is being virtually fabricated. It should be appreciated however that the virtual fabrication console can also be a text-based scripting console that provides the user with a means of entering scripting commands that specify the required input and initiate building of a structural model, or building a set of structural models corresponding to a range of parameter values for specific steps in the process sequence. The latter case is considered a virtual experiment (discussed further below). FIG.3depicts an exemplary layout editor in the virtual fabrication environment. The layout editor121displays the 2D design layout specified by the user in the virtual fabrication console123.
In the layout editor, color may be used to depict different layers in the design data. The areas enclosed by shapes or polygons on each layer represent regions where a photoresist coating on a wafer may be either exposed to light or protected from light during a photolithography step in the integrated process flow. The shapes on one or more layers may be combined (booleaned) to form a mask that is used in a photolithography step. The layout editor121provides a means of inserting, deleting and modifying a polygon on any layer, and of inserting, deleting or modifying layers within the 2D design data. A layer can be inserted for the sole purpose of containing shapes or polygons that indicate the locations of virtual metrology measurements. The rectangular shapes302,304,306have been added to an inserted layer (indicated by a different color) and mark the locations of virtual metrology measurements. As noted above, other approaches to specifying the locations for the virtual metrology measurements besides the use of locator shapes may also be employed in the virtual fabrication environment. The design data is used in combination with the process data and materials database to build a 3D structural model. Inserted layers in the design data displayed in the layout editor121may include inserted locator shapes. For example, a locator shape may be a rectangle, the longer sides of which indicate the direction of the measurement in the 3D structural model. For example, inFIG.3, a first locator shape302may mark a double patterning mandrel for virtual metrology measurement, a second locator shape304may mark a gate stack for virtual metrology measurement and a third locator shape306may mark a transistor source or drain contact for virtual metrology measurement. FIG.4depicts an exemplary process editor122in the virtual fabrication environment. The user defines a process sequence in the process editor. The process sequence is an ordered list of process steps conducted in order to virtually fabricate the user's selected structure. The process editor may be a text editor, such that each line or group of lines corresponds to a process step, or a specialized graphical user interface such as is depicted inFIG.4. The process sequence may be hierarchical, meaning process steps may be grouped into sub-sequences and sub-sequences of sub-sequences, etc. Generally, each step in the process sequence corresponds to an actual step in the fab. For instance, a sub-sequence for a reactive ion etch operation might include the steps of spinning on photo resist, patterning the resist, and performing the etch operation. The user specifies parameters for each step or sub-step that are appropriate to the operation type. Some of the parameters are references to materials in the materials database and layers in the 2D design data. For example, the parameters for a deposit operation primitive are the material being deposited, the nominal thickness of the deposit and the anisotropy or ratio of growth in the lateral direction versus the vertical direction. This deposit operation primitive can be used to model actual processes such as chemical vapor deposition (CVD). Similarly, the parameters for an etch operation primitive are a mask name (from the design data), a list of materials affected by the operation, and the anisotropy. There may be hundreds of steps in the process sequence and the process sequence may include sub-sequences.
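A minimal sketch of how such operation primitives and their parameters might be represented is shown below; the class and field names are illustrative assumptions rather than the process editor's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Deposit:
    # Deposit operation primitive: material, nominal thickness, and
    # anisotropy (ratio of lateral to vertical growth), as described above.
    material: str
    thickness_nm: float
    anisotropy: float

@dataclass
class Etch:
    # Etch operation primitive: mask name from the design data, list of
    # affected materials (or parent types), and anisotropy.
    mask: str
    materials: List[str]
    anisotropy: float

# A fragment of a process sequence: a conformal CVD-like deposit followed
# by a patterned, highly directional etch. All values are illustrative.
sequence = [
    Deposit(material="TEOS_oxide", thickness_nm=20.0, anisotropy=1.0),
    Etch(mask="gate_mask", materials=["oxide"], anisotropy=0.05),
]
```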
For example, as depicted inFIG.4, a process sequence410may include a subsequence412made up of multiple process steps such as selected step413. The process steps may be selected from a library of available process steps402. For the selected step413, the process editor122enables a user to specify all required parameters420. For example, a user may be able to select a material from a list of materials in the material database404and specify a process parameter406for the material's use in the process step413. One or more steps in the process sequence may be virtual metrology steps inserted by a user. For example, the insertion of step4.17"Measure CD" (414), where CD denotes a critical dimension, in process sequence412would cause a virtual metrology measurement to be taken at that point in the virtual fabrication run using one or more locator shapes that had been previously inserted on one or more layers in the 2D design data. Inserting the virtual metrology steps directly in the fabrication sequence allows virtual metrology measurements to be taken at critical points of interest during the fabrication process. As the many steps in the virtual fabrication interact in the creation of the final structure, the ability to determine geometric properties of a structure, such as cross-section dimensions and surface area, at different points in the integrated process flow is of great interest to the process developer and structure designer. FIG.5depicts an exemplary sequence of steps in the virtual fabrication environment to generate virtual metrology measurement data. The sequence begins with a user selecting a semiconductor device structure to be fabricated (step502). The user may select from among multiple available sets of design data files and then select a rectangular region within the design data. For example, the user may choose a FinFET or a passive resistor or a memory cell. Following the determination/selection of the structure to be fabricated, the user enters a process sequence in the process editor122(step504a) and selects 2D design data that is expected to result in the desired structure (step504b). Optionally, the user may create or modify design data in the layout editor121. In the process editor, the user may insert one or more virtual metrology steps in the process sequence that specify a point during the virtual fabrication at which the user would like virtual metrology measurements to be taken at specified locations in the evolving structure (step506a). The user may insert locator shapes in the 2D design data displayed in the layout editor121that will be used by the virtual metrology step to perform its measurements (step506b). The significance of a locator shape depends on the type of measurement requested. For example, the longer axis of a rectangular shape may indicate the direction and extent of a length measurement to be taken on a cross section of the structure, or the rectangle itself may designate a region where the contact area between two materials is to be measured. It will be appreciated that both above-described steps in the process editor may be performed before the steps in the layout editor or vice-versa in the virtual fabrication environment. After the one or more locator shapes have been added to one or more layers in the 2D design data (step506b) and the virtual metrology step(s) have been added to the process sequence (step506a), the user sets up a virtual fabrication run using the virtual fabrication console123(step508).
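A minimal sketch of inserting such a virtual metrology step into a process sequence is shown below; the step names and dictionary keys are illustrative assumptions, not the tool's scripting interface.

```python
# Illustrative step names for a small patterning subsequence.
sequence = ["spin_resist", "pattern_resist", "etch_mandrel", "strip_resist"]

measure_cd = {
    "op": "virtual_metrology",
    "measurement": "CD",           # critical dimension
    "locator_layer": "metrology",  # design-data layer holding locator shapes
    "position": "middle",          # measure at bottom, middle, or top
}

# Insert the measurement immediately after the step whose result it checks,
# analogous to the "Measure CD" step414described above.
sequence.insert(3, measure_cd)
```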
During the virtual fabrication run, the process steps in the process sequence40are performed by the 3D modeling engine75in the order specified. When the virtual fabrication reaches the virtual metrology step, a virtual "measurement" of the specified component in the structure being fabricated is performed. The computations done by the modeling engine depend on the nature of the measurement being requested, and are generally consistent with the analogous physical measurement technique in the fab. For example, critical dimension scanning electron microscope (CD-SEM) measurements in the fab locate sidewalls by detecting rapid changes in the orientation of the top surface of a structure. Similarly, in a virtual metrology operation, the 3D modeling engine extracts the top surface of the structure in the region specified by a locator rectangle, and interrogates the surface along its intersection with a plane defined by the longer axis of the rectangle and the vertical axis for changes in slope that exceed a threshold (5 degrees, for example). Large changes in slope define faces of a feature, such as the bottom, top and sides of a ridge in the structure. Having established the locations of the bottom, top and sides of a feature, the distance between the sides of the feature is computed at a vertical location (bottom, middle, or top) specified by the metrology step. The 3D modeling engine generates one or more types of output as it builds structural models. One type of output is the structural model itself, and may include its state at one or more points in the process sequence. The 3D model may be displayed to a user in the 3D viewer125(step512a). The 3D modeling engine also exports the virtual metrology data (step510). The virtual metrology data80may be exported to an automatic data analysis tool for further processing or may be displayed to a user through a user interface such as the tabular and graphical metrology results view124or other view (step512b). If the structure when viewed or analyzed is satisfactory (step513), the virtual fabrication run ends (step514). If the structure created by the 3D modeling engine is unsatisfactory, the user modifies the process sequence and/or the 2D design data (step516) and a new virtual fabrication run is set up (step508). FIG.6depicts an exemplary 3D viewer125in the virtual fabrication environment. The 3D viewer125may include a 3D view canvas602for displaying 3D models generated by the 3D modeling engine75. The 3D viewer125may display saved states604in the process sequence and allow a particular state to be selected606and appear in the 3D view canvas. The 3D viewer provides functionality such as zoom in/out, rotation, translation, cross section, etc. Optionally, the user may activate a cross section view in the 3D view canvas602and manipulate the location of the cross section using a miniature top view608. Another type of output from the 3D modeling engine75is the data produced by virtual metrology steps that are included in the process sequence.FIG.7depicts an exemplary display of virtual metrology measurement data80generated by multiple virtual metrology measurement steps in the virtual fabrication environment. The virtual metrology measurement result data80may be displayed in a tabular or graphical form including 2D X-Y plots and multi-dimensional graphics. The techniques employed in the exemplary virtual fabrication environment are geometry-based.
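A minimal sketch of the slope-threshold measurement described above is shown below, operating on a sampled top-surface profile; the function name, sampling scheme and return convention are illustrative assumptions.

```python
import math

def measure_cd(xs, zs, slope_threshold_deg=5.0):
    # Walk a top-surface profile sampled along the locator rectangle's long
    # axis; points where the surface orientation changes by more than the
    # threshold mark feature faces (bottom, top and sides of a ridge).
    thr = math.radians(slope_threshold_deg)
    faces = []
    for i in range(1, len(xs) - 1):
        a_in = math.atan2(zs[i] - zs[i - 1], xs[i] - xs[i - 1])
        a_out = math.atan2(zs[i + 1] - zs[i], xs[i + 1] - xs[i])
        if abs(a_out - a_in) > thr:
            faces.append(xs[i])
    # Distance between the outermost detected sidewalls approximates the CD.
    return faces[-1] - faces[0] if len(faces) >= 2 else None

# A ridge rising between x=3 and x=4 and falling between x=7 and x=8.
xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
zs = [0, 0, 0, 0, 5, 5, 5, 5, 0, 0, 0]
print(measure_cd(xs, zs))  # -> 5, the base width; a real metrology step
                           #    would pick bottom, middle or top as specified
```

Because the computation is purely geometric it is fast, but its quantitative accuracy depends on calibration against physical measurements, discussed next.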
Calibration of the process step input parameters against actual experimental results from a physical fabrication is therefore advisable to make virtual experiments more predictive. Such calibration of the process steps results in improved modeling accuracy for all structures that comprise the full technology suite. Calibration can be executed on individual process steps from measurements, metrology or other physical characterization methods on characterization structures or product structures. Calibration may be conducted by comparing modeling results, including virtual metrology measurement data, to corresponding measurements or metrology conducted in the physical fab (on corresponding characterization or product structures), and subsequently adjusting modeling parameters such that the resulting virtually fabricated structures better match the physically fabricated structures. With proper calibration of modeling process parameters, the virtual fabrication environment becomes more predictive of the structures that result from physical fabrication throughout the entire allowed design space. FIG.8depicts an exemplary sequence of steps to calibrate a process sequence in a virtual fabrication environment. The sequence includes steps taken in both a virtual fabrication environment and a corresponding physical fab environment. In the virtual fabrication environment, the user selects a process sequence (for a structure to be virtually fabricated) to be calibrated and identifies related process parameters (step802a). In the physical fab the user identifies a set of characterization or product structures for measurement during a fabrication run (step802b). Back in the virtual fabrication environment the user enters the process sequence in the process editor (step804a) and the 2D design data (layout) that defines the characterization structures is selected from available 2D design data or created for the purpose in the layout editor121(step804b). The same design data is used for virtual fabrication and actual characterization. As discussed above, the user inserts one or more virtual metrology steps in the process sequence (step806a) and adds measurement locator shapes to the 2D design data (step806b). The user sets up a virtual fab run in the virtual fabrication console (step808) and the 3D modeling engine builds the 3D model, and generates and exports virtual metrology data (step812a). In parallel with or offset from the virtual fabrication run, the physical fabrication environment creates the characterization or product structures (step810) and in-fab images and measurements are taken on these structures (step812b). The user may then compare the 3D views of the generated virtual model in the 3D viewer125to the in-fab images of the physical device structure (step814a). Further, the set of characterization structure measurements may be compared to the virtual metrology measurements taken as a result of the virtual metrology step being inserted into the process sequence (step814b). In most cases, this comparison will be made by the user, but alternatively the comparison may be made by an automated data analysis tool based on pre-defined or interactively solicited criteria. If there is satisfactory agreement between the views and images and the virtual and actual measurements (step815), the process sequence is considered calibrated (step816).
However, if there is not satisfactory agreement (step815), the user modifies the values of the process parameters in the process editor (step818) and a new virtual fabrication run is set up in the virtual fabrication console (step808). The sequence then iterates until a satisfactory agreement is reached and calibration is achieved. It should be appreciated that there may be a number of different parameters that may be calibrated within the sequence. Although the above description notes the insertion of virtual metrology steps in the process sequence and the related use of the 2D locator shape or shapes to conduct the virtual metrology measurements, other techniques could be employed in the virtual fabrication environment. For example, the virtual measurements could be conducted on a virtual device structure after fabrication is completed and then compared to the physical measurements taken of the characterization structures during/after the physical fabrication run. While building a single structural model can be valuable, there is increased value in virtual fabrication that builds a large number of models. A virtual fabrication environment may enable a user to create and run a virtual experiment. In a virtual experiment, a range of values of process parameters can be explored. A virtual experiment may be set up by specifying a set of parameter values to be applied to individual processes (rather than a single value per parameter) in the full process sequence. A single process sequence or multiple process sequences can be specified this way. The 3D modeling engine75, executing in virtual experiment mode, then builds multiple models spanning the process parameter set, all the while utilizing the virtual metrology measurement operations described above to extract metrology measurement data for each variation. This capability may be used to mimic two fundamental types of experiments that are typically performed in the physical fab environment. Firstly, fabrication processes vary naturally in a stochastic (non-deterministic) fashion. As explained herein, a fundamentally deterministic approach used for each virtual fabrication run nevertheless can predict non-deterministic results by conducting multiple runs. A virtual experiment mode allows the virtual fabrication environment to model through the entire statistical range of variation for each process parameter, and the combination of variations in many/all process parameters. Secondly, experiments run in the physical fab may specify a set of parameters to be intentionally varied when fabricating different wafers. The virtual experiment mode enables the virtual fabrication environment to mimic this type of experiment as well, by performing multiple virtual fabrication runs on the specific variations of a parameter set. Each process in the fabrication sequence has its own inherent variation. To understand the effect of all the aggregated process variations in a complex flow is quite difficult, especially when factoring in the statistical probabilities of the combinations of variations. Once a virtual experiment is created, the process sequence is essentially described by the combination of numerical process parameters included in the process description. Each of these parameters can be characterized by its total variation (in terms of standard deviation or sigma values), and therefore by multiple points on a Gaussian distribution or other appropriate probability distribution.
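A minimal sketch of enumerating such a variation study is shown below, sampling each parameter at its nominal and ±1, ±2 and ±3 sigma points and attaching the root-sum-squares total variation metric discussed below; the parameter names and values are illustrative assumptions.

```python
import itertools
import math

# Each parameter characterized by (nominal, sigma); names are illustrative.
params = {
    "etch_depth_nm": (50.0, 1.5),
    "spacer_thickness_nm": (8.0, 0.4),
}
sigma_points = [-3, -2, -1, 0, 1, 2, 3]

runs = []
for combo in itertools.product(sigma_points, repeat=len(params)):
    values = {name: nom + k * sig
              for ((name, (nom, sig)), k) in zip(params.items(), combo)}
    # Root-sum-squares of the (uncorrelated) sigma offsets attributes a
    # total variation metric to this case of the experiment.
    values["total_variation_sigma"] = math.sqrt(sum(k * k for k in combo))
    runs.append(values)

print(len(runs))  # 7**2 = 49 structural models for this two-parameter study
```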
If the virtual experiment is designed and executed to examine all of the combinations of the process variations (multiple points on each Gaussian, for example the ±3 sigma, ±2 sigma, ±1 sigma, and nominal values of each parameter), then the resulting graphical and numerical outputs from virtual metrology steps in the sequence cover the total variation space of the technology. Even though each case in this experimental study is modeled deterministically by the virtual fabrication system, the aggregation of the virtual metrology results contains a statistical distribution. Simple statistical analysis, such as a Root Sum Squares (RSS) calculation of the statistically uncorrelated parameters, can be used to attribute a total variation metric to each case of the experiment. Then, all of the virtual metrology output, both numerical and graphical, can be analyzed relative to the total variation metric. In typical trial-and-error experimental practice in a physical fab, a structural measurement resulting from the nominal process is targeted, and process variations are accounted for by specifying an overly large (conservative) margin for the total variation in the structural measurement (total structural margin) which must be anticipated in subsequent processes. In contrast, the virtual experiment in the virtual fabrication environment can provide quantitative predictions of the total variation envelope for a structural measurement at any point in the integrated process flow. The total variation envelope, rather than the nominal value, of the structural measurement may then become the development target. This approach can ensure acceptable total structural margin throughout the integrated process flow, without sacrificing critical structural design goals. This approach of targeting total variation may result in a nominal intermediate or final structure that is less optimal (or less aesthetically pleasing) than the nominal structure that would have been produced by targeting the nominal process. However, this sub-optimal nominal process is not critical, since the envelope of total process variation has been accounted for and is more important in determining the robustness and yield of the integrated process flow. This approach is a paradigm shift in semiconductor technology development, from an emphasis on the nominal process to an emphasis on the envelope of total process variation. FIG.9depicts an exemplary sequence of steps in the virtual fabrication environment to set up and perform a virtual experiment generating virtual metrology measurement data for multiple semiconductor device structural models. The sequence begins with a user selecting a process sequence (which may have been previously calibrated to make the results more structurally predictive) (step902a) and identifying/creating 2D design data (step902b). The user may select process parameter variations to analyze (step904a) and/or design parameter variations to analyze (step904b). The user inserts one or more virtual metrology steps in the process sequence as set forth above (step906a) and adds measurement locator shapes to the 2D design data (step906b). The user may set up the virtual experiment with the aid of a specialized user interface, an automatic parameter explorer126(step908). An exemplary automatic parameter explorer is depicted inFIG.10and may display, and allow the user to vary, the process parameters to be varied1002,1004,1006and the list of 3D models to be built with their corresponding different parameter values1008.
The parameter ranges for a virtual experiment can be specified in a tabular format. The 3D modeling engine75builds the 3D models and exports the virtual metrology measurement data for review (step910). The virtual experiment mode provides output data handling from all virtual measurement/metrology operations. The output data from the virtual metrology measurements may be parsed and assembled into a useful form (step912). With this parsing and assembling, subsequent quantitative and statistical analysis can be conducted. A separate output data collector module110may be used to collect 3D model data and virtual metrology measurement results from the sequence of virtual fabrication runs that comprise the virtual experiment and present them in graphical and tabular formats.FIG.11depicts an exemplary tabular-formatted display of virtual metrology data generated by a virtual experiment in the virtual fabrication environment. In the tabular formatted display, the virtual metrology data collected during the virtual experiment1102and the list of virtual fabrication runs1104may be displayed. FIG.12depicts an exemplary 2D X-Y graphical plot display of virtual metrology data generated by a virtual experiment in the virtual fabrication environment. In the example depicted inFIG.12, the total variation in shallow trench isolation (STI) step height due to varying three parameters in preceding steps of the process sequence is shown. Each diamond1202represents a virtual fabrication run. The variation envelope1204is also displayed, as is the depicted conclusion1206that the downstream process modules must support approximately 10.5 nm of total variation in STI step height to achieve robustness through 6 sigma of incoming variation. The virtual experiment results can also be displayed in multi-dimensional graphic formats. Once the results of the virtual experiment have been assembled, the user can review 3D models that have been generated in the 3D viewer (step914a) and review the virtual metrology measurement data and metrics presented for each virtual fabrication run (step914b). Depending on the purpose of the virtual experiment, the user can analyze the output from the 3D modeling engine for purposes of developing a process sequence that achieves a desired nominal structural model, for further calibrating process step input parameters, or for optimizing a process sequence to achieve a desired process window. The task of the 3D modeling engine75in constructing multiple structural models for a range of parameter values (comprising a virtual experiment) is very compute intensive and therefore could require a very long time (many days or weeks) if performed on a single computing device. To provide the intended value of virtual fabrication, model building for a virtual experiment must occur many times faster than a physical experiment. Achieving this goal with present day computers requires exploiting any and all opportunities for parallelism. The 3D modeling engine75uses multiple cores and/or processors to perform individual modeling steps. In addition, the structural models for different parameter values in a set are completely independent and can therefore be built in parallel using multiple cores, multiple processors, or multiple systems. The 3D modeling engine75in the virtual fabrication environment may represent the underlying structural model in the form of voxels. Voxels are essentially 3D pixels. Each voxel is a cube of the same size, and may contain one or more materials, or no materials.
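A minimal sketch of such a voxel representation is shown below, simplified to one material identifier per voxel; the grid dimensions, material codes and operations are illustrative assumptions.

```python
import numpy as np

# A uniform 3D grid of material IDs; 0 marks an empty voxel. A production
# representation may store multiple materials per voxel, as noted above.
EMPTY, SILICON, OXIDE = 0, 1, 2
grid = np.zeros((200, 200, 150), dtype=np.uint8)  # x, y, z voxel counts

grid[:, :, :50] = SILICON        # substrate occupies the bottom 50 voxels
grid[:, :, 50:70] = OXIDE        # a blanket deposit adds a 20-voxel layer

# A patterned etch clears oxide voxels inside an illustrative mask opening.
grid[80:120, :, 50:70] = EMPTY
```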
Those skilled in the art will recognize that the 3D modeling engine75may also represent the structural model in other formats. For instance, the 3D modeling engine could use a conventional NURBS-based solid modeling kernel such as is used in 3D mechanical CAD tools, although modeling operations based on a digital voxel representation are far more robust than the corresponding operations in a conventional analog solid modeling kernel. Such solid modeling kernels generally rely on a large number of heuristic rules to deal with various geometric situations, and modeling operations may fail when the heuristic rules do not properly anticipate a situation. Aspects of semiconductor structural modeling that cause problems for NURBS-based solid modeling kernels include the very thin layers produced by deposition processes and propagation of etch fronts that results in merging faces and/or fragmentation of geometry. The virtual fabrication environment may enable the performance of a multi-etch process that is included in the process sequence, which allows the 3D modeling engine75to model a wide range of process- and material-specific etch behavior. Patterning operations in process flows for highly scaled semiconductor devices are frequently performed using plasma etches. Plasma etches are known by many different names: dry etch, reactive ion etch (RIE), inductively coupled plasma (ICP) etch, etc. A wide variety of operating conditions and chemistry allows process engineers to fine-tune plasma etch behavior to selectively achieve diverse etch physics in multiple different classes of materials. This behavioral flexibility is key to achieving a desired 3D structure when patterning through several layers of material. Several different types of physics are typically involved, including but not limited to: chemical etching, sputtering, deposition or re-deposition of polymeric material, electrostatic charging, electrostatic focusing, and shadowing. This diverse spectrum of physics produces a commensurate range of etch behavior and hence structural shapes. Directly simulating the physics involved in plasma etches with sufficient accuracy is extremely difficult and slow. The multi-etch process step avoids the difficulties of physics-based simulations by simulating plasma etches using a reduced set of behavioral parameters that are specific to the type of etch and the material being etched. This allows the capture of a wide range of physical etch behavior without the need to directly simulate the physics of the etch process. For example, three main types of etch behavior may be simulated: isotropic, taper, and sputtering. A fourth type of etch behavior, shadowing, can optionally also be simulated. Basic (isotropic) behavior is caused (physically) by chemical etching and results in material being removed at a similar rate in all directions from the point on the etchable surface, regardless of the local orientation of the etchable surface. Basic behavior may be modeled with a single input parameter, "lateral ratio", that controls the ratio between the lateral and vertical etch rates. For example, a lateral ratio value of one (1.0) indicates that the etch rate is uniform in all directions. A lateral ratio value less than one indicates that the etch rate in the lateral direction (on vertical surfaces) is slower than the etch rate in the vertical direction (on horizontal surfaces). Taper behavior is caused (physically) by a combination of directional etch behavior and polymer deposition.
The polymer deposition occurs as a side effect of a directional etch process. During a directional etch process that etches horizontal surfaces much faster than vertical surfaces, polymer may accumulate on near-vertical surfaces. This competition between etching and deposition results in tapered sidewall profiles. Taper behavior may be modeled with a single input parameter, the taper angle. A taper angle describes the critical angle at which deposition and etch rates are balanced. An optional second parameter, the lateral ratio, has the same meaning as defined above for basic behavior. Sputter behavior refers to direct physical removal of material through bombardment by energetic ions and results in preferential removal of protruding edges (convex edges) and in some cases corners. Sputtering may be modeled with two parameters: the angle of maximum sputter yield, and the rate of sputter relative to the rate of vertical etching. Shadowing refers to a reduction in directional ion flux caused by a local elevation change, effectively reducing etch rates for some structures. This effect can be significant in some cases, resulting in differing etch rates across a cell. Shadowing may be modeled using a single parameter to describe angle of incidence of the energetic ions relative to a vertical axis. To model a multi-material, multi-physics etch, the input parameters described above must be formed into a suitable numerical modeling algorithm in the virtual fabrication environment. The numerical modeling algorithm includes single material and multi-material speed functions and a surface evolution technique. A single-material speed function defines the etch speed as a function of local surface orientation (i.e., surface normal direction) and is determined empirically in order to produce the desired etch behavior. Note also that a single-material speed function may combine multiple types of etch behavior; for example, both taper and sputter etching include the parameters associated with basic (isotropic) etching. A multi-material speed function is a combination of single-material speed functions, and calculates the local etch speed as a function of both local surface orientation and local material type. The Etch Ratio parameter defines the relative etch rates of etchable materials and is a multiplication factor on the single-material speed. With the speed function defined, a suitable surface evolution technique may be used to locate and evolve the position of the etchable surface in three dimensions. The etchable surface is advected or moved in its local normal direction according to the local scalar speed determined by evaluating the speed function. The scalar speed must be calculated at points of interest on the etchable surface and must be periodically re-calculated as the geometry of the etchable surface evolves. A number of different types of surface evolution techniques may be utilized by the numerical algorithm for simulating the multi-etch process in the virtual fabrication environment. The moving surface may be represented using any suitable numerical spatial discretization. Explicit front tracking methods may be used: examples include string methods, point-and-line methods (2D) and polygon surfaces (3D). An alternate implicit surface representation, such as distance fields, volume of fluid or voxels, may also be used. Any suitable time-dependent numerical technique may be used to advance the moving surface in time. 
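A minimal sketch of a multi-material speed function is shown below, combining only the basic (lateral-ratio) behavior with the Etch Ratio parameter; the taper, sputter and shadowing terms described above are omitted, and all names and values are illustrative assumptions.

```python
import math

def etch_speed(normal, material, params):
    # Local etch speed as a function of surface orientation and material.
    # 'normal' is the unit surface normal; normal[2] is 1.0 on horizontal
    # (upward-facing) surfaces and 0.0 on vertical surfaces.
    p = params[material]
    vertical = abs(normal[2])
    lateral = math.sqrt(max(0.0, 1.0 - vertical * vertical))
    # Basic behavior: full speed vertically, scaled by the lateral ratio
    # on vertical surfaces, blended by orientation in between.
    speed = vertical + p["lateral_ratio"] * lateral
    # Etch Ratio: relative etch rate of this material, a multiplication
    # factor on the single-material speed.
    return p["etch_ratio"] * speed

params = {  # illustrative behavioral parameters per material
    "oxide":   {"lateral_ratio": 0.10, "etch_ratio": 1.0},
    "nitride": {"lateral_ratio": 0.05, "etch_ratio": 0.2},
}
print(etch_speed((0.0, 0.0, 1.0), "oxide", params))    # flat top: 1.0
print(etch_speed((1.0, 0.0, 0.0), "nitride", params))  # sidewall: 0.01
```

A surface evolution technique would then advect the etchable surface in its local normal direction at the speed this function returns, re-evaluating it as the geometry evolves.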
A selective epitaxy process may be included in a process sequence used to virtually fabricate a semiconductor device structure. The selective epitaxy process virtually models epitaxial growth of a crystalline material layer on top of a crystalline substrate surface of a semiconductor device structure. Selective epitaxy is widely used in contemporary semiconductor process flows, often for the purpose of imparting mechanical stress on the transistor channel to improve performance. A key characteristic of epitaxial growth is its dependence on crystal directions. Semiconductor devices are normally fabricated on single crystal silicon wafers, i.e., silicon material with atoms arranged in a repetitive crystal lattice structure that is continuous over the majority of the wafer. Silicon crystal structure is anisotropic (i.e., not symmetric in all directions), and silicon surfaces are more stable in several particular crystal directions. These directions are defined by the major crystal plane families, identified as <100>, <110> and <111> using their Miller indices, and have the strongest impact on growth characteristics. By varying the pressure, temperature and chemical precursors in the epitaxy process, engineers can control the relative growth rates of the three major planes. Growth rates on minor planes, for example <211>, <311> and <411>, also vary but often are not influential in determining the final shape of an epitaxially grown structure. The virtual fabrication environment may use a surface evolution algorithm to model epitaxial growth. The surface upon which epitaxial growth is occurring (the growing surface) is advected or moved according to a scalar advection speed. The growth rate is calculated at selected points based on the local surface normal direction and fixed input parameters; the calculation is local in both space and time, and moves the surface in its normal direction. The growing surface may be represented using any suitable numerical spatial discretization. Explicit front tracking methods may be used: examples include string methods, point-and-line methods (2D) and polygon surfaces (3D). An alternate implicit surface representation, such as distance functions, volume of fluid or voxels, may also be used. Any suitable time-dependent numerical technique may be used to advance the growing surface in time. The selective epitaxy process in the virtual fabrication environment utilizes the growth rates of the three major plane families, <100>, <110> and <111>, as fixed input parameters. These input parameters define the growth rate for surfaces that are aligned with any one of their associated planes. Further input parameters may include growth rates on neighboring non-crystalline materials.
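The sketch below shows one plausible way to evaluate such an orientation-dependent growth speed: it enumerates the symmetry-equivalent members of each major plane family, rotates the surface normal into crystal coordinates (anticipating the model-to-crystal coordinate transformation discussed immediately below), and interpolates between the nearest families when the normal is not exactly aligned with a major plane. The rates and the inverse-angle interpolation rule are illustrative assumptions; a production tool may use a different interpolation scheme.

```python
import numpy as np
from itertools import permutations, product

RATES = {"100": 1.0, "110": 0.6, "111": 0.1}   # assumed relative growth rates

def plane_family(base):
    """All symmetry-equivalent unit directions of a cubic Miller family."""
    dirs = {tuple(s * p for s, p in zip(signs, perm))
            for perm in permutations(base)
            for signs in product((1, -1), repeat=3)}
    arr = np.array(sorted(dirs), dtype=float)
    return arr / np.linalg.norm(arr[0])        # every member has the same norm

FAMILIES = {name: plane_family(base)
            for name, base in (("100", (1, 0, 0)), ("110", (1, 1, 0)), ("111", (1, 1, 1)))}

def epitaxial_speed(normal, model_to_crystal=np.eye(3)):
    """Growth speed for a unit surface normal given in model coordinates.

    `model_to_crystal` is the rotation implied by the wafer notch/flat,
    mapping 3D-model coordinates into crystal-lattice coordinates."""
    n = model_to_crystal @ np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    # angular distance from the normal to the nearest member of each family
    angles = {name: float(np.min(np.arccos(np.clip(dirs @ n, -1.0, 1.0))))
              for name, dirs in FAMILIES.items()}
    nearest = sorted(angles, key=angles.get)[:2]
    if angles[nearest[0]] < 1e-9:              # aligned with a major plane
        return RATES[nearest[0]]
    w = [1.0 / angles[k] for k in nearest]     # inverse-angle interpolation (assumed)
    return (w[0] * RATES[nearest[0]] + w[1] * RATES[nearest[1]]) / (w[0] + w[1])

print(epitaxial_speed((0, 0, 1)))              # a <100>-aligned surface grows at 1.0
```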
The relationship between the 3D modeling coordinate system and the crystal lattice of the wafer may also be considered when calculating the epitaxial growth rate. The 3D modeling coordinate system normally uses the same X and Y axes as the 2D design data, and the Z axis is normally perpendicular to the surface of the wafer. Alternate coordinate systems may also be employed. On a real wafer, the orientation of the crystal lattice is indicated by a “flat” or “notch” on the edge of the otherwise circular wafer. The notch may be used as a reference to orient the 2D design data in the desired direction relative to the crystal lattice. Input parameters specifying the notch (or flat) type and direction may define the orientation of the crystal lattice and associated crystal planes of the wafer relative to the 2D design data. It should be noted that this relationship can be described as a coordinate transformation between the 3D model coordinate system and the coordinate system of the crystal lattice. Using the growth rates for the major plane families and knowing the orientation of the crystal lattice, the epitaxial growth rate may be calculated everywhere on the growing surface. Areas of the growing surface with a normal direction that is aligned with a major plane direction are assigned the speed of that major plane. For areas of the growing surface that are not aligned with a major plane direction, an appropriate speed must be found by interpolating between neighboring major plane directions. Further, the behavior of the epitaxial growth at the boundaries of the crystalline material can also be important. Epitaxial growth is often performed after several prior processing steps in which non-crystalline materials have been deposited and patterned. These non-crystalline materials may be adjacent to crystalline material and hence in close proximity to epitaxial growth. Examples of non-crystalline neighboring materials are silicon dioxide, silicon nitride, or any other materials common in semiconductor processing. In some cases, epitaxial growth slowly creeps along adjacent non-crystalline material (overgrowth), but in other cases it does not. Overgrowth behavior may be modeled with fixed input parameters defining the set of neighboring materials on which overgrowth occurs (overgrowth materials), as well as the speed at which the growing surface creeps along the overgrowth materials. The overgrowth speed modifies the epitaxial growth rate at the surface of the overgrowth materials such that the growing surface moves along the overgrowth material at the specified speed. In addition, the speed at which the growing surface moves along the overgrowth material may depend on the angle between the overgrowth material surface and the growing surface. The overgrowth speed may be ignored if the angle between the two surfaces is greater than a threshold angle. Design Rule Checks (DRCs) or Optical Rule Checks (ORCs) may be performed in the virtual fabrication environment. DRCs and ORCs have typically been performed by specialized software on 2D design data as part of the process of preparing 2D design data for conversion into photolithography masks. Such checks are performed for purposes of identifying errors in the layout that would result in non-functional or poorly functioning chips. The checks are also performed after adding compensations for optical effects such as optical proximity correction (OPC). Typical design rules (as published in design manuals and coded in DRC decks) are simple 2D criteria intended to prevent problems that are fundamentally 3D in nature. However, with the growing complexity of semiconductor process technology, design manuals have blossomed into thousand-page documents with thousands of 2D design rules to codify and explain. In many cases, a single 3D failure mechanism/concern can drive hundreds of 2D design rules. The development of those 2D design rules requires significant assumptions about the 3D nature of the integrated process flow and resulting structures. 2D DRCs are developed from relatively simple calculations that may result in overly conservative designs.
For example, consider the 2D design rules required to assure a minimum contact area between a line on a metal interconnect layer and an underlying via. A via is a vertical, electrically conductive connector between two interconnect layers, also called metal layers, or a vertical connector between an interconnect layer and a device such as a transistor, resistor or capacitor. Many additional 2D DRCs are required to satisfy a criterion that is very simple to state in 3D: that the contact area between metal lines and vias must exceed a specified threshold value. The 2D DRC situation becomes even more complex when one considers that multiple manufacturing variations can affect the contact area, including over- or under-exposure during lithography steps, mis-registration of the masks, planarization (via chemical mechanical polishing (CMP)) of the via layer, and the sidewall tapers produced by plasma etching. It is infeasible to include all of these statistical variations in the simple formulae that drive 2D DRCs, so the DRCs are stricter than necessary to guard against manufacturing variations. These overly strict 2D DRCs may result in sub-optimal designs with wasted area on the die. In contrast to a 2D DRC environment, a virtual fabrication environment may perform checks, such as minimum line width, minimum space between features, and minimum area of contacts, directly in 3D without making assumptions about the translation from 2D to 3D. Checks performed directly in 3D are referred to herein as “3D DRCs”. One benefit of 3D DRC is that the required number of checks is significantly smaller than the number required in 2D environments. As a result, the checks are more robust and easier to develop than 2D checks. Furthermore, with a much smaller set of 3D rules, the virtual fabrication environment can perform the checks for a range of statistical variations in process parameters. It should be appreciated that 3D DRCs are distinct from virtual measurement/metrology operations that may also be performed in the virtual fabrication environment. The virtual measurement/metrology operations mimic actual measurement and metrology operations in the fab, whereby a measurement location is specified and a metric such as a distance value or area is output. For 3D DRCs, on the other hand, a geometric criterion is specified and the location and value of the criterion are desired. That is, the location is an output of the 3D DRC operation rather than an input. For example, a virtual metrology operation may specify an oxide film thickness measurement at a specific location indicated by a locator in the 2D design data, whereas a 3D DRC for minimum layer thickness may request the location(s) anywhere in the 3D model where the oxide film thickness is less than a specified threshold value. The 3D structural model may then be searched for locations where the specified minimum dimensional criteria are satisfied. Similarly, a 3D DRC may also cause the structural model to be searched to see whether a maximum dimensional criterion is satisfied. 3D DRCs of this type thus provide benefits unavailable with virtual measurement/metrology operations for identifying unexpected causes of failures. Examples of 3D DRCs include:
- Electrical Net Isolation: finds the shortest distance between selected conductors. A conductor is a lump that may be comprised of one or more conducting materials (a “lump” is a discrete volumetric region (technically, a 3-manifold) within a 3D structural model;
a lump may be composed of a single material or multiple materials);
- Minimum Separation: finds the shortest distance between any pair in a group of selected lumps;
- Minimum Line Width: finds the shortest distance through any lump in a group of selected lumps;
- Minimum Layer Thickness: finds the shortest distance through any lump in the collection of lumps that comprise a layer of material;
- Minimum Contact Area: finds the smallest contact area between all pairs of selected lumps.
Lumps may be selected on the basis of constituent material(s), electrical conductivity or other properties. Each of the 3D DRC checks can be extended by specifying a threshold value. For example, specifying a threshold value for a Minimum Line Width check produces a list of locations where the minimum line width is less than the threshold value. Those skilled in the art will recognize that other checks of this nature may be defined.
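As a simplified illustration of a threshold-extended check of this kind, the sketch below computes a Minimum Separation-style result over lumps represented as point samples of their surfaces. The point-sampled lump representation and the lump names are assumptions made for this sketch; an actual implementation would operate on the 3D structural model directly.

```python
import numpy as np
from scipy.spatial import cKDTree

def minimum_separation(lumps, threshold=None):
    """Shortest distance between any pair in a group of selected lumps.

    `lumps` maps a lump name to an (N, 3) array of points sampled from that
    lump's surface. With `threshold` set, pairs closer than the threshold
    are also reported, mirroring the threshold-extended checks above."""
    names = list(lumps)
    trees = {name: cKDTree(pts) for name, pts in lumps.items()}
    best, violations = float("inf"), []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            dists, _ = trees[b].query(lumps[a])  # nearest point of b for each point of a
            pair_min = float(dists.min())
            best = min(best, pair_min)
            if threshold is not None and pair_min < threshold:
                violations.append((a, b, pair_min))
    return best, violations

# Two hypothetical conductor lumps roughly one unit apart:
rng = np.random.default_rng(0)
lumps = {"via_3": rng.random((500, 3)),
         "line_7": rng.random((500, 3)) + [2.0, 0.0, 0.0]}
print(minimum_separation(lumps, threshold=1.5))
```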
Analytics Module
In one embodiment, the virtual fabrication environment includes an analytics module. The analytics module is designed to mimic the workflows in use cases encountered by semiconductor process integrators. Exemplary use cases encountered by semiconductor process integrators and addressed by the analytics module may include, but are not limited to, key parameter identification, process model calibration and variability analysis. In key parameter identification, the analytics module may find the process steps/parameters that most strongly influence an outcome (calibration, defect mode, etc.). In process model calibration, the process parameters may be adjusted to make the 3D model match measurements from a physical fab, such as, but not limited to, Transmission Electron Microscopy (TEM) data or a process target. In variability analysis, the analytics module may assist the user in analyzing and understanding the variability in metrology data obtained for a set of virtual 3D models, such as, but not limited to, by estimating variability in structural or electrical parameters for specification limit setting. The analytics module described herein may generate process variation via experimental design or Monte Carlo simulation applied to parameters and settings in the virtual semiconductor fabrication environment and then perform automated statistical analysis, optimization, and visualization for users. The data being analyzed can include the settings of the input process parameters as well as, but not limited to, the metrology, structure search, Design Technology Checking (DTC) checks, and electrical analysis that are evaluated on the 3D virtual semiconductor structures produced in the virtual fabrication environment. Embodiments utilize statistical methods chosen and customized to solve problems and address issues peculiar to virtual semiconductor fabrication and correct for errors that may occur when exporting result data to conventional third party statistical tools. Embodiments also provide a more efficient technique for experimental design, because the particular manner in which the virtual semiconductor fabrication environment of the present invention constructs 3D models means that certain common problems other experimental design methods must address do not arise. For example, if the deck and the parameter settings are not changed, the same 3D model will be generated each and every time in the virtual semiconductor fabrication environment. Thus there is no random component to the 3D model output, and the three common experimental-design tasks of randomization, replication and blocking need not be performed. In one embodiment, the analytics module is integrated into the virtual fabrication environment, resulting in improved and new functionality not available via third party statistical solutions. In one embodiment, the UI and algorithms may be organized by use cases and follow a left-side, step-wise flow UI for each use case. This design may strongly guide the user (who may lack statistical training) to perform correct analysis steps so that they avoid mistakes in the analysis. The analytics module may also include a statistical analysis engine that employs a set of analysis algorithms to correctly analyze each specific use case. The analytics module may solve problems not correctly addressed by third party statistical software, such as multicollinearity and outliers (discussed below), and, as previously noted, avoids using methods that are not required, e.g., randomization during experimental design. Results of the analysis may be provided to a user or to third party software in a number of formats. FIG. 13 depicts an exemplary analytics flow in an exemplary embodiment. Inputs to the analytics module may include, but are not limited to, selection of the type of analysis, which may be organized by use case (e.g., identifying key parameters, optimization, calibration, variability analysis). Additional exemplary inputs include process parameters of interest (e.g., specified as nominal values and/or ranges) and targets of interest (e.g., metrology values, structure searches, DTC checks, electrical analysis values). In one embodiment, an input value may be a reference to a 3D model file. The analytics module may perform run list generation to set up the experimental design (e.g., a screening D.O.E., a full factorial D.O.E., a Monte Carlo simulation) followed by run list execution, and may utilize cluster computing to increase efficiency during execution. Outputs from execution may include outlier detection and statistical analysis results such as determining parameter significance/ranking. Outputs may also include exploratory graphs (e.g., bivariate plots, response surface) and indirect optimization. In one embodiment results may also be exported to third party tools for further analysis.
Key Parameter Identification
One exemplary use case for an embodiment employing an analytics module as described herein is key parameter identification. In key parameter identification the analytics module receives a user selection of a deck containing a 2D layout and process steps. The purpose of the key parameter identification use case is to determine which parameters are related to and affect a target. Then, those parameters are ranked to show their relative importance. In one embodiment, the use case has seven steps:
1) Pick the experimental design;
2) Select the parameters to vary and input user-selected levels into the design;
3) Generate the design and run it (export if necessary);
4) Select metrology targets;
5) Set regression options;
6) Select identified outliers for addition to or removal from the D.O.E. result data; and
7) Run the regression, view the results, and identify important/key parameters.
In this embodiment, the first step is the selection of a Design of Experiments (D.O.E.), also called experimental design.
A D.O.E. is a methodology for calculating the number of experiments at specific combinations of parameter settings such that more information is gained for less experimental effort. The analytics module provides three ways to create an experimental design to sample the parameter space: full factorial design, Definitive Screening Design (DSD) and Monte Carlo simulation. FIG. 14A depicts an exemplary UI 1400 provided in the virtual fabrication environment for making the selection of the type of experimental design 1402. Full factorial design is the most classic experimental design. All possible combinations are created. Full factorial designs are best used when the number of parameters is smaller, approximately from 2 to 7. For each parameter setting chosen, the user inputs the number of levels and the values for those levels via the UI. In one embodiment up to 10 levels can be input for each parameter setting. Definitive Screening Design (DSD) is a screening design used when the number of parameters is larger or the cost (time) of runs is high. It produces far fewer runs than full factorial designs for the same number of parameters. Embodiments may implement the DSD-augmented method for continuous variables only. In one embodiment, for a DSD, there are only three levels specified for each parameter. Monte Carlo simulation is a D.O.E. option that allows for random generation of parameter settings using normal or uniform distributions. In an embodiment, the UI allows the user to input means and standard deviations for normally distributed parameters, or minima and maxima for uniformly distributed parameters, and random values are generated accordingly. In an embodiment the user may also enter the number of runs desired. FIG. 14B depicts an exemplary UI 1410 in an embodiment by which a user can specify the levels for each parameter being varied in the design. FIG. 14B shows a screenshot for selecting parameters for a full factorial D.O.E. The left pane contains the list 1412 of parameters in the deck. Each can be selected and added to the right pane. There, the user inputs the desired number of levels 1414 and values 1416 for each level. For example, if three parameters have been selected and they have 3, 2, and 4 levels respectively, they would produce 3*2*4=24 runs. In an embodiment, the D.O.E. that has been created in the previous steps is run by the virtual semiconductor fabrication environment in batch mode, producing a 3D model for each run in the D.O.E. The D.O.E. may be exported to a csv or other type of file as well; a sketch of this run-list generation is shown below.
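The following minimal sketch generates run lists for the full factorial and Monte Carlo designs described above; the parameter names and values are hypothetical. It reproduces, for example, the 3*2*4 = 24-run full factorial just mentioned.

```python
import itertools
import random

def full_factorial(levels):
    """One run per combination of the user-entered levels."""
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(levels[name] for name in names))]

def monte_carlo(params, n_runs, seed=0):
    """Random settings; each parameter is ('normal', mean, sd) or ('uniform', lo, hi)."""
    rng = random.Random(seed)
    runs = []
    for _ in range(n_runs):
        run = {}
        for name, (kind, a, b) in params.items():
            run[name] = rng.gauss(a, b) if kind == "normal" else rng.uniform(a, b)
        runs.append(run)
    return runs

# Mirrors the 3 * 2 * 4 = 24-run example above (parameter names are hypothetical):
runs = full_factorial({"etch1_lateral_ratio": [0.1, 0.2, 0.3],
                       "taper_angle": [80, 85],
                       "layer_thickness": [1, 2, 3, 4]})
assert len(runs) == 24
mc_runs = monte_carlo({"layer_thickness": ("normal", 2.0, 0.1)}, n_runs=200)
```

Each generated run dictionary would then be handed to the virtual fabrication environment in batch mode to build one 3D model, or exported to a csv file.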
In the fourth step of the key parameter identification workflow, the metrology targets may be selected by the user to acquire measurements on the 3D models produced by the D.O.E. An exemplary UI 1420 for making the selection of metrology targets 1422 is depicted in FIG. 14C. To perform key parameter identification, a regression model is built in the fifth step of the workflow. In FIG. 14D, in an exemplary embodiment, the UI 1430 enables the user to select 1432 whether to build a regression model with main effects (original parameters) only, or to build a full quadratic model. In another embodiment, the type of regression model is automatically selected. In one embodiment, default options may be provided for either type of regression model and may be changed by a user. In another embodiment there may be additional options provided for more knowledgeable users. These additional options 1434 may include a cutoff for a collinearity test and two entry/exit p-value cutoffs for stepwise linear regression. The collinearity test enables correct handling of multicollinear variables and correct identification and exclusion of outliers from the statistical analysis being performed within the virtual semiconductor fabrication environment. Multicollinearity occurs when two or more predictor/independent variables in a multiple regression model are highly correlated, such that one can be predicted from the others with a high degree of accuracy. The fitting of a quadratic model often produces multicollinear variables, and embodiments address this issue as described further below. Within the set of 3D models created from the experimental design, one or more 3D models may have targets (metrology, CD, etc.) containing data values that are unusual in some respect (outliers) that would adversely affect or prevent a (correct) statistical analysis. The analytics module identifies outliers for the user. In FIG. 14E, in an exemplary embodiment, the UI 1440 enables the user to select from among identified outliers 1442 to determine which should be omitted from the target data when performing statistical analysis. There are four types of outliers that are tested for in a target in this step.
Empty cells: an empty data cell is returned for a target if the run failed (no 3D model could be built). This type of run is marked automatically as an outlier to be removed during statistical analysis and cannot be put back in by the user.
NOVAL: if a run completed but the target measurement could not be calculated, a text value of “NOVAL” is returned. This type of run is marked automatically as an outlier to be removed during statistical analysis and cannot be put back in by the user.
Constant Values: multiple values for a target may be identical. If many results for a target are identical, this will prevent or distort statistical modeling. The target data is tested to check whether a certain amount of the data, for example 50% or more, is identical/constant by comparing it to the median. Those runs are removed. If all the target data is identical, an error is reported.
Statistical Outliers: these are data points that are sufficiently far from the center of the data that they may need to be excluded from the analysis. The Median Absolute Deviation (MAD) method may be used to statistically test whether each data point is an outlier. Given that MAD=median(|x−median(x)|), a robust equivalent to the standard deviation may be calculated as SM=1.4826*MAD. Data values outside median(x)±K*SM (where K=3 by default, equivalent to 3 standard deviations) may be considered outliers and marked as such for user inspection.
In one embodiment users may place any of these outliers back into the analysis. It should be appreciated that there can be outliers in measured data that are not a concern when using the types of designs discussed here, i.e., DSD, full factorial or Monte Carlo simulation, since by definition those data points are in-range, unless the user has made a typographical or other error in setting the levels/ranges.
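The MAD-based statistical-outlier test described above is straightforward to express in code. The sketch below applies the rule with the default K = 3; the sample data is invented for illustration.

```python
import numpy as np

def mad_outliers(values, k=3.0):
    """Flag statistical outliers with the Median Absolute Deviation method.

    SM = 1.4826 * MAD is a robust equivalent of the standard deviation;
    points outside median(x) +/- k*SM (k = 3 by default) are flagged."""
    x = np.asarray(values, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    sm = 1.4826 * mad
    return np.abs(x - med) > k * sm   # boolean mask of outliers

target = np.array([10.1, 9.9, 10.0, 10.2, 9.8, 14.7])
print(mad_outliers(target))           # flags only the 14.7 measurement
```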
Following removal of the outliers, a number of types of statistical analysis may be performed on the data for the targets. For example, in one embodiment, the analytics module may create input parameters for a regression model (squares/cross-terms if selected). This permits fitting basic curved relationships between the x parameters and the target y. A set of variables X can be fit to a linear regression model, and the equation can be represented in linear algebra notation as X*b=y, where X is a matrix with n rows (the runs) and k columns (the variables). In an embodiment, the analytics module can also perform a multicollinearity check for all possible pairs of input variables, calculate the correlation coefficient r, and remove one parameter of every pair with |r|>0.9 (this cutoff can be adjusted by the user). This fixes the multicollinearity problem in most cases. In an embodiment, the analytics module can also perform an underdetermined matrix check to determine whether X is underdetermined (k>n). If there are more variables than data points (runs), there is not enough data to find a unique regression solution using standard equations (algorithms fail to return an answer). There are two solutions: 1) delete variables (use only main effects instead of a full 2nd-order model), or 2) use a method like principal component regression. In one embodiment the first type of solution is applied by the analytics module to delete variables. If k>n, then the squares and cross-terms are removed and the check is performed again. If X is still underdetermined, the regression cannot be performed and an error is returned to the user. The analytics module may further run a number check on the data. After outlier deletion, depending on the design chosen by the user and its size, there may not be enough runs left to warrant a regression. In one embodiment the check is to determine whether the number of runs n is <10, in which case there is not enough data and an error is returned to the user. In an embodiment, the analytics module may perform stepwise linear regression. The forward approach may be used: the initial model includes only an intercept (the β0 weight), and all variables are tested for statistical significance to see which one, if any, should enter the model. Once a variable is chosen, say variable x3, then all the remaining variables are tested for inclusion into the new model. This process continues until no variables meet the inclusion criteria (a p-value <0.05, user adjustable). Variables in the model are also tested for removal (a p-value >0.10, user adjustable). In an embodiment the analytics module may perform a relative importance calculation to identify key parameters. If a model is generated with two or more statistically significant parameters, a new linear regression is calculated using only those variables, but after they have been autoscaled. To autoscale a variable, the variable's mean is subtracted from all data points and then the resulting values are divided by the variable's original standard deviation. This makes all the variables have a mean of 0 and a standard deviation of 1. The reason for doing this is variable scale. One variable may be in a range 0 to 1, while another variable may be in a range 50 to 80. Importance (the size of the weights, the β values) in regression is affected by variable scale. If one wants to know which variables are more important by examining the β values, the variables in the regression model have to be converted to have the same variance, which autoscaling accomplishes.
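The collinearity filter and the autoscaled relative-importance calculation described above can be sketched as follows. This is a minimal NumPy rendition, assuming X is the run-by-variable matrix defined earlier; a production implementation would combine it with the stepwise entry/exit significance tests.

```python
import numpy as np

def drop_collinear(X, names, r_cutoff=0.9):
    """Remove one parameter of every pair with |r| > r_cutoff."""
    r = np.corrcoef(X, rowvar=False)
    keep = list(range(X.shape[1]))
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            if i in keep and j in keep and abs(r[i, j]) > r_cutoff:
                keep.remove(j)  # drop the second member of the collinear pair
    return X[:, keep], [names[k] for k in keep]

def relative_importance(X, y):
    """Regression weights computed after autoscaling each variable to
    mean 0 and standard deviation 1, so the |beta| values are comparable."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    A = np.column_stack([np.ones(len(y)), Z])   # intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]                             # weights without the intercept

# Toy data: 24 runs, three parameters, the second one twice as influential.
rng = np.random.default_rng(0)
X = rng.random((24, 3))
y = 1.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 0.01, 24)
Xk, kept = drop_collinear(X, ["p1", "p2", "p3"])
print(kept, relative_importance(Xk, y))
```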
The results may be presented to the user via a user interface 1450 in a number of different formats, such as, but not limited to, a plot with annotation 1452 and a table 1454, as depicted in FIG. 14F. The plot is a plot of predicted target vs. actual target. In one embodiment it may be annotated with the following: r² (the regression squared correlation coefficient, ranging from 0 to 1, which indicates the fraction of target variation explained by the model), Root-Mean-Squared Error (RMSE, a measure of predictive accuracy), and n (the actual number of data points/runs used in the regression model). In one embodiment the output table 1454 of regression results may have five columns, as depicted in larger form in FIG. 14G:
Column 1: Parameter Names. The names of the original variables and, if included, the square and cross-terms.
Column 2: p-value for significant variables.
Column 3: Regression weights (β).
Column 4: Relative Weight. Regression weights (β) for a regression calculated with autoscaled variables. These can be used to rank the significant parameters. For example, a parameter Etch4 lateral ratio may be determined to be more important than Etch1 etchratio when the relative importance is calculated.
Column 5: Status. In one embodiment there are four possible results: non-significant, significant, removed highly collinear and removed underdetermined.
In an embodiment, significant parameters have non-zero weights and scaled importance values that indicate how important a given process parameter is for the chosen metrology. This approach to key parameter identification is further summarized in FIG. 15, which depicts a sequence of steps performed to identify key parameters in an exemplary embodiment. The sequence begins with the receipt of a user identification of a deck (layout data and process steps) by the virtual fabrication environment (step 1500). Multiple virtual fabrication runs are then performed for a D.O.E. for the semiconductor device of interest (step 1502). In one embodiment, a user selection of the type of D.O.E. and additional D.O.E.-related input selections are received through a user interface provided in the virtual fabrication environment. Alternatively, in another embodiment, the type of D.O.E. and the D.O.E. parameters are automatically selected by the virtual fabrication environment. A user selection of targets (e.g., metrology measurement, a structure search, a DTC check and/or an electrical analysis) is received (step 1504), and the analytics module identifies outliers in the target data produced by the virtual fabrication runs as described above (step 1506). The identified outliers are displayed to the user, and then a user selection adding one or more of the outliers back into the target data or removing the outliers from the target data is received via a provided user interface (step 1508). The adjusted target data following the outlier decision is then used by the analytics module to perform a regression analysis to identify one or more key parameters for the D.O.E. (step 1510). An indication (e.g., list, plot, chart) of the identified key parameters is then displayed to the user, or the identified key parameters may be exported to a third party application for additional processing (step 1512).
Process Model Calibration
The analytics module may also perform process model calibration. In process model calibration, process step parameters and settings are adjusted in the virtual fabrication environment to make the virtual 3D model produced from virtual fabrication runs match a physical semiconductor produced in a physical fabrication environment.
Once calibrated, the parameters and their settings in the virtual semiconductor fabrication environment may be varied to introduce changes in the 3D models and provide insight into what process changes will improve various semiconductor properties. In one embodiment, a wizard user interface is provided to guide the user through the process of optimizing the virtual 3D model to match the physical semiconductor. The user selects measurement target(s) and their desired value(s), weights the importance of targets if there are multiple targets, sets parameter bounds, runs one or more trials, and receives optimized parameter values and corresponding measurement target results. Conventional virtual fabrication environments that adjust process parameters in a calibration effort lack a system-level component enabling proper process model calibration. Further, many semiconductor process integration engineers have little or no statistical knowledge. Consequently, those engineers perform process model calibration by adjusting parameters in a primitive trial-and-error fashion, usually via a one-factor-at-a-time (OFAT) approach. This approach is time-consuming and gives poor quality solutions, when it finds any solution at all. The OFAT approach guarantees that the optimal parameter sets cannot be found, because it does not take into account the effects of any interactions among the parameters. To address these issues, embodiments provide automated statistical analysis, optimization, and visualization for users (e.g., semiconductor process integrators who may have limited or no statistical knowledge) using an analytics module integrated into a virtual fabrication environment. More particularly, embodiments provide a programmatic approach to solving the problem of calibration without confusing the engineer untrained in statistics. A statistical analysis engine in the analytics module employs a set of analysis algorithms to analyze each specific use case with little user input. In one embodiment a user interface (UI) is a wizard whose purpose is to strongly guide the user to perform the correct analysis steps. The wizard may be organized by use cases and follow a left-side, step-wise flow UI for each use case. An example workflow for process model calibration performed in an exemplary embodiment is depicted in FIG. 16. The sequence begins with the virtual fabrication environment receiving an identification of a deck (layout data and process steps) from which to produce a virtual 3D model of a semiconductor device of interest. In most cases the deck will be retrieved following a user selection/specification provided via a UI in the virtual fabrication environment. The UI also receives a user identification of one or more measurement targets on the 3D model that the user desires to have match measurement targets on a corresponding physical semiconductor (step 1602). Targets may be, but are not limited to, values related to metrology values, structure searches, DTC checks, electrical analysis, etc., evaluated on the virtual semiconductor structure. In another embodiment, the deck may be selected programmatically without user input. The parameters which are important (key parameters) and should be adjusted to make the 3D model target values match the experimental data are then determined (step 1604). In one embodiment, this determination is done via the key parameter identification process performed by the analytics module as discussed above.
Alternatively, in another embodiment, the key parameters may be manually selected by the user via a UI. The sequence continues by receiving a user specification of a desired value (DV) for each target via the UI (step 1606). A DV can be, but is not limited to, a distance obtained from a TEM, the quality of a match between a slice of the 3D model and a whole TEM, or an optical spectrum. Relative weighting is applied to each target by default or as indicated by the user; e.g., for two targets A and B, target A may be weighted to be twice as important as target B if the user desires. The sequence continues by receiving a user specification of each parameter to be adjusted in the calibration, with the user setting lower and upper bounds (step 1608). The optimization algorithm provided in the analytics module keeps the parameters inside these bounds as it iterates towards a solution. The analytics module next executes an optimization algorithm (step 1610). The optimization algorithm may perform indirect or direct optimization, both of which are described further below. In one embodiment, the user may have options to select or specify, such as the number of iterations, the convergence tolerance, the type of scoring function (L-2 or L-1), the number of trials, etc. In some embodiments, for multiple trials, random starting values of the parameters may be created that are inside the lower and upper bounds previously specified. The results of the optimization algorithm are displayed to the user (step 1612). In one embodiment, the user can select a trial from the displayed results via the UI to trigger the building of a 3D model in the virtual fabrication environment (step 1614). Two different types of optimization algorithms may be used by the analytics module. Indirect optimization applies an optimization algorithm to the regression equations created during the key parameter identification process. Indirect optimization has the advantages of being very fast, since it does not call the virtual fabrication environment to build additional 3D models, and of generally avoiding local minima, because the regression equations provide a set of planes making up the response surface (a response surface indicates the relationship between the parameters and the error between the 3D model targets and Desired Values). Trials begun from random starting points in parameter space tend to converge to similar results, so users may be able to use only a small number of trials to perform their optimization task. It should also be noted that indirect optimization has the disadvantage that if the regression equation(s) poorly predict the target(s), e.g., if the response surface is highly non-linear, the results will be of poor quality. Direct optimization is much slower than indirect optimization and may be used in an embodiment where the key parameter identification process discussed above is not followed. In this method, the optimization algorithm calls the virtual fabrication environment at each iteration, generating a new 3D model and associated metrology values and updating the optimization algorithm, which then adjusts the parameter values. This is a sequential optimization process. Direct optimization has the advantages of being the most realistic method and working better for non-linear response surfaces, and it does not necessarily require that the key parameter identification process described above be run first (no regression equations are required, and the user would only need to pick parameters to optimize).
It has the disadvantages of being slow, since direct optimization calls the virtual fabrication environment to build 3D models at each iteration of each trial, and of potentially becoming trapped in local minima. These disadvantages can be alleviated by using multiple licenses (speed) and more trials to provide a broader sampling of parameter space so that the algorithm avoids becoming trapped in a local minimum. A variety of optimization algorithms can be used to perform direct and indirect optimization. As a non-limiting example, in one embodiment an interior-point algorithm with parameter bounds may be utilized for indirect optimization, although other algorithms could be used. For direct optimization, as a non-limiting example, genetic algorithms may be used, as they can handle complex response surfaces with discontinuities and binary targets (present/absent). As one non-limiting illustration of performing process model calibration with indirect optimization, in one embodiment a user first completes the key parameter identification process via the analytics module as described herein. More particularly, the user conducts an experimental design and regression on a set of parameters and targets (metrology, structure search, DTC checks, electrical analysis, etc., evaluated on the virtual semiconductor structure). This identifies the statistically significant parameters for each target, and creates a regression equation predicting each target using those statistically significant parameters. As discussed above, the user selects one or more target(s), enters the desired value (DV) for each target, and weights their importance. A default weighting of 1 for each target may be provided. For calibration options the user may pick whether squared error is used (the default) or not, and can set advanced options such as, but not limited to, the number of optimization trials, the number of iterations and the convergence tolerance. Default values may be provided for each option. For example, the number of optimization trials may be set to a default value of 10, the number of iterations per trial may be set at a default value of 100, and the convergence tolerance may be set at a default value of 1e-6. Following the setting of the advanced options, the user may set, via the provided UI, the allowed lower and upper bounds for each parameter being optimized. The parameter values will be kept inside these bounds during the optimization by the analytics module. The user initiates the running of the calibration via the UI and the optimization begins. In one embodiment, the underlying compute engine may use an interior-point algorithm. Once the optimization trial(s) complete, the optimized parameter and target values are displayed for each trial, along with completion/error messages, and the user can select one trial to build in the virtual fabrication environment to assess the resulting 3D model.
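A minimal sketch of such an indirect optimization loop appears below. The regression coefficients, target names, desired values, weights, and bounds are invented for illustration, and SciPy's bound-constrained L-BFGS-B method is used as a readily available stand-in for the interior-point algorithm mentioned above; multiple trials start from random points inside the bounds.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical regression equations from key parameter identification:
# predicted target = intercept + weights . parameters (all numbers invented).
REGRESSIONS = {"FinCD_Top": (12.0, np.array([0.8, -2.1])),
               "FinCD_Bot": (15.0, np.array([0.1, 3.4]))}
DESIRED = {"FinCD_Top": (10.5, 2.0),   # (desired value, e.g. from a TEM; weight)
           "FinCD_Bot": (16.0, 1.0)}
BOUNDS = [(0.5, 2.0), (0.0, 1.0)]      # allowed lower/upper bound per parameter

def score(p):
    """Weighted squared error between predicted targets and desired values."""
    total = 0.0
    for target, (b0, beta) in REGRESSIONS.items():
        dv, weight = DESIRED[target]
        total += weight * ((b0 + beta @ p) - dv) ** 2
    return total

rng = np.random.default_rng(0)
trials = []
for _ in range(10):                    # default of 10 optimization trials
    x0 = [rng.uniform(lo, hi) for lo, hi in BOUNDS]
    res = minimize(score, x0, method="L-BFGS-B", bounds=BOUNDS,
                   options={"maxiter": 100})
    trials.append((res.fun, res.x))
best_error, best_params = min(trials, key=lambda t: t[0])
print(best_error, best_params)
```

Because the objective here is evaluated from regression equations rather than from new 3D model builds, each trial completes in milliseconds, which is the speed advantage of indirect optimization noted above.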
As noted above, in one embodiment, the process model calibration sequence may be guided via a UI wizard. FIG. 17 depicts the selection of metrology targets for the process model calibration sequence described above, where the user is guided to select targets from among the targets for which regression data has been previously generated during the key parameter identification process. As explained further below, the regression data is subsequently used when performing indirect optimization. In one embodiment, the UI 1700 presents a selectable list of targets 1702 but limits the user to selecting from metrology targets that already have regression models. In an embodiment, no other parameters and no other metrology targets are provided. FIG. 17 also depicts a table 1704 in the UI enabling a user to specify DVs for the selected target. In the table in the right pane the user enters DVs (those cells may initially be empty) and weights, which may be 1 by default and can be changed by the user. FIG. 18 depicts an exemplary user interface 1800 enabling the selection of calibration options 1802 which may be provided by the process model calibration wizard. As depicted, in one embodiment, the optimization approach may be selected 1804 (indirect vs. direct) and an advanced-options checkbox 1806 may be provided by the virtual fabrication environment to enable the user to specify options such as the number of optimization trials, the iterations per trial and the tolerance desired by the user. Default values may initially be provided and, in one embodiment, may be changed by the user. The process model calibration wizard may also provide a user interface 1900 enabling the user to select parameter bounds, as depicted in FIG. 19. A list of all the statistically significant parameters in the regressions selected by the user may be created by the analytics module and displayed in a table format 1902. The relevant targets 1904 are listed for each parameter. For example, in the table depicted in FIG. 19, the parameter 2.1.15:thickness is significant for the three regression targets FinCD_Top, FinCD_Bot and GapCD_Top. Users enter the desired lower and upper bounds for each parameter. The process model calibration wizard may then provide a run button to start the calibration, and the results may be displayed to the user via a user interface 2000 as depicted in FIG. 20. For example, the results 2002 may be displayed in a table format from an internal or external simulation environment and may display the trial number, optimization result, predicted target results, and values for parameters 2004. In one embodiment, the displayed view may enable the user to select a column in the table to export, or to automatically build a model in the 3D view of the virtual fabrication environment using the parameters from a particular successful trial.
Variability Analysis
Variability analysis helps the user analyze and understand the variability in metrology data obtained for a set of virtual 3D models. In one embodiment the analytics module in the virtual fabrication environment can perform variability analysis to generate a user interface that displays a table of calculated information about the target distribution and plots of the target data histogram and normal quantiles, and provides the ability to switch to a second plot window, select up to four targets and plot/compare their empirical cumulative distribution functions. Further, the variability analysis as described herein provides estimates of the precision of the standard deviation (sigma) and its interrelationship with sample size, methods for assessing whether target data is normally distributed, and consistent methods for visual comparison. Variability analysis is a task where the user assesses the distribution of values for a target (metrology, structure search, DTC checks, electrical analysis, etc.) obtained from multiple virtual semiconductor structures created in a virtual fabrication environment. The purpose is to determine nominal values, ranges, specification limits, etc. for that target. Conventional virtual fabrication environments for semiconductor device structures lack system-level components enabling proper variability analysis.
Many semiconductor process integration engineers have little or no statistical knowledge, and consequently those engineers perform variability analysis in an incomplete and/or incorrect fashion. Target data may be assumed to be normally distributed, which may not be the case, and if the target data is not normally distributed, then mean and sigma values are misleading. Even if the target data is normally distributed, the proper sample size needed to attain useful precision of sigma is usually not addressed in the Monte Carlo simulation/experimental design. Users frequently overestimate or underestimate the sample size, which wastes time and/or leads to a poor quality answer. Further, visualization and comparison of distributions are done in different software packages in different ways, or not at all, which leads to confusion among users. To address these issues, in one embodiment the analytics module is designed to perform variability analysis so as to provide automated statistical analysis, optimization, and visualization for users (e.g., semiconductor process integrators with limited or no statistical knowledge) in a virtual fabrication environment. FIG. 21 depicts a sequence of steps to perform variability analysis in an exemplary embodiment. The sequence begins with the receipt of a user identification of a deck (layout data and process steps) used by the virtual fabrication environment to produce a virtual 3D model of the semiconductor device structure of interest (step 2100). The user creates a Monte Carlo D.O.E. and identifies targets for the 3D model (step 2102). Multiple virtual fabrication runs are then performed for the Monte Carlo D.O.E. (step 2104). As discussed further below, in one embodiment a reduced set of approximately 200 runs is performed. The analytics module identifies outliers in the target data produced by the virtual fabrication runs in the manner described above (step 2106). The identified outliers are displayed to the user, and a user selection adding one or more of the outliers back into the target data or removing the outliers from the target data for each target is received via a provided user interface (step 2108). The user selects the variability analysis option via the user interface and selects one or more targets for the analysis (step 2110). The variability analysis results are then displayed to the user in different forms, such as, but not limited to, tabular distribution data and plots of the target data histogram and normal quantiles, or the results may be exported to a third party application for additional processing (step 2112). If desired, the user can switch to a second plot window, an Empirical Cumulative Distribution Function (ECDF) window, and select up to four targets, and the analytics module will plot/compare their empirical distribution functions. FIG. 22 depicts an exemplary user interface displaying a variability analysis results window 2200 in an exemplary embodiment. For the selected target, the variability analysis main window shows a table 2202 and two plots, the histogram 2204 and the normal quantile plot 2206. The table 2202 contains multiple pieces of calculated information for the selected target, for example:
n: the number of data points used in the calculations (the user may add/remove outliers, and therefore the actual number of data points used is shown here);
mean and the 95% CI (confidence interval) of the mean;
standard deviation and the 95% confidence interval for the standard deviation.
The 95% CI is very important for the user to know, because it is an estimate of the precision of the standard deviation (sigma). If n=200, the 95% CI is approximately ±10%, which has been found to be useful in estimating specification limits. A sample size of 200 is much smaller than is commonly recommended for Monte Carlo simulations (the usual recommendation is 10,000) but provides a precision of ±10%, which is acceptable for some use cases. Users can adjust the sample size (n) to improve the precision (CI) for sigma and the mean as desired. In another embodiment, the sample size for the Monte Carlo simulation is less than five hundred.
Normality Test: the result of the Lilliefors Normality Test applied to the selected target, reported as a p-value and whether it is statistically significant (yes/no). This is the first of multiple methods used by the analytics module to assess whether the target data is normally distributed.
Percentiles: min, 0.5%, 2.5%, 5%, 25%, 50% (median), 75%, 95%, 97.5%, 99.5% and max for the selected target.
The variability analysis main window may also display a histogram plot, i.e., a histogram of the data for the selected target, with a normal pdf overlaid for visual comparison of normality. If the histogram bars follow the normal pdf, the target data can be said to be normally distributed. This is a second method provided by the analytics module for testing normality of target data. The variability analysis main window may further display a normal quantile plot of the selected target data. If the points fall close to or on the line, the target data can be said to be normally distributed. This is a third method provided by the analytics module for testing normality of target data. It will be appreciated that additional methods for testing normality of target data not explicitly discussed herein may also be performed by the analytics module and should be considered to be within the scope of the present invention. The analytics module may also generate the display of a second window for displaying variability analysis results. FIG. 23 depicts an exemplary user interface 2300 displaying a comparison of the empirical cumulative distribution functions of two separate targets 2302, 2304 in an exemplary embodiment. For example, the user may click on the tab 2306 for the ECDF window and select up to four targets to plot and compare their empirical cumulative distribution functions. The x-axis is the target data scaled to be in the range 0 to 1, and the y-axis is the cumulative probability from 0 to 1. This enables users to compare target distributions in an equivalent fashion and examine tail effects, which are important in specification limit setting. The multiple methods for assessing normality that are made available by the analytics module allow the user to determine whether they should treat the target data as being normally distributed. If the target data is normally distributed, the user can use the mean and standard deviation to estimate the commonly used three or four sigma points for setting specification limits. If the data is not normally distributed, then the user may estimate useful specification limit points from the percentiles and min/max displayed in the table, as well as from the tails of the ECDF plot. In another embodiment, the target data may be automatically fit with Gaussian mixture models and thus used to estimate useful points for specification limit settings. In an embodiment, a variant of this approach is a feature allowing the user to fit the data with a variety of other known distributions, e.g., F or t distributions, and thereby estimate useful points for specification limit settings.
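The quantities in the variability table can be reproduced with standard statistics libraries, as sketched below: an exact chi-square confidence interval for sigma (which for n = 200 indeed works out to roughly ±10%), the percentile list, and the Lilliefors test as implemented in statsmodels. The sample data is synthetic.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

def variability_summary(x, alpha=0.05):
    """Summary statistics mirroring the variability-analysis table."""
    x = np.asarray(x, dtype=float)
    n, mean, sd = len(x), x.mean(), x.std(ddof=1)
    t = stats.t.ppf(1.0 - alpha / 2.0, n - 1)
    mean_ci = (mean - t * sd / np.sqrt(n), mean + t * sd / np.sqrt(n))
    # Exact chi-square confidence interval for sigma; with n = 200 this is
    # roughly -9% / +11%, i.e. the ~±10% precision noted above.
    sd_lo = sd * np.sqrt((n - 1) / stats.chi2.ppf(1.0 - alpha / 2.0, n - 1))
    sd_hi = sd * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2.0, n - 1))
    _, p_value = lilliefors(x)              # Lilliefors normality test
    percentiles = np.percentile(x, [0, 0.5, 2.5, 5, 25, 50, 75, 95, 97.5, 99.5, 100])
    return {"n": n, "mean": mean, "mean_95ci": mean_ci, "sd": sd,
            "sd_95ci": (sd_lo, sd_hi), "lilliefors_p": p_value,
            "percentiles": percentiles}

target = np.random.default_rng(1).normal(50.0, 2.0, size=200)  # synthetic target data
print(variability_summary(target)["sd_95ci"])
```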
Portions or all of the embodiments of the present invention may be provided as one or more computer-readable programs or code embodied on or in one or more non-transitory mediums. The mediums may be, but are not limited to, a hard disk, a compact disc, a digital versatile disc, a flash memory, a PROM, a RAM, a ROM, or a magnetic tape. In general, the computer-readable programs or code may be implemented in any computing language. Since certain changes may be made without departing from the scope of the present invention, it is intended that all matter contained in the above description or shown in the accompanying drawings be interpreted as illustrative and not in a literal sense. Practitioners of the art will realize that the sequence of steps and architectures depicted in the figures may be altered without departing from the scope of the present invention and that the illustrations contained herein are singular examples of a multitude of possible depictions of the present invention. The foregoing description of example embodiments of the invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while a series of acts has been described, the order of the acts may be modified in other implementations consistent with the principles of the invention. Further, non-dependent acts may be performed in parallel. | 95,583 |
11861290 | DETAILED DESCRIPTION Embodiments of techniques for inverse design of physical devices are described herein, in the context of generating designs for photonic integrated circuits (including a multi-channel photonic demultiplexer). In the following description numerous specific details are set forth to provide a thorough understanding of the embodiments. One skilled in the relevant art will recognize, however, that the techniques described herein can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring certain aspects. Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Wavelength division multiplexing and its variants (e.g., dense wavelength division multiplexing, coarse wavelength division multiplexing, and the like) take advantage of the bandwidth of optical fibers by bundling multiple optical carrier signals onto a single optical fiber. Once the multiple carrier signals are bundled together, they are transmitted from one place to another over the single optical fiber, where they may be demultiplexed to be read out by an optical communication device. However, devices that decouple the carrier signals from one another remain prohibitive in terms of cost, size, and the like. Moreover, photonic devices, such as those used for optical communication, are traditionally designed via conventional techniques, sometimes determined through a simple guess-and-check method or a manually guided grid search in which a small number of design parameters from pre-determined designs or building blocks are adjusted for suitability to a particular application. However, in actuality, these devices may have design parameters ranging from hundreds all the way to many billions or more, dependent on the device size and functionality. Thus, as the functionality of photonic devices increases and manufacturing tolerances improve to allow for smaller device feature sizes, it becomes increasingly important to take full advantage of these improvements via optimized device design. Described herein are embodiments of a photonic integrated circuit (e.g., a multi-channel photonic demultiplexer and/or multiplexer) having a design obtainable by an inverse design process. More specifically, techniques described in embodiments herein utilize gradient-based optimization in combination with first-principles simulations to generate a design from an understanding of the underlying physics that are expected to govern the operation of the photonic integrated circuit. It is appreciated that in other embodiments, design optimization of photonic integrated circuits without gradient-based techniques may also be used. Inverse design may include any technique wherein a performance metric is provided and a design is created to maximize the performance metric.
The discussion herein primarily relates to inverse design techniques that use gradient descent with a forward simulation/backpropagation technique to maximize the performance metric. However, the description of these techniques should not be seen as limiting. In some embodiments, other inverse design techniques that can be used to maximize a design's performance based on a performance metric include, but are not limited to, genetic design, generative design, engineering optimization, shape optimization, and topology optimization. Advantageously, embodiments and techniques described herein are not limited to conventional techniques used for design of photonic devices, in which a small number of design parameters for pre-determined building blocks are adjusted based on suitability to a particular application. Rather, the first-principles based designs described herein are not necessarily dependent on human intuition and generally may result in designs which outstrip current state-of-the-art designs in performance, size, robustness, or a combination thereof. Further still, rather than being limited to a small number of design parameters due to conventional techniques, the embodiments and techniques described herein may provide scalable optimization of a nearly unlimited number of design parameters. It will also be appreciated that, though the design and fabrication of photonic integrated circuits are described throughout the present text, similar inverse design techniques may be used to generate designs for other types of physical devices. FIG.1is a functional block diagram illustrating a system100for optical communication (e.g., via wavelength division multiplexing or other techniques) between optical communication device102and optical communication device120via optical signal110, in accordance with various aspects of the present disclosure. More generally, optical communication device102is configured to transmit information by modulating light from one or more light sources into a multi-channel optical signal110(e.g., a singular optical signal that includes a plurality of distinct wavelength channels) that is subsequently transmitted from optical communication device102to optical communication device120via an optical fiber, a light guide, a wave guide, or other photonic device. Optical communication device120receives the multi-channel optical signal110and demultiplexes each of the plurality of distinct wavelength channels from the multi-channel optical signal110to extract the transmitted information. It is appreciated that in some embodiments optical communication device102and optical communication device120may be distinct and separate devices (e.g., an optical transceiver or transmitter communicatively coupled via one or more optical fibers to a separate optical transceiver or receiver). However, in other embodiments, optical communication device102and optical communication device120may be part of a singular component or device (e.g., a smartphone, a tablet, a computer, optical device, or the like). For example, optical communication device102and optical communication device120may both be constituent components on a monolithic integrated circuit that are coupled to one another via a waveguide that is embedded within the monolithic integrated circuit and is adapted to carry optical signal110between optical communication device102and optical communication device120or otherwise transmit the optical signal between one place and another.
In the illustrated embodiment, optical communication device102includes a controller104, one or more interface device(s)112(e.g., fiber optic couplers, light guides, waveguides, and the like), a multiplexer (mux), demultiplexer (demux), or combination thereof (MUX/DEMUX114), one or more light source(s)116(e.g., light emitting diodes, lasers, and the like), and one or more light sensor(s)118(e.g., photodiodes, phototransistors, photoresistors, and the like) coupled to one another. The controller includes one or more processor(s)106(e.g., one or more central processing units, application specific circuits, field programmable gate arrays, or otherwise) and memory108(e.g., volatile memory such as DRAM and SRAM, non-volatile memory such as ROM, flash memory, and the like). It is appreciated that optical communication device120may include the same or similar elements as optical communication device102, which have been omitted for clarity. Controller104orchestrates operation of optical communication device102for transmitting and/or receiving optical signal110(e.g., a multi-channel optical signal having a plurality of distinct wavelength channels or otherwise). Controller104includes software (e.g., instructions included in memory108coupled to processor106) and/or hardware logic (e.g., application specific integrated circuits, field-programmable gate arrays, and the like) that when executed by controller104causes controller104and/or optical communication device102to perform operations. In one embodiment, controller104may choreograph operations of optical communication device102to cause light source(s)116to generate a plurality of distinct wavelength channels that are multiplexed via MUX/DEMUX114into a multi-channel optical signal110that is subsequently transmitted to optical communication device120via interface device112. In other words, light source(s)116may output light having different wavelengths (e.g., 1271 nm, 1291 nm, 1311 nm, 1331 nm, 1511 nm, 1531 nm, 1551 nm, 1571 nm, or otherwise) that may be modulated or pulsed via controller104to generate a plurality of distinct wavelength channels representative of information. The plurality of distinct wavelength channels are subsequently combined or otherwise multiplexed via MUX/DEMUX114into a multi-channel optical signal110that is transmitted to optical communication device120via interface device112. In the same or another embodiment, controller104may choreograph operations of optical communication device102to cause a plurality of distinct wavelength channels to be demultiplexed via MUX/DEMUX114from a multi-channel optical signal110that is received via interface device112from optical communication device120. It is appreciated that in some embodiments certain elements of optical communication device102and/or optical communication device120may have been omitted to avoid obscuring certain aspects of the disclosure. For example, optical communication device102and optical communication device120may include amplification circuitry, lenses, or components to facilitate transmitting and receiving optical signal110. It is further appreciated that in some embodiments optical communication device102and/or optical communication device120may not necessarily include all elements illustrated inFIG.1.
For example, in one embodiment optical communication device102and/or optical communication device120are passive devices that operate as an intermediary device that may passively multiplex a plurality of distinct wavelength channels into a multi-channel optical signal110and/or demultiplex a plurality of distinct wavelength channels from a multi-channel optical signal110. FIG.2AandFIG.2Brespectively illustrate an example demultiplexer206and multiplexer208, in accordance with various aspects of the present disclosure. Demultiplexer206and multiplexer208are possible embodiments of MUX/DEMUX114illustrated inFIG.1, and may be part of an integrated photonic circuit, silicon photonic device, or otherwise. As illustrated inFIG.2A, demultiplexer206includes an input region202and a plurality of output regions204. Demultiplexer206is configured to receive a multi-channel optical signal110that includes a plurality of distinct wavelength channels (e.g., Ch. 1, Ch. 2, Ch. 3, . . . Ch. N, each having a center wavelength respectively corresponding to λ1, λ2, λ3, . . . λN) via input region202(e.g., a waveguide that may correspond to interface device112illustrated inFIG.1) to optically separate each of the plurality of distinct wavelength channels from the multi-channel optical signal110and respectively guide each of the plurality of distinct wavelength channels to a corresponding one of a plurality of output regions204(e.g., a plurality of waveguides that may correspond to interface device(s)112illustrated inFIG.1). More specifically, in the illustrated embodiment, each of the output regions204receives a portion of the multi-channel optical signal that corresponds to, or is otherwise representative of, one of the plurality of distinct wavelength channels that may be output as plurality of optical signals (e.g., λ1, λ2, λ3, . . . λN). The plurality of output regions204may each be coupled to a respective light sensor (e.g., corresponding to light sensor118illustrated inFIG.1), which may be utilized to convert the optical signals demultiplexed from the multi-channel optical signal110into electrical signals for further processing. In the illustrated embodiment ofFIG.2B, multiplexer208includes a plurality of input regions216and an output region210. Multiplexer208is configured to receive a plurality of distinct optical signals (e.g., λ1, λ2, λ3, . . . λN), each at a respective one of the plurality of input regions216(e.g., a plurality of waveguides that may correspond to interface device(s)112illustrated inFIG.1). Multiplexer208is structured or otherwise configured to optically combine (i.e., multiplex) each of the plurality of distinct wavelength channels into a multi-channel optical signal110that is guided to output region210(e.g., a waveguide that may correspond to interface device112illustrated inFIG.1). It is appreciated that in some embodiments, demultiplexer206illustrated inFIG.2Aand multiplexer208illustrated inFIG.2Bmay be bidirectional such that each device may function as both a demultiplexer and multiplexer. FIG.2Cillustrates an example distinct wavelength channel of a multi-channel optical signal (e.g., Ch. N is multi-channel optical signal110illustrated inFIG.1,FIG.2A, andFIG.2B), in accordance with various aspects of the present disclosure. The example channel may be representative of an individual channel included in a plurality of distinct wavelength channels of the multi-channel optical signal that may be demultiplexed and/or multiplexed by demultiplexer206ofFIG.2Aand/or multiplexer208ofFIG.2B.
Each of the distinct wavelength channels may have different center wavelengths (λN) including at least one of 1271 nm, 1291 nm, 1311 nm, 1331 nm, 1511 nm, 1531 nm, 1551 nm, or 1571 nm, or otherwise. In the illustrated embodiment ofFIG.2C, the distinct wavelength channel has a channel bandwidth212of approximately 13 nm. However, in other embodiments the channel bandwidth may be different than 13 nm. Rather, the channel bandwidth may be considered a configurable parameter that is dependent upon the structure of MUX/DEMUX114ofFIG.1, demultiplexer206ofFIG.2A, and/or multiplexer208ofFIG.2B. For example, in some embodiments each of the plurality of distinct wavelength channels may share a common bandwidth that may correspond to 13 nm or otherwise. Referring back toFIG.2C, the channel bandwidth212may be defined as the width of a passband region218(i.e., the region defined as being between PB1and PB2). The passband region218may represent an approximate power transmission of a demultiplexer or multiplexer. It is appreciated that in some embodiments the passband region218may include ripple as illustrated inFIG.2C, which corresponds to fluctuations within the passband region218. In one or more embodiments, the ripple within the passband region around a central value214may be +/−2 dB or less, +/−1 dB or less, +/−0.5 dB or less, or otherwise. In some embodiments, the channel bandwidth212may be defined by the passband region218. In other embodiments, the channel bandwidth212may be defined as the measured power above a threshold (e.g., dBth). For example, demultiplexer206illustrated inFIG.2Amay optically separate channel N from multi-channel optical signal110and have a corresponding channel bandwidth for channel N equivalent to the range of wavelengths above a threshold value that are transmitted to the output region204mapped to channel N (i.e., λN). In the same or other embodiments, isolation of the channel (i.e., defined by channel bandwidth212) may also be considered when optimizing the design. The isolation may be defined as a ratio between the passband region218and the stopband regions (e.g., regions less than SB1and greater than SB2). It is further appreciated that transition band regions (e.g., a first transition region between SB1and PB1and a second transition region between PB2and SB2) are exemplary and may be exaggerated for the purposes of illustration. In some embodiments, optimization of the design of the photonic demultiplexer may also include a target metric for a slope, width, or the like of the transition band regions. FIG.3A-FIG.3Dillustrate different views of an example photonic demultiplexer, in accordance with an embodiment of the present disclosure. Photonic demultiplexer316is one possible implementation of MUX/DEMUX114illustrated inFIG.1and demultiplexer206illustrated inFIG.2A. It is further appreciated that while discussion henceforth may be directed towards photonic integrated circuits capable of demultiplexing a plurality of distinct wavelength channels from a multi-channel optical signal, in other embodiments a demultiplexer (e.g., demultiplexer316) may also or alternatively be capable of multiplexing a plurality of distinct wavelength channels into a multi-channel optical signal, in accordance with embodiments of the present disclosure. FIG.3Aillustrates a cross-sectional view of demultiplexer316along a lateral plane within an active layer defined by a width320and a length322of the demultiplexer316.
As illustrated, demultiplexer316includes an input region302(e.g., comparable to input region202illustrated inFIG.2A), a plurality of output regions304(e.g., comparable to plurality of output regions204illustrated inFIG.2A), and a dispersive region optically disposed between the input region302and plurality of output regions304. The input region302and plurality of output regions304(e.g., output region308, output region310, output region312, and output region314) may each be waveguides (e.g., slab waveguide, strip waveguide, slot waveguide, or the like) capable of propagating light along the path of the waveguide. The dispersive region332includes a first material and a second material (see, e.g.,FIG.3D) inhomogeneously interspersed to form a plurality of interfaces that each correspond to a change in refractive index of the dispersive region332and collectively structure the dispersive region332to optically separate each of a plurality of distinct wavelength channels (e.g., Ch. 1, Ch. 2, Ch. 3, . . . Ch. N illustrated inFIG.2A) from a multi-channel optical signal (e.g., optical signal110illustrated inFIG.2A) and respectively guide each of the plurality of distinct wavelength channels to a corresponding one of the plurality of output regions304when the input region302receives the multi-channel optical signal. In other words, input region302is adapted to receive the multi-channel optical signal including a plurality of distinct wavelength channels and the plurality of output regions304are adapted to each receive a corresponding one of the plurality of distinct wavelength channels demultiplexed from the multi-channel optical signal via dispersive region332. As illustrated inFIG.3A, and more clearly shown inFIG.3DandFIG.4A-B, the shape and arrangement of the first and second material that are inhomogeneously interspersed create a plurality of interfaces that collectively form a material interface pattern along a cross-sectional area of dispersive region332that is at least partially surrounded by a periphery boundary region318that includes the second material. In some embodiments periphery region318has a substantially homogeneous composition that includes the second material. In the illustrated embodiment, dispersive region332includes a first side328and a second side330that each interface with an inner boundary (i.e., the unlabeled dashed line of periphery region318disposed between dispersive region332and dashed-dotted line corresponding to an outer boundary of periphery region318). First side328and second side330correspond to opposing sides of dispersive region332. Input region302is disposed proximate to first side328(e.g., one side of input region302abuts first side328of dispersive region332) while each of the plurality of output regions304are disposed proximate to second side330(e.g., one side of each of the plurality of output regions304abuts second side330of dispersive region332). In the illustrated embodiment, each of the plurality of output regions304is parallel to each other one of the plurality of output regions304. However, in other embodiments the plurality of output regions304may not be parallel to one another or even disposed on the same side (e.g., one or more of the plurality of output regions304and/or input region302may be disposed proximate to sides of dispersive region332that are adjacent to first side328and/or second side330).
In some embodiments adjacent ones of the plurality of output regions are separated from each other by a common separation distance when the plurality of output regions includes at least three output regions. For example, as illustrated, adjacent output region308and output region310are separated from one another by distance306, which may be common to the separation distance between other pairs of adjacent output regions. As illustrated in the embodiment ofFIG.3A, demultiplexer316includes four output regions304(e.g., output region308, output region310, output region312, output region314) that are each respectively mapped (i.e., by virtue of the structure of dispersive region332) to a respective one of four channels included in a plurality of distinct wavelength channels. More specifically, the plurality of interfaces of dispersive region332, defined by the inhomogeneous interspersion of a first material and a second material, form a material interface pattern along a cross-sectional area of the dispersive region332(e.g., as illustrated inFIG.3A,FIG.4A, orFIG.4B) to cause the dispersive region332to optically separate each of the four channels from the multi-channel optical signal and route each of the four channels to a respective one of the four output regions304when the input region302receives the multi-channel optical signal. It is noted that the first material and second material of dispersive region332are arranged and shaped within the dispersive region such that the material interface pattern is substantially proportional to a design obtainable with an inverse design process, which will be discussed in greater detail later in the present disclosure. More specifically, in some embodiments, the inverse design process may include iterative gradient-based optimization of a design based at least in part on a loss function that incorporates a performance loss (e.g., to enforce functionality) and a fabrication loss (e.g., to enforce fabricability and binarization of a first material and a second material) that is reduced or otherwise adjusted via iterative gradient-based optimization to generate the design. In the same or other embodiments, other optimization techniques may be used instead of, or jointly with, gradient-based optimization. Advantageously, this allows for optimization of a near unlimited number of design parameters to achieve functionality and performance within a predetermined area that may not have been possible with conventional design techniques. For example, in one embodiment dispersive region332is structured to optically separate each of the four channels from the multi-channel optical signal within a predetermined area of 35 μm×35 μm (e.g., as defined by width324and length326of dispersive region332) when the input region302receives the multi-channel optical signal. In the same or another embodiment, the dispersive region is structured to accommodate a common bandwidth for each of the four channels, each of the four channels having different center wavelengths. In one embodiment the common bandwidth is approximately 13 nm wide and the different center wavelengths are selected from a group consisting of 1271 nm, 1291 nm, 1311 nm, 1331 nm, 1511 nm, 1531 nm, 1551 nm, and 1571 nm. In some embodiments, the entire structure of demultiplexer316(e.g., including input region302, periphery region318, dispersive region332, and plurality of output regions304) fits within a predetermined area (e.g., as defined by width320and length322).
In one embodiment the predetermined area is 35 μm×35 μm. It is appreciated that in other embodiments dispersive region332and/or demultiplexer316fit within other areas greater than or less than 35 μm×35 μm, which may result in changes to the structure of dispersive region332(e.g., the arrangement and shape of the first and second material) and/or other components of demultiplexer316. In the same or other embodiments the dispersive region is structured to have a power transmission of −2 dB or greater from the input region302, through the dispersive region332, and to the corresponding one of the plurality of output regions304for a given wavelength within one of the plurality of distinct wavelength channels. For example, if channel 1 of a multi-channel optical signal is mapped to output region308, then when demultiplexer316receives the multi-channel optical signal at input region302the dispersive region332will optically separate channel 1 from the multi-channel optical signal and guide a portion of the multi-channel optical signal corresponding to channel 1 to output region308with a power transmission of −2 dB or greater. In the same or another embodiment, dispersive region332is structured such that an adverse power transmission (i.e., isolation) for the given wavelength from the input region to any of the plurality of output regions other than the corresponding one of the plurality of output regions is −30 dB or less, −22 dB or less, or otherwise. For example, if channel 1 of a multi-channel optical signal is mapped to output region308, then the adverse power transmission from input region302to any other one of the plurality of output regions (e.g., output region310, output region312, output region314) other than the corresponding one of the plurality of output regions (e.g., output region308) is −30 dB or less, −22 dB or less, or otherwise. In some embodiments, a maximum power reflection from demultiplexer316of an input signal (e.g., a multi-channel optical signal) received at an input region (e.g., input region302) and reflected back to the input region by dispersive region332or otherwise is −40 dB or less, −20 dB or less, −8 dB or less, or otherwise. It is appreciated that in other embodiments the power transmission, adverse power transmission, maximum power, or other performance characteristics may be different than the respective values discussed herein, but the structure of dispersive region332may change due to the intrinsic relationship between structure, functionality, and performance of demultiplexer316. FIG.3Billustrates a vertical schematic or stack of various layers that are included in the illustrated embodiment of demultiplexer316. However, it is appreciated that the illustrated embodiment is not exhaustive and that certain features or elements may be omitted to avoid obscuring certain aspects of the invention. In the illustrated embodiment, demultiplexer316includes substrate334, dielectric layer336, active layer338(e.g., as shown in the cross-sectional illustration ofFIG.3A), and a cladding layer340. In some embodiments, demultiplexer316may be, in part or otherwise, a photonic integrated circuit or silicon photonic device that is compatible with conventional fabrication techniques (e.g., lithographic techniques such as photolithographic, electron-beam lithography and the like, sputtering, thermal evaporation, physical and chemical vapor deposition, and the like).
In one embodiment a silicon on insulator (SOI) wafer may be initially provided that includes a support substrate (e.g., a silicon substrate) that corresponds to substrate334, a silicon dioxide dielectric layer that corresponds to dielectric layer336, a silicon layer (e.g., intrinsic, doped, or otherwise), and an oxide layer (e.g., intrinsic, grown, or otherwise). In one embodiment, the silicon in the active layer338may be etched selectively by lithographically creating a pattern on the SOI wafer that is transferred to the SOI wafer via a dry etch process (e.g., via a photoresist mask or other hard mask) to remove portions of the silicon. The silicon may be etched all the way down to dielectric layer336to form voids that may subsequently be backfilled with silicon dioxide and then encapsulated with silicon dioxide to form cladding layer340. In one embodiment, there may be several etch depths including a full etch depth of the silicon to obtain the targeted structure. In one embodiment, the silicon may be 206 nm thick and thus the full etch depth may be 206 nm. In some embodiments, this may be a two-step encapsulation process in which two silicon dioxide depositions are performed with an intermediate chemical mechanical planarization used to yield a planar surface. FIG.3Cillustrates a more detailed view of active layer338(relative toFIG.3B) taken along a portion of periphery region318that includes input region302ofFIG.3A. In the illustrated embodiment, active layer338includes a first material342with a refractive index of ε1and a second material344with a refractive index of ε2that is different from ε1. Homogeneous regions of the first material342and the second material344may form waveguides or portions of waveguides that correspond to input region302and plurality of output regions304as illustrated inFIG.3AandFIG.3C. FIG.3Dillustrates a more detailed view of active layer338(relative toFIG.3B) taken along dispersive region332. As described previously, active layer338includes a first material342(e.g., silicon) and a second material344(e.g., silicon dioxide) that are inhomogeneously interspersed to form a plurality of interfaces346that collectively form a material interface pattern. Each of the plurality of interfaces346that form the interface pattern correspond to a change in refractive index of dispersive region332to structure the dispersive region (i.e., the shape and arrangement of first material342and second material344) to provide, at least in part, the functionality of demultiplexer316(i.e., optical separation of the plurality of distinct wavelength channels from the multi-channel optical signal and respective guidance of each of the plurality of distinct wavelength channels to the corresponding one of the plurality of output regions304when the input region302receives the multi-channel optical signal). It is appreciated that in the illustrated embodiments of demultiplexer316as shown inFIG.3A-D, the change in refractive index is shown as being vertically consistent (i.e., the first material342and second material344form interfaces that are substantially vertical or perpendicular to a lateral plane or cross-section of demultiplexer316). However, in the same or other embodiments, the plurality of interfaces (e.g., interfaces346illustrated inFIG.3D) may not be substantially perpendicular with the lateral plane or cross-section of demultiplexer316.
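As a purely illustrative aside, a two-material composition of the kind just described is commonly represented in inverse design code as a per-voxel density interpolated between the two material permittivities. The sketch below shows one such representation; the permittivity values, grid size, and function names are assumptions chosen for illustration and are not taken from the present embodiments.

```python
# Illustrative sketch: representing a two-material dispersive region as a
# per-voxel density in [0, 1] interpolated between two permittivities.
# The material values and grid size below are assumptions, not the patent's.
import jax.numpy as jnp

EPS_FIRST = 12.25   # e.g., relative permittivity of a silicon-like first material
EPS_SECOND = 2.25   # e.g., relative permittivity of an oxide-like second material

def density_to_permittivity(density):
    # density == 1 maps to the first material, density == 0 to the second;
    # intermediate values occur during optimization prior to binarization.
    return EPS_SECOND + density * (EPS_FIRST - EPS_SECOND)

density = jnp.full((120, 120), 0.5)              # a toy voxel grid
permittivity = density_to_permittivity(density)  # per-voxel structural values
```

Under this representation, binarization of the design amounts to driving every density value toward 0 or 1, which is one role the fabrication loss discussed below can play.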
FIG.4Aillustrates a more detailed cross-sectional view of a dispersive region of example photonic demultiplexer400, in accordance with an embodiment of the present disclosure.FIG.4Billustrates a more detailed view of an interface pattern formed by the shape and arrangement of a first material410and a second material412for the dispersive region of the photonic demultiplexer400ofFIG.4A. Photonic demultiplexer400is one possible implementation of MUX/DEMUX114illustrated inFIG.1, demultiplexer206illustrated inFIG.2A, and demultiplexer316illustrated inFIG.3A-D. As illustrated inFIG.4AandFIG.4B, photonic demultiplexer400includes an input region402, a plurality of output regions404, and a dispersive region406optically disposed between input region402and plurality of output regions404. Dispersive region406is surrounded, at least in part, by a peripheral region408that includes an inner boundary414and an outer boundary416. It is appreciated that like named or labeled elements of photonic demultiplexer400may similarly correspond to like named or labeled elements of other demultiplexers described in embodiments of the present disclosure. The first material410(i.e., black colored regions within dispersive region406) and second material412(i.e., white colored regions within dispersive region406) of photonic demultiplexer400are inhomogeneously interspersed to create a plurality of interfaces that collectively form material interface pattern420as illustrated inFIG.4B. More specifically, an inverse design process that utilizes iterative gradient-based optimization, Markov Chain Monte Carlo optimization, or other optimization techniques combined with first-principles simulations may be used to generate a design that is substantially replicated by dispersive region406in a proportional or scaled manner such that photonic demultiplexer400provides the desired functionality. In the illustrated embodiment, dispersive region406is structured to optically separate each of a plurality of distinct wavelength channels from a multi-channel optical signal and respectively guide each of the plurality of distinct wavelength channels to a corresponding one of the plurality of output regions404when the input region402receives the multi-channel optical signal. More specifically, the plurality of output regions404-A, -B, -C, and -D are respectively mapped to wavelength channels having center wavelengths corresponding to 1271 nm, 1291 nm, 1311 nm, and 1331 nm. In another embodiment, output regions404-A,404-B,404-C, and404-D are respectively mapped to wavelength channels having center wavelengths that correspond to 1511 nm, 1531 nm, 1551 nm, and 1571 nm. As illustrated inFIG.4B, material interface pattern420, which is defined by the black lines within dispersive region406and corresponds to a change in refractive index within dispersive region406, includes a plurality of protrusions422. A first protrusion422-A is formed of the first material410and extends from peripheral region408into dispersive region406. Similarly, a second protrusion422-B is formed of the second material412and extends from peripheral region408into dispersive region406. Further illustrated inFIG.4B, dispersive region406includes a plurality of islands424formed of either the first material410or the second material412. The plurality of islands424include a first island424-A that is formed of the first material410and is surrounded by the second material412.
The plurality of islands424also includes a second island424-B that is formed of the second material412and is surrounded by the first material410. In some embodiments, material interface pattern420includes one or more dendritic shapes, wherein each of the one or more dendritic shapes is defined as a branched structure formed from first material410or second material412and having a width that alternates between increasing and decreasing in size along a corresponding direction. Referring back toFIG.4A, for clarity, dendritic structure418is labeled with a white arrow having a black border. As can be seen, the width of dendritic structure418alternately increases and decreases in size along a corresponding direction (i.e., the white labeled arrow overlaying a length of dendritic structure418) to create a branched structure. It is appreciated that in other embodiments there may be no protrusions, there may be no islands, there may be no dendritic structures, or there may be any number, including zero, of protrusions, islands of any material included in the dispersive region406, dendritic structures, or a combination thereof. In some embodiments, the inverse design process includes a fabrication loss that enforces a minimum feature size, for example, to ensure fabricability of the design. In the illustrated embodiment of photonic demultiplexer400illustrated inFIG.4AandFIG.4B, material interface pattern420is shaped to enforce a minimum feature size within dispersive region406such that the plurality of interfaces within the cross-sectional area formed with first material410and second material412do not have a radius of curvature with a magnitude of less than a threshold size. For example, if the minimum feature size is 150 nm, the radius of curvature of any of the plurality of interfaces does not have a magnitude of less than the threshold size of half the minimum feature size (i.e., 75 nm); equivalently, the magnitude of curvature does not exceed the inverse of half the minimum feature size (i.e., 1/75 nm−1). Enforcement of such a minimum feature size prevents the inverse design process from generating designs that are not fabricable by considering manufacturing constraints, limitations, and/or yield. In the same or other embodiments, different or additional checks on metrics related to fabricability may be utilized to enforce a minimum width or spacing as a minimum feature size. FIG.5is a functional block diagram illustrating a system500for generating a design of a photonic integrated circuit (i.e., photonic device), in accordance with an embodiment of the disclosure. System500may be utilized to perform an inverse design process that generates a design with iterative gradient-based optimization that takes into consideration the underlying physics that govern the operation of the photonic integrated circuit. More specifically, system500is a design tool that may be utilized to optimize structural parameters (e.g., shape and arrangement of a first material and a second material within the dispersive region of the embodiments described in the present disclosure) of photonic integrated circuits based on first-principles simulations (e.g., electromagnetic simulations to determine a field response of the photonic device to an excitation source) and iterative gradient-based optimization. In other words, system500may provide a design obtained via the inverse design process that is substantially replicated (i.e., proportionally scaled) by dispersive region332and dispersive region406of demultiplexer316and photonic demultiplexer400illustrated inFIG.3AandFIG.4A, respectively.
As illustrated, system500includes controller512, display502, input device(s)504, communication device(s)506, network508, remote resources510, bus534, and bus520. Controller512includes processor514, memory516, local storage518, and photonic device simulator522. Photonic device simulator522includes operational simulation engine526, fabrication loss calculation logic528, calculation logic524, adjoint simulation engine530, and optimization engine532. It is appreciated that in some embodiments, controller512may be a distributed system. Controller512is coupled to display502(e.g., a light emitting diode display, a liquid crystal display, and the like) coupled to bus534through bus520for displaying information to a user utilizing system500to optimize structural parameters of the photonic device (i.e., demultiplexer). Input device504is coupled to bus534through bus520for communicating information and command selections to processor514. Input device504may include a mouse, trackball, keyboard, stylus, or other computer peripheral, to facilitate an interaction between the user and controller512. In response, controller512may provide verification of the interaction through display502. Another device, which may optionally be coupled to controller512, is a communication device506for accessing remote resources510of a distributed system via network508. Communication device506may include any of a number of networking peripheral devices such as those used for coupling to an Ethernet, Internet, or wide area network, and the like. Communication device506may further include a mechanism that provides connectivity between controller512and the outside world. Note that any or all of the components of system500illustrated inFIG.5and associated hardware may be used in various embodiments of the present disclosure. The remote resources510may be part of a distributed system and include any number of processors, memory, and other resources for optimizing the structural parameters of the photonic device. Controller512orchestrates operation of system500for optimizing structural parameters of the photonic device. Processor514(e.g., one or more central processing units, graphics processing units, and/or tensor processing units, etc.), memory516(e.g., volatile memory such as DRAM and SRAM, non-volatile memory such as ROM, flash memory, and the like), local storage518(e.g., magnetic memory such as computer disk drives), and the photonic device simulator522are coupled to each other through bus520. Controller512includes software (e.g., instructions included in memory516coupled to processor514) and/or hardware logic (e.g., application specific integrated circuits, field-programmable gate arrays, and the like) that when executed by controller512causes controller512or system500to perform operations. The operations may be based on instructions stored within any one of, or a combination of, memory516, local storage518, photonic device simulator522, and remote resources510accessed through network508. In the illustrated embodiment, the components of photonic device simulator522are utilized to optimize structural parameters of the photonic device (e.g., MUX/DEMUX114ofFIG.1, demultiplexer206ofFIG.2A, multiplexer208ofFIG.2B, demultiplexer316ofFIG.3A-D, and photonic demultiplexer400ofFIG.4A-B).
In some embodiments, system500may optimize the structural parameters of the photonic device via, inter alia, simulations (e.g., operational and adjoint simulations) that utilize a finite-difference time-domain (FDTD) method to model the field response (e.g., electric and magnetic fields within the photonic device). The operational simulation engine526provides instructions for performing an electromagnetic simulation of the photonic device operating in response to an excitation source within a simulated environment. In particular, the operational simulation determines a field response of the simulated environment (and thus the photonic device, which is described by the simulated environment) in response to the excitation source for determining a performance metric of the physical device (e.g., based on an initial description or input design of the photonic device that describes the structural parameters of the photonic device within the simulated environment with a plurality of voxels). The structural parameters may correspond, for example, to the specific design, material compositions, dimensions, and the like of the physical device. Fabrication loss calculation logic528provides instructions for determining a fabrication loss, which is utilized to enforce a minimum feature size to ensure fabricability. In some embodiments, the fabrication loss is also used to enforce binarization of the design (i.e., such that the photonic device includes a first material and a second material that are interspersed to form a plurality of interfaces). Calculation logic524computes a loss metric determined via a loss function that incorporates a performance loss, based on the performance metric, and the fabrication loss. Adjoint simulation engine530is utilized in conjunction with the operational simulation engine526to perform an adjoint simulation of the photonic device to backpropagate the loss metric through the simulated environment via the loss function to determine how changes in the structural parameters of the photonic device influence the loss metric. Optimization engine532is utilized to update the structural parameters of the photonic device to reduce the loss metric and generate a revised description (i.e., revising the design) of the photonic device. FIGS.6A-6Crespectively illustrate an initial set up of a simulated environment describing a photonic device, performing an operational simulation of the photonic device in response to an excitation source within the simulated environment610, and performing an adjoint simulation of the photonic device within the simulated environment608. The initial set up of the simulated environment, 1-dimensional representation of the simulated environment, operational simulation of the physical device, and adjoint simulation of the physical device may be implemented with system500illustrated inFIG.5. As illustrated inFIG.6A-C, the simulated environment is represented in two dimensions. However, it is appreciated that other dimensionality (e.g., 3-dimensional space) may also be used to describe the simulated environment and the photonic device. In some embodiments, optimization of structural parameters of the photonic device illustrated inFIG.6A-Cmay be achieved via an inverse design process including, inter alia, simulations (e.g., operational simulations and adjoint simulations) that utilize a finite-difference time-domain (FDTD) method to model the field response (e.g., electric and magnetic field) to an excitation source.
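Before turning to FIG.6A, the composition of the loss metric computed by calculation logic524can be made concrete with a hedged sketch: a performance loss term plus a fabrication loss term. The particular penalty forms below (a squared error and a simple binarization penalty) are illustrative assumptions, not the specific losses used by system500.

```python
# Hedged sketch of a loss metric combining a performance loss and a
# fabrication loss; the specific penalty forms are illustrative assumptions.
import jax.numpy as jnp

def performance_loss(transmissions, targets):
    # e.g., squared error between simulated and target per-channel transmission.
    return jnp.sum((transmissions - targets) ** 2)

def fabrication_loss(density, weight=1.0):
    # A simple binarization penalty: zero only when each voxel is exactly 0 or 1.
    return weight * jnp.mean(density * (1.0 - density))

def loss_metric(transmissions, targets, density):
    return performance_loss(transmissions, targets) + fabrication_loss(density)
```

Because both terms are differentiable in this sketch, a single gradient of loss_metric with respect to the design simultaneously pushes toward the target performance and toward a fabricable, binarized structure, which mirrors the division of labor between calculation logic524, fabrication loss calculation logic528, and optimization engine532 described above.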
FIG.6Aillustrates a demonstrative simulated environment606describing a photonic integrated circuit (i.e., a photonic device such as a waveguide, demultiplexer, and the like), in accordance with an embodiment of the present disclosure. More specifically, in response to receiving an initial description of a photonic device defined by one or more structural parameters (e.g., an input design), a system (e.g., system500ofFIG.5) configures a simulated environment606to be representative of the photonic device. As illustrated, the simulated environment606(and subsequently the photonic device) is described by a plurality of voxels612, which represent individual elements (i.e., discretized) of the two-dimensional (or other dimensionality) space. Each of the voxels is illustrated as a two-dimensional square; however, it is appreciated that the voxels may be represented as cubes or other shapes in three-dimensional space. It is appreciated that the specific shape and dimensionality of the plurality of voxels612may be adjusted dependent on the simulated environment606and photonic device being simulated. It is further noted that only a portion of the plurality of voxels612are illustrated to avoid obscuring other aspects of the simulated environment606. Each of the plurality of voxels612may be associated with a structural value, a field value, and a source value. Collectively, the structural values of the simulated environment606describe the structural parameters of the photonic device. In one embodiment, the structural values may correspond to a relative permittivity, permeability, and/or refractive index that collectively describe structural (i.e., material) boundaries or interfaces of the photonic device (e.g., interface pattern420ofFIG.4B). For example, an interface616is representative of where relative permittivity changes within the simulated environment606and may define a boundary of the photonic device where a first material meets or otherwise interfaces with a second material. The field value describes the field (or loss) response that is calculated (e.g., via Maxwell's equations) in response to an excitation source described by the source value. The field response, for example, may correspond to a vector describing the electric and/or magnetic fields (e.g., in one or more orthogonal directions) at a particular time step for each of the plurality of voxels612. Thus, the field response may be based, at least in part, on the structural parameters of the photonic device and the excitation source. In the illustrated embodiment, the photonic device corresponds to an optical demultiplexer having a design region614(e.g., corresponding to dispersive region332ofFIG.3A, and/or dispersive region406ofFIG.4A), in which structural parameters of the physical device may be updated or otherwise revised. More specifically, through an inverse design process, iterative gradient-based optimization of a loss metric determined from a loss function is performed to generate a design of the photonic device that functionally causes a multi-channel optical signal to be demultiplexed and guided from input port602to a corresponding one of the output ports604. Thus, input port602(e.g., corresponding to input region302ofFIG.3A, input region402ofFIG.4A, and the like) of the photonic device corresponds to a location of an excitation source to provide an output (e.g., a Gaussian pulse, a wave, a waveguide mode response, and the like).
The output of the excitation source interacts with the photonic device based on the structural parameters (e.g., an electromagnetic wave corresponding to the excitation source may be perturbed, retransmitted, attenuated, refracted, reflected, diffracted, scattered, absorbed, dispersed, amplified, or otherwise as the wave propagates through the photonic device within simulated environment606). In other words, the excitation source may cause the field response of the photonic device to change, which is dependent on the underlying physics governing the physical domain and the structural parameters of the photonic device. The excitation source originates or is otherwise proximate to input port602and is positioned to propagate (or otherwise influence the field values of the plurality of voxels) through the design region614towards output ports604of the photonic device. In the illustrated embodiment, the input port602and output ports604are positioned outside of the design region614. In other words, in the illustrated embodiment, only a portion of the structural parameters of the photonic device is optimizable. However, in other embodiments, the entirety of the photonic device may be placed within the design region614such that the structural parameters may represent any portion or the entirety of the design of the photonic device. The electric and magnetic fields within the simulated environment606(and subsequently the photonic device) may change (e.g., represented by field values of the individual voxels that collectively correspond to the field response of the simulated environment) in response to the excitation source. The output ports604of the optical demultiplexer may be used for determining a performance metric of the photonic device in response to the excitation source (e.g., power transmission from input port602to a specific one of the output ports604). The initial description of the photonic device, including initial structural parameters, excitation source, performance parameters or metrics, and other parameters describing the photonic device, is received by the system (e.g., system500ofFIG.5) and used to configure the simulated environment606for performing a first-principles based simulation of the photonic device. These specific values and parameters may be defined directly by a user (e.g., of system500inFIG.5), indirectly (e.g., via controller512culling pre-determined values stored in memory516, local storage518, or remote resources510), or a combination thereof. FIG.6Billustrates an operational simulation of the photonic device in response to an excitation source within simulated environment610, in accordance with various aspects of the present disclosure. In the illustrated embodiment, the photonic device is an optical demultiplexer structured to optically separate each of a plurality of distinct wavelength channels included in a multi-channel optical signal received at input port602and respectively guide each of the plurality of distinct wavelength channels to a corresponding one of the plurality of output ports604. The excitation source may be selected (randomly or otherwise) from the plurality of distinct wavelength channels and originates at input port602having a specified spatial, phase, and/or temporal profile. The operational simulation occurs over a plurality of time steps, including the illustrated time step.
When performing the operational simulation, changes to the field response (e.g., the field value) for each of the plurality of voxels612are incrementally updated in response to the excitation source over the plurality of time steps. The changes in the field response at a particular time step are based, at least in part, on the structural parameters, the excitation source, and the field response of the simulated environment610at the immediately prior time step included in the plurality of time steps. Similarly, in some embodiments the source value of the plurality of voxels612is updated (e.g., based on the spatial profile and/or temporal profile describing the excitation source). It is appreciated that the operational simulation is incremental and that the field values (and source values) of the simulated environment610are updated incrementally at each time step as time moves forward for each of the plurality of time steps during the operational simulation. It is further noted that in some embodiments, the update is an iterative process and that the update of each field and source value is based, at least in part, on the previous update of each field and source value. Once the operational simulation reaches a steady state (e.g., changes to the field values in response to the excitation source substantially stabilize or reduce to negligible values) or otherwise concludes, one or more performance metrics may be determined. In one embodiment, the performance metric corresponds to the power transmission at a corresponding one of the output ports604mapped to the distinct wavelength channel being simulated by the excitation source. In other words, in some embodiments, the performance metric represents power (at one or more frequencies of interest) in the target mode shape at the specific locations of the output ports604. A loss value or metric of the input design (e.g., the initial design and/or any refined design in which the structural parameters have been updated) based, at least in part, on the performance metric may be determined via a loss function. The loss metric, in conjunction with an adjoint simulation, may be utilized to determine a structural gradient (e.g., influence of structural parameters on loss metric) for updating or otherwise revising the structural parameters to reduce the loss metric (i.e., increase the performance metric). It is noted that the loss metric is further based on a fabrication loss value that is utilized to enforce a minimum feature size of the photonic device to promote fabricability of the device. FIG.6Cillustrates an example adjoint simulation within simulated environment608by backpropagating a loss metric, in accordance with various aspects of the present disclosure. More specifically, the adjoint simulation is a time-backwards simulation in which a loss metric is treated as an excitation source that interacts with the photonic device and causes a loss response. In other words, an adjoint (or virtual source) based on the loss metric is placed at the output region (e.g., output ports604) or other location that corresponds to a location used when determining the performance metric. The adjoint source(s) is then treated as a physical stimuli or an excitation source during the adjoint simulation. A loss response of the simulated environment608is computed for each of the plurality of time steps (e.g., backwards in time) in response to the adjoint source.
The loss response collectively refers to loss values of the plurality of voxels that are incrementally updated in response to the adjoint source over the plurality of time steps. The change in loss response based on the loss metric may correspond to a loss gradient, which is indicative of how changes in the field response of the physical device influence the loss metric. The loss gradient and the field gradient may be combined in the appropriate way to determine a structural gradient of the photonic device/simulated environment (e.g., how changes in the structural parameters of the photonic device within the simulated environment influence the loss metric). Once the structural gradient of a particular cycle (e.g., operational and adjoint simulation) is known, the structural parameters may be updated to reduce the loss metric and generate a revised description or design of the photonic device. In some embodiments, iterative cycles of performing the operational simulation and adjoint simulation, determining the structural gradient, and updating the structural parameters to reduce the loss metric are performed successively as part of an inverse design process that utilizes iterative gradient-based optimization. An optimization scheme such as gradient descent may be utilized to determine specific amounts or degrees of changes to the structural parameters of the photonic device to incrementally reduce the loss metric. More specifically, after each cycle the structural parameters are updated (e.g., optimized) to reduce the loss metric. The operational simulation, adjoint simulation, and updating the structural parameters are iteratively repeated until the loss metric substantially converges or is otherwise below or within a threshold value or range such that the photonic device provides the desired performance while maintaining fabricability. FIG.7Ais a flow chart700illustrating example time steps for an operational simulation702and an adjoint simulation704, in accordance with various aspects of the present disclosure. Flow chart700is one possible implementation that a system (e.g., system500ofFIG.5) may use to perform the operational simulation702and adjoint simulation704of the simulated environment (e.g., simulated environment ofFIG.6A-C) describing a photonic integrated circuit (e.g., an optical device operating in an electromagnetic domain such as a photonic demultiplexer). In the illustrated embodiment, the operational simulation702utilizes a finite-difference time-domain (FDTD) method to model the field response (both electric and magnetic) or loss response at each of a plurality of voxels (e.g., plurality of voxels612illustrated inFIG.6A-C) for a plurality of time steps in response to physical stimuli corresponding to an excitation source and/or adjoint source. As illustrated inFIG.7A, the flow chart700includes update operations for a portion of operational simulation702and adjoint simulation704. The operational simulation702occurs over a plurality of time-steps (e.g., from an initial time step to a final time step over a pre-determined or conditional number of time steps having a specified time step size) and models changes (e.g., from the initial field values712) in electric and magnetic fields of a plurality of voxels describing the simulated environment and/or photonic device that collectively correspond to the field response.
More specifically, update operations (e.g., update operation714, update operation716, and update operation718) are iterative and based on the field response, structural parameters708(that is, for a selected one of the perturbed structural parameters706), and one or more excitation sources710. Each update operation is succeeded by another update operation, and the successive update operations represent successive steps forward in time within the plurality of time steps. For example, update operation716updates the field values740(see, e.g.,FIG.7B) based on the field response determined from the previous update operation714, excitation sources710, and the structural parameters708. Similarly, update operation718updates the field values742(see, e.g.,FIG.7B) based on the field response determined from update operation716. In other words, at each time step of the operational simulation the field values (and thus field response) are updated based on the previous field response and structural parameters of the photonic device. Once the final time step of the operational simulation702is performed, the loss metric724may be determined (e.g., based on a pre-determined performance loss function720). The loss gradients determined from block726may be treated as adjoint or virtual sources (e.g., physical stimuli or excitation source originating at an output region or port) which are backpropagated in reverse (from the final time step incrementally through the plurality of time steps until reaching the initial time step via update operation728, update operation732, and update operation730) to determine structural gradient734. In the illustrated embodiment, the FDTD solve (e.g., operational simulation702) and backward solve (e.g., adjoint simulation704) problem are described pictorially, from a high-level, using only “update” and “loss” operations as well as their corresponding gradient operations. The simulation is set up initially in which the structural parameters, physical stimuli (i.e., excitation source), and initial field states of the simulated environment (and photonic device) are provided (e.g., via an initial description and/or input design). As discussed previously, the field values are updated in response to the excitation source based on the structural parameters. More specifically, the update operation is given by $\phi$, where $x_{i+1} = \phi(x_i, b_i, z)$ for $i = 1, \ldots, n$. Here, $n$ corresponds to the total number of time steps (e.g., the plurality of time steps) for the operational simulation, where $x_i$ corresponds to the field response (the field value associated with the electric and magnetic fields of each of the plurality of voxels) of the simulated environment at time step $i$, $b_i$ corresponds to the excitation source(s) (the source value associated with the electric and magnetic fields for each of the plurality of voxels) of the simulated environment at time step $i$, and $z$ corresponds to the structural parameters describing the topology and/or material properties of the physical device (e.g., relative permittivity, index of refraction, and the like). It is noted that using the FDTD method, the update operation may specifically be stated as: $$\phi(x_i, b_i, z) = A(z)x_i + B(z)b_i. \quad (1)$$ That is to say the FDTD update is linear with respect to the field and source terms. Concretely, $A(z) \in \mathbb{R}^{N \times N}$ and $B(z) \in \mathbb{R}^{N \times N}$ are linear operators which depend on the structural parameters, $z$, and act on the fields, $x_i$, and the sources, $b_i$, respectively. Here, it is assumed that $x_i, b_i \in \mathbb{R}^N$, where $N$ is the number of FDTD field components in the operational simulation.
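For illustration, a toy realization of the linear update of equation (1) can be rolled over all time steps as follows; the diagonal operators standing in for $A(z)$ and $B(z)$ are assumptions chosen only so the example runs, not the actual FDTD update operators.

```python
# Toy realization of equation (1): x_{i+1} = A(z) x_i + B(z) b_i, rolled
# over all time steps. The diagonal A(z) and B(z) are illustrative stand-ins
# for the true FDTD update operators.
import jax
import jax.numpy as jnp

N = 8  # number of field components (toy size)

def run_simulation(z, sources, x0):
    A = jnp.eye(N) * (1.0 - z)   # assumed (toy) dependence of A on z
    B = jnp.eye(N) * z           # assumed (toy) dependence of B on z
    def step(x, b):
        x_next = A @ x + B @ b   # one linear update, equation (1)
        return x_next, x_next    # carry forward and record the field state
    _, fields = jax.lax.scan(step, x0, sources)
    return fields                # shape (n_steps, N): x_1, ..., x_n
```

Calling run_simulation(z, sources, x0) with sources of shape (n_steps, N) returns every intermediate field state, which is exactly the sequence of values the backward solve revisits, as discussed next.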
Additionally, the loss operation (e.g., loss function) may be given by L = f(x_1, . . . , x_n), which takes as input the computed fields and produces a single, real-valued scalar (e.g., the loss metric) that can be reduced and/or minimized. In terms of revising or otherwise optimizing the structural parameters of the physical device, the relevant quantity to produce is dL/dz, which is used to describe the influence of changes in the structural parameters of the initial design736on the loss value and is denoted as the structural gradient734illustrated inFIG.7A. FIG.7Bis a chart738illustrating the relationship between the update operation for the operational simulation and the adjoint simulation (e.g., backpropagation), in accordance with an embodiment of the present disclosure. More specifically,FIG.7Bsummarizes the operational and adjoint simulation relationships that are involved in computing the structural gradient, dL/dz, which include ∂L/∂x_i, ∂x_{i+1}/∂x_i, dL/dx_i, and ∂x_i/∂z. The update operation716of the operational simulation702updates the field values740, x_i, of the plurality of voxels at the ith time step to the next time step (i.e., the i+1th time step), which correspond to the field values742, x_{i+1}. The gradients744are utilized to determine dL/dx_i for the backpropagation (e.g., update operation732backwards in time), which combined with the gradients746are used, at least in part, to calculate the structural gradient, dL/dz. Here, ∂L/∂x_i is the contribution of each field to the loss metric, L. It is noted that this is the partial derivative, and therefore does not take into account the causal relationship of x_i → x_{i+1}. Thus, ∂x_{i+1}/∂x_i is utilized, which encompasses the x_i → x_{i+1} relationship. The loss gradient, dL/dx_i, may also be used to compute the structural gradient, dL/dz, and corresponds to the total derivative of the loss value, L, with respect to the field. The loss gradient, dL/dx_i, at a particular time step, i, is equal to the summation ∂L/∂x_i + (dL/dx_{i+1})(∂x_{i+1}/∂x_i). Finally, ∂x_i/∂z, which corresponds to the field gradient, is used, and is the contribution to dL/dz from each time/update step. In particular, the memory footprint to directly compute ∂L/∂x_i and dL/dz is so large that it is difficult to store more than a handful of state Tensors. The state Tensor corresponds to storing the values of all of the FDTD cells (e.g., the plurality of voxels) for a single simulation time step. It is appreciated that the term “tensor” may refer to tensors in a mathematical sense or as described by the TensorFlow framework developed by Alphabet, Inc. In some embodiments the term “tensor” refers to a mathematical tensor which corresponds to a multidimensional array that follows specific transformation laws. However, in most embodiments, the term “tensor” refers to TensorFlow tensors, in which a tensor is described as a generalization of vectors and matrices to potentially higher dimensions (e.g., n-dimensional arrays of base data types), and is not necessarily limited to specific transformation laws. For example, for the general loss function f, it may be necessary to store the fields, x_i, for all time steps, i. This is because, for most choices of f, the gradient will be a function of the arguments of f. This difficulty is compounded by the fact that the values of ∂L/∂x_i for larger values of i are needed before the values for smaller i due to the incremental updates of the field response and/or through backpropagation of the loss metric, which may prevent the use of schemes that attempt to store only the values ∂L/∂x_i at an immediate time step.
An additional difficulty is further illustrated when computing the structural gradient, dL/dz, which is given by:

dL/dz = Σ_i (dL/dx_i)(∂x_i/∂z).   (2)

For completeness, the full form of the first term in the sum, dL/dx_i, is expressed as:

dL/dx_i = ∂L/∂x_i + (dL/dx_{i+1})(∂x_{i+1}/∂x_i).   (3)

Based on the definition of φ as described by equation (1), it is noted that ∂x_{i+1}/∂x_i = A(z), which can be substituted in equation (3) to arrive at an adjoint update for backpropagation (e.g., the update operations such as update operation732), which can be expressed as:

dL/dx_i = ∂L/∂x_i + (dL/dx_{i+1})A(z),   (4)

or

∇_{x_i}L = A(z)ᵀ ∇_{x_{i+1}}L + (∂L/∂x_i)ᵀ.   (5)

The adjoint update is the backpropagation of the loss gradient (e.g., from the loss metric) from later to earlier time steps and may be referred to as a backwards solve for dL/dx_i. More specifically, the loss gradient may initially be based upon the backpropagation of a loss metric determined from the operational simulation with the loss function. The second term in the sum of the structural gradient, dL/dz, corresponds to the field gradient and is denoted as:

∂x_i/∂z = dφ(x_{i−1}, b_{i−1}, z)/dz = (dA(z)/dz)x_{i−1} + (dB(z)/dz)b_{i−1},   (6)

for the particular form of φ described by equation (1). Thus, each term of the sum associated with dL/dz depends on both dL/dx_i for i ≥ i₀ and x_i for i < i₀. Since the dependency chains of these two terms are in opposite directions, it is concluded that computing dL/dz in this way requires the storage of x_i values for all of i. In some embodiments, the need to store all field values may be mitigated by a reduced representation of the fields. FIG.8is a flowchart that illustrates a non-limiting example embodiment of a method800for generating a design of a physical device such as a photonic integrated circuit, in accordance with various aspects of the present disclosure. It is appreciated that method800is an inverse design process that may be accomplished by performing operations with a system (e.g., system500ofFIG.5) to perform iterative gradient-based optimization of a loss metric determined from a loss function that includes at least a performance loss and a fabrication loss. In the same or other embodiments, method800may be included as instructions provided by at least one machine-accessible storage medium (e.g., non-transitory memory) that, when executed by a machine, will cause the machine to perform operations for generating and/or improving the design of the photonic integrated circuit. It is further appreciated that the order in which some or all of the process blocks appear in method800should not be deemed limiting. Rather, one of ordinary skill in the art having the benefit of the present disclosure will understand that some of the process blocks may be executed in a variety of orders not illustrated, or even in parallel. From a start block, the method800proceeds to block802, where an initial design of a physical device such as a photonic integrated circuit is received. In some embodiments, the physical device may be expected to have a certain functionality (e.g., perform as an optical demultiplexer) after optimization, and the initial design provided to the method800may include desired performance characteristics for the output of the method800. In some embodiments, the initial design may describe structural parameters of the physical device within a simulated environment. The simulated environment may include a plurality of voxels that collectively describe the structural parameters of the physical device.
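Returning briefly to equations (2) through (6), they can be checked numerically on a toy problem before the method description continues. In the sketch below — an illustrative, assumption-laden example, not the disclosed implementation — the structural parameter z is a single scalar with A(z) = z·A0 and B(z) = B0 (so dA/dz = A0 and dB/dz = 0), and the loss is L = ½‖x_n‖², so dL/dx_n = x_n and the partial term in equation (4) vanishes for earlier steps. The adjoint-computed dL/dz is compared against a finite-difference estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 4, 10                          # field components, time steps
A0 = 0.1 * rng.standard_normal((N, N))
B0 = rng.standard_normal((N, N))
b = rng.standard_normal((n, N))       # excitation sources b_i
z = 0.7                               # scalar structural parameter (toy)

def forward(z):
    xs = [np.zeros(N)]                # initial fields
    for i in range(n):                # eq. (1): x_{i+1} = A(z) x_i + B(z) b_i
        xs.append(z * A0 @ xs[-1] + B0 @ b[i])
    return xs

xs = forward(z)
L = 0.5 * xs[-1] @ xs[-1]             # loss on the final fields only

# Backwards solve, eqs. (4)/(5), accumulating eq. (2) with eq. (6) terms.
gL = xs[-1].copy()                    # dL/dx_n = x_n
dLdz = 0.0
for i in range(n, 0, -1):
    dLdz += gL @ (A0 @ xs[i - 1])     # (dL/dx_i)(dA/dz)x_{i-1}; dB/dz = 0 here
    gL = (z * A0).T @ gL              # adjoint update: dL/dx_{i-1}

# Finite-difference check of the structural gradient.
eps = 1e-6
xp = forward(z + eps)[-1]
print(dLdz, (0.5 * xp @ xp - L) / eps)   # the two values should agree closely
```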
Each of the plurality of voxels is associated with a structural value to describe the structural parameters, a field value to describe the field response (e.g., the electric and magnetic fields in one or more orthogonal directions) to physical stimuli (e.g., one or more excitation sources), and a source value to describe the physical stimuli. In some embodiments the initial design may be a first description of the physical device in which values for the structural parameters may be random values or null values outside of input and output regions such that there is no bias for the initial (e.g., first) design. It is appreciated that the initial description or input design may be a relative term. Thus, in some embodiments an initial description may be a first description of the physical device described within the context of the simulated environment (e.g., a first input design for performing a first operational simulation). However, in other embodiments, the term initial description may refer to an initial description of a particular cycle (e.g., of performing an operational simulation702, operating an adjoint simulation704, and updating the structural parameters). In such an embodiment, the initial design or design of that particular cycle may correspond to a revised description or refined design (e.g., generated from a previous cycle). In some embodiments, the simulated environment includes a design region that includes a portion of the plurality of voxels which have structural parameters that may be updated, revised, or otherwise changed to optimize the structural parameters of the physical device. In the same or other embodiments, the structural parameters are associated with geometric boundaries and/or material compositions of the physical device based on the material properties (e.g., relative permittivity, index of refraction, etc.) of the simulated environment. In some embodiments, the design region may include one or more static design areas that include structural parameters that are “locked,” or otherwise are not updated by the method800. The determination and use of static design areas is described in further detail below. At block804, a simulated environment is configured to be representative of the initial design of the physical device (e.g., photonic device). Once the structural parameters have been received or otherwise obtained, the simulated environment is configured (e.g., the number of voxels, shape/arrangement of voxels, and specific values for the structural value, field value, and/or source value of the voxels are set based on the perturbed structural parameters). In some embodiments the simulated environment includes a design region optically coupled between a first communication region and a plurality of second communication regions. In some embodiments, the first communication region may correspond to an input region or port (e.g., where an excitation source originates), while the second communication regions may correspond to a plurality of output regions or ports (e.g., when designing an optical demultiplexer that optically separates a plurality of distinct wavelength channels included in a multi-channel optical signal received at the input port and respectively guides each of the distinct wavelength channels to a corresponding one of the plurality of output ports).
However, in other embodiments, the first communication region may correspond to an output region or port, while the plurality of second communication regions corresponds to a plurality of input ports or regions (e.g., when designing an optical multiplexer that optically combines a plurality of distinct wavelength signals received at respective ones of the plurality of input ports to form a multi-channel optical signal that is guided to the output port). Block806shows mapping each of a plurality of distinct wavelength channels to a respective one of the plurality of second communication regions. The distinct wavelength channels may be mapped to the second communication regions by virtue of the initial design of the physical device. For example, a loss function may be chosen that associates a performance metric of the physical device with power transmission from the input port to individual output ports for mapped channels. In one embodiment, a first channel included in the plurality of distinct wavelength channels is mapped to a first output port, meaning that the performance metric of the physical device for the first channel is tied to the first output port. Similarly, other output ports may be mapped to the same or different channels included in the plurality of distinct wavelength channels such that each of the distinct wavelength channels is mapped to a respective one of the plurality of output ports (i.e., second communication regions) within the simulated environment. In one embodiment, the plurality of second communication regions includes four regions and the plurality of distinct wavelength channels includes four channels that are each mapped to a corresponding one of the four regions. In other embodiments, there may be a different number of the second communication regions (e.g., 8 regions) and a different number of channels (e.g., 8 channels) that are each mapped to a respective one of the second communication regions. Block808illustrates performing an operational simulation of the physical device within the simulated environment operating in response to one or more excitation sources to determine a performance loss value. More specifically, in some embodiments an electromagnetic simulation is performed in which a field response of the photonic integrated circuit is updated incrementally over a plurality of time steps to determine how the field response of the physical device changes due to the excitation source. The field values of the plurality of voxels are updated in response to the excitation source and based, at least in part, on the structural parameters of the integrated photonic circuit. Additionally, each update operation at a particular time step may also be based, at least in part, on a previous (e.g., immediately prior) time step. Consequently, the operational simulation simulates an interaction between the photonic device (i.e., the photonic integrated circuit) and a physical stimuli (i.e., one or more excitation sources) to determine a simulated output of the photonic device (e.g., at one or more of the output ports or regions) in response to the physical stimuli. The interaction may correspond to any one of, or a combination of, a perturbation, retransmission, attenuation, dispersion, refraction, reflection, diffraction, absorption, scattering, amplification, or otherwise of the physical stimuli within the electromagnetic domain due, at least in part, to the structural parameters of the photonic device and underlying physics governing operation of the photonic device.
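Before the operational-simulation discussion continues below, here is a small illustrative sketch of the channel-to-port mapping and the transmission-based performance loss just described. The four channel labels, the target transmission of 0.9, and the squared-shortfall form of the loss are assumptions made for this sketch, not values from the disclosure.

```python
# Hypothetical mapping of four distinct wavelength channels (ch_a..ch_d) to
# the four output regions of the simulated demultiplexer.
channel_to_port = {
    "ch_a": "output_region_1", "ch_b": "output_region_2",
    "ch_c": "output_region_3", "ch_d": "output_region_4",
}

def performance_loss(simulated_transmission, target=0.9):
    # simulated_transmission: channel -> fraction of input power reaching the
    # output port mapped to that channel, as measured in the operational
    # simulation. The loss penalizes the squared shortfall against the target.
    return sum((target - simulated_transmission[ch]) ** 2
               for ch in channel_to_port)

loss = performance_loss({"ch_a": 0.85, "ch_b": 0.88, "ch_c": 0.90, "ch_d": 0.82})
```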
Thus, the operational simulation simulates how the field response of the simulated environment changes due to the excitation source over a plurality of time steps (e.g., from an initial to final time step with a pre-determined step size). In some embodiments, the simulated output may be utilized to determine one or more performance metrics of the physical device. For example, the excitation source may correspond to a selected one of a plurality of distinct wavelength channels that are each mapped to one of the plurality of output ports. The excitation source may originate at or be disposed proximate to the first communication region (i.e., input port) when performing the operational simulation. During the operational simulation, the field response at the output port mapped to the selected one of the plurality of distinct wavelength channels may then be utilized to determine a simulated power transmission of the photonic integrated circuit for the selected distinct wavelength channel. In other words, the operational simulation may be utilized to determine the performance metric that includes determining a simulated power transmission of the excitation source from the first communication region, through the design region, and to a respective one of the plurality of second communication regions mapped to the selected one of the plurality of distinct wavelength channels. In some embodiments, the excitation source may cover the spectrum of all of the plurality of output ports (e.g., the excitation source spans at least the targeted frequency ranges for the bandpass regions for each of the plurality of distinct wavelength channels as well as the corresponding transition band regions, and at least portions of the corresponding stopband regions) to determine a performance metric (i.e., simulated power transmission) associated with each of the distinct wavelength channels for the photonic integrated circuit. In some embodiments, one or more frequencies that span the passband of a given one of the plurality of distinct wavelength channels are selected randomly to optimize the design (e.g., batch gradient descent while having a full width of each passband including ripple in the passband that meets the target specifications). In the same or other embodiments, each of the plurality of distinct wavelength channels has a common bandwidth with different center wavelengths. The performance metric may then be used to generate a performance loss value for the set of structural parameters708. The performance loss value may correspond to a difference between the performance metric and a target performance metric of the physical device. Block810shows determining a loss metric based on the performance loss value and a fabrication loss associated with, for example, a minimum feature size. In some embodiments the loss metric is determined via a loss function that includes both the performance loss value and the fabrication loss as input values. In some embodiments, a minimum feature size for the design region of the simulated environment may be provided to promote fabricability of the design generated by the inverse design process. The fabrication loss is based, at least in part, on the minimum feature size and the perturbed structural parameters of the design region. More specifically, the fabrication loss enforces the minimum feature size for the design such that the design region does not have structural elements with a diameter less than the minimum feature size.
This helps the system provide designs that meet certain fabricability and/or yield requirements. In some embodiments the fabrication loss also helps enforce binarization of the design (i.e., rather than mixing the first and second materials together to form a third material, the design includes regions of the first material and the second material that are inhomogeneously interspersed). In some embodiments the fabrication loss is determined by generating a convolution kernel (e.g., circular, square, octagonal, or otherwise) with a width equal to the minimum feature size. The convolution kernel is then shifted through the design region of the simulated environment to determine voxel locations (i.e., individual voxels) within the design region that fit the convolution kernel within the design region without extending beyond the design region. The convolution kernel is then convolved at each of the voxel locations with the structural parameters associated with the voxel locations to determine first fabrication values. The structural parameters are then inverted and the convolution kernel is convolved again at each of the voxel locations with the inverted structural parameters to determine second fabrication values. The first and second fabrication values are subsequently combined to determine the fabrication loss for the design region. This process of determining the fabrication loss may promote structural elements of the design region having a radius of curvature with a magnitude of curvature less than a threshold size (i.e., the inverse of half the minimum feature size). Block812illustrates backpropagating the loss metric via the loss function through the simulated environment to determine an influence of changes in the structural parameters on the loss metric (i.e., structural gradient). The loss metric is treated as an adjoint or virtual source and is backpropagated incrementally from a final time step to earlier time steps in a backwards simulation to determine the structural gradient of the physical device. Block814shows revising a design of the physical device (e.g., generating a revised description) by updating the structural parameters of the initial design to adjust the loss metric. In some embodiments, adjusting for the loss metric may reduce the loss metric. However, in other embodiments, the loss metric may be adjusted or otherwise compensated in a manner that does not necessarily reduce the loss metric. In one embodiment, adjusting the loss metric may maintain fabricability while providing a general direction within the parameterization space to obtain designs that will ultimately result in increased performance while also maintaining device fabricability and targeted performance metrics. In some embodiments, the revised description is generated by utilizing an optimization scheme after a cycle of operational and adjoint simulations via a gradient descent algorithm, Markov Chain Monte Carlo algorithm, or other optimization techniques. Put in another way, iterative cycles of simulating the physical device, determining a loss metric, backpropagating the loss metric, and updating the structural parameters to adjust the loss metric may be successively performed until the loss metric substantially converges such that the difference between the performance metric and the target performance metric is within a threshold range while also accounting for fabricability and binarization due to the fabrication loss.
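As an aside, the kernel procedure described above can be sketched in a few lines. In this illustrative fragment, the way the first and second fabrication values are combined (a mean of pointwise minima) and the use of a "valid"-mode convolution to keep the kernel inside the design region are assumptions chosen to make the sketch concrete; the disclosure does not commit to this exact combination.

```python
import numpy as np
from scipy.signal import fftconvolve

def fabrication_loss(density, min_feature_px):
    # density: 2-D array in [0, 1]; 1 = first material, 0 = second material.
    r = min_feature_px / 2.0
    yy, xx = np.mgrid[-int(r):int(r) + 1, -int(r):int(r) + 1]
    kernel = ((xx ** 2 + yy ** 2) <= r ** 2).astype(float)
    kernel /= kernel.sum()               # circular kernel, width = min feature size
    # "valid" mode only evaluates locations where the kernel fits entirely
    # within the design region, per the shifting step described above.
    solid = fftconvolve(density, kernel, mode="valid")        # first fabrication values
    void = fftconvolve(1.0 - density, kernel, mode="valid")   # second fabrication values
    # Locations that are neither solidly material nor solidly empty are
    # penalized, enforcing the minimum feature size and pushing binarization.
    return float(np.mean(np.minimum(solid, void)))

print(fabrication_loss(np.round(np.random.default_rng(2).random((64, 64))), 8))
```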
In some embodiments, the term “converges” may simply indicate the difference is within the threshold range and/or below some threshold value. As discussed in further detail below, the updates to the structural parameters may leave the structural parameters within one or more static design areas unchanged. At decision block816, a determination is made regarding whether the loss metric substantially converges such that the difference between the performance metric and the target performance metric is within a threshold range. Iterative cycles of simulating the physical device with the excitation source selected from the plurality of distinct wavelength channels, backpropagating the loss metric, and revising the design by updating the structural parameters to reduce the loss metric are performed until the loss metric substantially converges such that the difference between the performance metric and the target performance metric is within the threshold range. In some embodiments, the structural parameters of the design region of the integrated photonic circuit are revised when performing the cycles to cause the design region of the photonic integrated circuit to optically separate each of the plurality of distinct wavelength channels from a multi-channel optical signal received via the first communication region and guide each of the plurality of distinct wavelength channels to the corresponding one of the plurality of second communication regions. If the determination is that the loss metric has not converged, then the result of decision block816is NO, and the method800returns to block806to iterate on the revised initial design. Otherwise, if the determination is that the loss metric has converged, then the result of decision block816is YES and the method800advances to block818. Block818illustrates outputting an optimized design of the physical device in which the structural parameters have been updated to have the difference between the performance metric and the target performance metric within a threshold range while also enforcing a minimum feature size and binarization. The method800then proceeds to an end block and terminates. The output optimized design may be provided to a fabrication system in order to fabricate the physical device. While the devices and techniques described above have been found to be effective, additional improvements can also be made. For example, other than fabricability constraints, the above techniques do not place any restrictions on the design of the structural parameters within the design region614. As such, segmented designs generated for physical devices using different initial designs may have little to no similarity within the design region614. This characteristic may have multiple drawbacks. For example, re-optimizing the entire design region614for every different initial design ignores the possibility of reusing optimizations for portions of the design region614that may work well for multiple initial designs, and thereby forgoes a reduction of the computing workload. As another example, if every physical device designed with the above techniques has a completely unique design region614, it becomes difficult to detect counterfeiting or other infringement of the design of the physical device. To address these drawbacks, some embodiments of the present disclosure use static design areas, in which the design of some portions of the design region614is predetermined and does not change during optimization. By using static design areas, multiple benefits can be obtained.
For example, in some embodiments, the content of the static design area is not optimized during the inverse design process for a new physical device, and so computation costs are reduced. As another example, in some embodiments, the content of the static design area may be compared to the content of a corresponding area of an accused counterfeit device, and the counterfeiting may be established based on this reduced comparison instead of a comparison of the entire design region614. FIG.9A-Eare schematic illustrations of non-limiting example embodiments of physical devices that use static design areas according to various aspects of the present disclosure.FIG.9A-9Einclude similar features to those illustrated inFIG.3A. That is, the figures show a demultiplexer916having a width920and a length922, that has a dispersive region932with a width924and a length926, surrounded by a periphery region918. The demultiplexer916accepts input at an input region902at a first side928, and the dispersive region932separates the input into a plurality of output regions904, including an output region908, an output region910, an output region912, and an output region914separated by distance906. As described above, the dispersive region932may be coextensive with the design region614illustrated inFIG.6A-C, and may be the portion of the demultiplexer916that is designed via an inverse design process. As discussed above with respect toFIG.3A, the layout of demultiplexer916with one input region902and four output regions904is a non-limiting example only. In some embodiments, more or fewer output regions904may be present, and more or fewer input regions902may be present. In some embodiments, the physical device may be a multiplexer, in which case the number of input regions may be greater than the number of output regions. InFIG.9A, a region near the input region902has been designated a first static design area934, and regions near the output region908, output region910, output region912, and output region914have been designated a second static design area936, a third static design area938, a fourth static design area940, and a fifth static design area942, respectively. In each of these static design areas, predetermined structural parameters may be provided for the dispersive region932within the areas, such that during the inverse design process that determines the structural parameters for the rest of the dispersive region932(such as the method800discussed above), the predetermined structural parameters within the static design areas will not change. The size of the static design areas may be determined using any suitable technique. For example (and as discussed in further detail below), the strength of the field values in the dispersive region932may be determined during the inverse design process, and one or more regions that have strong field values may be selected. Regions having strong field values are expected to have a strong effect on the overall performance of the demultiplexer916. Accordingly, in such embodiments, the static design areas may have a greater effect on the design of the remainder of the dispersive region932. This makes it more difficult to create a counterfeit version of the remainder of the dispersive region932without exactly copying the contents of the static design areas. If such techniques are used to determine the size of the static design areas, then differently sized static design areas may result depending on the particular designs used to determine the static design areas.
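As an implementation aside, "locking" the predetermined structural parameters can be as simple as masking the gradient step. The sketch below is illustrative only; the mask geometry and learning rate are placeholders, not values from the disclosure.

```python
import numpy as np

def update_with_static_areas(z, grad_z, static_mask, lr=1e-2):
    # z, grad_z: arrays over the dispersive region; static_mask: True where the
    # structural parameters are predetermined and must not change.
    return z - lr * np.where(static_mask, 0.0, grad_z)

# Usage: lock a band of voxels near the input side of the design region.
z = np.random.default_rng(0).random((32, 32))
mask = np.zeros_like(z, dtype=bool)
mask[:, :4] = True                        # e.g., a static design area near the input
z = update_with_static_areas(z, np.ones_like(z), mask)
assert np.allclose(z[:, :4], np.random.default_rng(0).random((32, 32))[:, :4])
```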
For example, the first static design area934, second static design area936, third static design area938, fourth static design area940, and fifth static design area942ofFIG.9Aare larger than the corresponding static design areas ofFIG.9B. This may be due to smaller areas within the dispersive region932ofFIG.9Bhaving strong enough field values to be included, or may be due to a higher threshold being used for the field values to be included in the static design areas. FIG.9AandFIG.9Billustrate static design areas that are approximately equal in size. However, this is merely a non-limiting example.FIG.9Cillustrates an example wherein the first static design area934is relatively large, and the second static design area936, third static design area938, fourth static design area940, and fifth static design area942are relatively small. Because the sizes may be determined based on the field values, the differences in sizes may reflect the differences in the field values throughout the dispersive region932. FIG.9A-Cillustrate example embodiments wherein the input region902and each of the output regions904has a single associated static design area. This may occur due to a design decision (e.g., areas with high field values near an input region or an output region may be selected), or may occur due to the natural design process.FIG.9Dillustrates another example embodiment, wherein instead of separate static design areas associated with each of the output regions904, a single sixth static design area944is positioned at the second side930of the dispersive region932and is associated with all of the output regions904. Again, this may occur due to a design decision, or may occur due to the natural design process. FIG.9A-Deach illustrate example embodiments wherein the static design areas are associated with the first side928or the second side930. In other embodiments, one or more static design areas may be located in other positions.FIG.9Eillustrates a seventh static design area946that is not positioned close to the input region902or any of the output regions904. Such an embodiment may occur if the field values for the central portion of the dispersive region932are particularly high. FIG.10is a flowchart that illustrates a non-limiting example embodiment of a method of generating a design for a physical device according to various aspects of the present disclosure. In the method1000, appropriate areas for one or more static design areas are determined, as well as contents for the static design areas. Those static design areas are then used within generated designs for a plurality of physical devices, such that all of the generated designs will share a common design within the static design areas. From a start block, the method1000proceeds to block1002, where device specifications for a plurality of physical devices are received. The device specifications may be similar to the initial design described in method800above, in that the device specifications may include desired performance characteristics for the physical devices, may include initial structural parameters for the physical devices, or may specify a desired outcome for the design of the physical devices in any other suitable way. In some embodiments, the device specifications may each describe physical devices with some similar characteristics, such as matching dimensions, a matching number of input ports and/or a matching number of output ports, and some dissimilar characteristics, such as desired wavelength gain properties. 
Using device specifications that have some similar characteristics may lead to more effective static design areas due to the similarities in the actions to be performed by the physical devices. In some embodiments, device specifications that do not share matching numbers of input ports or output ports may be used. The method1000then proceeds to a for-loop defined between a for-loop start block1004and a for-loop end block1008, wherein each device specification is processed. From the for-loop start block1004, the method1000advances to subroutine block1006, where an inverse design process is conducted to generate a segmented design corresponding to the device specification, the segmented design including a material (e.g., structural parameters) and a field magnitude for each segment in the segmented design. The method800discussed above is one non-limiting example of an inverse design process that may be used at subroutine block1006to generate the segmented design based on the device specification, though in some embodiments, other techniques may be used. The method1000then proceeds to for-loop end block1008. If further device specifications remain to be processed, then the method1000loops back to for-loop start block1004to process the next device specification. Otherwise, if all of the device specifications have been processed, then the method1000proceeds to block1010. At this point, the method1000has determined a plurality of segmented designs based on the plurality of device specifications. At block1010, at least one highly impactful design area is determined based on the field magnitudes of the segmented designs. One purpose of determining at least one highly impactful design area is to find portions of the segmented designs that, if changed, would greatly change the overall performance of the segmented designs. The method1000may consider all of the segmented designs in order to find areas that are in general found to be highly impactful across the segmented designs. In some embodiments, the method1000may analyze field magnitudes in corresponding regions of the segmented designs. For example, the method1000may determine average field magnitudes in corresponding segments of the segmented designs, and may find the highly impactful design areas by selecting regions wherein the average field magnitudes are greater than a predetermined threshold, or are otherwise high when compared to average field magnitudes of other regions. In some embodiments, the method1000may consider field magnitudes within the entire design region614. In some embodiments, the method1000may consider field magnitudes in predetermined regions, such as regions near an input port or an output port. The method1000may start by specifying an initial region for a highly impactful design area (such as a region near an input port or an output port), and may thereafter increase a size, decrease a size, or change a shape of the region based on the average field values. At block1012, the at least one highly impactful design area is designated as at least one static design area. The description herein describes a separate determination of a highly impactful design area and a designation of a static design area for the sake of clarity in describing the various actions. In some embodiments, the actions of block1012may be combined with the actions of block1010, and the method1000may directly designate the static design area without first separately determining the highly impactful design area. 
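The selection of highly impactful design areas in block1010might look like the following sketch; averaging across designs and thresholding at a percentile are assumptions chosen for illustration, not the disclosed selection rule.

```python
import numpy as np

def highly_impactful_area(field_mags, percentile=90):
    # field_mags: one 2-D array of per-segment field magnitudes per segmented
    # design. Segments whose across-design average magnitude is high are
    # flagged as candidates for a static design area.
    avg = np.mean(np.stack(field_mags), axis=0)
    return avg >= np.percentile(avg, percentile)   # boolean mask of segments

designs = [np.abs(np.random.default_rng(s).standard_normal((16, 16)))
           for s in range(4)]
static_candidate_mask = highly_impactful_area(designs)
```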
At this point in the method1000, even though a static design area has been designated, each segmented design still likely includes different segments within the static design area. Accordingly, at block1014, at least one design portion is determined for the at least one static design area based on the segmented designs. Each design portion specifies structural parameters for the segments within each static design area, respectively. The design portions may be determined using any suitable technique. Typically, the design portions may be determined by determining a segmented design having a desired performance characteristic, and using the portion of the determined segmented design as the design portion for the static design area. Any desired performance characteristic may be used, including but not limited to having a maximum overall performance with respect to a corresponding device specification and having maximum field magnitudes within the highly impactful design area. In some embodiments, if more than one static design area is present, segmented designs may be determined separately for each static design area. In some embodiments, if more than one static design area is present, a single segmented design may be determined, and design portions from the single segmented design may be used for each of the static design areas. The method1000then proceeds to a for-loop defined between a for-loop start block1016and a for-loop end block1020, where each of the device specifications is again processed. In some embodiments, the for-loop may skip processing the device specification associated with the segmented design that was determined for use as the design portion for the static design area, if the same segmented design was used for all of the static design areas. From the for-loop start block1016, the method1000proceeds to subroutine block1018, where an inverse design process is conducted to generate a segmented design corresponding to the device specification that includes the at least one design portion for the at least one static design area. In some embodiments, the method800described above is used at subroutine block1018. In the method800, the design portions for the at least one static design area are provided as part of the initial design, and at block814, the structural parameters for the segments within the at least one static design area are not updated. The method1000then proceeds to for-loop end block1020. If further device specifications remain to be processed, then the method1000loops back to for-loop start block1016to process the next device specification. Otherwise, if all of the device specifications have been processed, then the method1000proceeds to decision block1022. At decision block1022, a determination is made regarding whether the performance of the newly generated segmented designs is acceptable. In some embodiments, the performance of the newly generated segmented designs may be acceptable if all of the newly generated segmented designs were able to be successfully generated by subroutine block1018, and if all of the newly generated segmented designs have a calculated performance that is within a predetermined threshold value of a desired performance specified in its corresponding device specification. If it is determined that the performance of the newly generated segmented designs is not acceptable, then the result of decision block1022is NO, and the method1000returns to block1010. 
In some embodiments, after returning to block1010, a different at least one highly impactful design area may be chosen in order to increase the chances that performance of the generated segmented designs is acceptable. For example, the size of the highly impactful design area may be reduced, thus leaving a larger area to be optimized by the inverse design process. In some embodiments, the at least one highly impactful design area chosen at block1010may be the same, but the at least one design portion determined at block1014may be different. At decision block1022, if it is determined that the performance of the newly generated segmented designs is acceptable, then the result of decision block1022is YES, and the method1000advances to block1024, where the at least one design portion for the at least one static design area is stored for later use. In some embodiments, one or more of the generated segmented designs may also be stored for later use and/or transmitted to a fabrication system for fabrication. At block1026, a new device specification for a new physical device is received. The new device specification is similar in content to the device specifications received at block1002, but is new in the sense that it was not included in the device specifications received at block1002. At subroutine block1028, an inverse design process is conducted to generate a segmented design that uses the at least one static design area and corresponds to the new device specification. Again, the subroutine block1028may use the method800described above to generate the segmented design for the new device specification, with the design portions for the at least one static design area provided as part of the initial design, and without updating the structural parameters for the segments within the at least one static design area in block814. Once subroutine block1028is complete, the segmented design for the new physical device may be transmitted to a fabrication system for fabrication. The method1000then proceeds to an end block and terminates. In the preceding description, numerous specific details are set forth to provide a thorough understanding of various embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the techniques described herein can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring certain aspects. Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. The order in which some or all of the blocks appear in each method flowchart should not be deemed limiting. Rather, one of ordinary skill in the art having the benefit of the present disclosure will understand that actions associated with some of the blocks may be executed in a variety of orders not illustrated, or even in parallel. 
The processes explained above are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described. Additionally, the processes may be embodied within hardware, such as an application specific integrated circuit (“ASIC”) or otherwise. The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation. | 105,383 |
11861292 | DETAILED DESCRIPTION Techniques are disclosed for providing an ability to selectively apply data compression based on contents of a schema. Typically, an application implementing an interface for streaming and/or sending data (e.g., an interface associated with a kafka topic) requires a schema or a reference to the schema that defines the variable interface. Conventionally, a schema provides a definition of a format of data being written and/or sent via the interface. Often, depending on a complexity of data structures within a schema, the schema can be very large. Generally, a schema is published to a schema registry or sent together with a message (e.g., a record), which can add a significant amount of overhead to transmission of the message (e.g., a record). Typically, extra overhead associated with sending a schema causes a system to waste bandwidth, time, and processing power sending the schema. As described in various examples disclosed herein, to facilitate reducing an amount of system resources required for transmitting a schema, the systems and methods disclosed herein advantageously apply an adaptive compression technique tailored to contents of the schema. In various implementations, an adaptive compression technique may apply one or more text compression algorithms (e.g., a short text compression algorithm or a pure text compression algorithm) to decrease a dimension of the overall schema and reduce an amount of network traffic and time required to complete the operation. For example, when an application attempts to output or write data (e.g., writing data to a kafka topic), a schema may be required to interpret the data being output. In most instances, a schema describes structures and/or format of data being output. In various implementations, a schema may be published to a schema registry or may be sent together with a message (e.g., a record). Prior to publishing or sending a schema, an application may apply a string compression algorithm (e.g., shoco compression algorithm, Huffman text compression algorithm, SMAZ compression algorithm, and/or other compression algorithms), selected based on contents of the schema, to the schema. In many implementations, compression algorithms applied may be able to reduce a size of a schema by up to 50%. FIG.1depicts a high-level component diagram of an example computing system100in accordance with one or more aspects of the present disclosure. The computing system100may include a server180, broker170, registry168, one or more virtual machines (VM150A-B,150generally), and nodes (e.g., nodes110A-C,110generally). In various implementations, an application (e.g., application198A) may stream and/or communicate data directly with other applications (e.g., application198B). In these implementations, an application (e.g., application198A) may send a schema (e.g., schema166) with streamed and/or communicated data (e.g., data162). In certain implementations, an application (e.g., application198A) may stream and/or communicate data with other applications via a broker (e.g., broker170). In these instances, an application (e.g., application198A) may register a schema (e.g., schema166) at a registry (e.g., registry168). In various implementations, an application (e.g., application198B) may retrieve a schema (e.g., schema166) from a registry (e.g., registry168) to decode data (e.g., data162) streamed by another application (e.g., application198A).
In this implementation, an application (e.g., application198B) may subscribe to the data (e.g., data162) via a broker (e.g., broker170), where the broker may forward and/or stream the data to the application. In certain implementations, schemas (e.g., schema166) may be stored as compressed schemas (e.g., compressed schema172) to reduce an amount of bandwidth taken up when transmitting and/or retrieving a schema. Virtual machines150A-B may include a virtual machine memory (VM Memory), a virtual CPU (VCPU), virtual memory devices (VMD), and virtual input/output devices (VI/O). For example, virtual machine150A may include virtual machine memory195A, a virtual CPU190A, a virtual memory device193A, and a virtual input/output device194A. Similarly, virtual machine150B may include virtual machine memory195B, a virtual CPU190B, a virtual memory device193B, and virtual input/output device194B. In an example, applications198A-B may be different applications or services. In another example, applications198A-B may be different instances of the same application or service. In an example, a virtual machine150A may execute a guest operating system and run applications198A-B which may utilize the underlying VCPU190A, VMD193A, and VI/O device194A. One or more applications198A-B may be running on a virtual machine150A under the respective guest operating system. A virtual machine (e.g., VM150A-B, as illustrated inFIG.1) may run any type of dependent, independent, compatible, and/or incompatible applications on the underlying hardware and operating system (“OS”). In an example, applications (e.g., App198A-B) run on a virtual machine150A may be dependent on the underlying hardware and/or OS. In another example embodiment, applications198A-B run on a virtual machine150A may be independent of the underlying hardware and/or OS. For example, application198A run on a first virtual machine150A may be dependent on the underlying hardware and/or OS while application (e.g., application198B) run on a second virtual machine (e.g., VM150B) is independent of the underlying hardware and/or OS. Additionally, applications198A-B run on a virtual machine150A may be compatible with the underlying hardware and/or OS. In an example embodiment, applications198A-B run on a virtual machine150A may be incompatible with the underlying hardware and/or OS. For example, application198A run on one virtual machine150A may be compatible with the underlying hardware and/or OS while applications198B run on another virtual machine150B are incompatible with the underlying hardware and/or OS. In an example, virtual machines150A-B may instead be containers that execute applications or services, such as microservices. In an example, the containers may each run a process or service and the containers may be any execution environment. For example, the containers may be a virtual server. It should be appreciated that containers may be stand-alone execution environments, similar to that of a virtual machine. The applications198A-B or services (e.g., microservices) may run in a software container or a virtual machine (e.g., virtual machines150A-B). The computer system100may include one or more nodes110A-C. Each node110A-C may in turn include one or more physical processors (e.g., CPU120A-E) communicatively coupled to memory devices (e.g., MD130A-D) and input/output devices (e.g., I/O140A-C). Each node110A-C may be a computer, such as a physical machine, and may include a device, such as a hardware device.
In an example, a hardware device may include a network device (e.g., a network adapter or any other component that connects a computer to a computer network), a peripheral component interconnect (PCI) device, storage devices, disk drives, sound or video adaptors, photo/video cameras, printer devices, keyboards, displays, etc. Virtual machines150A-B may be provisioned on the same host or node (e.g., node110A) or different nodes. For example, VM150A and VM150B may both be provisioned on node110A. Alternatively, VM150A may be provisioned on node110A while VM150B is provisioned on node110B. As used herein, physical processor or processor120A-E refers to a device capable of executing instructions encoding arithmetic, logical, and/or I/O operations. In one illustrative example, a processor may follow the Von Neumann architectural model and may include an arithmetic logic unit (ALU), a control unit, and a plurality of registers. In a further aspect, a processor may be a single core processor which is typically capable of executing one instruction at a time (or processing a single pipeline of instructions), or a multi-core processor which may simultaneously execute multiple instructions. In another aspect, a processor may be implemented as a single integrated circuit, two or more integrated circuits, or may be a component of a multi-chip module (e.g., in which individual microprocessor dies are included in a single integrated circuit package and hence share a single socket). A processor may also be referred to as a central processing unit (CPU). As discussed herein, a memory device130A-D refers to a volatile or non-volatile memory device, such as RAM, ROM, EEPROM, or any other device capable of storing data. As discussed herein, I/O device140A-C refers to a device capable of providing an interface between one or more processor pins and an external device capable of inputting and/or outputting binary data. Processors (e.g., CPUs120A-E) may be interconnected using a variety of techniques, ranging from a point-to-point processor interconnect to a system area network, such as an Ethernet-based network. Local connections within each node, including the connections between a processor120A-E and a memory device130A-D, may be provided by one or more local buses of suitable architecture, for example, peripheral component interconnect (PCI). FIG.2illustrates a flowchart of an example method of selectively compressing a schema, in accordance with an embodiment of the present disclosure. Although the example method200is described with reference to the flowchart illustrated inFIG.2, it will be appreciated that many other methods of performing the acts associated with the method200may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, blocks may be repeated, and some of the blocks described are optional. The method200may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software, or a combination of both. As shown inFIG.2, an example method200may begin with receiving a request to compress a schema (block205). In various implementations, an application (e.g., application198A) may request that a compression module (e.g., a compression module164) compress a schema (e.g., schema166). For example, application198A may be streaming data162(e.g., a movie) directly to application198B.
In this instance, the application198A may request that compression module164compress schema166prior to sending the schema166with data162to application198B. Next, the example method200may include analyzing the schema to determine whether to apply a first type of compression or a second type of compression (block210). In this instance, analyzing the schema includes determining whether the schema exceeds a threshold level. For example, in one implementation, compression module164may analyze the schema166to determine whether to apply short text string compression or text compression. In this implementation, compression module164may parse the schema166into words and may determine whether an average length of the words is less than or equal to a threshold level. In other implementations, compression module164may determine a size of the schema166and may determine whether the size of the schema166is less than or equal to the threshold level. In yet another implementation, compression module164may parse the schema into words and may determine whether a percentage of words included in a curated dictionary is less than or equal to the threshold level. In various implementations, different types of compression may be used. Next, the example method200may include, upon determining that the schema exceeded the threshold level, generating a compressed schema by performing the second type of compression (block215). For example, in one implementation, upon determining that the schema166exceeded a threshold level, the compression module164may generate a compressed schema172by performing the second type of compression (e.g., text compression). In another implementation, upon determining that the schema166does not exceed a threshold level, the compression module164may generate a compressed schema172by performing the first type of compression (e.g., short text string compression). Next, the example method200may include responding to the request with the compressed schema (block220). For example, in one implementation, the compression module164responds to the request with the compressed schema172. FIG.3illustrates a flow diagram of an example method of streaming data to an application, in accordance with an embodiment of the present disclosure. Although the example method300is described with reference to the flow diagram illustrated inFIG.3, it will be appreciated that many other methods of performing the acts associated with the method300may be used. For example, the order of some of the blocks may be changed, certain blocks may be combined with other blocks, blocks may be repeated, and some of the blocks described are optional. For example, in the illustrated flow diagram, an application198A executing on virtual machine150A may use a compression module164to compress a schema used to stream data to an application198B on a virtual machine150B. As shown inFIG.3, application198A initializes and publishes an output message (block305). Similarly, upon execution, application198B subscribes to the output message (block310). For example, an application (e.g., application198B) may subscribe to a stream of financial information or a video stream. Next, application198A sets a threshold level (e.g., a threshold level may be set at 60%) for schemas (e.g., schema166) defining data (e.g., data162) streamed from application198A (block315). Application198A creates an output message including data162and schema166(block320).
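As an illustrative sketch of blocks 205 through 220, the fragment below chooses between the two compression types using the average-word-length test described above. The threshold of 6 characters is an arbitrary placeholder, and zlib stands in for both codecs because short-string libraries such as shoco or SMAZ are not assumed to be available; a real implementation would call such a short-text codec in the first branch.

```python
import zlib

def short_text_compress(schema: str) -> bytes:
    # Placeholder for a short-string codec (e.g., shoco or SMAZ); zlib is used
    # here only so that the sketch runs end to end.
    return zlib.compress(schema.encode(), 9)

def compress_schema(schema: str, threshold: float = 6.0) -> bytes:
    # Block 210: parse the schema into words and compare the average word
    # length against the threshold level.
    words = schema.split()
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    if avg_len <= threshold:
        return short_text_compress(schema)    # first type of compression
    return zlib.compress(schema.encode())     # second type of compression (block 215)

compressed = compress_schema('{"type": "record", "name": "payments"}')
```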
For example, in some instances, an application may subscribe directly to a producer of content (e.g., streaming video, financial data, daily news). In these instances, a producer of content may include a schema associated with their data with every message (e.g., a record) sent from the producer. Subsequently, compression module164inspects the output message to determine whether to compress the output message (block325). The compression module164analyzes the schema166to determine whether to perform short text string compression or text compression (block330) and then compresses the schema166to generate compressed schema172(block335). For example, a compression module may parse a schema and calculate an average length of words in the schema. In this instance, if the average length is less than or equal to a threshold length, then a short text string compression algorithm is used. Otherwise, a text compression algorithm may be used. In an alternate example, a compression module may parse a schema and have a threshold level equal to a maximum percentage of words from a curated dictionary that may be in the schema. In this example, if the percentage of words from the curated dictionary is greater than or equal to a threshold level, a short text string compression algorithm may be used, otherwise a text compression algorithm may be used. In some instances, a compression module may select a compression algorithm based on a size of a schema. If a schema exceeds a threshold level, a text compression algorithm may be used; otherwise, a short text string compression algorithm may be used. In most implementations, an ability to modify a compression algorithm may provide significant bandwidth savings when transmitting schemas to either another application or a registry. In certain instances, when an application streams directly to another application, bandwidth savings may be significant as a schema may be transmitted with each stream of data. Next, application198A sends the output message which includes data162and compressed schema172(block340). Application198B receives the output message including the data162and compressed schema172(block345) and decodes the compressed schema172to retrieve the original schema166(block350). In various implementations, a type of compression used within a schema may be stored in meta-data associated with the schema. In some implementations, a type of compression used within a schema may be included in a header of an output stream. Next, application198B processes data162using decoded schema166(block355). FIG.4is a block diagram of system400which includes memory410and processor405. The processor405is in communication with the memory410. The processor is configured to receive a request440to compress a schema415. The schema415is analyzed to determine whether to apply a first type of compression425or a second type of compression430, where analyzing the schema415includes determining whether the schema exceeds a threshold level420. Upon determining that the schema415exceeds the threshold level420, a compressed schema435is generated by performing the second type of compression. The compressed schema435is sent in response to the request440. It will be appreciated that all of the disclosed methods and procedures described herein can be implemented using one or more computer programs or components.
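As one such component, the block340-350 exchange can be sketched as below. This is an illustrative reading only: the dictionary message layout, the header field name, and the exclusive use of zlib are assumptions; the disclosure says only that the compression type may ride in schema metadata or in a stream header.

```python
import zlib

def build_output_message(data: bytes, schema_text: str) -> dict:
    """Producer side (block 340): bundle the streamed data with its
    compressed schema; the compression type travels in the header."""
    return {
        "header": {"schema_compression": "text"},  # or "short_text"
        "schema": zlib.compress(schema_text.encode("utf-8")),
        "data": data,
    }

def receive_output_message(message: dict) -> tuple[str, bytes]:
    """Subscriber side (blocks 345-355): decode the schema, then return
    the original schema and payload for processing."""
    kind = message["header"]["schema_compression"]
    # Only the zlib path is implemented in this sketch; a real
    # short-text-string codec would be dispatched on `kind` here.
    schema_text = zlib.decompress(message["schema"]).decode("utf-8")
    return schema_text, message["data"]
```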
These components may be provided as a series of computer instructions on any conventional computer readable medium or machine readable medium, including volatile or non-volatile memory, such as RAM, ROM, flash memory, magnetic or optical disks, optical memory, or other storage media. The instructions may be provided as software or firmware, and/or may be implemented in whole or in part in hardware components such as ASICs, FPGAs, DSPs or any other similar devices. The instructions may be configured to be executed by one or more processors, which, when executing the series of computer instructions, perform or facilitate the performance of all or part of the disclosed methods and procedures. It should be understood that various changes and modifications to the example embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims. | 18,295 |
11861293 | DETAILED DESCRIPTION FIG.1illustrates the principal components of an embodiment of a customer-merchant client-server based ordering system8, which includes a customer10(or consumer or client or shopper or buyer) and an e-commerce or Internet-based merchant (or supplier or retailer or seller or reseller or distributor)20. The customer10can be any entity or individual that wishes to purchase, rent, lease, borrow, or otherwise obtain, goods (or products) or services from the merchant20. The customer10uses a web browser12running on a computer14. The merchant20is an entity that sells items from a merchant website22which is implemented using one or more physical computer servers24. The customer computer14is connected to or communicates with the merchant server24through a communications network18, such as the Internet, as indicated by lines16, by sending and receiving of digital data over the communications network18. The customer10uses the web browser12as a user interface to view and/or communicate with the merchant website22that is displayed on the customer computer14allowing the customer10to interact with the merchant website22. In addition, one or more of the goods ordered by the customer10may be made by or obtained from one or more third party vendors (or manufacturers)26. Also, the merchant20may be the vendor26. The vendor26is an entity that manufactures goods or has access to goods that the merchant20desires to supply to the customer10and may sell the goods to the merchant20through a vendor website (or other type of order processor)28, which is implemented using one or more physical computer servers30. The vendor computer server30is connected to or communicates with the merchant server24and the customer computer14, through the communications network18, as indicated by lines16. If used by the merchant20, the vendor26may deliver the desired goods to either the merchant20or to the customer10, as indicated by the lines32,34, respectively. If the goods are delivered to the merchant20from the vendor26, the merchant20delivers the goods to the customer, as indicated by a line32. There may be more than one vendor26that supplies goods to the merchant20and/or the customer10. The computers, servers, and the like described herein have the necessary electronics, software, memory, storage, databases, firmware, logic/state machines, microprocessors, communication links, displays or other visual or audio user interfaces, printing devices, and any other input/output interfaces to perform the functions described herein and/or achieve the results described herein. Except where otherwise explicitly or implicitly indicated herein, the term “merchant” or “vendor” refers to the associated computer systems operated or controlled by a merchant or vendor, respectively. Thus, process steps described as being performed by the “merchant” or the “vendor”, may be automated steps performed by their respective computer systems. These steps are implemented within software modules (or computer programs) executed by one or more general purpose computers. For example, the web browser (or user interface)12may be implemented on the computer14using one or more software applications. Specially designed hardware could alternatively be used to perform certain operations. Process steps described as being performed by a “customer” are typically performed by a human operator via the computer14, but could, alternatively, be performed by an automated agent. 
The customer10may use any web-enabled or Internet applications, such as the web browser12, or any other web-enabled applications or features including email, or other messaging techniques to communicate with (or connect to) the merchant web site22and/or server24through the communications network18. In addition, the computer14may be any of a number of computing devices that are capable of communicating over the network, including but not limited to set-top boxes, personal digital assistants, mobile phones, digital media players, Web pads, tablets, laptop computers, desktop computers, electronic book readers, and the like. The protocols and components for providing communication between the customer computer14and the merchant website22and/or server24are well known to those skilled in the art of computer communications and thus, need not be described in more detail herein. The data and/or computer executable instructions, programs, firmware, software and the like (also referred to herein as “computer executable components”) described herein may be stored on a computer-readable medium that is within or accessible by the customer computer14and/or the server24, having sequences of instructions which, when executed by a processor (or CPU), cause the processor to perform all or a portion of the functions and/or methods described herein. Such computer executable instructions, programs, software and the like may be loaded into the memory of the customer computer14or the server24, using a drive mechanism associated with the computer readable medium, such as a floppy drive, CD-ROM drive, DVD-ROM drive, network interface, or the like. In order to make purchases, the customer10browses through information concerning goods (or products) or services available for purchase from the merchant20. After selecting one or more product(s) or service(s) (collectively, “items”) that the customer10wishes to purchase, an order is sent to the merchant20. The order is placed via a communication from the web browser12to the web site22operating on the server24of the merchant20, which includes payment by the customer10to the merchant20for the items. The merchant20then delivers the items to the customer10as indicated by the line36. Alternatively, the vendor26may deliver the items directly to the customer10as indicated by the line34. Referring toFIG.2, a sample merchant website screen90(or graphical user interface (GUI) or web page or browser screen) for use by the customer10in selecting items for purchase from the merchant20via the web site22, includes a search results list (or group, or set, or collection)100of one or more items102-132. As used herein, the term “list” includes, but is not limited to any list, group, set, or collection of two or more items. The items list100may be displayed on one or more separate screens or web pages. The number of screens and the number of items on each screen depends on the format and content of images displayed in the items list100. Each of the items102-132has corresponding unique images140-170associated therewith. In addition, there may be a brief description123of each item on the list100, such as the item name, manufacturer, availability (e.g., in stock, out of stock, discontinued, etc.), price, shipping cost, and the like. The sample screen90shows search results for women's shoes, and, in particular, women's sandals having the Bandolino brand, and shows a product filter window or section80similar to that described in U.S. patent application Ser. No. 
11/694,675, entitled “Method and System for Selecting and Displaying Items,” having the same filing date as U.S. patent application Ser. No. 11/694,726, which is incorporated herein by reference in its entirety. However, the scope of the present disclosure is not limited in this regard as the system and method of the present disclosure can be utilized in connection with any items list or any other grouping or list having any types of items, such as a list of items in a shopping cart, on a wish list, or any other type of items list in any format, including text and/or images, or any other format. Further the present disclosure may be used with an individual item not in a list. Referring toFIG.3, when the customer10selects (or clicks on) one of the items102-132in the list100(FIG.2), e.g., the first item102, a Bandolino Venema Sandal, a zoom detail window or screen200appears. The zoom detail screen200includes a large image view window or screen section232, a plurality of small thumbnail image views (or image view icons or view icons)300, and an item (or product) detail window240. In addition, an optional slide show viewer (or slider) window or screen section202may also be displayed which allows the customer10to select other items from the list100without having to leave the web page, such as is described in U.S. patent application Ser. No. 11/694,597, entitled “Method and System for Selecting and Displaying Items,” which has the same filing date as U.S. patent application Ser. No. 11/694,726, and is incorporated herein by reference in its entirety (discussed more hereinafter). In the large image view section232on the screen200is a large image230of the selected item102. Below the large image view section232are the view icons302-314(collectively, numeral300), each indicative of a different view of the item102. For example, the view icon302is a right side elevated perspective view, shown as the large image230in the large image section232. Also, the view icon304is a back view of the item102, the view icon306is a bottom view of the item102, the view icon308is a front view of the item102, the view icon310is a right side view of the item102, the view icon312is a left side view of the item102, and the view icon314is a top view of the item102. When the customer10mouses over (or selects) one of the view icons302-314, the associated image is displayed as a large image in the large image view section232. Thus, the customer10may sequentially view a plurality of different views of the item102prior to purchase on the same browser screen using the “mouse-over” feature (thus, no new browser screen is generated for each different view displayed). Although there are seven view icons302-314shown in the embodiment ofFIG.3, any number of view icons300may be used and the view icons300may be displayed in any order. The mouse-over selection of the view icons300allows the customer10to easily change views by moving the mouse across the icons300. Instead of or in addition to mousing over the view icons300to select the large image view in the section232, the customer10may select a view by clicking on the corresponding icon300. Referring toFIG.4, illustrations (a)-(g), more specifically, each of the view icons302-314are shown having the large images230,404-414, respectively, as the large image in the large image section232. 
In particular,FIG.4, illustration (a) shows the image230as the large image of a right side perspective view of the item102corresponding to the view icon302; illustration (b) shows a large image404of a back view of the item102corresponding to the view icon304; illustration (c) shows a large image406of a bottom view of the item102corresponding to the view icon306; illustration (d) shows a large image408of a front view of the item102corresponding to the view icon308; illustration (e) shows an image410of a right side view of the item102corresponding to the view icon310; illustration (f) shows an image412of a left side view of the item102corresponding to the view icon312; and illustration (g) shows an image414of a top view of the item102corresponding to the view icon314. Referring again toFIG.3, to the right of the large image view screen section232is the item (or product) detail window or screen section240having a header or title242with the name of the selected item102, in this example, "Bandolino Women's Venema Sandal". The section240contains predetermined detail information about the selected item102. In particular, there may be a color selector244, a size selector250, and a width selector260. The color selector244, the size selector250and the width selector260may provide an interactive attribute selection and availability feature or tool for displaying available colors, sizes and widths, for the selected item102, as is described in detail in U.S. patent application Ser. No. 11/694,597, entitled "Method and System for Selecting and Displaying Items," which has the same filing date as U.S. patent application Ser. No. 11/694,726, and is incorporated herein by reference in its entirety. When the customer10selects an attribute in one of the selectors244,250,260, the other selectors are all automatically updated to indicate the availability of their respective attributes based on the selected attribute, as is discussed in the aforementioned patent application. Further, selectors244,250,260and the availability indicator262may be located within a zoom window or screen section350(discussed more hereinafter) within the item details section240. Also, the item details section240may have an availability field262indicative of the availability of the selected item102based on attributes selected in the selectors244,250,260. In addition, the section240may have a price field264, displaying the current price or sale price of the item; an "add to cart" button266, that allows the customer10to add the item102to the shopping cart; and/or a "save for later" button268that allows the customer10to save the information on the current screen for later use. Also, the section240may have a section270labeled "The details.", which provides a narrative description of the item and some item features, attributes, characteristics, and suggestions of the selected item102that may be of interest to the customer10. Other item details, attributes, features, characteristics, marketing information, and/or specifications may be included in the item details screen240. When the selected item102can be purchased in different colors, the item detail window240may provide an interactive item color viewing feature that allows the customer10to view a large view of the selected item in the selected color. In particular, if the shoe102is available in a plurality of colors, the color selector244may have color item thumbnail images (or color icons or color selectors)246, one for each of the available colors for the item102.
When the customer10selects (mouses over or clicks on) one of the color icons246, a color description248appears, e.g., dark brown, black, yellow, dark red leather, black fabric, etc., describing the color and/or the material or "feel" of the item. For example, if the customer10mouses over the color icon243, e.g., indicative of the color dark brown, the color field248shows "dark brown" as the color, and the image230in the large image view section232becomes an image of the item102in the color dark brown. When the customer10mouses over the next color icon245, indicative of the color black, the color field248shows "black" as the color, and the image230in the large image view section232becomes an image of the item102in the color black. A similar process is performed for any of the color icons246in the color selector244. Also, if the customer10clicks on one of the color icons246, e.g., the icon243, a dark box247appears around the corresponding color icon246, the large image230"locks" (or becomes fixed) with that color image when the mouse is moved away from the color icons, and the view icons300update to provide view icons300for the selected color. The customer10can then mouse over the icons300and view various different views of the large image230in the section232of the selected item102in the selected color. Thus, the customer10can easily view various views of large images of the shoe in a selected color in the section232to assist in purchasing the proper color shoe. This color viewing feature may apply to any item sold by the merchant and desired to be used by the customer. It should be understood that for any of the embodiments herein, when an image (or icon) in the zoom detail screen200is selected, e.g., the view icons300, the item detail screen240icons, or anywhere else in the zoom detail screen200, the screen may be updated to display the associated image with a box, an outline, a color, shading, shadow, or some type of highlighting, mark, or indicia, so that the selected status thereof is distinguishable from the icons that have not been selected. Also, the view icons300and the color icons246may be small thumbnail images of the selected item or may be other graphical or text icons, buttons, or selectors indicative of the function to be performed. The interactive interface for the color, size and width attributes described hereinbefore is similar to that described in U.S. patent application Ser. No. 11/617,998, filed Dec. 29, 2006, entitled "Methods and Systems for Selecting Item Variations for Display In User Interfaces," which is incorporated herein by reference in its entirety. Referring toFIG.5, if the customer10moves the mouse cursor over (or mouses over) the large image230in the large image view section232, a zoom box352appears over a portion of the image230around where the mouse cursor is located. At the same time, the magnified image screen or window350in the item details section240displays a magnified (or enlarged) image354of the image within the zoom box352. In the example ofFIG.5, the zoom box352is over the buckle and a portion of the upper strap of the shoe image230. The magnification (or enlargement) from the image231in the zoom box352to the magnified image354in the magnified image screen or window350is determined as discussed hereinafter withFIG.9. However, any magnification can be used that displays the magnified image354in the magnified image screen350larger than the image231in the zoom box352.
Referring toFIG.9, illustrations (a)-(c), the magnified image354(FIG.5) within the magnified image window (or zoom image screen)350may be formed by displaying a portion912(FIG.9) of a second high resolution image906in the large image view window232. In particular, referring toFIG.9, illustration (a), the image view window232may have a low resolution image902. The dimensions of the view window232are X1 by Y1, e.g., 300 pixels by 300 pixels. Also, there is a high resolution image906that has an image frame910having dimensions X2 by Y2, e.g., 600 pixels by 600 pixels. In such a case, when the customer10mouses over the image902in the window232, a mouse cursor location is used as a reference point for the center904of the zoom box352. For example, in some embodiments, the mouse cursor location defines the center point904of the zoom box352; however, the mouse cursor location may define any other reference point for the zoom box and, thus, may be located at a corner of the zoom box352or even outside of the zoom box352, among many other possibilities. The location of the center point904of the zoom box352is mapped onto a corresponding point908on the high resolution image906. This mapping may be done by knowing the x,y location of the mouse cursor with respect to the center point904within the window232, the size of the window232, e.g., 300x300 pixels, and the size of the high resolution image frame910, e.g., 600x600 pixels. The corresponding location908of the center point904in the high resolution image906can then be determined, e.g., by calculating the percentage along the x and y dimensions that the center point904is located within the view window232and applying these percentages to the corresponding x,y dimensions of the frame910to locate the corresponding point908within the frame910for the image906. Other techniques may be used to determine the location of the point908. The points904,908may be called "anchor points" or "reference points" as they are the points from which the boxes352,350are derived. Once the location of the anchor point908on the high resolution image is determined, a portion912of the image906for the magnified image window350is identified based on the dimensions X3,Y3 of the window350, e.g., 300x300 pixels. Thus, in that case, the window350would be the portion912of the image906that is defined by a box which is 150 pixels up, down, left, and right of the anchor point908for the image data. The aspect ratio of the dimensions of the high resolution image frame910to the dimensions of the magnified image window350, when applied to the dimensions of the image view window232, determines the dimensions Xz,Yz of the zoom box352. For example, in that case, the horizontal (X) aspect ratio may be calculated as X3/X2=300/600=0.5, and the vertical (Y) aspect ratio may be calculated as Y3/Y2=300/600=0.5. Applying this aspect ratio to the dimensions X1,Y1 of the image view window232provides the zoom box352dimensions Xz,Yz of: Xz=X1*0.5=300*0.5=150 pixels; and Yz=Y1*0.5=300*0.5=150 pixels, centered around the point904. Thus, the zoom box352dimensions Xz,Yz are such that the portion914of the image902within the zoom box352is indicative of the portion912of the high resolution image906in the magnified image window350.
Referring toFIG.9, illustration (b), if the dimensions of the high resolution image frame910are only slightly larger than the magnified image window350, e.g., X2=325, Y2=325 and X1=300, Y1=300 pixels, the size of the zoom box352is calculated to be a relatively large portion of the image view window232. For example, in such a case, the horizontal (X) aspect ratio is X3/X2=300/325=0.923 and the vertical (Y) aspect ratio is Y3/Y2=300/325=0.923. Applying this aspect ratio to the dimensions X1,Y1 of the image view window232provides the zoom box352dimensions Xz,Yz of: Xz=X1*0.923=300*0.923=277 pixels (rounded to the nearest pixel); and Yz=Y1*0.923=300*0.923=277 pixels (rounded to the nearest pixel). Thus, the zoom box352dimensions Xz,Yz are again set such that the portion914of the image902within the zoom box352is indicative of the portion912of the high resolution image906in the magnified image window350. Thus, because the aspect ratios are close to 1, the zoom box is a large portion of the view window232. Referring toFIG.9, illustration (c), if the horizontal and vertical dimensions X2,Y2 of the high resolution image frame910are not the same value, e.g., X2=1200, Y2=600 pixels (a rectangle instead of a square), the dimensions Xz,Yz of the zoom box352will adjust accordingly. For example, in such a case, the horizontal (X) aspect ratio is X3/X2=300/1200=0.25 and the vertical (Y) aspect ratio is Y3/Y2=300/600=0.5. Applying this aspect ratio to the dimensions X1,Y1 of the image view window232provides the zoom box352dimensions Xz,Yz of: Xz=X1*0.25=300*0.25=75 pixels; and Yz=Y1*0.5=300*0.5=150 pixels. Thus, the zoom box352dimensions Xz,Yz are again set such that the portion914of the image902within the zoom box352is indicative of the portion912of the high resolution image906in the magnified image window350. In this example, the high resolution image906is a rectangular shape because, for the image906, that shape maximizes the amount of the image in the frame910. Thus, the zoom box in this example is a corresponding rectangular shape based on the aspect ratios. Therefore, the dimensions Xz,Yz of the zoom box352may be determined using the following equations: Xz=X1(X3/X2) (Eq. 1) and Yz=Y1(Y3/Y2) (Eq. 2), where X1,Y1 are the dimensions of the large image window232, X2,Y2 are the dimensions of the high resolution image frame910(or the outer dimensions of the high resolution image906), and X3,Y3 are the dimensions of the magnified image window350. Other equations may be used provided the zoom box size is set based on the aspect ratio of the high resolution image906to the magnified image window350. It should be understood that the high resolution image may be a cropped image, e.g., the frame910around the high resolution image906may be as close as possible to the outer edges of the image906in both the X and Y dimensions to minimize the amount of blank space916in the high resolution image906. This minimizes the magnification of unnecessary aspects of the image and maximizes the image resolution for a given set of pixel dimensions. In addition, this zoom technique automatically adjusts for different aspect ratios between the high resolution image frame910and the magnified image window350. Further, the anchor points904,908from which the boxes352,350are derived, respectively, need not be in the center of the boxes352,350, but may be located anywhere in the window frames232,910, provided the boxes352,350can be formed on their respective images902,906therefrom.
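Equations 1 and 2, together with the percentage-based anchor point mapping described above, translate directly into a few lines of arithmetic. The Python sketch below is illustrative only; the function names are invented, and rounding to the nearest pixel follows the worked examples.

```python
def zoom_box_size(x1: int, y1: int, x2: int, y2: int,
                  x3: int, y3: int) -> tuple[int, int]:
    """Eq. 1 and Eq. 2: Xz = X1*(X3/X2), Yz = Y1*(Y3/Y2).
    (x1, y1): image view window 232; (x2, y2): high resolution image
    frame 910; (x3, y3): magnified image window 350."""
    return round(x1 * x3 / x2), round(y1 * y3 / y2)

def map_anchor(cx: int, cy: int, x1: int, y1: int,
               x2: int, y2: int) -> tuple[int, int]:
    """Map the anchor point 904 in the view window onto point 908 in the
    high resolution frame by its percentage position along each axis."""
    return round(cx / x1 * x2), round(cy / y1 * y2)

# Worked examples from FIG.9: illustration (a) gives a 150x150 zoom box,
# and illustration (c) (1200x600 frame) gives a 75x150 zoom box.
assert zoom_box_size(300, 300, 600, 600, 300, 300) == (150, 150)
assert zoom_box_size(300, 300, 1200, 600, 300, 300) == (75, 150)
```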
Also, it should be understood that the dimensions of the magnified image window350and the zoom box352may be any values and the shape need not be square, and the technique discussed herein will adjust accordingly to the shape and size of same. Referring toFIG.6, illustration (a), when the customer10moves the mouse to a new position356on the image230, the zoom box352moves to that location, and a new magnified image358of a new portion359of the image230in the zoom box352is displayed in the magnified image window350. Referring toFIG.6, illustration (b), when the customer10mouses over the view icon310(right side view), an image410appears in the large view section232without further selection of the view icon310by the customer10(e.g., no mouse click is required). When the customer10moves the mouse cursor from the view icon310to a position360on the image410, the zoom box352appears and moves to that position, and a magnified image362of a portion363of the image410within the zoom box352is displayed in the magnified image window350. Referring toFIG.6, illustration (c), when the customer10next mouses over the view icon312(left side view), an image412appears in the large view section232without further selection of the view icon312by the customer10. When the customer10moves the mouse cursor from view icon312to a position364on the image412, the zoom box352appears and is moved to that location, and a magnified image366of a portion367of the image412within the zoom box352is displayed in the magnified image window350. Similarly, according to some embodiments, as the customer10moves the mouse cursor across the view icons302-314, the corresponding image412dynamically updates to show the view represented by the respective icon302-314that the mouse cursor is over at that time. Also, the zoom box352and magnified image screen350also work with the color view feature discussed hereinbefore withFIG.3. As discussed hereinbefore, when a color has been selected (clicked on), the large image230in the large image view section232is updated (and locked) to show the item in the selected color, and the view icons300update to show the image views in the selected color. The customer10can then mouse over the color-selected image and the zoom box352will appear, the magnified image section350will appear, and the customer10can view zoomed details of the color-selected image for the selected view in the selected color. Furthermore, for any of the embodiments described herein whenever the screen is updated in response to a customer10action or selection/deselection, it may be updated such that a new window (or screen section) is displayed within a currently displayed HTML (Hyper Text Markup Language) page, web page, or browser screen (and, thus, no new HTML page, web page, or browser screen is generated). This is to be distinguished from other conventional techniques, where new HTML pages open up over an existing page when a feature, attribute, or icon is selected by the user. Referring again toFIG.3, as discussed hereinbefore, the slide show viewer (or slider)202may be used in the display screen200to allow the customer10to select items from the list100without having to leave the web page, as is described in detail in U.S. patent application Ser. No. 11/694,597, entitled “Method and System for Selecting and Displaying Items,” which has the same filing date as U.S. patent application Ser. No. 11/694,726, and is incorporated herein by reference in its entirety. 
In particular, the slider section202displays a series of eight adjacent thumbnail item images140-154in eight corresponding adjacent locations indicative of the first eight items102-116in the list100(FIG.3), respectively. However, the slider202may display any number of images desired. Also, the slider may display images corresponding to any of the items on the list100. In addition, there may be certain of the item details information123displayed with each of the item images140-154. Further, the selected item102has a box204around it in the slider202to indicate it is selected. If there are more than the predetermined maximum number of images, e.g., eight, in the slider202(e.g., there are more than eight recommended items in the list100), left and right scroll arrow buttons222,224, respectively, appear. The maximum number of images in the slider202may be any desired number, depending in part on the size of the images140-154and the size of the browser screen. When the customer10selects (clicks on or mouses over) the left scroll arrow button222, the images140-154(and the associated item details123) all scroll (or index or shift) to adjacent positions to the right. Similarly, if the customer10clicks on the right scroll button224, the images140-154(together with the associated item summaries123) all scroll (or index or move) to the adjacent positions to the left. The scroll type for the slider202may be an index-type scroll, where there are preset positions for each image in the slider202, or a smooth or continuous-type scroll, where there are no fixed positions for the images140-154in the slider202, and the images140-154scroll smoothly as a group across the slider screen202in the desired direction based on the selection of the scroll buttons222,224. Also, if there are more than the predetermined maximum number of images in the slider202, a “search results” summary status226of which items are displayed in the slider202is provided. Referring toFIG.7, when the customer desires to view another item104on the list100, the customer10may go back to the screen90(FIG.2) and select the item104from the list100which will bring the customer10to the screen200. However, if the slider202is used, the customer10can click on a corresponding image122in the slider202, causing the box206(FIG.3) around the item102to disappear and a new box500to appear around the selected image142. In either case, the large image view section232is updated to display a large image502of the newly selected item104, and the view icons300are updated to show available views for the newly selected item104. In addition, the item detail section240is updated to display details of the item104. With the item104selected, the customer10may zoom in on the large image502or a large view corresponding to any of the icon views300for that item104as discussed hereinbefore withFIG.5for the item102. Referring toFIG.8, a process700for providing the zoom detail window user interface disclosed herein begins at block702, which displays the sections of the zoom detail window200for the selected item, including displaying the large image view section232, displaying the view icons300, and displaying the item details240for the selected item. Next, block704determines whether a new one of the view icon300has been selected. If YES, block706displays a large image for the selected image icon300in the large image view section232. 
After the block706or if the result of the block704is NO, a step708determines if the customer10has selected one of the color icons246in the item details section240. If YES, block710displays a large image230of the selected item in the large image view section232having the selected color. After the block710or if the result of block708is NO, block712determines if the customer10has moused over the large item view section232. If YES, block714displays the zoom box352(FIG.3) and a block716displays the magnified image354in the zoom screen section350and then the process exits. If the result of the block712is NO, block718displays the large image view without the zoom box and block720displays the full item details section without the magnified image window350. It should be understood that the screen200may be reached by selecting any item or image on the merchant web site that would bring the customer10to an item (or product) details page. Thus, the item need not be selected from a list (or group, or set, or collection), but may be a standalone item on the merchant web site. It should be understood that it is not important for the present disclosure how the customer10actually purchases or otherwise obtains the desired item. For example, the desired item may be obtained by the customer10using the computer14and the network18or off-line without the use of the computer14or network18, e.g., via telephone, fax, mail, in person, CD, or DVD, or the like.
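The branching of process700can be condensed into a short dispatch. The sketch below is a loose Python rendering of blocks704-720under invented names and state; it is not a transcription of any actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ZoomDetailScreen:
    """Assumed state for the zoom detail window 200 (names are invented)."""
    large_image: str            # image shown in the large image view section 232
    color: str | None = None    # color locked in via a color icon, if any
    zoom_visible: bool = False  # zoom box and magnified image shown?

def process_700(screen: ZoomDetailScreen,
                selected_view: str | None,
                selected_color: str | None,
                mouse_over_large_view: bool) -> ZoomDetailScreen:
    """One pass through decision blocks 704-720 as described above."""
    if selected_view is not None:           # block 704 -> block 706
        screen.large_image = selected_view
    if selected_color is not None:          # block 708 -> block 710
        screen.color = selected_color
    # block 712 -> blocks 714/716 (show zoom) or blocks 718/720 (plain view)
    screen.zoom_visible = mouse_over_large_view
    return screen
```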
Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. Although the invention has been described and illustrated with respect to exemplary embodiments thereof, the foregoing and various other additions and omissions may be made therein and thereto without departing from the spirit and scope of the present disclosure. | 35,018 |
11861294 | The drawings referred to in this description of embodiments should be understood as not being drawn to scale except if specifically noted. DESCRIPTION OF EMBODIMENTS Reference will now be made in detail to embodiments of the present technology, examples of which are illustrated in the accompanying drawings. While the technology will be described in conjunction with various embodiment(s), it will be understood that they are not intended to limit the present technology to these embodiments. On the contrary, the present technology is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the various embodiments as defined by the appended claims. Furthermore, in the following description of embodiments, numerous specific details are set forth in order to provide a thorough understanding of the present technology. However, the present technology may be practiced without these specific details. In other instances, well known methods, procedures, user interface controls, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present embodiments. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present description of embodiments, discussions utilizing terms such as "creating," "syndicating," "displaying," "executing," "detecting," "receiving," "determining," "notifying," "alerting," "warning," or the like, refer to the actions and processes of a computer system, or similar electronic computing device. The computer system or similar electronic computing device, such as a smart phone or handheld mobile device, manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices. Embodiments of the present technology are also well suited to the use of other computer systems such as, for example, optical and mechanical computers. Overview of Syndication of Associations Relating Data to Metadata Users of computer systems receive vast amounts of data that may or may not make logical sense out of context. However, the user may benefit by receiving secondary data that places the data in context. Such a user may be a working professional, a web surfer, a software developer, or any other type of computer user. Metadata may exist that gives context to the data and may be useful to the user. However, if the user has to actively go and find the metadata, then time is wasted and the user may not be able to locate the metadata. Therefore, the present technology passively creates associations between the data and metadata and then presents the user with visual clues in a graphical interface indicating that the metadata is readily and immediately available. The user may then interact with the visual clue to display the metadata or a portion of the metadata. A link may also be displayed that will lead the user to more metadata to place the data into context or provide more information for the user. Thus, the user is not required to actively seek the metadata. In one embodiment, the present technology is CONNECT by Embarcadero Technologies. The associations that relate the data to the metadata may be created and then syndicated to a software application that is designed to provide access to the data and/or metadata.
The data may be stored in a first database and the metadata in a second database. In one embodiment, the data is in a database managed by a Structured Query Language (SQL) tool. The present technology presents the metadata from a repository of information where the data is being used. In other words, the present technology is a passive search performed in the tool the user is using to access the data, where the additional metadata related to the data becomes readily available. The additional metadata is readily available without requiring the user to actively seek the additional metadata. Using the present technology in the context of SQL provides the ability to link to a repository of metadata and present the additional information to the user. Metadata about data storage generally includes things such as a data entity, data attributes, table descriptions, column descriptions, related attachments, and terms or domains. This technical information may be available in some documented form, and users interested in this metadata can look up the documentation and understand intent or description. This could be described as an active search of the metadata. The problem is the user needs to know where to find the information and often needs to search to determine whether more information exists. The present technology operates to display the metadata in an interface where the data is being used. In other words, the present technology performs a passive search in the tool that the user is using to access the data. The results of the passive search are then made readily available to the user in the form of additional metadata and may provide a user with a link to more information and/or metadata. In one embodiment, the metadata is presented to the user as a notification, warning, or alert regarding the access of the data. For example, the user may be accessing sensitive data such as identity data. The notification is then automatically generated and displayed to warn the user regarding the access of the sensitive information. Syndication of Associations Relating Data to Metadata Referring to the figures, exemplary embodiments of the technology will now be described. The following description will focus on an embodiment of the present technology, which is implemented using a device with an operating environment or platform such as an operating system. The present technology, however, is not limited to any one particular device, operating system, environment, or platform. Instead, those skilled in the art will find that the system and methods of the present technology may be advantageously embodied on a variety of different platforms, including Microsoft Windows, iOS, Android, Macintosh, Linux, Solaris, UNIX, FreeBSD, and the like. Therefore, the description of the exemplary embodiments that follows is for purposes of illustration and not limitation. FIG.1is a block diagram illustrating environment100which is an example environment comprising computer system102and user computer system114. It should be appreciated that computer system102and user computer system114may each be a standard or customized computer system and may be a desktop, server, laptop, tablet, handheld or other computer system. Computer system102and user computer system114may be employed by a developer for use of the present technology. Computer system102and user computer system114comprise memory, processor(s), data storage, and other standard components. In one embodiment, computer system102has access to first database104and second database108.
It should be appreciated that first database104and second database108may or may not be relational databases and may or may not be programmed or managed by SQL and may be accessed via SQL tools. First database104and second database108may be stored at computer system102, user computer system114, a third party computer system, or any combination thereof. Moreover, first database104or second database108may refer to more than one database stored at more than one location. First database104comprises data106. Data106is data or information that is for use by a user or software application. Second database108comprises metadata110. Metadata110comprises information that relates to data106and places data106in context. In other words, metadata110provides additional information regarding data106. Metadata110may be described as a repository of information and may comprise entities, attributes, columns, maps, charts, models, sub models, objects, descriptions, discussions, following, tables, and other data commonly organized in databases. In one embodiment, computer system102creates associations112relating data106to metadata110. The associations may also be described as relations or relationships. The association ties data together, links data together, or points a tool to the metadata associated with the data. Once associations112are created, they may be stored at computer system102or other locations and may be used by software applications or tools that access and otherwise use data106. The software applications or tools, such as application116and application118, may execute on computer system102, user computer system114, or another computer system, or a combination thereof. In one embodiment, associations112are syndicated for use by a variety of software applications and/or tools. In one embodiment, associations112are employed by an SQL tool. The software application or tool, such as the SQL tool, is then able to parse and identify what is in data106and relate that back to the metadata in second database108. Application116and application118are shown inFIG.1as executing on user computer system114but may also execute on computer system102or another computer system. Application116and application118are able to make use of the syndicated associations112that relate data106to metadata110. Application116and application118may be web browsers, word processors, spreadsheet software, database management software, wikis, web applications, or other software applications. In one embodiment, application116and application118comprise graphical user interfaces that display data including data106and/or metadata110. Therefore, application116and application118have access to first database104and second database108. It should be appreciated that once associations112have been created, they may be used by more than one type of software application. Additionally, different versions of associations112may be created to be used by different types of applications. FIG.2Ais a block diagram illustrating environment200which is an example environment comprising user computer system114. It should be appreciated that user computer system114inFIG.2Amay also refer to computer system102or another computer system that has access to data106, metadata and associations112. In one embodiment, application116displays data106and visual clue202in a graphical interface. The graphical interface may be on a computer monitor, a touch screen, or other display.
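One minimal way to picture associations112, purely as a sketch with invented field names, is a set of records that each point from an item of data in first database104to related metadata in second database108:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Association:
    """One syndicated association (fields and names are illustrative)."""
    data_key: str       # identifies data 106 in first database 104
    metadata_key: str   # identifies metadata 110 in second database 108
    kind: str           # e.g. "description", "discussion", "warning"

def lookup_metadata(associations: list[Association],
                    data_key: str) -> list[Association]:
    """The passive search: given data a tool is displaying, return every
    association that could back a visual clue, with no user action."""
    return [a for a in associations if a.data_key == data_key]

# Example: a tool displaying the column "customers.ssn" finds a warning.
hits = lookup_metadata(
    [Association("customers.ssn", "glossary/ssn", "warning")],
    "customers.ssn")
```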
In one embodiment, visual clue202is displayed in a manner that associates visual clue202with data106in the graphical interface and provides some type of visual clue, symbol, or information to a user that more information or data regarding data106is available. Visual clue202may be generated when application116accesses or displays data106. Visual clue202is created based on associations112and metadata110. For example, application116may be programmed to understand and use associations112or application116may be modified via plug-ins or other techniques to make use of associations112which are syndicated to application116. It should be appreciated that visual clue202may be displayed in a variety of forms using a variety of techniques. For example, data106may be text displayed in application116and visual clue202may be a line or a dotted line that underlines the text of data106. Visual clue202may also highlight data106or change data106to a designated color. Visual clue202may also be some other form of button in the graphical display of application116. In one embodiment, visual clue202is configured such that if it is interacted with or selected by a user, then the graphical interface of application116will display additional information such as a portion of the metadata. FIG.2Bis a block diagram illustrating environment250which is an example environment comprising user computer system114. It should be appreciated that user computer system114inFIG.2Bmay also refer to computer system102or another computer system that has access to data106, metadata and associations112. In one embodiment, a user of user computer system114interacts with visual clue202and, in response, the graphical display of application116displays first portion of metadata252. First portion of metadata252displays at least a portion of metadata110and provides additional information to the user related to data106. It should be appreciated that a user may interact with or select visual clue202in a variety of ways. For example, the user may use a mouse to click or double click on visual clue202. In one embodiment, the user hovers a cursor in the graphical display over visual clue202for a predetermined amount of time. In one embodiment, the user employs a cursor to highlight the text or image of data106by clicking and dragging a mouse cursor over the text or image to be highlighted. The act of a user interacting with visual clue202invokes the display of a portion of metadata110and may also be described as a passive search for additional information related to data106. Such a passive search is carried out by the tool employed by the user and does not require specific knowledge on the part of the user. For example, the user may not even be aware that additional information is available. Instead, the tool employs associations112to search for, find, and display the additional information. In one embodiment, first portion of metadata252displays additional information regarding or relating to data106in the form of a pop up. For example, the pop up may be a region of the graphical display that overlaps other portions of the graphical display. The pop up may remain in the display until the user takes an action to remove the pop up. For example, the pop up may remain displayed so long as a cursor in the graphical interface hovers over visual clue202or the pop up itself. The user may also be required to click in a region outside of the pop up to make the pop up disappear or click on an exit button in the pop up.
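As a rough sketch of the interaction just described, with plain dictionaries standing in for the two databases and an invented metadata:// address scheme pointing at the remainder of the metadata (a role played by the link described next):

```python
def on_visual_clue_interaction(data_key: str,
                               associations: dict[str, str],
                               metadata_store: dict[str, str]) -> dict | None:
    """Hover/click handler sketch: return pop-up content for a first
    portion of the metadata, or None when no association exists."""
    meta_key = associations.get(data_key)
    if meta_key is None:
        return None
    full_text = metadata_store.get(meta_key, "")
    return {
        "popup_text": full_text[:200],         # a first portion of metadata 110
        "more_info": f"metadata://{meta_key}"  # hypothetical address for the rest
    }
```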
In one embodiment, first portion of metadata252comprises link254. Link254may be a hyperlink that represents a uniform resource locator (URL) address or another type of address. Link254may be represented by a symbol or by the text of first portion of metadata252. Upon selection of link254by a user, a second portion of metadata110may be displayed. In one embodiment, the second portion of metadata110is displayed in the graphical interface of application116. In one embodiment, upon selection of link254, the second portion of metadata110is displayed in a second application such as application118. In other words, the selection of link254may launch application118and display the additional information. A user may select link254by clicking on it. FIG.2Cis a block diagram illustrating environment260which is an example environment comprising user computer system114. It should be appreciated that user computer system114inFIG.2Cmay also refer to computer system102or another computer system that has access to data106, metadata and associations112. In one embodiment, a user of application116has selected link254and launched application118. Application118then displays second portion of metadata262. Second portion of metadata262comprises additional information regarding or related to data106. Second portion of metadata262is at least a portion of metadata110and is associated with data106via associations112. It should be appreciated that the second portion of metadata262need not be displayed in application118but could be displayed in application116or some other application. In one embodiment, the second portion of metadata262comprises discussions and following of data106. For example, data106may refer to a person such as an employee, entertainer, or professional, or may refer to a place such as a restaurant, or may refer to a thing such as a sales report, a consumer product, or a software application. Second portion of metadata262may then comprise feedback from users or people associated with the person, place, or thing. The discussions and following may aid the user of second portion of metadata262in making decisions related to data106. The feedback may also be described as social interactions. In one embodiment, the user of data106and application116may be able to add their own feedback, discussions, or following of data106to metadata110. In one embodiment, data106is an error that is presented to a user who is using application116. The error may be associated with additional information in the form of metadata and the present technology may have created associations between the error and the additional information. The error may have a visual clue such as visual clue202that, when interacted with, will provide the user with the additional information about the error. The additional information may be provided using the techniques described regarding first portion of metadata252and second portion of metadata262herein. In one embodiment, the present technology does not employ visual clues such as visual clue202but rather employs associations112to create warnings, alerts, or notifications automatically. For example, a user may be a developer that is accessing data106and may be employing tools to alter, change, or otherwise affect data106. The developer may be developing software that will provide end users with access to data106. The developer may be aided by the present technology by being given warnings about actions attempted by the developer. In one embodiment, data106may be sensitive data.
Sensitive data may be identity data of real people such as names, addresses, contact information, credit or payment information, etc. Therefore, a developer may wish to be warned when the developer is taking an action that will alter data106or alter who has access to data106. The present technology may employ associations112to determine that data106or a portion of data106is sensitive information and then detect when the developer is attempting to access data106. At this point, the present technology will issue a warning, alert, or notification automatically that the sensitive information is being accessed. The warning may then be used by the developer to make decisions. The warning may be a pop up as described for first portion of metadata252and may have a link to additional information such as second portion of metadata262. In one embodiment, data106is replicated and stored in many places, is redundant data, or was stored elsewhere before it was stored in first database104. In such an embodiment, it may be useful for a user to know the history of data106or to have a map or chart of where redundant copies of data106are stored and how many times it has been replicated. This information may be useful to a developer accessing data106, giving the developer information regarding the scope of changes to data106that are being contemplated. Such histories, charts and maps may be metadata such as metadata110and may be displayed to a developer using techniques described for first portion of metadata252and second portion of metadata262. Such histories, charts and maps may be described as an impact analysis of data106. The impact analysis may be generated passively and/or automatically without a specific command from the developer. Operations FIG.3is a flowchart illustrating process300for passively relating data to metadata, in accordance with one embodiment of the present invention. In one embodiment, process300is carried out, at least in part, by processors and electrical user interface controls under the control of computer readable and computer executable instructions stored on a computer-usable storage medium. The computer readable and computer executable instructions reside, for example, in data storage features such as computer usable volatile and non-volatile memory and are non-transitory. However, the non-transitory computer readable and computer executable instructions may reside in any type of computer-usable storage medium. In one embodiment, process300is performed by the devices, components, data, metadata, databases, applications, associations, and modules inFIGS.1and2A-C. At302, associations are created, at a first computer system, that relate data in a first database to metadata in a second database. For example, computer system102, data106, first database104, metadata110, and second database108ofFIG.1may be employed. The associations are relationships between the data and metadata where the metadata provides additional information about the data and places the data in context. In one embodiment, the associations are created passively, automatically, and without specific commands or directions from a user. At304, the associations are syndicated at the computer system to a first software application associated with the data. The first software application may be application116ofFIG.1. The first software application may also be an SQL tool.
At306, the syndicating causes the first software application executing at a second computer system to display a visual clue to a user in a graphical interface associated with the first software application, wherein the visual clue is visually associated with a visual representation of the data. The second computer system may be user computer system114or may be the same computer system as the first computer system. The visual clue may be visual clue202ofFIG.2A. At308, upon detecting a user interaction with the visual clue, the first software application is caused to display a first portion of the metadata in the graphical interface. The first portion of metadata may be first portion of metadata252ofFIG.2B. The user interaction may be a cursor hover, a click, a double click, or a highlight of the visual clue or of the data itself. In one embodiment, the data is an error and the metadata is additional information regarding the error. In one embodiment, the data is, at least in part, sensitive information and the metadata is a warning or notification that sensitive information is being accessed. In one embodiment, the metadata is an impact analysis regarding the access of the data. In one embodiment, the metadata is social feedback from other users regarding the data such as discussions, following, and descriptions. In one embodiment, the first portion of metadata is displayed as a pop up in the graphical interface. At310, upon detecting a user interaction with the visual clue, the first software application is caused to display a link in the first portion of the metadata such that upon a selection of the link from the user, a second portion of the metadata will be displayed. Link254ofFIG.2Band second portion of metadata262ofFIG.2Cmay be employed. FIG.4is a flowchart illustrating process400for passively relating data to metadata, in accordance with one embodiment of the present invention. In one embodiment, process400is carried out, at least in part, by processors and electrical user interface controls under the control of computer readable and computer executable instructions stored on a computer-usable storage medium. The computer readable and computer executable instructions reside, for example, in data storage features such as computer usable volatile and non-volatile memory and are non-transitory. However, the non-transitory computer readable and computer executable instructions may reside in any type of computer-usable storage medium. In one embodiment, process400is performed by the devices, components, data, metadata, databases, applications, associations, and modules inFIGS.1and2A-C. At402, associations are created, at a user computer system, that relate data in a first database to metadata in a second database. For example, computer system102, data106, first database104, metadata110, and second database108ofFIG.1may be employed. The associations are relationships between the data and metadata where the metadata provides additional information about the data and places the data in context. In one embodiment, the associations are created passively, automatically, and without specific commands or directions from a user. At404, a visual clue is displayed to a user in a graphical interface associated with the first software application, wherein the visual clue is visually associated with a visual representation of the data. The visual clue may be visual clue202ofFIG.2Aand the first software application may be application116. At406, a user interaction with the visual clue is detected.
The user computer system may detect the interaction, which may be a mouse hover, a click, or another interaction with the visual clue. At408, in response to the detecting, a first portion of the metadata related to the data is displayed and a link to a second portion of the metadata is displayed. For example, this may be first portion of metadata252and link254ofFIG.2Band second portion of metadata262ofFIG.2C. At410, in response to a selection of the link, the second portion of the metadata is displayed. This may be second portion of metadata262ofFIG.2C. FIG.5is a flowchart illustrating process500for passively relating data to metadata, in accordance with one embodiment of the present invention. In one embodiment, process500is carried out, at least in part, by processors and electrical user interface controls under the control of computer readable and computer executable instructions stored on a computer-usable storage medium. The computer readable and computer executable instructions reside, for example, in data storage features such as computer usable volatile and non-volatile memory and are non-transitory. However, the non-transitory computer readable and computer executable instructions may reside in any type of computer-usable storage medium. In one embodiment, process500is performed by the devices, components, data, metadata, databases, applications, associations, and modules inFIGS.1and2A-C. At502, associations are created, at a first computer system, that relate data in a first database to metadata in a second database. For example, computer system102, data106, first database104, metadata110, and second database108ofFIG.1may be employed. The associations are relationships between the data and metadata where the metadata provides additional information about the data and places the data in context. In one embodiment, the associations are created passively, automatically, and without specific commands or directions from a user. At504, the associations are syndicated at the computer system to a first software application associated with the data. The first software application may be application116ofFIG.1. The first software application may also be an SQL tool. At506, the syndicating causes the first software application executing at a second computer system to detect a command from a user to access at least a portion of the data. The second computer system may be user computer system114ofFIG.1, which may also accomplish the detecting. At508, the first software application further determines that at least the portion of the data is sensitive data based on the associations and the metadata. The sensitive data may be identity data. At510, the first software application further notifies the user that the data is sensitive data in a graphical interface associated with the first software application. Such a notification may be a warning or alert that is generated automatically without requiring a request or an interaction from the user for additional information. While the technology is described in some detail with specific reference to embodiments and alternatives, there is no intent to limit the technology to a particular embodiment or specific alternatives. For instance, those skilled in the art will appreciate that modifications may be made to embodiments without departing from the teachings of the present technology. Example Computer System Environment The present technology may be carried out, associated with, or otherwise practiced with a computer system.
Portions of the present technology are composed of computer-readable and computer-executable instructions that reside, for example, in computer-usable media of a computer system or other user device such as computer system102and/or user computer system114ofFIG.1. Described below is an example computer system or components that may be used for or in conjunction with aspects of the present technology. It is appreciated that the present technology can operate on or within a number of different computer systems including general purpose networked computer systems, embedded computer systems, a personal computer such as a desktop computer, a laptop, a notebook, an electronic handheld device, a personal digital assistant, a smart phone, a tablet computer, a net book, user devices, and the like. The computer system is well adapted to having peripheral computer readable media such as, for example, a floppy disk, a compact disc, flash memory and the like coupled thereto. The computer system includes an address/data bus for communicating information, and a processor coupled to bus for processing information and instructions. The computer system is also well suited to a multi-processor or single processor environment and also includes data storage features such as a computer usable volatile memory, e.g. random access memory (RAM), coupled to bus for storing information and instructions for processor(s). The computer system may also include computer usable non-volatile memory, e.g. read only memory (ROM), as well as input devices such as an alpha-numeric input device, a mouse, or other commonly used input devices. The computer system may also include a display such as a liquid crystal display, a cathode ray tube, or a plasma display, and other output components such as a printer or other common output devices. The computer system may also include one or more signal generating and receiving device(s) coupled with a bus for enabling the system to interface with other electronic devices and computer systems. Signal generating and receiving device(s) of the present embodiment may include wired serial adaptors, modems, and network adaptors, wireless modems, and wireless network adaptors, and other such communication technology. The signal generating and receiving device(s) may work in conjunction with one or more communication interface(s) for coupling information to and/or from the computer system. A communication interface may include a serial port, parallel port, Universal Serial Bus (USB), Ethernet port, antenna, or other input/output interface. A communication interface may physically, electrically, optically, or wirelessly (e.g. via radio frequency) couple the computer system with another device, such as a cellular telephone, radio, a handheld device, a smart phone, or computer system. Although the subject matter is described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. | 31,351 |
11861295 | DETAILED DESCRIPTION Described herein are methods and systems for encoding a job posting as an embedding, using a graph neural network—a type of neural network that operates on graph data. In the following description, for purposes of explanation, numerous specific details and features are set forth in order to provide a thorough understanding of the various aspects of different embodiments of the present invention. It will be evident, however, to one skilled in the art, that the present invention may be practiced and/or implemented with varying combinations of the many details and features presented herein. An online job hosting service is an online service that allows those who are seeking employees to post online job postings that describe available job opportunities, while simultaneously allowing those seeking job opportunities to search for and browse online job postings. One of the fundamental ways in which most online job hosting services operate is by performing some type of matching of characteristics or attributes of a job opportunity, as expressed by the various data fields of the online job posting, with characteristics or attributes of a job-seeking user, as may be embodied in a user profile associated with the job-seeker, or by processing a user's search query. However, because different words may be used to describe or express the same or similar concepts in different job postings, some of the more important characteristics or attributes may be processed to map the concepts expressed by the words to a standardized attribute that may be part of an expert curated knowledge graph, taxonomy, ontology, or some other classification scheme. While referred to herein as standardized attributes, these attributes may also be known as standardized entities. To that end, job hosting services may use a variety of natural language processing and machine learning techniques to process the raw text of a job posting to derive for the job posting one or more standardized job attributes, such as titles or job titles, skills, company names, and so forth. These standardized job attributes are then used as job-related features in a variety of machine learning tasks. By way of example, a job recommendation engine may utilize one or more of the standardized job attributes associated with a job posting as a feature for a machine learning model that has been trained to rank a set of job postings for a given user. Similarly, a job-specific search engine may use standardized job attributes as features for a machine learning model trained to rank a set of job postings in response to a user's job search query. However, one of the drawbacks of this approach is that the individual standardized attributes do not provide a holistic representation of the job posting. At best, using only standardized attributes, the overall representation of a job posting may be achieved by concatenating the individual standardized attributes. Consistent with some embodiments of the present invention, a holistic approach to generating a learned representation of a job posting involves generating an embedding for a job posting using a graph neural network (GNN). A GNN is a specific type of neural network that operates on graph data by generating for each node in an input graph a node embedding. By way of example,FIG.1illustrates an example of a GNN100operating as an encoder to generate for each node in an input graph102, a corresponding embedding in an embedding space104.
Here, the larger nodes (e.g., labeled as A, B, C, D and E) in the input graph102are representative of job postings, while the smaller nodes are representative of standardized job attributes associated with job postings. As illustrated inFIG.1, the GNN encoder100has generated a node embedding106in the embedding space104for the job posting represented in the input graph as the node with label A and reference number108. As described in greater detail in connection with the description ofFIG.4below, when generating a node embedding for a particular node, the GNN100generates the embedding based on information associated with the node itself, in addition to information from nodes in the neighborhood (e.g., those nodes connected by an edge in the input graph), and information conveyed by the structure of the graph. As the GNN learns the structure of the graph and the relationships between nodes during the training phase, the parameters (e.g., the weight values) of the individual neurons of the GNN are adjusted to ensure that similar job postings will have similar embeddings in the embedding space. Consequently, job postings that have similar job titles and that share various standardized job attributes in common with one another will have similar vector representations, or embeddings, in the embedding space. Similarly, job postings that are connected via the graph will have similar embeddings in the embedding space. As described in greater detail below in connection withFIG.2, the first step in generating the node embeddings for each job posting is to define the input graph. Consistent with some embodiments, an input graph referred to herein as a unified job posting graph is formed by joining a job-to-attribute graph with a job-to-job graph, where the connections (e.g., edges) between the various job postings are determined based on analyzing the co-occurrence of certain user activities directed to a pair of job postings. For instance, first and second job postings may be connected via an edge in the input graph based on multiple people applying to a first job associated with the first job posting, and then subsequently applying for a second job associated with the second job posting. This user activity is referred to as a co-apply. Once the input graph has been defined, the GNN is trained with the task objective of predicting edges in the graph. Accordingly, the training data for training the GNN may consist of positive examples, in the form of a pair of job postings that are known to be connected, and negative examples, in the form of a pair of job postings known to not have a connection in the graph. Once trained, the GNN is applied to the entire input graph to derive for each node (e.g., each job posting) a node embedding that is a holistic representation of the job posting. Finally, consistent with some embodiments, the technique described herein involves what is known as an inductive technique, where the dataset used in training the GNN is different from the dataset used in testing, and the resulting GNN is capable of encoding new job postings not represented as nodes in the original input graph. Accordingly, when a new job posting is posted to the online job hosting service, the GNN encoder can be invoked to generate a new embedding for the new job posting. In contrast with other techniques, the technique described herein has low online inference latency. 
For example, other techniques for encoding a job posting may involve deep, multi-layered neural networks, and thus, introduce significant delay during the inference stage. Consistent with some embodiments of the present invention, the GNN encoder has only one hidden layer and therefore the GNN encoder can generate an embedding for a job posting efficiently, with minimal latency at inference time. Referring now toFIG.2, GNNs operate on graph data. Accordingly, the first task in any graph-based analysis using a GNN is to structure the data to be analyzed or processed as an input graph. Some types of data have an inherent graph structure, such as a social graph developed and maintained by a social networking service, where the connections established between people via the social networking service form the edges of the social graph and provide information about the relationships between the people, who are typically represented as nodes in the graph. However, in other instances the data may not necessarily have an inherent graph structure, and hence, the graph must first be defined.FIG.2is a diagram illustrating an example of how a unified job posting graph200may be formed by joining a job-to-attribute graph202and a job-to-job graph204, consistent with an embodiment of the present invention. As shown inFIG.2, the job posting with reference number206is a node, or vertex, in a job-to-attribute graph202. In this instance, the node206is connected via several edges to other nodes representing standardized job attributes208. For example, as shown inFIG.2, the standardized job attributes208include a job title, a role, an occupation, a skill, a specialty, a parent specialty, a function, and an industry. In this context, a specialty is a pursuit, area of study, or skill to which a user has devoted much time and effort and in which they are expert. While specialties may include skills, not all skills are specialties. For example, “Accounting” may be both a skill and a specialty, but not every skill that falls within the category of or is otherwise related to “Accounting” is necessarily a specialty. In some embodiments, specialties are a subset of skills. For example, an online service may manage a list of 40,000 skills, and only 1,400 of the 40,000 skills may be identified and treated by the online service as specialties. For each of these several standardized job attributes208, one or more values for the standardized attribute are generated from the data representing and/or otherwise associated with the online job posting206. By way of example, using a variety of natural language processing and machine learning techniques, one or more specific skills may be identified as being associated with the online job posting. Similarly, based on the raw text of the job title and other information associated with the job posting, a single standardized job title may be selected as representative of the job title for the job posting. Accordingly, with some standardized job attributes (e.g., skills) multiple values may be derived and associated with the online job posting, whereas, with other standardized job attributes (e.g., job title), only a single value may be derived and associated with the job posting. The value of each standardized job attribute may have or be associated with an identifier by which the standardized attribute can be referenced.
For example, the skill, “C++ Programming,” may be associated with a skill identifier that identifies and represents the skill in a knowledge graph, taxonomy, ontology, or some other classification scheme. In addition, the value of each standardized attribute may be represented by an embedding. It is this embedding—for example, a vector representation—representing the value of a standardized attribute that is ultimately used as an input to the GNN for generating the node embeddings, which are the learned representation of the individual job postings. Also shown inFIG.2, the online job posting206, which is a node in the job-to-attribute graph, is originally represented as an embedding210of the raw text of the job title for the job posting, as derived by a pre-trained machine learning model. For example, the raw text of the original job posting may be used as input to a pre-trained machine learning model to derive an embedding for the raw text of the job title. Any of a number of machine learning techniques and corresponding models may be used to generate the embedding for the raw text of the job title of the job posting. For example, consistent with some embodiments, an embedding of the raw text of a job title of a job posting may be derived using a Universal Sentence Encoder model, a software library known as fastText, or a pre-trained Transformer encoder. Referring now to the job-to-job graph204in the upper right portion ofFIG.2, the pair of job postings with reference numbers212and214are nodes in the graph204, connected by an edge216. Consistent with some embodiments of the invention, an edge216is formed to connect two job postings based on applying a set of rules relating to user activity that has been logged by an activity logging service of the online job hosting service. For example, with some embodiments, an activity log is analyzed to derive a count of the number of users who have taken a particular action with respect to a pair of job postings. For instance, if a user applies to a first job posting, and then within some predetermined amount of time, the same user applies to a second job posting, this combination of actions by the user is counted as an activity referred to as a co-apply. With some embodiments, the edges connecting job postings in the job-to-job graph204are based on determining that a certain minimum number of users co-applied to a particular pair of job postings. The intuition here is that if multiple people are applying to the same two jobs, there is a high likelihood that the jobs described in the job postings are similar, and therefore should be connected in the job-to-job graph204. With some embodiments, other user activities may be analyzed and counted for purposes of forming edges between nodes representing job postings. For example, a co-view is a user activity that involves a user selecting a first job posting to view, for example, as presented in a job search results interface or job recommendation interface, followed by the user selecting a second job posting to view. Similarly, a co-share is a user activity that involves a first user sharing a first job posting with a second user, and then the first user sharing a second job posting with the same user. These activities (e.g., co-applies, co-views, co-shares) by the user can be used as inferred signals to indicate that two job postings may be similar to one another, and should therefore be connected in the job-to-job graph204by an edge.
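As one concrete way such co-activity signals could be turned into edges, the sketch below counts co-applies per unordered pair of job postings from a set of logged apply events and connects a pair once a minimum count of distinct users is reached. The log format and the threshold value are illustrative assumptions rather than the claimed rules; weighting in co-views and co-shares, or enforcing a time window between the two applications, would follow the same pattern.

```python
# Sketch: derive job-to-job edges from logged apply events, connecting a
# pair of postings once enough distinct users have applied to both (a
# "co-apply"). The log format and threshold are illustrative assumptions.
from collections import defaultdict
from itertools import combinations

MIN_CO_APPLIES = 3  # hypothetical minimum number of co-applying users

# user -> postings applied to, e.g. as logged by an activity logging service
apply_log = {
    "u1": ["job_A", "job_B"],
    "u2": ["job_A", "job_B", "job_C"],
    "u3": ["job_B", "job_A"],
    "u4": ["job_A", "job_C"],
}

co_apply_counts = defaultdict(int)
for postings in apply_log.values():
    # each distinct user contributes at most once per unordered pair
    for pair in combinations(sorted(set(postings)), 2):
        co_apply_counts[pair] += 1

edges = [pair for pair, n in co_apply_counts.items() if n >= MIN_CO_APPLIES]
print(edges)  # [('job_A', 'job_B')], i.e. an edge216-style connection
```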
Consistent with some embodiments, the rules that are applied to establish the connections or edges between job postings in the job-to-job graph may specify a minimum number of co-applies, co-views, co-shares, or some weighted combination of the various user activities, that are necessary before an edge is created in the input graph to connect two job postings. Other rules may involve or relate to the timing of the user activities. For example, in some instances, a co-apply is considered for purposes of establishing an edge between two job postings only when the first application and the second application occur within some predefined window of time—such as three years. The intuition here is that the closer in time that the two job applications occur, the more likely it is that the job postings are similar. With some embodiments, if a particular job posting is paired with a significant number of other job postings to which various users have co-applied, only some subset of the pairings will be considered for connection with an edge in the graph. For example, the selection of the subset of pairs of job postings may be based on the combination of job postings having the highest number of co-applies. By way of example, if a particular job posting (e.g., job posting A) has been co-applied at a high rate with a high number of other job postings, it may be the case that some predetermined number of the pairs of job postings having the highest number of co-applies are selected for purposes of establishing edges between pairs of job postings. As shown inFIG.2, the unified job posting graph200is derived by simply joining the job-to-attribute graph202with the job-to-job graph204. The unified job posting graph200is then used as the input graph on which the GNN is trained, and from which the node embeddings are derived. Referring now toFIG.3, once the unified job posting graph200has been constructed, the next step involves training a GNN300to generate node embeddings for the nodes, based on the input graph200. Consistent with some embodiments, a GNN300is trained with the objective or learning task of predicting an edge between nodes (e.g., job postings) in the graph. For example, the GNN is provided an instance of training data302—for example, a pair of nodes representing job postings—and the objective is to predict whether an edge connects the two nodes. As shown inFIG.3, the training data302that is used to train the GNN involves positive examples—pairs of job postings in the unified job posting graph that are known to be connected by an edge—and negative examples—pairs of job postings in the unified job posting graph that are known to not have an edge. If, for example, the training data is the pair of nodes shown inFIG.3with labels B and C, the objective of the training task is to determine a value for use in predicting whether an edge304connects the nodes in the input graph. In this sense, the edge prediction task is a binary classification problem where the label simply indicates whether an edge is present between two nodes, or not. Furthermore, as a portion of the input graph is used as the training data, without requiring any data labeling from an external source, the training of the GNN300may be considered or characterized as self-supervised. Referring now toFIG.4, consistent with some embodiments, during the training phase, the parameters (e.g., the weights) of the neurons of the GNN300are randomly initialized.
To train the GNN300, individual instances of training data are selected, where some instances represent positive examples and some negative examples. For instance, a positive example is a pair of nodes in the graph, with each node representing a job posting, where the nodes are known to have a connecting edge. A negative example is a pair of nodes known to not have a connecting edge. After processing a first instance of training data302, the GNN300outputs an embedding or vector representation for each node in the pair of nodes representing the instance of training data. Next, the two embeddings or vector representations for the pair of nodes are concatenated and provided as input to a neural network (NN)402, which performs a binary classification task by processing the concatenated embeddings to generate an output in the form of a probability score that represents a measure of likelihood that the two nodes are similar, such that the two nodes should be connected by an edge. Consistent with some embodiments, the neural network402that performs the binary classification task is a Multi Layer Perceptron (MLP) neural network having a single layer. During training, the probability score as output by the neural network402is provided as an input to a module (e.g., loss function404) for deriving a measure of loss. Consistent with some embodiments, the loss function may be a cross entropy function, and the loss may generally be characterized as a difference between two probability distributions. For instance, the loss may be the difference between a first probability distribution corresponding with the actual label for the pair of nodes, and the probability distribution of edge values derived by the output layer of the neural network402—typically, after applying an activation function, such as SoftMax. Next, the loss derived by the loss function module404is evaluated by an evaluation function406. If, for example, when processing an instance of training data, the GNN300and neural network402predict that two nodes represented by the training data are connected by an edge when, in fact, the nodes from the instance of training data are not connected by an edge, the evaluation function will generate and backpropagate408values for updating the parameters of the GNN300and neural network402, with the objective of training the GNN300and neural network402to make a more accurate prediction. This process is repeated iteratively with individual instances of training data until the edge prediction task obtains some level of accuracy in processing the training data. Referring now toFIG.5, when generating a node embedding for each node in the input graph, the pre-trained GNN performs what is commonly referred to in the art as an aggregation function, followed by an update function. As a general matter, the aggregation and update functions are the operations by which the GNN obtains information from neighboring nodes and combines this information for generating the node embedding for each node. More specifically, the aggregation function is typically characterized as the operation by which information from neighboring nodes is obtained and combined, while the update function is typically characterized as the operation by which the aggregate information obtained from neighboring nodes is combined with the existing information of the target node. Various GNN models utilize different techniques to perform the aggregation and update functions.
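Before turning to those aggregation and update details, the training step just described (concatenating the two node embeddings, scoring the pair with a single-layer MLP, and backpropagating a cross-entropy loss) can be sketched as follows. The sketch assumes a PyTorch-style setup; the embedding size, the optimizer, and the training_step helper are illustrative assumptions, and the GNN encoder that produces the node embeddings is assumed rather than defined here.

```python
# Sketch of one edge-prediction training step, as described above. The
# GNN that produces the node embeddings h_u and h_v is assumed, not shown.
import torch
import torch.nn as nn

EMB_DIM = 64  # illustrative node-embedding size

# Single-layer MLP (cf. neural network402): scores a concatenated pair of
# node embeddings with the likelihood that an edge connects the nodes.
edge_scorer = nn.Linear(2 * EMB_DIM, 1)
loss_fn = nn.BCEWithLogitsLoss()  # cross-entropy over the binary edge label
optimizer = torch.optim.Adam(edge_scorer.parameters(), lr=1e-3)

def training_step(h_u, h_v, label):
    """h_u, h_v: embeddings of a node pair from the GNN; label is 1.0 for
    a positive example (edge present) and 0.0 for a negative example."""
    logit = edge_scorer(torch.cat([h_u, h_v], dim=-1))
    loss = loss_fn(logit, torch.tensor([label]))
    optimizer.zero_grad()
    loss.backward()  # in the full system, gradients also reach the GNN
    optimizer.step()
    return loss.item()

# e.g., one positive example (a pair known to be connected by an edge):
print(training_step(torch.randn(EMB_DIM), torch.randn(EMB_DIM), 1.0))
```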
Consistent with embodiments of the present invention, the aggregation function, and in some cases the update function, are learned functions in the sense that the parameters of the GNN used in performing the aggregation and/or update functions are learned as a result of the training process.FIG.5illustrates the concepts of aggregation and updating as they relate to a GNN, consistent with various embodiments of the present invention. As illustrated inFIG.5, a portion of an input graph500is shown, where the node with label (“A”) and reference number502is the target node—that is, the node for which the node embedding is being derived. Consistent with an embodiment of the present invention, the target node is representative of an online job posting generally, and has an initial embedding—for example, based on the raw text of the job title for the online job posting. As shown inFIG.5, the target node (“A”) has three neighbors—the nodes with labels, “B”, “C” and “D”. The bounding box with reference number504corresponds with what is referred to as a one-hop neighborhood aggregation technique, as the node embedding generated for the target node (“A”) is based on information (e.g., embeddings) associated with all nodes in the one-hop neighborhood. Specifically, the one-hop neighborhood includes all nodes directly connected to the target node by an edge in the input graph. The aggregation function receives as input an embedding associated with each node in the one-hop neighborhood (e.g., the nodes with labels, “B”, “C” and “D”), and combines or aggregates the embeddings in a manner consistent with the learned aggregation function for the GNN. Although not separately shown inFIG.5, an update function learned for the GNN will then update an embedding associated with the target node by combining the embedding associated with the target node with the embedding that results from the aggregation function. With a technique that uses a one-hop neighborhood aggregation function, the result of the aggregation and update functions is the resulting node embedding for the target node. However, in a multi-hop neighborhood aggregation technique, information from additional nodes—for example, nodes connected to the target node via one or more intermediary nodes—will be aggregated, iteratively, and ultimately combined with an embedding associated with the target node. Accordingly, the bounding box with reference number506illustrates a technique that involves aggregating information from a two-hop neighborhood. Consistent with some embodiments of the present invention, the GNN is implemented using a particular form of graph convolutional network (GCN) model referred to as the PinSage model. This particular model is beneficial in that it is a web-scale model that provides the ability to process extremely large input graphs as may be used with various online, web-based services. In various alternative embodiments, other models may be used, to include models based on GraphSAGE or Graph Attention Network (GAT). Consistent with some embodiments, the aggregation technique that is used with the GCN model is a one-hop, normalized neighborhood aggregation technique referred to generally as mean pooling. With a mean pooling aggregation function, the embeddings from each node in the neighborhood are summed, and then normalized, for example, by taking the average or mean.
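A minimal numerical sketch of this one-hop mean-pooling aggregation, followed by the update that folds the result into the target node's own embedding, is shown below. The weight matrices W_self and W_neigh stand in for the learned parameters of the GNN; the dimensions, random values, and tanh nonlinearity are illustrative assumptions.

```python
# Sketch of one-hop mean pooling (aggregate) and the update for target
# node "A", as described above. Weights and sizes are illustrative.
import numpy as np

EMB_DIM = 4
rng = np.random.default_rng(0)
W_self = rng.normal(size=(EMB_DIM, EMB_DIM))   # stand-in for learned update weights
W_neigh = rng.normal(size=(EMB_DIM, EMB_DIM))  # stand-in for learned aggregation weights

def encode(h_target, h_neighbors):
    """Aggregate: sum the one-hop neighbor embeddings and normalize by
    taking the mean. Update: combine the aggregate with the target
    node's existing embedding (e.g., its raw-job-title embedding)."""
    aggregated = np.mean(h_neighbors, axis=0)          # mean pooling
    h_new = W_self @ h_target + W_neigh @ aggregated   # update
    return np.tanh(h_new)                              # nonlinearity

h_A = rng.normal(size=EMB_DIM)                         # target node "A"
h_B, h_C, h_D = (rng.normal(size=EMB_DIM) for _ in range(3))
print(encode(h_A, [h_B, h_C, h_D]))  # node embedding for "A"
```

Because the same learned weights apply to any node's neighborhood, this same forward pass can encode a node that was not in the original input graph, which is the inductive property noted earlier.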
By taking the average or mean of the embeddings from the neighborhood nodes, problems that may arise with significant variations in the degrees of the nodes are lessened. In alternative embodiments of the invention, other aggregation functions and update techniques may be used, to include element-wise mean, element-wise sum, encoder-decoder style attention, self-attention, additive attention, and other techniques based on pooling (e.g., set pooling or Janossy pooling). FIG.6is a diagram of an inductive GNN encoder for use in generating an embedding for a new job posting not included in the original input graph, consistent with some embodiments of the present invention. As illustrated inFIG.6, after a GNN600has been trained, and subsequent to the GNN600being used to derive node embeddings for each node in the input graph, the GNN600is used to derive an embedding for a new online job posting602—for example, one that is not in the original input graph. Accordingly, upon receiving a new online job posting602at the job hosting service, the online job posting602is first analyzed using a variety of natural language processing and machine learning techniques to generate for the new job posting an embedding604of the raw text of the job title of the new online job posting, and various embeddings606corresponding with values of a variety of different standardized job attributes. The embedding604of the raw text of the job title of the new online job posting and the various embeddings606corresponding with the values of the several standardized attributes are provided as input to the GNN600, which outputs an embedding608that is a holistic, learned representation of the new online job posting. Consistent with some embodiments, each embedding that represents an online job posting may be used as an input feature to any number of machine learning models that are used in various tasks. By way of example, with some embodiments, an embedding of a job posting may be used as an input feature with a machine learning model that has been trained to predict or otherwise identify skills associated with a job posting. Similarly, an embedding of a job posting may be used as an input feature with a machine learning model that is used in ranking job postings in the context of a search for job postings or in generating job recommendations to present to a user. FIG.7is a block diagram800illustrating a software architecture802, which can be installed on any of a variety of computing devices to perform methods consistent with those described herein.FIG.7is merely a non-limiting example of a software architecture, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software architecture802is implemented by hardware such as a machine900ofFIG.8that includes processors910, memory930, and input/output (I/O) components950. In this example architecture, the software architecture802can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software architecture802includes layers such as an operating system804, libraries806, frameworks808, and applications810. Operationally, the applications810invoke API calls812through the software stack and receive messages814in response to the API calls812, consistent with some embodiments. In various implementations, the operating system804manages hardware resources and provides common services. 
The operating system804includes, for example, a kernel820, services822, and drivers824. The kernel820acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel820provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services822can provide other common services for the other software layers. The drivers824are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. For instance, the drivers824can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth. In some embodiments, the libraries806provide a low-level common infrastructure utilized by the applications810. The libraries806can include system libraries830(e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries806can include API libraries832such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic context on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries806can also include a wide variety of other libraries834to provide many other APIs to the applications810. The frameworks808provide a high-level common infrastructure that can be utilized by the applications810, according to some embodiments. For example, the frameworks808provide various GUI functions, high-level resource management, high-level location services, and so forth. The frameworks808can provide a broad spectrum of other APIs that can be utilized by the applications810, some of which may be specific to a particular operating system804or platform. In an example embodiment, the applications810include a home application850, a contacts application852, a browser application854, a book reader application856, a location application858, a media application860, a messaging application862, a game application864, and a broad assortment of other applications, such as a third-party application866. According to some embodiments, the applications810are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications810, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language).
In a specific example, the third-party application866(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application866can invoke the API calls812provided by the operating system804to facilitate functionality described herein. FIG.8illustrates a diagrammatic representation of a machine900in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically,FIG.8shows a diagrammatic representation of the machine900in the example form of a computer system, within which instructions916(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine900to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions916may cause the machine900to execute any one of the methods or algorithms described herein. Additionally, or alternatively, the instructions916may implement a system or model as described in connection withFIGS.3and5, and so forth. The instructions916transform the general, non-programmed machine900into a particular machine900programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine900operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine900may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine900may comprise, but not be limited to, a server computer, a client computer, a PC, a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions916, sequentially or otherwise, that specify actions to be taken by the machine900. Further, while only a single machine900is illustrated, the term “machine” shall also be taken to include a collection of machines900that individually or jointly execute the instructions916to perform any one or more of the methodologies discussed herein. The machine900may include processors910, memory930, and I/O components950, which may be configured to communicate with each other such as via a bus902. In an example embodiment, the processors910(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor912and a processor914that may execute the instructions916. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously.
AlthoughFIG.8shows multiple processors910, the machine900may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory930may include a main memory932, a static memory934, and a storage unit936, all accessible to the processors910such as via the bus902. The main memory932, the static memory934, and the storage unit936store the instructions916embodying any one or more of the methodologies or functions described herein. The instructions916may also reside, completely or partially, within the main memory932, within the static memory934, within the storage unit936, within at least one of the processors910(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine900. The I/O components950may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components950that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components950may include many other components that are not shown inFIG.8. The I/O components950are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components950may include output components952and input components954. The output components952may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components954may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components950may include biometric components956, motion components958, environmental components960, or position components962, among a wide array of other components. For example, the biometric components956may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like.
The motion components958may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components960may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components962may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components950may include communication components964operable to couple the machine900to a network980or devices970via a coupling982and a coupling972, respectively. For example, the communication components964may include a network interface component or another suitable device to interface with the network980. In further examples, the communication components964may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices970may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components964may detect identifiers or include components operable to detect identifiers. For example, the communication components964may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components964, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (i.e.,930,932,934, and/or memory of the processor(s)910) and/or storage unit936may store one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions916), when executed by processor(s)910, cause various operations to implement the disclosed embodiments.
As used herein, the terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below. In various example embodiments, one or more portions of the network980may be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network980or a portion of the network980may include a wireless or cellular network, and the coupling982may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling982may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long range protocols, or other data transfer technology. The instructions916may be transmitted or received over the network980using a transmission medium via a network interface device (e.g., a network interface component included in the communication components964) and utilizing any one of a number of well-known transfer protocols (e.g., HTTP). Similarly, the instructions916may be transmitted or received using a transmission medium via the coupling972(e.g., a peer-to-peer coupling) to the devices970. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.
The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions916for execution by the machine900, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. | 45,561
11861296 | DETAILED DESCRIPTION FIG.1schematically depicts an example environment100in which selected aspects of the present disclosure may be implemented, in accordance with various implementations. Any computing devices depicted inFIG.1or elsewhere in the figures may include logic such as one or more microprocessors (e.g., central processing units or “CPUs”, graphical processing units or “GPUs”) that execute computer-readable instructions stored in memory, or other types of logic such as application-specific integrated circuits (“ASIC”), field-programmable gate arrays (“FPGA”), and so forth. Some of the systems depicted inFIG.1, such as a document editor system110, may be implemented using one or more server computing devices that form what is sometimes referred to as a “cloud infrastructure,” although this is not required. In implementations, the environment100may include a document editor system110that implements an online document editor application (e.g., a collaborative online text editor or word processor) that is accessible from various clients, including clients140-1, . . . ,140-nthat may be included in the environment100, through either a thin client interface, such as a web browser (e.g., a web-based collaborative online text editor application), or a program interface. In implementations, the online document editor application that is implemented by the document editor system110may be a software as a service (SaaS) document editor application. The document editor system110and the clients140-1, . . . ,140-nmay be in communication via a computer network150, which may be any suitable network including any combination of a local area network (LAN), wide area network (WAN), or the Internet. The document editor system110may be configured to perform selected aspects of the present disclosure in order to automatically provide people suggestions in documents that are created, modified, and/or viewed using one or more of the clients140-1, . . . ,140-n. Each of the clients140-1, . . . ,140-nmay be, for example, a user computing device that is used by a user to access a document editor application via a document editor application user interface, such as a SaaS document editor application, that is provided by the document editor system110, e.g., through a web browser. In an example, the clients140-1, . . . ,140-nmay be user computing devices associated with an individual or an entity or organization such as a business (e.g., financial institution, bank, etc.), non-profit, club, university, government agency, or any other organization that uses a document editor application. For example, a business may operate a document editor application to create, modify, and/or view one or more documents to manage reports, proposals, financial records, business records, client lists, and so forth. In various implementations, each of the clients140-1, . . . ,140-nmay include one or more user interface input devices such as a physical keyboard, a touch screen, and/or a microphone, to name a few. Additionally, each of the clients140-1, . . . ,140-nmay include one or more user interface output devices such as a display screen, a haptic feedback device, and/or speaker(s), to name a few. In various implementations, the environment100may include contact stores120-1, . . . ,120-mthat are accessible to the clients140-1, . . . ,140-nvia the computer network150or another network. Each of the contact stores120-1, . . . ,120-mmay include information about multiple contacts (e.g., persons).
For example, the contact stores120-1, . . . ,120-mmay be databases that store contact information for multiple contacts, such as email addresses, physical addresses, telephone numbers, user names, etc. In various implementations, the environment100may include document corpuses130-1, . . . ,130-xthat are accessible to the clients140-1, . . . ,140-nvia the computer network150or another network. Each of the document corpuses130-1, . . . ,130-xmay include multiple documents (e.g., text documents) created by one or more of the clients140-1, . . . ,140-n, e.g., using the document editor system110. In an example, the document corpuses130-1, . . . ,130-xmay include a set of documents created, edited, or viewed by users of one or more of the clients140-1, . . . ,140-nassociated with a particular entity or organization. Each of the documents stored in the document corpuses130-1, . . . ,130-xmay be associated with a set of permissions which may, as an example, define users and/or groups who have access to view and/or edit the document. The document editor system110may be configured to automatically provide people suggestions in documents that are created, modified, and/or viewed using one or more of the clients140-1, . . . ,140-n. For example, the document editor system110may be configured to receive, from one of the clients140-1, . . . ,140-n, user interface input that corresponds to a document in a document editing application. The document editor system110may be configured to automatically parse the received user interface input to identify a name included in the user interface input and, in response to identifying the name included in the user interface input, provide an option to create a link in the document between the name and a corresponding contact in a contact store120-1, . . . ,120-m. The document editor system110may be configured to receive additional user interface input from one of the clients140-1, . . . ,140-nthat indicates acceptance of the option to create the link in the document, and in response to receiving the additional user interface input, automatically create the link in the document between the name and the corresponding contact in the contact store120-1, . . . ,120-m. FIG.2depicts a flowchart illustrating an example method200of automatically providing people suggestions in documents. For convenience, the operations of the method200are described with reference to a system that performs the operations. This system of method200includes one or more processors and/or other component(s) of various computer systems. Moreover, while operations of method200are shown in a particular order, this is not meant to be limiting. One or more operations may be reordered, omitted, or added. At block205, the system receives user interface input that corresponds to a document in a document editing application. In some implementations, a document editing application (document editor) rendered by a web browser may be displayed on one or more of the user interface output devices of one of the clients140-1, . . . ,140-n. The user may use one or more of the user interface input devices of one of the clients140-1, . . . ,140-nto provide the user interface input that is received by the document editing system110. In some implementations, the user interface input may include one or more characters or words input into a document in the document editing application via a keyboard (e.g., a physical keyboard or an on-screen keyboard) or a microphone of one of the clients140-1, . . . ,140-n. 
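By way of non-limiting illustration, the parse-match-gate flow described above, elaborated at blocks 210 through 225 below, might be sketched as follows. The sketch is illustrative Python rather than any embodiment's actual implementation; the contact data, score values, and thresholds are invented for the example, and the capitalization heuristic merely stands in for whatever context model a real system would use to judge whether a word is being used as a name.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    first_name: str
    last_name: str
    email: str

# Hypothetical in-memory stand-in for one of the contact stores 120-1..120-m.
CONTACT_STORE = [Contact("Will", "Smith", "will.smith@example.com")]

def match_strength(word: str, contact: Contact) -> float:
    """First confidence score: strength of the match between the typed
    word and the contact's name; a partial match scores lower than a
    complete match."""
    w = word.lower()
    if w == f"{contact.first_name} {contact.last_name}".lower():
        return 1.0  # complete match
    if w in (contact.first_name.lower(), contact.last_name.lower()):
        return 0.6  # partial match (first or last name only)
    return 0.0

def name_likelihood(word: str, sentence: str) -> float:
    """Second confidence score: likelihood that the word is being used as
    a name, judged from the surrounding sentence.  Mid-sentence
    capitalization is a crude stand-in for a real context model."""
    mid_sentence = not sentence.strip().startswith(word)
    return 0.9 if word[:1].isupper() and mid_sentence else 0.2

def suggest_contact(word: str, sentence: str, t1: float = 0.5, t2: float = 0.5):
    """Offer a link option only when both scores satisfy their thresholds."""
    for contact in CONTACT_STORE:
        if (match_strength(word, contact) >= t1
                and name_likelihood(word, sentence) >= t2):
            return contact  # caller displays the "create link?" prompt
    return None  # no suggestion is surfaced

print(suggest_contact("Will", "Please chat with Will"))  # -> Will Smith
print(suggest_contact("will", "These things will"))      # -> None
```

The two calls at the end mirror the "Will Smith" example discussed below: the same partial name match succeeds in one sentence context and is suppressed in the other because the second score fails its threshold.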
At block210, the system automatically parses the received user interface input to identify a name included in the user interface input. In some implementations, the document editing system110parses the user interface input received at block205to determine whether or not any of the characters or words input into the document in the document editing system110are a name that matches a corresponding contact in one of the contact stores120-1, . . . ,120-m. In particular, in some implementations, the document editing system110may determine that one or more words in the user interface input received at block205corresponds to at least a first name or a last name of the corresponding contact in one of the contact stores120-1, . . . ,120-m. At block215, the system determines a confidence score associated with the name. In some implementations, the document editing system110determines a first confidence score associated with the name based upon a strength of the match between the name and the corresponding contact at block210. For example, the document editing system110may determine a relatively lower confidence score in the case of a partial match, and the document editing system110may determine a relatively higher confidence score in the case of a complete match. Still referring to block215, in some implementations, the document editing system110also determines a second confidence score based on a likelihood of the word that was identified at block210being a name. The document editing system110may determine the second confidence score based on the context of a sentence (or set of words) that surrounds the word that was identified at block210(i.e., based on a current sentence that is being typed). As an example, one of the contact stores120-1, . . . ,120-mmay include a contact “Will Smith”. In a first case, if the user interface input received at block205includes the phrase “Please chat with Will”, the word “Will” may be identified at block210as a name. At block215, the document editing system110may determine a relatively higher second confidence score based on a relatively higher likelihood of the word “Will” being a name. In a second case, if the user interface input received at block205includes the phrase “These things will”, the word “will” may be identified at block210as a name. However, at block215, the document editing system110may determine a relatively lower second confidence score, as compared to the first case, based on a relatively lower likelihood of the word “will” being a name. At block220, the system determines whether or not the confidence score satisfies a threshold. In some implementations, the document editing system110determines whether or not the first confidence score satisfies a first confidence score threshold and/or the second confidence score determined at block215satisfies a second confidence score threshold. If the document editing system110determines that the first confidence score and/or the second confidence score does not satisfy the respective threshold (e.g., there is not a sufficiently strong match between the user interface input received at block205and a contact in one of the contact stores120-1, . . . ,120-m, and/or there is not a sufficiently high likelihood of the word that was identified at block210being a name), then the flow returns to block205.
On the other hand, if the document editing system110determines that the first confidence score and the second confidence score satisfy the respective thresholds (e.g., there is a sufficiently strong match between the user interface input received at block205and a contact in one of the contact stores120-1, . . . ,120-m, and there is a sufficiently high likelihood of the word that was identified at block210being a name), then the flow proceeds to block225. Still referring to block220, continuing with the example above, given that one of the contact stores120-1, . . . ,120-mincludes the contact “Will Smith”, in both the first case and the second case, the document editing system110may determine that the first confidence score satisfies the first threshold based on there being a sufficiently strong match between the user interface input received at block205(“Please chat with Will” in the first case and “These things will” in the second case) and the contact “Will Smith” in one of the contact stores120-1, . . . ,120-m. The document editing system110may determine that the second confidence score satisfies the second threshold in the first case, based on a sufficiently high likelihood of the word “Will” being a name in the phrase “Please chat with Will”. However, the document editing system110may determine that the second confidence score does not satisfy the second threshold in the second case, based on there not being a sufficiently high likelihood of the word “will” being a name in the phrase “These things will”. At block225, in response to identifying the name included in the user interface input at block210and determining that the confidence score satisfies the threshold at block220, the system provides an option to create a link in the document between the name and the corresponding contact in one of the contact stores120-1, . . . ,120-m. In some implementations, the document editing system110provides the option to create the link in the document between the name and the corresponding contact in one of the contact stores120-1, . . . ,120-min response to both the first confidence score satisfying the first threshold and the second confidence score satisfying the second threshold at block220. If either or both of the first confidence score and the second confidence score do not satisfy their respective thresholds, then the document editing system110may avoid providing the option to create the link in the document. Still referring to block225, in some implementations, the document editing system110causes a dialog box, pop-up, tool tip, or overlay to appear on the user interface of the document editor on one of the clients140-1, . . . ,140-nthat provides the user with the option to create a link that may be selected by clicking or tapping on or adjacent to the name in the document. In other implementations, the document editing system110causes the name to appear in a different font and/or color (e.g., gray) on the user interface of the document editor on one of the clients140-1, . . . ,140-nto indicate the option to create the link in the document between the name and the corresponding contact in one of the contact stores120-1, . . . ,120-m. At block230, the system determines whether or not additional user interface input that indicates acceptance of the option to create the link in the document has been received. 
In some implementations, the document editing system110determines whether or not additional user interface input that indicates acceptance of the option to create the link in the document, provided at block225, has been received (e.g., via one or more of the user interface input devices of one of the clients140-1, . . . ,140-n). If the document editing system110determines that the additional user interface input that indicates acceptance of the option to create the link has not been received, then the flow returns to block205. On the other hand, if the document editing system110determines that the additional user interface input that indicates acceptance of the option to create the link has been received, then the flow proceeds to block235. Still referring to block230, in an example, the additional user interface input may include a selection of a button (e.g., “Yes” or “Accept” to accept the option to create a link, or “No” or “Decline” to decline the option to create a link) in a dialog box, pop-up, tool tip, or overlay provided at block225. In another example, the additional user interface input may include a keystroke or keystroke combination (e.g., tab to accept the option to create a link, or escape to decline the option to create a link) or a voice command (e.g., “Accept” or “Decline”). Still referring to block230, in some implementations, a user may have the option to not take any action in response to the option to create the link in the document between the name and the corresponding contact in one of the contact stores120-1, . . . ,120-mprovided at block225. In this case, the document editing system110may initially determine, subject to change based on later-received user interface input, that the user has declined the option to create the link between the name and the corresponding contact. At block235, in response to receiving the additional user interface input that indicates acceptance of the option to create the link in the document, the system automatically creates the link in the document between the name and the corresponding contact in the contact store. In some implementations, in response to receiving the additional user interface input at block230that indicates acceptance of the option to create the link in the document, the document editing system110automatically creates the link in the document between the name identified at block210and the corresponding contact in one of the contact stores120-1, . . . ,120-m. At block240, in response to automatically creating the link, the system determines whether or not the corresponding contact has permission to access the document. In some implementations, in response to automatically creating the link at block235, the document editing system110determines whether or not the corresponding contact has permission to access the document (e.g., view and/or edit), for example, based on a set of permissions which may, as an example, be stored with or in association with the document in one of the document corpuses130-1, . . . ,130-x(e.g., as metadata) or in another location. Still referring to block240, if the document editing system110determines that the corresponding contact does not have permission to access the document, then the flow proceeds to block245. On the other hand, if the document editing system110determines that the corresponding contact has permission to access the document, then the flow proceeds to block260.
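By way of non-limiting illustration, the permission check and sharing flow of blocks 240 through 260 might be sketched as follows; the document structure, field names, and prompt mechanism are invented for this sketch and do not correspond to any particular embodiment.

```python
# Hypothetical document record from a corpus 130-1..130-x; the permission
# metadata and its field names are invented for this sketch.
document = {
    "id": "doc-42",
    "permissions": {"alice@example.com": {"view", "edit"}},
}

def has_access(doc: dict, contact_email: str) -> bool:
    # Block 240: consult the permission set stored with the document.
    return contact_email in doc["permissions"]

def notify(contact_email: str, doc: dict) -> None:
    # Block 260: tell the contact they were linked (e.g., by email).
    print(f"(notify {contact_email}: you were linked in {doc['id']})")

def on_link_created(doc: dict, contact_email: str, prompt_user) -> None:
    """Blocks 240-260: after a link is created, offer to share if needed."""
    if has_access(doc, contact_email):
        notify(contact_email, doc)
        return
    # Block 245: surface a share option (dialog box, pop-up, tool tip, ...).
    if prompt_user(f"Share {doc['id']} with {contact_email}?"):
        # Block 255: extend the permission set to share the document.
        doc["permissions"][contact_email] = {"view", "edit"}
        notify(contact_email, doc)

# Example: the user accepts the share prompt for a contact without access.
on_link_created(document, "bob@example.com", lambda question: True)
```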
At block245, in response to determining that the corresponding contact does not have permission to access the document, the system provides an option to share the document with the corresponding contact. In some implementations, the document editing system110causes a dialog box, pop-up, tool tip, or overlay to appear on the user interface of the document editor on one of the clients140-1, . . . ,140-nthat provides the user with the option to share the document with the corresponding contact (i.e., the contact linked at block235). At block250, the system determines whether or not further user interface input that indicates acceptance of the option to share the document with the corresponding contact has been received. In some implementations, the document editing system110determines whether or not further user interface input that indicates acceptance of the option to share the document with the corresponding contact, provided at block245, has been received (e.g., via one or more of the user interface input devices of one of the clients140-1, . . . ,140-n). If the document editing system110determines that the further user interface input that indicates acceptance of the option to share the document with the corresponding contact has been received, then the flow proceeds to block255. On the other hand, if the document editing system110determines that the further user interface input that indicates acceptance of the option to share the document with the corresponding contact has not been received, then the flow proceeds to block265. Still referring to block250, in an example, the further user interface input may include a selection of a button (e.g., “Yes” or “Accept” to accept the option to share the document with the corresponding contact, or “No” or “Decline” to decline the option to share the document with the corresponding contact) in a dialog box, pop-up, tool tip, or overlay provided at block245. In another example, the further user interface input may include a keystroke or keystroke combination (e.g., tab to accept the option to share the document with the corresponding contact, or escape to decline the option to share the document with the corresponding contact) or a voice command (e.g., “Accept” or “Decline”). Still referring to block250, in some implementations, a user may have the option to not take any action in response to the option to share the document with the corresponding contact provided at block245. In this case, the document editing system110may initially determine, subject to change based on later-received user interface input, that the user has declined the option to share the document with the corresponding contact. The document editing system110may also provide a “don't show again” option, which may prevent another sharing prompt from showing up when a user creates a link between a name and a corresponding contact within the same session (e.g., if a user selects “don't show again” but then refreshes and creates another link between a name and a contact, the user may once again be shown the prompt to share based on being in a new session). At block255, in response to receiving the further user interface input, the system automatically shares the document with the corresponding contact. In some implementations, the document editing system110modifies the set of permissions which may, as an example, be stored with or in association with the document in one of the document corpuses130-1, . . .
,130-x(e.g., as metadata) or in another location, to include permission for the corresponding contact to view and/or edit the document. At block260, in response to automatically creating the link, the system sends a notification to the corresponding contact. In some implementations, the document editing system110notifies the corresponding contact that they have been linked in the document and/or notifies the corresponding contact that they have been granted permission to view and/or edit the document. For example, the document editing system110may cause an email to be sent to the corresponding contact to provide the notification, and/or the document editing system110may cause the notification to be presented to the user via a user interface of the document editor on one of the clients140-1, . . . ,140-n. In other implementations, the document editing system110may not send a notification to the corresponding contact. At block265, the system receives further user interface input associated with the link. In some implementations, the document editing system110receives user interface input (e.g., via one or more of the user interface input devices of one of the clients140-1, . . . ,140-n) indicating a selection of the link that was created at block235(e.g., by a user clicking or tapping on the link or hovering a cursor over the link). At block270, in response to receiving the further user interface input associated with the link, the system displays information associated with the corresponding contact in the contact store. In some implementations, in response to receiving the further user interface input associated with the link at block265, the document editing system110causes a card, dialog box, pop-up, tool tip, overlay, or information panel to appear on the user interface of the document editor on one of the clients140-1, . . . ,140-nthat displays information associated with the corresponding contact in one of the contact stores120-1, . . . ,120-m. In an example, the card, dialog box, pop-up, tool tip, overlay, or information panel may display contact information for the corresponding contact, such as one or more email addresses, physical addresses, telephone numbers, and/or user names, etc. The flow may then return to block205. FIG.3depicts a flowchart illustrating an example method300of automatically providing people suggestions in documents. For convenience, the operations of the method300are described with reference to a system that performs the operations. This system of method300includes one or more processors and/or other component(s) of various computer systems. Moreover, while operations of method300are shown in a particular order, this is not meant to be limiting. One or more operations may be reordered, omitted, or added. At block305, the system receives user interface input that corresponds to a document in a document editing application. In some implementations, a document editing application (document editor) rendered by a web browser may be displayed on one or more of the user interface output devices of one of the clients140-1, . . . ,140-n. The user may use one or more of the user interface input devices of one of the clients140-1, . . . ,140-nto provide the user interface input that is received by the document editing system110. 
In some implementations, the user interface input may include one or more characters or words input into a document in the document editing application via a keyboard (e.g., a physical keyboard or an on-screen keyboard) or a microphone of one of the clients140-1, . . . ,140-n. At block310, the system determines that one or more words in the received user interface input correspond to at least two names of a plurality of names in a contact directory. In some implementations, the document editing system110parses the user interface input received at block305to identify characters or words input into the document in the document editing system110at block305that correspond to (e.g., partially or fully match) two or more names of contacts from among a plurality of names of contacts in one or more of the contact stores120-1, . . . ,120-m. In particular, in some implementations, the document editing system110may determine that one or more words in the user interface input received at block305corresponds to a first name and/or a last name for two or more contacts in one or more of the contact stores120-1, . . . ,120-m. At block315, the system determines, for each of the at least two names, a confidence score. In some implementations, the document editing system110determines, for each of the at least two names in one or more of the contact stores120-1, . . . ,120-mto which the one or more words in the received user interface input correspond, determined at block310, a first confidence score associated with the name based upon a strength of the match between the one or more words in the received user interface input and the name. For example, the document editing system110may determine a relatively lower confidence score in the case of a partial match between the one or more words and a first name and/or last name of a contact in one or more of the contact stores120-1, . . . ,120-m, and the document editing system110may determine a relatively higher confidence score in the case of a complete match between the one or more words and a first name and a last name of a contact in one or more of the contact stores120-1, . . . ,120-m. Still referring to block315, in some implementations, the document editing system110also determines a second confidence score based on a likelihood of the word that was identified at block310being a name. The document editing system110may determine the second confidence score based on the context of a sentence (or set of words) that surrounds the word that was identified at block310(i.e., based on a current sentence that is being typed). At block320, the system selects a name in the contact directory from the at least two names based on the confidence scores. In some implementations, the document editing system110selects the name in one of the contact stores120-1, . . . ,120-mfor which the highest first confidence score was determined at block315. In the example described above in which the document editing system110determines the relatively lower first confidence score in the case of the partial match between the name and a first name and/or last name of a contact in one or more of the contact stores120-1, . . . ,120-m, and in which the document editing system110determines the relatively higher first confidence score in the case of the complete match between the name and a first name and a last name of a contact in one or more of the contact stores120-1, . . . ,120-m, at block320, the document editing system110may select the name in one of the contact stores120-1, . . .
,120-mthat is the complete match. In some implementations, selecting the name in the contact directory is in response to at least one of the first confidence scores satisfying a first threshold and the second confidence score satisfying a second threshold. At block325, the system provides an option to create a link in the document between the one or more words and the contact in the contact directory that is associated with the name selected at block320. In some implementations, the document editing system110causes a dialog box, pop-up, tool tip, or overlay to appear on the user interface of the document editor on one of the clients140-1, . . . ,140-nthat provides the user with the option to create a link that may be selected by clicking or tapping on or adjacent to the name in the document. In other implementations, the document editing system110causes the one or more words to appear in a different font and/or color (e.g., gray) on the user interface of the document editor on one of the clients140-1, . . . ,140-nto indicate an option to create a link in the document between the one or more words and the contact. At block330, the system determines whether or not additional user interface input that indicates acceptance of the option to create the link in the document has been received. In some implementations, the document editing system110determines whether or not additional user interface input that indicates acceptance of the option to create the link in the document, provided at block325, has been received (e.g., via one or more of the user interface input devices of one of the clients140-1, . . . ,140-n). If the document editing system110determines that the additional user interface input that indicates acceptance of the option to create the link has not been received, then the flow returns to block305. On the other hand, if the document editing system110determines that the additional user interface input that indicates acceptance of the option to create the link has been received, then the flow proceeds to block335. At block335, in response to receiving the additional user interface input that indicates acceptance of the option to create the link in the document, the system automatically creates a link in the document between the one or more words and a contact in the contact directory that is associated with the selected name. In some implementations, in response to receiving the additional user interface input at block330that indicates acceptance of the option to create the link in the document, the document editing system110automatically creates the link in the document between the one or more words identified at block310and the contact in one of the contact stores120-1, . . . ,120-mthat is associated with the name selected at block320. In some implementations, automatically creating the link in the document is in response to at least one of the confidence scores satisfying a threshold. At block340, in response to automatically creating the link, the system determines whether or not the contact has permission to access the document. In some implementations, in response to automatically creating the link at block335, the document editing system110determines whether or not the contact in one of the contact stores120-1, . . . 
,120-mthat is associated with the name selected at block320has permission to access the document (e.g., view and/or edit), for example, based on a set of permissions which may, as an example, be stored with or in association with the document in one of the document corpuses130-1, . . . ,130-x(e.g., as metadata) or in another location. Still referring to block340, if the document editing system110determines that the contact does not have permission to access the document, then the flow proceeds to block345. On the other hand, if the document editing system110determines that the contact has permission to access the document, then the flow proceeds to block360. At block345, in response to determining that the contact does not have permission to access the document, the system provides an option to share the document with the contact. In some implementations, the document editing system110causes a dialog box, pop-up, tool tip, or overlay to appear on the user interface of the document editor on one of the clients140-1, . . . ,140-nthat provides the user with the option to share the document with the contact in one of the contact stores120-1, . . . ,120-mthat is linked at block335and associated with the name selected at block320. At block350, the system determines whether or not further user interface input that indicates acceptance of the option to share the document with the contact has been received. In some implementations, the document editing system110determines whether or not further user interface input that indicates acceptance of the option to share the document with the contact in one of the contact stores120-1, . . . ,120-m, that is linked at block335and associated with the name selected at block320, has been received (e.g., via one or more of the user interface input devices of one of the clients140-1, . . . ,140-n). If the document editing system110determines that the further user interface input that indicates acceptance of the option to share the document with the contact has been received, then the flow proceeds to block355. On the other hand, if the document editing system110determines that the further user interface input that indicates acceptance of the option to share the document with the contact has not been received, then the flow proceeds to block360. Still referring to block350, in an example, the further user interface input may include a selection of a button (e.g., “Yes” or “Accept” to accept the option to share the document with the contact, or “No” or “Decline” to decline the option to share the document with the contact) in a dialog box, pop-up, tool tip, or overlay provided at block345. In another example, the further user interface input may include a keystroke or keystroke combination (e.g., tab to accept the option to share the document with the contact, or escape to decline the option to share the document with the contact) or a voice command (e.g., “Accept” or “Decline”). At block355, in response to receiving the further user interface input that indicates acceptance of the option to share the document with the contact, the system automatically shares the document with the contact. In some implementations, the document editing system110modifies the set of permissions which may, as an example, be stored with or in association with the document in one of the document corpuses130-1, . . . ,130-x(e.g., as metadata) or in another location, to include permission for the contact in one of the contact stores120-1, . . .
,120-mthat is linked at block335and associated with the name selected at block320to view and/or edit the document. At block360, the system receives further user interface input associated with the link. In some implementations, the document editing system110receives further user interface input (e.g., via one or more of the user interface input devices of one of the clients140-1, . . . ,140-n) indicating a selection of the link that was created at block335(e.g., by a user clicking or tapping on the link or hovering a cursor over the link). At block365, in response to receiving the further user interface input associated with the link, the system displays information associated with the contact in the contact directory. In some implementations, in response to receiving the further user interface input associated with the link at block360, the document editing system110causes a dialog box, pop-up, tool tip, overlay, or information panel to appear on the user interface of the document editor on one of the clients140-1, . . . ,140-nthat displays information associated with the contact in one of the contact stores120-1, . . . ,120-m. In an example, the dialog box, pop-up, tool tip, overlay, or information panel may display contact information for the corresponding contact, such as one or more email addresses, physical addresses, telephone numbers, and/or user names, etc. The flow may then return to block305. FIGS.4A and4Bdepict an example of how people suggestions may be provided by a document editor system110that causes a document editor to be displayed on one or more of the user interface output devices of one of the clients140-1, . . . ,140-n. The scenario ofFIGS.4A and4Bis for illustrative purposes only. InFIGS.4A and4B, a graphical user interface (“GUI”)400is depicted that may be used by a document editor application user to view, create, or edit a document410(e.g., “Untitled document”). As shown inFIG.4A, in some implementations, in response to a user inputting the words “Person One” into the document410in the GUI400of the document editor application, the document editor system110automatically parses the received user interface input to identify a name420(e.g., “Person One”) that is included in the user interface input. In response to identifying the name420included in the user interface input, the document editor system110displays a prompt430in the GUI400that provides an option to create a link in the document between the name and the corresponding contact in one of the contact stores120-1, . . . ,120-m. In response to receiving the additional user interface input that indicates acceptance of the option to create the link in the document, the document editor system110automatically creates the link in the document between the name420and the corresponding contact in one of the contact stores120-1, . . . ,120-m. In other implementations, in response to a user inputting the word “Person” into the document410in the GUI400of the document editor application, the document editor system110may automatically parse the received user interface input to identify the first name “Person” that is included in the user interface input. In response to identifying the first name “Person” included in the user interface input, the document editor system110may display the prompt430in the GUI400that provides the option to create the link in the document between the first name “Person” and a corresponding contact “Person One” in one of the contact stores120-1, . . . ,120-m. 
In response to receiving the additional user interface input that indicates acceptance of the option to create the link in the document, the document editor system110automatically completes the name to “Person One” and automatically creates the link in the document between the name “Person One” and the corresponding contact in one of the contact stores120-1, . . . ,120-m. As shown inFIG.4B, in some implementations, in response to receiving further user interface input associated with a link450between the name420and the corresponding contact in one of the contact stores120-1, . . . ,120-m(e.g., by a user clicking or tapping on the link450or hovering over the link450), the document editor system110displays, in the GUI400, an information card450that includes information associated with the contact in the contact store that is associated with the name420. FIG.5is a block diagram of an example computing device510that may optionally be utilized to perform one or more aspects of techniques described herein. Computing device510typically includes at least one processor514which communicates with a number of peripheral devices via bus subsystem512. These peripheral devices may include a storage subsystem524, including, for example, a memory subsystem525and a file storage subsystem526, user interface output devices520, user interface input devices522, and a network interface subsystem516. The input and output devices allow user interaction with computing device510. Network interface subsystem516provides an interface to outside networks and is coupled to corresponding interface devices in other computing devices. User interface input devices522may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computing device510or onto a communication network. User interface output devices520may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computing device510to the user or to another machine or computing device. Storage subsystem524stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem524may include the logic to perform selected aspects of the methods ofFIGS.2and3, as well as to implement various components depicted inFIG.1. These software modules are generally executed by processor514alone or in combination with other processors. The memory subsystem525included in the storage subsystem524can include a number of memories including a main random access memory (RAM)530for storage of instructions and data during program execution and a read only memory (ROM)532in which fixed instructions are stored. 
A file storage subsystem526can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem526in the storage subsystem524, or in other machines accessible by the processor(s)514. Bus subsystem512provides a mechanism for letting the various components and subsystems of computing device510communicate with each other as intended. Although bus subsystem512is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses. Computing device510can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device510depicted inFIG.5is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computing device510are possible having more or fewer components than the computing device depicted inFIG.5. While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure. | 44,119
11861297 | DETAILED DESCRIPTION This detailed description discusses features and advantages of systems and methods related to smart interfaces with facilitated input and mistake recovery in relation to certain described embodiments, some of which are illustrated in the figures. Although several embodiments, examples, and illustrations are disclosed below, it will be understood by those of ordinary skill in the art that the inventions described herein extend beyond the specifically disclosed embodiments, examples, and illustrations and include other uses of the inventions and obvious modifications and equivalents thereof. Embodiments of the inventions are described with reference to the accompanying figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner simply because it is being used in conjunction with a detailed description of certain specific embodiments of the inventions. In addition, embodiments of the inventions can comprise several novel features and no single feature is solely responsible for its desirable attributes or is essential to practicing the inventions herein described. Chapter 1: Introduction How the Present Specification is Organized For the convenience of the reader, the present specification is organized into numbered chapters. In some cases, where the present specification refers to a topic that is explained in more detail at a location that is many pages distant, it may reference the number of the chapter where the topic is explained. The chapters within the present specification generally include section headings that are not numbered. For the most part, chapter titles are intended to broadly summarize the content of the chapters, and section headings are intended to broadly summarize the content of the paragraphs that follow the section headings. However, such chapter titles and section headings are intended only for the convenience of the reader, and are not intended to characterize the invention in a limiting way. For example, the chapter title that says “Alterable Formatting” does not limit the invention to embodiments that have alterable formatting functionality as described in that section. In fact, the chapter title that says “Alterable Formatting” is not even intended to characterize the contents of that chapter itself in a limiting way: certain of the formatting behaviors that are described in that chapter may be quite useful even in an embodiment that includes no alteration functionality whatsoever. The present specification may be divided into three large sections, along with a smaller amount of additional material at the beginning and at the end. Part I consists of Chapters 2 to 15, which explain various interface features that may facilitate altering “alterable decisions” in order to efficiently recover from mistakes. Part II consists of Chapters 16 to 23, which explain interface features that may tend to prevent the interface from yielding undesired results and interface features that may help educate the user how to achieve desired results more efficiently. Part III consists of Chapters 24 to 52, which explain interface improvements that are applicable to specific situations, and which explain various circumstances in which the interface may make alterable decisions; for example, Chapter 36 explains interface improvements that are applicable when editing a spreadsheet.
The preceding paragraph is intended to summarize the most prominent topic within each of the three large sections of the patent, but is not intended to characterize the invention in a limiting way. Each large section may include descriptions of additional features that do not fit the summaries provided in the preceding paragraph, and such features may be advantageous even in an embodiment that has none of the functionality mentioned in the preceding paragraph. For example, Part I includes explanations of improvements to undo functionality that may be advantageous even in an embodiment that has no functionality pertaining to alterable decisions or to anything else that was mentioned in the preceding paragraph. Each large section itself begins with a summary of the contents of that large section; such summaries may be more detailed and comprehensive than the brief summaries provided above, but still are to be construed in a nonlimiting way. In order to understand the scope of a particular section, it is necessary to actually read the section itself, not just a summary. An Embodiment May be Described as a Device, Method, or Interface In some embodiments, the invention that is disclosed herein comprises a computing device that has an improved user interface. Phrased another way, the invention that is disclosed herein comprises an improved method by which a computing device responds to user input actions. Phrased yet another way, the invention that is disclosed herein comprises an improved user interface for a computing device. Various ways of characterizing some embodiments of the invention may simultaneously be equally valid. A programmer who reads the present specification may observe that the present specification describes an improved method by which a computing device will respond to user input actions; if the programmer then programs a computing device accordingly, then the change that the programmer makes to the computing device will consist of an improvement of the user interface of the computing device; a user may then correctly perceive that the result is an improved computing device. Embodiments of the invention will generally be described herein in terms of a detailed explanation of how “the interface” will behave in response to various user input actions in certain embodiments. Sufficient information about such interface behaviors will be provided that those of ordinary skill in the art will be able to configure a computing device to follow methods that cause its user interface to behave as described herein. Various Embodiments Are Possible It is possible for an embodiment to include nearly all the new interface behaviors that are disclosed herein, or to include only a few. Whenever the present specification begins a sentence or paragraph by saying “In an embodiment,” this does not necessarily mean that the embodiment whose behavior is described in that sentence or paragraph does or does not include any of the other interface behaviors described elsewhere. However, in some cases the present specification will describe two or more distinct and incompatible approaches to a particular situation. When the present specification begins a sentence or paragraph by saying, “In an alternative embodiment,” in some cases this may mean that the behavior that is described for that alternative embodiment is incompatible with a behavior described prior to that sentence or paragraph. 
In such cases, this does not necessarily mean that the alternative embodiment is inferior to any such previously described embodiment: it only means that it may be incompatible. A variety of computing devices may be improved by integrating various interface behaviors that are disclosed herein. For example, an embodiment may be a desktop computer, smartphone, or graphing calculator. However, some of the new interface behaviors that are disclosed herein may be more readily applicable to some computing devices than to others. For example, if a computing device has a touchscreen or mouse or some other means of quickly indicating a location on the display of the computing device, it will be relatively straightforward to implement a feature that is invoked by means of a “touch gesture” that requires the user to indicate a specific location on the display of the computing device. As another example, the present specification may suggest that a particular interface behavior may be applicable “in an embodiment that is a computing device that has a mouse” in order to indicate that the description of that particular interface behavior makes reference to a mouse, and that the behavior may be more readily implemented on a computing device that has a mouse. (Such phrasing emphasizes the configuration of the interface hardware for a particular embodiment, but the phrase “in an embodiment that is an interface that has mouse functionality” would have a substantially identical meaning, since it is equally valid to characterize the invention as an improved interface for a computing device as well as characterizing it as a computing device with an improved interface.) Internal Implementation Variations do not Distinguish Features Generally, an interface behavior may be conceived of as a mapping of circumstances and sequences of user input actions to interface responses. In other words, anyone who knows exactly how an interface will respond to every possible sequence of user input actions in all circumstances has a complete understanding of all the interface's behaviors. If two distinct interface algorithms are guaranteed to always yield exactly the same results in response to the same sequence of user input actions in all circumstances, then these two algorithms implement the exact same interface behavior. Internal implementation details do not distinguish one interface behavior from another. For example, if one interface algorithm responds to the backspace key by deleting the character to the left of the input cursor, and another interface algorithm responds to the backspace key by first moving the input cursor past the character to its left and then immediately deleting the character that is now to the right of the input cursor, then these two algorithms implement the exact same interface behavior: they work differently, but yield the same results. Generally, interface behaviors that make it easier for users to achieve desired results faster may nevertheless be relatively complex and difficult to understand. For example, a user may find that prior art autocorrection functionality makes it easier to achieve desired results faster even if the algorithm that determines the behavior of the autocorrection functionality is too complex for the user to understand. The present specification describes in detail various interface behaviors that may enable users to achieve desired results faster. Certain of these interface behaviors are relatively complex. 
In many cases, in order to fully define a complex interface behavior, the present specification will specify a particular means by which a particular embodiment will determine how to respond to a sequence of user input actions. In other words, the present specification will often specify a complex interface behavior by means of explaining how to construct one particular algorithm that implements the interface behavior. However, those of ordinary skill in the art will understand that it is always possible to create various equivalent algorithms that implement the exact same interface behavior. If a programmer implements an interface feature that will always respond to a sequence of user input actions in exactly the same way as an interface feature that is described herein, but the programmer implements such an interface feature by means of an algorithm that operates in an entirely different way than is suggested herein, then those of ordinary skill in the art will understand that the feature that the programmer has implemented still constitutes exactly the same feature as the one that is described herein. Furthermore, those of ordinary skill in the art will understand that it is possible to create various algorithms that implement interface features that are mostly equivalent to interface features that are described herein, but that behave slightly differently in rare cases. For example, the present specification describes an embodiment in which a key is configured so that in certain circumstances if a user actuates that key then the interface will move a plus sign that is immediately after a mathematical structure into the mathematical structure. A programmer may instead create an embodiment such that if a user actuates the same key in the same circumstances then the interface will first insert a plus sign within the mathematical structure and will then delete the plus sign that is immediately after the mathematical structure. If a programmer were to do so, then the interface feature the programmer thus implemented would always appear to respond to a user's input actions in exactly the same way as if it moved the plus sign from immediately after the mathematical structure into the mathematical structure, and those of ordinary skill in the art would understand that such use of a distinct yet equivalent algorithm would not distinguish the feature that the programmer thus implemented from the feature that is described herein. Improvements May be Implemented in Various Software Domains Those of ordinary skill in the art will understand that various improvements that are described herein may be implemented in the interface of an individual application, or in the interface functionality of a code library, or in the interface layer of the operating system of a computing device, or in some other software domain. Certain specific improvements that are described herein are applicable when editing text documents, or when editing spreadsheets, or in other specific situations that typically occur within the context of an individual application. Such improvements may of course be implemented within the interface of a particular application, so that a computing device will embody the invention whenever that application is installed on the computing device. 
Improvements May be Implemented in Various Software Domains

Those of ordinary skill in the art will understand that various improvements that are described herein may be implemented in the interface of an individual application, or in the interface functionality of a code library, or in the interface layer of the operating system of a computing device, or in some other software domain. Certain specific improvements that are described herein are applicable when editing text documents, or when editing spreadsheets, or in other specific situations that typically occur within the context of an individual application. Such improvements may of course be implemented within the interface of a particular application, so that a computing device will embody the invention whenever that application is installed on the computing device.

However, even improvements such as these may instead be implemented in the interface layer of the operating system of a computing device, within the library of functionality that is available to applications that run on the computing device, so that various applications can easily make use of the improvements. When improvements are implemented in the operating system, in some cases this may enable certain applications to automatically take advantage of such improvements, without necessarily requiring the programmers of such applications to make an explicit effort to integrate these improvements. For example, if the autocorrection functionality within the interface layer of the operating system of a computing device is replaced with improved autocorrection functionality that is described herein, then any application that uses such autocorrection functionality may automatically be improved. It may even be advantageous for relatively specialized functionality to be implemented within the library of functionality that is available to applications that run on a computing device; for example, it may be advantageous for a tablet computer to have an operating system that includes math input interface functionality with math input interface improvements that are described herein, so that programmers of applications for such a tablet computer can easily take advantage of such math input interface functionality.

Some of the improvements that are described herein pertain to functionality that is typically the responsibility of the interface layer of the operating system of a computing device, such as functionality for moving the mouse cursor, for switching the keyboard focus from one application to another, for superimposing a window over an application window, and so forth. Such improvements may of course be implemented within the interface layer of the operating system of a computing device, so that the computing device will embody the invention whenever that operating system is installed on the computing device, regardless of what applications are installed. However, many operating systems include functionality that enables applications to essentially modify the operation of the operating system itself, so that even functionality that is the responsibility of the interface layer of the operating system of the computing device can be improved by installing an appropriately configured application. For example, installing a prior art speech recognition application on a computer with a Microsoft Windows operating system may make it possible for a user to move the mouse cursor by means of spoken commands, as though the operating system functionality pertaining to the mouse cursor had itself been modified; likewise, those of ordinary skill in the art will understand that an application can be created such that installing the application on a computer with a Microsoft Windows operating system may cause the computer to embody the present invention in essentially the same way as if certain improvements were implemented within the interface layer of Microsoft Windows itself.

Putting Features in Perspective

The present specification describes numerous features. An embodiment of the invention may have significant advantages over prior art even if it has only one or two of these features.
However, many of these features may have a great deal of synergy with one another, so that an embodiment that has several of the features that are described herein may have advantages that exceed the sum of the advantages of the individual features.

In various places throughout the present specification, in order to put various features in perspective, the present specification will include brief discussions of how certain features compare to prior art, or how the features relate to other features that are described herein, or both. Such a discussion may consist of a single paragraph at the end of a section, or an entire section within a chapter. Such a discussion may point out how a certain feature or group of features has advantages over prior art in its own right, may explain synergistic advantages that a certain feature has in conjunction with other features, may suggest ways to ensure that an embodiment that lacks a certain feature still retains most of the advantages of its other features, and so forth. At the end of the present specification there is a longer discussion of additional possible advantages of some embodiments. Such discussions are intended to be helpful, but are not intended as exhaustive explanations of advantages, synergies, or interrelationships of features; if a particular feature is never specifically mentioned in any such discussion, then this does not mean that the feature does not have any synergy with other features.

Keyboards

In some embodiments, the computing device has a hardware keyboard. In particular, in some embodiments the computing device has a keyboard with an individual physical key that serves as an alteration key as described in the following chapter. FIG. 1B shows a computer keyboard featuring an alteration key labeled "Oops!" (100) as well as featuring an F12 key (102) which is also referenced in FIG. 25B. FIG. 1C shows a calculator keypad featuring an alteration key labeled "Oops!" (100), an ANS key (103), an Enter key (104), a Log key (105), a "Right" arrow key (106), a "Left" arrow key (107), a "±" key (108) that may serve as a fraction initialization key as defined in Chapter 44, a "−" key (109), and a "+" key (110).

Key Actuations

For purposes of the following discussion of key actuations, an "invocable function" is an interface function that a user can invoke that takes no parameters. In other words, an invocable function is an interface function that can be assigned to a single key, without any need for the user to supply additional information before the function is invoked. For example, many application interfaces have a Print function such that invoking the function opens a Print dialog box; this is an invocable function that is often assigned to the key combination Ctrl-P, and can often be invoked by other means as well. In most cases, on most computing devices, most of the invocable functions that are referred to in the present specification can be invoked by pressing a single physical key or a single depiction of a key on a virtual keyboard.
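As an illustrative sketch only (in Python, with names invented for this example), an invocable function may be modeled as a zero-argument function, and a key as a binding from a key designation to such a function.

def open_print_dialog():
    # An invocable function: it takes no parameters, so it can be assigned
    # to a single key with no further information from the user.
    print("Print dialog box opened")

key_bindings = {"Ctrl-P": open_print_dialog}

def actuate(key):
    # Actuating a key simply invokes the key's invocable function.
    key_bindings[key]()

actuate("Ctrl-P")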
In some cases, however, on certain computing devices, certain invocable functions that are referred to in the present specification cannot be invoked by pressing a single physical key or a single depiction of a key, but must instead be accessed by some other means. For example, a TI-84 Plus graphing calculator does not have a dedicated square root key, so on a TI-84 the invocable function of inputting a square root symbol cannot be invoked by pressing a single key, but is instead invoked by means of first pressing the key labeled 2nd and then pressing the key labeled x².

Generally, for the sake of simplicity, the present specification will not refer to invocations of specific invocable functions, but will instead refer to "actuations" of specific "keys." Those of ordinary skill in the art will understand that any reference to "keys" or "keystrokes" in the present specification applies equally to other means for invoking specific invocable functions; in most cases, the details of how specific invocable functions are invoked are not essential to the present invention. Thus, in the present specification, the word "key" may refer to any means whereby the user of an interface can invoke a single specific invocable function, without regard to whether such means actually consists of pressing a single physical key on a keyboard. Likewise, "actuating a key" refers to invoking such a function, "having a key" refers to having a means for invoking such a function, "keystroke" refers to an invocation of such a function, and so forth.

For example, on a typical desktop computer keyboard, pressing the 2 key while holding down the Shift key constitutes "actuating the @ key" because this combination of actions invokes the function that causes the character @ to be entered into the user's input. Such a keyboard may be said to "have an @ key." Similarly, in many computer software applications, pressing the S key while holding down the Ctrl key constitutes "actuating the Save key," because this combination of actions invokes the function that causes the software to save the file that the user is currently editing. Likewise, using a computer mouse to open a menu of options and choose the Save option constitutes "actuating the Save key" according to the definition used in the present specification. Similarly, many software applications have a toolbar that can be displayed that includes a depiction of a Save button, and clicking on such a depiction of a button or otherwise actuating such a depiction of a button constitutes "actuating the Save key" according to the definition used in the present specification. On certain graphing calculators, the sequence of actions consisting of first pressing a key labeled 2nd and then pressing a key labeled x² has the effect of entering a square root into the user's input; thus, such a sequence of actions constitutes "actuating the square root key" according to the definition used in the present specification, and such a calculator may be said to "have a square root key."

On certain cell phones that have text messaging capabilities and do not have touchscreens, in certain circumstances it is possible to type the letter a by pressing a key labeled 2; thus, pressing the 2 key in such circumstances constitutes "actuating the a key" according to the definition used in the present specification. On certain such cell phones, in certain circumstances, it is possible to type the letter b by pressing the key that is labeled 2 twice rapidly: the first time the key is pressed, the letter a appears, and the second time the key is pressed, the letter a is replaced with the letter b. From the user's perspective, pressing the 2 key twice rapidly is the means of typing the letter b; when the user types the letter b, the temporary appearance of the letter a serves no purpose that the user then desires other than enabling the user to access the letter b, so pressing the 2 key twice rapidly is, from the user's perspective, equivalent to "actuating the b key" according to the definition used in the present specification.
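The definitions above may be illustrated with the following sketch (in Python; the sequences and names are invented for this illustration). Several physically distinct action sequences can be bound to one invocable function, and performing any one of them constitutes a single actuation of the same "key":

def enter_square_root():
    # The single invocable function that each sequence below invokes.
    print("square root symbol entered into the user's input")

actuation_sequences = {
    ("sqrt",): enter_square_root,        # a dedicated square root key, if present
    ("2nd", "x²"): enter_square_root,    # two keystrokes on certain calculators
}

def perform(sequence):
    # Whatever the physical means, the result is one actuation of the
    # square root key according to the definition used herein.
    actuation_sequences[tuple(sequence)]()

perform(["2nd", "x²"])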
Several examples have now been given in which a user performs more than one action in order to "actuate a key." In each of these examples, the first action that the user performs—such as holding down the Shift key or Ctrl key on a computer keyboard, or opening a menu of options, or pressing the 2nd key on a graphing calculator—has no effect that the user desires other than enabling the user to subsequently perform an action that then causes the invocation of the function that the user desires to invoke. Thus, in each case, although the user may perform a plurality of actions, the user has in view the invocation of a single function.

If a user performs a sequence of actions that has the same result as if the user had invoked a single function, but this sequence of actions consists of a plurality of independent actions each of which causes the invocation of an independent function that yields a portion of the desired result, then this sequence of actions is not a single key actuation according to the definition used in the present specification. For example, if an interface has a cosine key such that actuating the cosine key causes the three letters "cos" to be inserted into the user's input, but a user of the interface instead sequentially types the same three letters by means of actuating three individual letter keys, then the user will not be said to have "actuated the cosine key." As another example, a TI-84 Plus graphing calculator has a cubed function that causes an exponent of 3 to be entered into the user's input, and a user can access this function as the #3 option within the calculator's MATH menu, so if a user presses the calculator's MATH key and then presses the 3 key, this sequence of actions constitutes "actuating the cubed key" according to the definition used in the present specification; but if a user instead enters an exponent of 3 by pressing the ^ key and then pressing the 3 key, this latter sequence of actions does not constitute "actuating the cubed key."

(In a user interface that has Undo functionality, a single invocation of the Undo function will typically reverse a single invocation of an invocable function other than a cursor navigation function; thus, the intuitions of an experienced user as to how an Undo function will operate can serve as an approximate guide to what constitutes a single invocation of an invocable function and thus constitutes a single "key actuation" according to the definition used in the present specification. For example, a user of a TI-84 Plus graphing calculator can press the calculator's MATH key to cause the MATH menu to appear and then press the 3 key to choose the cubed function from the MATH menu, thus causing the MATH menu to disappear and an exponent of 3 to be entered into the user's input; if an experienced user of computing devices were then asked what would happen if an Undo function were to be invoked a single time, the user would answer that the exponent of 3 would be removed from the input that was entered.
Such a user would not answer that the exponent of 3 would be removed and the MATH menu would reappear; thus, the user's answer indicates that the user would expect a single invocation of an Undo function to undo the entire cumulative effect of pressing the MATH key and then pressing the 3 key, and not to undo only the effect of pressing the 3 key. The user's intuition thus confirms that such a sequence of two keystrokes may be considered to constitute a single invocation of an invocable function.)

Because a "key actuation" as defined above may actually involve a sequence of several actions, reducing the number of key actuations required to accomplish a particular editing task may not always make it easier for a user to perform the task: the ease of accomplishing a task depends not only on how many key actuations are required but also on how easy it is to perform those key actuations. Nevertheless, reducing the number of key actuations required to accomplish various editing tasks will generally tend to make it easier to perform those tasks. Furthermore, reducing the number of actuations of distinct keys required to accomplish various editing tasks will generally tend to make it easier to accomplish those tasks: double-clicking a single key is usually easier than finding and pressing two distinct keys. The present specification will explain specific means by which specific editing tasks can be made more efficient by reducing the number of key actuations or reducing the number of distinct key actuations that are needed in order to accomplish the tasks.

In the present specification, a "first actuation" of a particular key is any actuation of that key such that the most recent user action prior to that actuation was not also an actuation of that key. In other words, any actuation of a key is a first actuation of that key if it is not at least the second consecutive actuation of that key.
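The definition of a "first actuation" may be illustrated by the following sketch (in Python; illustrative only):

def is_first_actuation(previous_actions, key):
    # An actuation of a key is a "first actuation" unless the most recent
    # user action was also an actuation of that same key.
    return len(previous_actions) == 0 or previous_actions[-1] != key

history = []
for action in ["a", "Oops!", "Oops!", "b", "Oops!"]:
    if action == "Oops!":
        kind = "first" if is_first_actuation(history, "Oops!") else "consecutive"
        print(kind, "actuation of the alteration key")
    history.append(action)
# Prints: first, consecutive, first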
Naming of Keys

A "key," as defined above, may serve as a means to invoke a function that determines based on current circumstances which other function to invoke from among a plurality of other functions. When the present specification refers to a particular key, the present specification will typically name the key in terms of the function it invokes; however, because the function that a key invokes may at times be a means for invoking some other particular function, such a key may at times be named in terms of another function it may invoke, and thus a single key may have more than one name in the present specification. For example, a typical desktop computer keyboard has a physical key that could be called the "numeric keypad 2 key" which invokes the same function as a "2 key" when the keyboard is operating in Num Lock mode and invokes the same function as a "down arrow key" at other times; this "numeric keypad 2 key" thus may also be referred to as a "2 key" or as a "down arrow key" under the appropriate circumstances. Under circumstances where the function that a key invokes is currently a means for invoking some other particular function, that key may be named in terms of the other function it currently invokes; thus, the "numeric keypad 2 key" may be referred to as the "2 key" when the keyboard is operating in Num Lock mode. Under such circumstances, however, a key that serves a simpler function will not be named in terms of the more complex function that currently yields the same result. For example, a "2 key" that never serves any purpose other than entry of a 2 character will not be referred to as a "numeric keypad 2 key" even under circumstances where a "numeric keypad 2 key" would happen to serve the purpose of entry of a 2 character. As another example, it is possible to devise circumstances in a typical text document editing program such that either invoking the Undo function or invoking the Paste function will result in the entry of a 2 character, but a "2 key" that never serves any purpose other than entry of a 2 character will not be referred to as an "Undo key" or a "Paste key" even under such circumstances.

Touch Gestures

The details of how a particular location on the screen of a computing device is indicated by the user are not generally essential, and will not always be explicitly specified in the present specification. Thus, in the present specification, except as otherwise specified, the word "touch" may refer to any means whereby the user of an interface indicates a specific location on the screen of a computing device, without regard to whether or not said means actually consists of touching the screen with a finger or stylus. For example, on a computing device that does not have a touch-sensitive display but that does have a mouse, positioning the mouse cursor over a highlighted region on the screen of a computing device and clicking the mouse button may constitute "touching a highlighted region."

In the present specification, whenever an interface action is described that is invoked by a touch of a specific location, it is to be understood that various alternative embodiments may require various means of "touch" in order to invoke the specified interface action. For example, if an interface action is described that is invoked by "touching a highlighted region," then in one alternative embodiment the interface action may be invoked only if the user rapidly double-clicks the mouse button when the mouse cursor is on the highlighted region, and in another alternative embodiment the interface action may be invoked only if the user presses the highlighted region with a finger and holds the touch for at least one second, and so forth. Various such alternative touch gestures will be obvious to those of ordinary skill in the art.

Embodiment-Specific Terminology

In the discussion of specific features of certain embodiments of the present invention, many terms are defined. In some cases, a term is defined that is said to have a certain meaning "in an embodiment" or is said to pertain to a certain circumstance "in an embodiment," and the specification makes use of that term when describing a feature that is present in certain embodiments. In such cases, for purposes of any feature that is described using that term, that meaning of the term is applicable in at least one embodiment, but it is not necessarily applicable in all embodiments. For example, in Chapter 11, the present specification details certain functionality pertaining to "significant interface interventions" in certain embodiments.
Chapter 11 also says, "In an embodiment, for purposes of the following paragraphs, a 'significant interface intervention' is an alterable interface intervention that does not currently have its non-intervention option selected, and that has a probability of alteration that is above a certain 'undoable intervention threshold.'" It is to be understood that it may also be possible to create an alternative embodiment that still includes functionality pertaining to significant interface interventions, but uses a different definition for "significant interface interventions." (In fact, one such alternative embodiment is also described in Chapter 11: "In an alternative embodiment, a 'significant interface intervention' is any alterable interface intervention that does not currently have its non-intervention option selected, regardless of its probability of alteration.")

Example Device or System

FIG. 1A is a block diagram of an embodiment of a device 10 including an interface as described herein. The device 10 can be any type of electronic or computing device including, for example, a desktop computer, a laptop, a tablet, a smartphone, a voice computer, or a calculator, among others. The interface allows a user to interact with the device 10. For example, the interface allows a user to input information into the device 10 and view or receive information from the device 10.

In the illustrated embodiment, the device 10 includes a processor 12 and a computer readable medium 14. The computer readable medium 14 can include instructions that, when executed by the processor 12, cause the device 10 to implement one or more of the many interface functions described herein. The computer readable medium 14 can also be used to store information, as necessary, for use by the interface.

As shown, the device 10 includes an input device 16. The input device 16 allows the user to input information into and interact with the device 10. Many types of input devices 16 are possible, including keyboards, mice, touchscreens, microphones (for voice commands), etc. The input device 16 can include an alteration key 18. The alteration key 18 can be used, for example, to alter an alterable decision made by the interface as described below, as well as for additional functionality as described throughout this application. In some embodiments, the alteration key 18 is a dedicated key or button. In some embodiments, the alteration key 18 is a soft key implemented on a touchscreen. In some embodiments, the alteration key 18 can be implemented by having a user click or press on an alterable decision to alter it. In some embodiments, the alteration key 18 can be a voice command. Additional examples of the alteration key 18 are found throughout this application.

The device 10 also includes an output device 20. The output device 20 can be configured to display or otherwise communicate information to a user. For example, the output device 20 can be a monitor or screen. In some embodiments, the output device 20 can comprise a speaker for audibly communicating information to the user. As noted previously, the interface of the device 10 can be configured to implement one or more of the various interface functions described herein.
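As a structural sketch only, corresponding loosely to FIG. 1A (the class and attribute names below are invented for this illustration and do not limit any embodiment):

from dataclasses import dataclass

@dataclass
class InputDevice:
    kind: str                      # e.g. "keyboard", "touchscreen", "microphone"
    alteration_key: str = "Oops!"  # a dedicated key, a soft key, or a voice command

@dataclass
class Device:
    # Processor 12 executes instructions held on computer readable medium 14;
    # here only the externally visible components are modeled.
    input_device: InputDevice
    output_device: str             # e.g. "monitor" or "speaker"

tablet = Device(InputDevice("touchscreen"), "monitor")
print(tablet.input_device.alteration_key)   # the key used to alter decisions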
Chapter 2: Alterable Decisions and the Alteration Key

About Part I

Part I of the present specification includes Chapters 2 to 15. Part I explains various features that may facilitate mistake recovery, including features that pertain to "alterable decisions" as described below, and also including features that pertain to undo functionality and autocorrection functionality. Farther below, Part II explains features that may tend to prevent the interface from repeating mistakes and features that may tend to prevent users from wasting time. Part III explains for various situations interface improvements that may make it easier for a user to achieve desired results quickly; in particular, Part III explains for various situations various specific circumstances in which the interface may make alterable decisions.

Alterable Decisions

Various embodiments include various features that pertain to "alterable decisions." As used in the present specification, an "alterable decision" or "alterable interface decision" can be an interface decision such that the interface either saves sufficient information to subsequently identify the portion of input that the decision pertained to, or sufficient information to later determine what the user's input would then be if the outcome of the earlier decision had been different, or both. For example, in an embodiment, when the interface decides to autocorrect a word that a user typed, the decision to autocorrect the word will be an alterable decision, which means that the interface will either save sufficient information to subsequently identify the word that was autocorrected as the portion of input that this alterable decision pertained to, or sufficient information to later determine what the user's input would be if the word's uncorrected spelling were retroactively restored, or both.

However, the term "alterable decision" may be a misnomer in some embodiments because not all embodiments will necessarily facilitate altering such decisions: some embodiments may, and others may not. For example, an embodiment is described in Chapter 5 in which the interface highlights the outcomes of "alterable decisions" in order to call attention to possible mistakes, and such an embodiment may yield significant advantages over prior art even if it does not also facilitate the correction of such mistakes. Also, the term "alterable decision" may be a misnomer for certain types of alterable decisions because when a user performs an action that causes the interface to "make an alterable decision," the interface will not necessarily have any real choice as to how it initially responds to the user's action. For example, an embodiment is described in Chapter 33 such that if a user pauses for a long time while the interface is in Caps Lock mode, the interface may "alterably decide to remain in Caps Lock mode." In such an embodiment, even though the interface makes a so-called "alterable decision" in such circumstances, it has no real choice: arguably, a long pause is not a sufficient reason for the interface to automatically exit Caps Lock mode. Thus, the term "alterable decision" should be understood in light of the entirety of the present specification, and should not be strictly construed.

Above, and in various other places, the present specification mentions the "portion of input" that an interface decision pertains to. However, in some embodiments, the interface functionality that is specified herein may also interact with software output or interact with something else that is not input. Throughout the present specification, where appropriate, the word "input" should be understood to also refer to software output or to anything else that interface functionality may display and modify. For example, in an embodiment where the interface makes a decision whether to display the output of a calculation as a fraction or as a decimal, the so-called "portion of input" that this interface decision pertains to is actually the output of the calculation.
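One possible record of such a decision is sketched below (in Python; the field names are invented for this illustration). The record saves enough information to identify the portion of input the decision pertained to, and enough information to determine what the input would be under a different outcome:

from dataclasses import dataclass
from typing import List

@dataclass
class AlterableDecision:
    start: int                    # where the affected portion of input begins
    length: int                   # extent of the affected portion of input
    default_option: str           # the actual initial outcome of the decision
    alternate_options: List[str]  # outcomes the user might prefer instead

# The autocorrection example above, recorded as an alterable decision:
autocorrection = AlterableDecision(start=0, length=7, default_option="Tuesday",
                                   alternate_options=["Tursday"])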
Alterable Decisions in Perspective

In the present specification, alterable decisions are often associated with potential user mistakes: in some embodiments, if an interface decision has an outcome that a user does not expect and does not desire, then in many cases the user will be able to correct the mistake by altering the decision. However, even though alteration functionality is often associated with mistake recovery herein, it is to be understood that alteration functionality need not be exclusively used for mistake recovery. For example, in some cases, a user who is familiar with alteration technology may deliberately make a "mistake" and then alter an interface decision because this is an efficient way to achieve a desired outcome.

The most common mistakes that a typical user of a computing device will encounter are word entry mistakes, which are typographical errors or spelling mistakes that are the user's fault. In some interfaces autocorrection functionality will often, but not always, correct word entry mistakes. Arguably, the most frustrating mistakes that a typical user of a computing device will commonly encounter are "undesired interface interventions," which are the mistakes that occur when the computing device performs an autocorrection or automatic formatting action that the user does not desire. An interface decision regarding whether or not to autocorrect a word is a familiar example of an interface decision that is reasonably likely to have an outcome that a user regrets, and for that reason, autocorrection decisions are frequently used herein as convenient examples of alterable decisions. However, autocorrection decisions are not the only possible type of alterable decision: in Part III the present specification explains many other types of interface decisions that are alterable decisions in some embodiments.

Where prior art interfaces include specialized mistake recovery features (other than generic undo functionality), those features typically only facilitate recovery from word entry mistakes or undesired interface interventions. For that reason, when the present specification compares new mistake recovery features to prior art mistake recovery features, the present specification will generally emphasize how new mistake recovery features may do better at facilitating recovery from word entry mistakes or undesired interface interventions. Despite this emphasis on word entry mistakes and undesired interface interventions, it is to be understood that the new mistake recovery features described herein may also do better than prior art at facilitating recovery from various other types of mistakes, since prior art interfaces generally lack functionality that especially facilitates recovery from the other types of mistakes that are described herein.

An embodiment that has even just one feature that interacts with alterable decisions and has just one type of alterable decision may have significant advantages.
For example, in a very early prototype, the only feature that interacted with alterable decisions was a simple alteration key, and the only alterable decisions that the alteration key could interact with were alterable structure exiting decisions; even in that relatively simple embodiment, the alteration key was useful. However, the more interface decisions are alterable decisions, the more useful it will be to add functionality pertaining to such decisions, and conversely, the more functionality an embodiment has pertaining to alterable interface decisions, the more useful it will be to make interface decisions be alterable decisions. Alterable decisions may thus be at the core of a synergistic virtuous cycle. Much of the present specification is devoted to explaining features that interact with alterable decisions in certain circumstances and features that cause the interface to make alterable decisions in certain circumstances.

Other features are explained herein that do not directly pertain to alterable decisions, but may be more advantageous in an embodiment that has alterable decision functionality. For example, if a particular feature usually facilitates faster input, but careless users occasionally tend to make a certain type of mistake when using that feature, then such a feature may be advantageous on its own, but may be more advantageous in an embodiment that has alterable decision functionality that is specifically configured to facilitate correcting that type of mistake.

An Alteration Key Example

FIG. 2A illustrates what may happen as a result of repeated consecutive actuations of the alteration key in an embodiment that has an alteration key as described below and in which additional consecutive actuations of the alteration key may have additional effects as described below. Each block of the figure illustrates what the user's input will be after a certain number of consecutive actuations of the alteration key. In each block, the text that the currently alterable decision pertains to is highlighted (if there is a currently alterable decision). In each block, the number in parentheses corresponds to an interface state from FIG. 2B, and thus indicates what state the interface is in at that point. The effect of each actuation can be seen by comparing the block above that actuation's arrow to the block below it.

At the beginning of the example in FIG. 2A, the user has typed "Tursday is its premiere" but the interface has made an alterable decision to autocorrect "Tursday" to read "Tuesday" and an alterable decision to autocorrect "its" to read "it's." The text thus reads "Tuesday is it's premiere," as shown in Block 200. The alterable decision pertaining to the word "it's" is the currently alterable decision. The interface is in default operating mode as shown in FIG. 2B, Block 206.

The user then actuates the alteration key a first time. The interface alters the currently alterable decision, yielding "Tuesday is its premiere," as shown in Block 201. The alterable decision pertaining to the word "its" is still the currently alterable decision. The interface is now in alternate option selected mode as shown in FIG. 2C, Block 218.

The user then actuates the alteration key a second consecutive time. The interface alters the currently alterable decision again and thus reverts it to its default option, yielding "Tuesday is it's premiere," as shown in Block 202.
The interface then causes the alterable decision pertaining to the word "Tuesday" to become the currently alterable decision, as indicated by the highlighting of that word in Block 202. The interface is now in alteration cycle operating mode as shown in FIG. 2C, Block 215.

The user then actuates the alteration key a third consecutive time and a fourth consecutive time. This yields "Thursday is it's premiere" as shown in Block 203 and then "Tursday is it's premiere" as shown in Block 204. After these actuations, the alterable decision pertaining to the word "Thursday" or "Tursday" is the currently alterable decision, and the interface is in alternate option selected mode as shown in FIG. 2C, Block 218.

The user then actuates the alteration key a fifth consecutive time. The interface reverts the currently alterable decision to its default option, yielding "Tuesday is it's premiere" as shown in Block 205. There is now no currently alterable decision, as indicated by the lack of highlighting in Block 205. The interface has returned to default operating mode as shown in FIG. 2B, Block 206.

The user then actuates the alteration key a sixth consecutive time. Because there is no currently alterable decision, this actuation of the alteration key does not affect the user's input; instead, it causes the alterable decision pertaining to the word "it's" to become the currently alterable decision again. The arrow labeled "Actuation 6" that points from Block 205 to Block 200 indicates that this sixth consecutive actuation of the alteration key causes the interface to return to the same state it was in prior to the first of these consecutive actuations of the alteration key. Subsequently, a seventh consecutive actuation of the alteration key would again yield the result shown in Block 201, and an eighth consecutive actuation would again yield the result shown in Block 202, and so on.

An Alteration Key Algorithm

FIGS. 2B and 2C are flowcharts that illustrate one possible algorithm for an embodiment that has an alteration key as described below, and in which additional consecutive actuations of the alteration key may have additional effects as described below. This algorithm yields the interface behavior that is illustrated in FIG. 2A.

In an embodiment that behaves as indicated by these flowcharts, whenever the user's most recent action was not an actuation of the alteration key (and in certain other circumstances), the interface is in a "default operating mode" as shown in FIG. 2B, Block 206. While the interface is in default operating mode, each time the user performs an editing action (Block 207) other than an actuation of the alteration key, the interface will handle that action (Block 208) and remain in default operating mode. Whenever the interface handles an editing action other than an actuation of the alteration key, this may cause the interface to update which decision is the currently alterable decision or "CAD" (Block 209): in particular, if the interface makes an alterable decision in response to an editing action, then in certain circumstances this new alterable decision will become the currently alterable decision.
While the interface is in default operating mode (Block 206), if a user actuates the alteration key (Block 210) when there is no currently alterable decision (Block 213), this actuation of the alteration key will cause the most relevant alterable decision to become the currently alterable decision (Block 214) if there is any alterable decision, or will have no effect if there are no alterable decisions; in either case, the interface will remain in default operating mode and return to Block 206. While the interface is in default operating mode (Block 206), if the user actuates the alteration key when there is a currently alterable decision (Block 210), then the interface will add the other alterable decisions to the alteration cycle (Block 211). The interface will then select an alternate option of the currently alterable decision (Block 212, leading to FIG. 2C, Block 217) and enter an "alternate option selected mode" (Block 218).

While the interface is in alternate option selected mode, each time the user actuates the alteration key (Block 220), if the currently alterable decision is a multi-alternative decision and there is an alternate option that has not yet been selected, then the interface will select such an option (Block 217) and remain in alternate option selected mode (Block 218). While the interface is in alternate option selected mode (Block 218), if the user performs an action other than an actuation of the alteration key, then the user has explicitly selected an alternate option of the currently alterable decision (Block 226), which has effects that are described in Chapter 3. The interface will handle the action (Block 227) and revert to default operating mode (Block 206).

While the interface is in alternate option selected mode (Block 218), if the user actuates the alteration key (Block 219) when there is no alternate option (Block 220) for the currently alterable decision that has not yet been selected, the interface will revert the currently alterable decision to its default option (Block 221). The user has completed review of the currently alterable decision, which has effects that are described in Chapter 3. The interface will then move on to the next decision in the alteration cycle (Block 222), as described in the following paragraphs.

When the interface moves on to the next decision in the alteration cycle, if any alterable decisions are remaining in the alteration cycle, then the most relevant such decision will be removed from the alteration cycle and will become the new currently alterable decision (Block 225). The interface will then enter an "alteration cycle operating mode" (Block 215). The alteration cycle operating mode (Block 215) and the default operating mode (Block 206) are quite similar, but when the interface is in the alteration cycle operating mode (Block 215) it already has an alteration cycle in mind, so if the user's next action is an actuation of the alteration key (Block 216) then the interface will not need to initialize the alteration cycle before proceeding to alter the currently alterable decision (Block 217) and enter alternate option selected mode (Block 218). When the interface is in the alteration cycle operating mode (Block 215), if the user's next action is not an actuation of the alteration key, the interface will handle the action (Block 227, leading to Block 208), update which decision is the CAD (Block 209), and revert to default operating mode (Block 206).
When the interface moves on to the next decision in the alteration cycle, if no alterable decisions are remaining in the alteration cycle, there will then be no currently alterable decision (Block 223) and the interface will revert to default operating mode (Block 224, leading to Block 206). Other algorithms that implement the same interface behavior or similar behavior will be evident to those of ordinary skill in the art.
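By way of illustration, one possible rendering of the flowchart logic of FIGS. 2B and 2C is sketched below in Python. The sketch is simplified: the "most relevant" determination of Chapter 3 is reduced to list order, the user's input is modeled as a list of words, and the bookkeeping effects of explicit selection and completed review (Chapter 3) are omitted. All names are invented for this illustration.

class Decision:
    def __init__(self, index, default, alternates):
        self.index = index          # which word of the input the decision pertains to
        self.default = default      # the default option
        self.alternates = alternates
        self.current = default
        self.selected = []          # alternate options that have been selected

    def has_unselected_alternate(self):
        return any(o not in self.selected for o in self.alternates)

    def select_next_alternate(self):
        for option in self.alternates:
            if option not in self.selected:
                self.selected.append(option)
                self.current = option
                return

    def revert_to_default(self):
        self.current = self.default
        self.selected = []          # forget selections once review is complete

class Interface:
    def __init__(self, words, decisions, cad):
        self.words = words          # the user's input
        self.decisions = decisions  # all alterable decisions, most relevant last
        self.cad = cad              # the currently alterable decision, if any
        self.state = "default"      # Block 206
        self.cycle = []

    def text(self):
        out = list(self.words)
        for d in self.decisions:
            out[d.index] = d.current
        return " ".join(out)

    def actuate_alteration_key(self):
        if self.state == "default":                       # Blocks 210, 213
            if self.cad is None:
                if self.decisions:                        # Block 214
                    self.cad = self.decisions[-1]
                return
            self.cycle = [d for d in self.decisions if d is not self.cad]  # Block 211
            self.cad.select_next_alternate()              # Blocks 212, 217
            self.state = "alternate option selected"      # Block 218
        elif self.state == "alteration cycle":            # Blocks 215, 216
            self.cad.select_next_alternate()              # Block 217
            self.state = "alternate option selected"
        elif self.cad.has_unselected_alternate():         # Block 220
            self.cad.select_next_alternate()              # Block 217
        else:
            self.cad.revert_to_default()                  # Block 221
            if self.cycle:                                # Blocks 222, 225
                self.cad = self.cycle.pop()               # most relevant remaining
                self.state = "alteration cycle"           # Block 215
            else:                                         # Blocks 223, 224
                self.cad = None
                self.state = "default"

tuesday = Decision(0, "Tuesday", ["Thursday", "Tursday"])
its = Decision(2, "it's", ["its"])
ui = Interface(["Tuesday", "is", "it's", "premiere"], [tuesday, its], cad=its)
for actuation in range(1, 7):
    ui.actuate_alteration_key()
    print(actuation, ui.text())
# Reproduces Blocks 201 through 205 of FIG. 2A; the sixth actuation restores
# the state of Block 200 without changing the text.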
Altering Decisions

In the present specification, the "default option" of an alterable interface decision can be the actual initial outcome of that decision, and an "alternate option" can be some other outcome that the user might prefer for that decision, except as otherwise specified. For example, in an embodiment, if the interface has automatically corrected the spelling of a word that a user typed, and the interface's decision to correct this word's spelling is an alterable interface decision, then the corrected spelling is the default option and the user's original uncorrected spelling is an alternate option.

In an embodiment, when the interface makes an alterable decision, in addition to saving enough information for the interface to subsequently identify the portion of input that the decision pertained to, the interface will also save enough information about the alternate options of that decision that the interface can later determine what the user's input would then be if an alternate option were chosen instead of the default option. For example, in an embodiment, when the interface makes an alterable decision to automatically correct a word that a user typed, the interface will save enough information to later be able to determine what the user's input would then be if the user's original uncorrected spelling were restored retroactively.

Below, various embodiments are specified in which the interface will "alter an alterable interface decision" in certain circumstances. Except as otherwise specified, when the interface alters an alterable interface decision, it replaces that decision's default option with an alternate option, without prompting the user for any additional confirmation or clarification; however, various other ways that the interface may alter an alterable interface decision in certain circumstances are specified below. In an embodiment, when the interface alters an alterable interface decision, it will retain enough information to be able to subsequently revert that decision to its default option.

In an embodiment, after a user has caused the interface to alter a decision so that an alternate option of that decision is selected, if the user then performs some action that does not cause the interface to alter that decision, this constitutes "explicitly selecting an option" of that alterable interface decision. Unless otherwise specified, when the user's most recent action caused the interface to alter a decision, the user has not yet explicitly selected an option of that alterable interface decision—not until the user performs some other action. In an embodiment, once a user has explicitly selected an alternate option of an alterable interface decision, if the decision is still alterable, then for purposes of the alteration functionality described herein, the interface will subsequently treat the option the user explicitly selected as the default option of that particular alterable decision and will treat the former default option of that decision as an alternate option. For example, in an embodiment, after the interface makes an alterable decision to automatically replace the word "goof" with the word "good" but then the user explicitly selects the alternate option "goof," if the decision is still alterable, then "goof" will subsequently be treated as the default option of that particular decision for purposes of alteration functionality and "good" will be treated as an alternate option. (This does not mean that explicitly selecting the alternate option "goof" necessarily has any effect on future interface decisions regarding whether or not to automatically replace the word "goof": it only means that "goof" will subsequently be treated as the default option of the alterable decision the interface already made.)

In the present specification, any mention of "alteration features" or "alteration functionality" may refer to features that are disclosed herein that are specifically designed to interact with alterable interface decisions. Any mention of "altering" an alterable interface decision generally refers only to altering such a decision by means of such alteration features, and generally does not refer to manual editing of the portion of the user's input that an alterable interface decision pertains to, unless otherwise specified. For example, if the interface has made an alterable decision to automatically insert an apostrophe in the word "its," then if the user manually deletes the apostrophe, the user will not be considered to have "altered" an alterable interface decision.

The Alteration Key

In an embodiment, in certain circumstances an alterable interface decision is the "currently alterable decision" for purposes of alteration functionality. In Chapter 3, methods and systems are described for determining which alterable interface decision, if any, is the currently alterable decision at any given time. (In an embodiment, the currently alterable decision is often, but not always, the most recent alterable decision, as is explained in Chapter 3. In an embodiment, it may be possible to alter an alterable decision by various means even when it is not the so-called currently alterable decision.)

In an embodiment, the computing device has an alteration key such that when the alteration key is actuated, if there is a currently alterable decision, then the interface will immediately alter that decision. For example, if the interface has made an alterable decision to automatically insert an apostrophe in the word "its" and that decision is the currently alterable decision, then actuating the alteration key will cause that apostrophe to be deleted, regardless of the input cursor's current location, without prompting the user for any additional confirmation or clarification. Such an alteration key will often enable a user to correct an undesired outcome of an interface decision with a single keystroke, without the need to go back and manually correct the interface decision.

In an embodiment, when the alteration key is actuated, if there is then no currently alterable decision but at least one alterable decision exists, then in response to that actuation of the alteration key the interface will cause the "most relevant decision" as defined in Chapter 3 to become the currently alterable decision, and no other effect will occur.
(In such an embodiment, it may be particularly advantageous for the interface to then highlight the currently alterable decision, as is described in Chapter 5, so that the user can see the effect of such an actuation of the alteration key.) This behavior is illustrated by Actuation 6 of FIG. 2A.

As is discussed in Chapter 1, the alteration "key" need not necessarily be an individual key on a hardware keyboard, but may be any means of invoking the function that is specified above in the description of the alteration key. For example, in various embodiments, a key labeled "Oops" on a hardware keyboard may be the alteration key, or the F12 key may be the alteration key, or the key combination Ctrl-T may be the alteration key, or a specific virtual key on a virtual keyboard may be the alteration key, and so forth.

Deleting Alterable Decisions

In the present specification, when an alterable interface decision is said to "cease to exist," this means that subsequently none of the functionality disclosed herein that pertains to alterable interface decisions will treat that decision as an alterable interface decision, unless otherwise specified. For example, if an alterable interface decision has "ceased to exist," then that alterable interface decision cannot be the currently alterable decision and actuating the alteration key will not alter that decision. This does not mean that the portion of input that the decision pertained to ceases to exist. After an alterable interface decision has ceased to exist, information about the alterable interface decision may still be retained in the memory of the computing device for various purposes, such as for purposes of the automatic decision variation feature described in Chapter 17 and the manual alteration detection feature described in Chapter 18. In the present specification, if an alterable interface decision is said to be "deleted," this means that the alterable interface decision ceases to exist. In an embodiment, if an action that causes an alterable interface decision to cease to exist is subsequently undone by means of the Undo key, then the alterable interface decision will be made to exist once again.

In an embodiment, an alterable interface decision will immediately cease to exist if the portion of input that it pertains to is deleted, because that alterable interface decision is no longer applicable. For example, when the interface has made an alterable decision to automatically convert the word "friday" to "Friday," that alterable interface decision will cease to exist if the entire sentence containing the word "Friday" is deleted. In an embodiment, an alterable interface decision will immediately cease to exist if the portion of input that it pertains to is modified in some way other than by means of altering that particular interface decision, if the modification is sufficiently relevant to the nature of the alterable decision. For example, in an embodiment, when the interface has made an alterable decision to automatically convert the word "friday" to "Friday," that alterable interface decision will cease to exist if the user manually changes the word to "Friendly," but not if the user italicizes the word. In such an embodiment, in certain circumstances, the interface may make more than one alterable decision that pertains to the same portion of input, and altering one such decision need not necessarily cause all the other decisions that pertain to the same portion of input to cease to exist.
For example, in an embodiment, if the interface makes an alterable decision to convert the word "friday" to "Friday," and the interface also makes an alterable decision whether or not to italicize this word, then the user can alter either, both, or neither of these two decisions.

In an alternative embodiment, an alterable interface decision will immediately cease to exist if the portion of input that it pertains to is modified in some way other than by means of altering that particular interface decision, regardless of whether or not the modification is relevant to the nature of the alterable decision. In an alternative embodiment, an alterable interface decision will not necessarily cease to exist immediately as soon as the portion of input that it pertains to is deleted or modified: instead, as soon as it becomes relevant whether or not a particular alterable interface decision still exists, the interface will determine whether the portion of input that alterable decision pertains to appears to have been deleted or modified. For example, in such an embodiment, if a certain portion of input that an alterable interface decision pertains to is deleted and is then retyped exactly as before, the interface may not notice that such activity occurred and may not delete the alterable decision in response to such activity.

In an embodiment, when an alterable interface decision is altered, no other alterable interface decision will be deleted in response to the alteration of that alterable interface decision until the user has explicitly selected an option of that decision, and then only if deletion of the other decision is appropriate based on the option the user selected. In other words, in such an embodiment, if for example a user cycles past various options of an alterable interface decision by repeatedly actuating the alteration key as is described below, then no option that is only temporarily selected will cause any other alterable interface decision to permanently cease to exist.

In implementing an embodiment, a programmer should ensure by some means that if the interface makes an alterable decision and saves information regarding that alterable decision, subsequent editing actions do not cause that information to become inaccurate while it is still possible for the user to cause the interface to alter the decision. For example, if the interface alterably decides to capitalize the word "friday" and saves the information that it alterably chose to capitalize the 100th character in the document the user was editing, then this is sufficient information for the interface to be able to alter that decision by locating that letter F and converting it back to lowercase, but if subsequently the user moves the input cursor to the beginning of the document and inserts additional text there, then that letter F is no longer the 100th character of the document, so the interface should either update its information or delete the alterable decision. Those of ordinary skill in the art will understand how to implement an embodiment that updates location data for alterable decisions when appropriate.
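One such means is sketched below (in Python; illustrative only, with the simplifying assumption that any edit touching a decision's span is sufficiently relevant to delete the decision). An edit is modeled as an insertion (delta > 0) or a deletion (delta < 0) at a given position, and each decision's saved location is shifted, or the decision is deleted, accordingly.

def apply_edit(decisions, position, delta):
    # position: where text is inserted (delta > 0) or removed (delta < 0).
    surviving = []
    for d in decisions:
        span_end = d["start"] + d["length"]
        removed_end = position + max(-delta, 0)
        if position <= d["start"] and removed_end <= d["start"]:
            # The edit lies entirely before the span: shift the saved location.
            surviving.append(dict(d, start=d["start"] + delta))
        elif position >= span_end:
            # The edit lies entirely after the span: nothing to update.
            surviving.append(d)
        # Otherwise the edit touched the span itself: the decision is deleted.
    return surviving

# The capitalized letter F at the 100th character (zero-based index 99):
capitalize = {"start": 99, "length": 1, "default": "F", "alternate": "f"}
decisions = apply_edit([capitalize], 0, 10)   # insert ten characters at the start
print(decisions[0]["start"])                  # 109: the saved location is updated
decisions = apply_edit(decisions, 105, -20)   # delete a range covering the span
print(decisions)                              # []: the decision ceased to exist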
However, in an alternative embodiment, the interface deletes every alterable decision whenever a user either performs an action that directly affects input that precedes the current input cursor location or performs an action that moves the input cursor to an earlier location; in such an alternative embodiment, it is thus impossible to edit a portion of the document that is prior to the location of any alterable decision without deleting the alterable decision, and so it may never be necessary to update location data for alterable decisions. Such an alternative embodiment may be somewhat less advantageous, but may still have advantages over prior art, and may take less effort to implement.

The Alteration Key in Perspective

The alteration key as described herein is an unusually efficient means of recovering from a mistake. Generally, prior art mistake recovery features require a user to perform at least two gestures: one gesture to indicate the location of the mistake and cause the interface to display one or more possible corrections, and another gesture to select the desired correction. A single actuation of the alteration key is faster than that two-step process; in fact, for a user who has become accustomed to the alteration key, a single actuation of the alteration key will probably be faster than the first step of that process in most cases.

In many prior art interfaces, a user can fully recover from an undesired interface intervention by means of a single actuation of the Undo key in most cases, provided that the user has not performed any other action since the undesired interface intervention occurred. However, if a user performs even one action before noticing an undesired interface intervention, then fully recovering from the mistake by means of the Undo key will require a three-step process: the user must undo everything he did after the undesired interface intervention occurred, and then must actuate the Undo key once more to undo the undesired interface intervention, and then must repeat everything he did after the undesired interface intervention occurred. It is quite common to not notice an undesired interface intervention right away, so in many cases the alteration key will be more efficient than prior art, even for correcting undesired interface interventions.

The present specification includes many illustrative examples of altering decisions, and many of these examples refer to the alteration key. In particular, in various places, in order to show that it may be advantageous to make some particular type of interface decision be an alterable interface decision, the present specification provides an example scenario in which a user who makes a particular type of mistake can easily correct the mistake with just a single actuation of the alteration key. Such examples are not intended to imply that an embodiment must necessarily have an alteration key in order for it to be possible to alter decisions; on the contrary, other means of altering decisions are described below, including means that may be more convenient than the alteration key in certain circumstances. It is to be understood that it may be useful to make various types of interface decisions be alterable decisions even in an embodiment that does not have an alteration key. For that reason, the behavior that constitutes "altering a decision" is defined herein in terms that do not explicitly refer to the alteration key.
Multi-Alternative Decisions In an embodiment, in certain circumstances, when the interface makes an alterable decision in which the interface selects among more than two relevant available options, the interface will save enough information to later replace the option that was selected with any of the plurality of options that were not selected. In such a case, the alterable decision will have more than one alternate option. In the present specification, an alterable interface decision that has more than one alternate option will be referred to as a “multi-alternative decision”; an alterable interface decision that has only one alternate option will be referred to as a “single-alternative decision.” (These terms thus refer to the number of alternate options of an alterable decision, not counting the default option.) In an embodiment, after the interface has made an alterable interface decision, in certain circumstances, the interface may later add more alternate options to that alterable interface decision or may later remove alternate options from that alterable interface decision. In such an embodiment, it may be possible for a single-alternative decision to later become a multi-alternative decision, or vice versa. In an embodiment, when the interface first alters a multi-alternative decision, in addition to selecting an alternate option of the decision, the interface will create a list of alternate options of that decision that have been selected. In an embodiment, when the interface alters a multi-alternative decision that the interface has previously altered, if an alternate option currently exists that has not been selected, then the interface will replace the selected alternate option with an alternate option that has not yet been selected and will then add the newly selected alternate option to the list of alternate options that have been selected. For example, in a certain embodiment, when a user types the word “Tursday” and presses the space bar key, the interface may make a decision between three options: leaving the word as “Tursday,” correcting the word to read “Tuesday,” or correcting the word to read “Thursday”; if the interface initially decides to correct the word to read “Tuesday,” such a correction may be a multi-alternative decision such that if the user then actuates the alteration key once the word will be changed to “Thursday” and if the user then actuates the alteration key a second consecutive time the word will be changed to “Tursday.” This example corresponds to Actuations 3 and 4 of FIG. 2A. Thus, in such an embodiment, if the interface responds to a first actuation of the alteration key by selecting an alternate option that is not the particular alternate option a user desired, then the user can simply continue to actuate the alteration key repeatedly until the desired alternate option becomes selected. Reverting Alterable Decisions In an embodiment, when the interface alters a single-alternative decision, if the decision already has its alternate option selected, then the interface reverts that decision to its default option. For example, if the currently alterable decision is the interface's decision to insert an apostrophe in the word “it's,” and if this decision is a single-alternative decision, then after the user actuates the alteration key once and thus causes the apostrophe to be deleted, actuating the alteration key a second consecutive time will cause the apostrophe to be inserted again, as is illustrated by Actuation 2 of FIG. 2A.
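A minimal Python sketch of the option bookkeeping just described, covering both the cycling through alternate options and the revert-to-default behavior detailed next, might look as follows; the class and the “Tursday” walkthrough are illustrative only.

class MultiAlternativeDecision:
    def __init__(self, default_option: str, alternates: list):
        self.default = default_option       # the option the interface chose initially
        self.alternates = alternates        # the options that were not chosen
        self.selected = default_option      # the currently selected option
        self.already_selected: list = []    # alternate options selected so far

    def alter(self) -> str:
        """Select the next alternate option; once all have been tried, revert to default."""
        remaining = [o for o in self.alternates if o not in self.already_selected]
        if remaining:
            self.selected = remaining[0]
            self.already_selected.append(self.selected)
        else:
            # every alternate option has been selected at some point: revert to the
            # default option and forget the history so cycling can start over
            self.selected = self.default
            self.already_selected.clear()
        return self.selected

# The "Tursday" example: the interface decided on "Tuesday"; "Thursday" and
# the uncorrected "Tursday" are the alternate options.
decision = MultiAlternativeDecision("Tuesday", ["Thursday", "Tursday"])
assert decision.alter() == "Thursday"   # first actuation of the alteration key
assert decision.alter() == "Tursday"    # second consecutive actuation
assert decision.alter() == "Tuesday"    # third actuation reverts to the default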
Similarly, in an embodiment, when the interface alters a multi-alternative decision, if the decision has already been altered sufficiently many times that it is no longer possible for the interface to again replace the selected alternate option with a different alternate option that has not been selected yet, then the interface will instead revert that decision to its default option, as is illustrated by Actuation 5 of FIG. 2A. Thus, in such an embodiment, if in response to an actuation of the alteration key the interface actually introduces a new mistake by changing a decision that the user did not want to change, then the user can correct this mistake by continuing to actuate the alteration key repeatedly until the decision reverts to its default option. In an embodiment, once a user causes the interface to alter an alterable decision sufficiently many times that the decision reverts to its default option, the user has “completed review” of that alterable interface decision. When a user causes the interface to alter an alterable decision sufficiently many times that every alternate option of that decision has been selected at some point, the user still has not “completed review” of that decision until the user causes that decision to revert to its default option. In an embodiment, after the interface reverts a multi-alternative alterable decision to its default option, the interface will discard or empty its list of alternate options of that decision that have been selected, which means that the interface will subsequently regard each alternate option as though it had not been selected yet, and so continuing to repeatedly alter that particular decision will cause the interface to cycle through the various alternate options again. The Alteration Cycle In an embodiment, for purposes of the interface behaviors described in the following paragraphs, the alteration key is an “alteration-cycle-related key.” Other alteration-cycle-related keys are described in Chapter 9. In an embodiment, except as otherwise specified, whenever the user's most recent action was not an actuation of an alteration-cycle-related key, the currently alterable decision is the “most relevant” alterable interface decision as defined below. (In an embodiment, the most relevant alterable decision is often, but not always, the most recent alterable decision, as is explained in Chapter 3.) In an embodiment, when a user actuates the alteration key or some other alteration-cycle-related key, if the user's most recent previous action was not an actuation of an alteration-cycle-related key, then the interface will determine an “alteration cycle” that initially includes every alterable decision other than the currently alterable decision. The interface will continue to remember this alteration cycle until the user performs an action that is not an actuation of an alteration-cycle-related key, except as otherwise specified below. (Once the user performs an action that is not an actuation of an alteration-cycle-related key, this alteration cycle is no longer relevant, so after that, the next time the user actuates an alteration-cycle-related key, the interface will determine a new, updated alteration cycle.)
In an embodiment, when a user performs an actuation of the alteration key that causes the currently alterable decision to revert to its default option, the interface will “move on to the next decision in the alteration cycle.” When the interface moves on to the next decision in the alteration cycle, this means that if any decisions remain in the alteration cycle, then the most relevant decision that is in the alteration cycle will be removed from the alteration cycle and will become the new currently alterable decision. This is illustrated by Actuation 2 of FIG. 2A, which not only causes an alterable decision to revert to its default option but also causes a different alterable decision to become the new currently alterable decision. Thus, in an embodiment, if a user does not notice that a particular alterable decision had an undesired outcome until after the interface has made other alterable decisions, then as long as that particular decision is still alterable, it is still possible for the user to correct the mistake by means of the alteration key: the user can repeatedly press the alteration key sufficiently many times to cycle past any more relevant alterable decisions (which are, in most cases, the more recent alterable decisions) and then press the alteration key again to alter the decision that had the undesired outcome. In an alternative embodiment, when a user performs an actuation of the alteration key that causes the currently alterable decision to revert to its default option, if any decisions remain in the alteration cycle, then the interface will move on to the next decision in the alteration cycle as described in the preceding paragraph, and will also alter the new currently alterable decision (if any). (In such an embodiment, it may be particularly advantageous to have a No key or Escape key that can revert the currently alterable decision to its default option without affecting any other alterable decision, as is described in Chapter 9.) In an embodiment, if no decisions are in the alteration cycle when the interface moves on to the next decision in the alteration cycle, then there will cease to be a currently alterable decision and this alteration cycle will no longer be relevant, as is illustrated by Actuation 5 in FIG. 2A. In an embodiment, after this happens, if the user's next action is an actuation of the alteration key, then that actuation of the alteration key will have the same effect as though it were a nonconsecutive actuation of the alteration key: the most relevant alterable decision will become the currently alterable decision, and the interface will determine a new, updated alteration cycle. This behavior is illustrated by Actuation 6 in FIG. 2A, which returns the user's input to its initial state so that a seventh consecutive actuation of the alteration key would have the same effect that Actuation 1 had. Thus, in such an embodiment, by continuing to actuate the alteration key consecutively, a user may repeatedly cycle through alterable interface decisions. In an alternative embodiment, if no decisions are in the alteration cycle when the interface moves on to the next decision in the alteration cycle, then there will cease to be a currently alterable decision, and any further consecutive actuations of the alteration key will have no effect: there will be no currently alterable decision until after the user performs an action that is not an actuation of an alteration-cycle-related key.
In another alternative embodiment, if no decisions are in the alteration cycle when the interface moves on to the next decision in the alteration cycle, then the most relevant alterable decision will become the new currently alterable decision (even if it was the previous currently alterable decision) and the interface will add every alterable decision other than that decision to the alteration cycle. In such an embodiment, by continuing to actuate the alteration key consecutively, a user may repeatedly cycle through alterable interface decisions without arriving at any intermediate state in which there is no currently alterable decision. In an embodiment that behaves as described in the preceding paragraphs, when a user is actuating the alteration key repeatedly, the alteration cycle contains all the alterable decisions that have not yet become the currently alterable decision. It will be evident to those of ordinary skill in the art that alternative embodiments can be constructed where the interface behaves the same way as an embodiment that has an alteration cycle as described above, but where this interface behavior is achieved by a slightly different means. For example, in one alternative embodiment, instead of keeping track of an alteration cycle that contains all the alterable decisions that have not yet become the currently alterable decision, the interface keeps track of a “used decision list” of all the alterable decisions that have already become the currently alterable decision, and when the interface “moves on to the next decision in the alteration cycle,” this means that first the currently alterable decision is added to the used decision list, and then if there are any remaining alterable decisions that are not in the used decision list then the most relevant remaining alterable decision becomes the new currently alterable decision. Specialized Alteration Cycles Farther below, certain user actions are described such that when a user performs the action, the interface will alter an alterable interface decision that fits certain criteria, in some embodiments. Unless otherwise specified, when performing an action is said to cause the interface to alter an alterable interface decision that fits certain criteria, this means that the action serves the purpose of an alteration key that affects only alterable decisions that fit the criteria, in an embodiment (regardless of whether the embodiment has an alteration key). In other words, if performing an action is said to cause the interface to alter an alterable interface decision that fits certain criteria, this means that in an embodiment, when the user performs such an action, if the user's previous action was not the exact same action, then the interface will alter the most relevant alterable decision that fits the criteria, and will determine and remember a specialized alteration cycle that includes any other alterable decisions that fit the same criteria. If the user then performs the exact same action several more times consecutively, then after the interface reverts the most relevant decision that fits the criteria to its default option, the interface will begin to cycle through the options of the other alterable decisions in the specialized alteration cycle. 
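Building on the MultiAlternativeDecision sketch above, the following Python code illustrates the “used decision list” formulation of the alteration cycle, with an optional criteria predicate standing in for a specialized alteration cycle; the names and structure are hypothetical, and a real embodiment could track the cycle by other means.

class AlterationCycleController:
    def __init__(self, decisions):
        self.decisions = decisions   # ordered from most relevant to least relevant
        self.used = []               # decisions that have already been currently alterable
        self.current = None

    def actuate(self, criteria=None) -> None:
        """Handle one actuation of an alteration-cycle-related key or action."""
        candidates = [d for d in self.decisions if criteria is None or criteria(d)]
        if self.current is None:
            self.used = []           # nonconsecutive actuation: determine a new cycle
            self.current = self._next_unused(candidates)
        if self.current is None:
            return                   # no alterable decision fits the criteria
        if self.current.alter() == self.current.default:
            # the decision reverted to its default option, so move on to the
            # next decision in the alteration cycle (without altering it)
            self.used.append(self.current)
            self.current = self._next_unused(candidates)

    def _next_unused(self, candidates):
        for d in candidates:
            if d not in self.used:
                return d
        return None

    def on_other_action(self) -> None:
        self.current = None          # the alteration cycle is no longer relevant

A criteria-specific action such as the spoken command discussed next would simply call actuate with a predicate, e.g. controller.actuate(criteria=lambda d: "its" in d.default).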
For example, in an embodiment that is described in Chapter 25, a spoken alteration command exists such that if a user says “Alter ‘its’” then the interface will alter an alterable interface decision pertaining to an occurrence of the word “its,” if any such alterable decision exists. Where that interface behavior is explained, it is not explicitly specified what will happen if a user says “Alter ‘its’” when more than one alterable decision exists that pertains to an occurrence of the word “its”; nevertheless, even though it is not explicitly specified below, in light of the above paragraphs it is to be understood that, in an embodiment, in such circumstances the interface will alter the most relevant alterable decision that pertains to an occurrence of the word “its,” and then if the user repeats the same spoken command sufficiently many times consecutively, the interface will eventually cycle through every alterable decision that pertains to an occurrence of the word “its.” Repeated Alteration Features in Perspective In some embodiments, a user may be able to correct a wide variety of mistakes with just a single actuation of the alteration key, with no need to explicitly specify what mistake to correct or what correction is desired, because in many cases the currently alterable decision will be the mistake the user wishes to correct and the first alternate option will be the desired correction. The alteration key may therefore be advantageous even in an embodiment where only a first actuation of the alteration key has any effect. However, the alteration key may be significantly more advantageous in an embodiment where additional consecutive actuations of the alteration key have additional effects as described above. In such an embodiment, when a first actuation of the alteration key makes a different change than the one a user desires, additional consecutive actuations of the alteration key may revert the undesired change and/or make the desired change. It is usually possible to actuate a single key multiple consecutive times relatively quickly, so actuating the alteration key several consecutive times in order to achieve a desired result may still be a fairly efficient way to achieve that result. Likewise, for similar reasons, various other means of alteration that are described below may be more advantageous in an embodiment where additional consecutive alterations may have additional effects. | 86,864 |
11861298 | DETAILED DESCRIPTION Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings and disclosed herein. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to the same or like parts. FIG. 1 shows a diagram of a data population and natural language processing system 100 that may be configured to perform one or more software processes that, when executed by one or more processors, perform methods consistent with disclosed embodiments. The components and arrangements shown in FIG. 1 are not intended to limit the disclosed embodiments, as the components used to implement the disclosed processes and features may vary. As shown in FIG. 1, data population and natural language processing system 100 may include a facility server 130, a computer terminal 140, an administrator terminal 145, one or more user devices 120, network server 160, third party server 170, and database 180. The components of system 100 may communicate directly, through network 150, through local network 110, or through a combination of communications methods. In some embodiments, local network 110, facility server 130, computer terminal 140, administrator terminal 145, and at least one user device 120 may be physically disposed within a facility such as a hospital or office building (i.e., as facility system 102) while at least one user device 120, network 150, network server 160, third party server 170, and database 180 may be external to the workplace. Other components known to one of ordinary skill in the art may be included in system 100 to perform tasks consistent with the disclosed embodiments. For example, in some embodiments, facility system 102 may include one or more sensor devices located throughout the facility to monitor one or more conditions such as occupancy, temperature, humidity, proximity, and other parameters indicative of a status or condition of a bed, room, area, equipment, or supplies. Additionally, in some embodiments facility system 102 may include one or more wireless receivers (not shown) configured to detect one or more wireless sensor or locating tags, to track a location of a tagged item and/or person, or a condition about the tagged item and/or person. Computer terminal 140 may be a standalone device disposed in an office, a room, an employee station, or an alternative central location in a workplace. In some embodiments, computer terminal 140 may be a desktop or notebook computer, a flat panel or projected display, or any other display. In some embodiments, computer terminal 140 may be associated with a particular room in a facility, such as a particular patient room, hotel room, conference room, or any other type of room. Thus, a message received from a computer terminal 140 may automatically associate the message with the room in which computer terminal 140 is installed. Administrator terminal 145 may include a computer system or device associated with a user 125 who manages or oversees a portion of facility system 102. For example, administrator terminal 145 may comprise a computer system located at a head nurse station, a housekeeping manager's office, or any other department manager's office or station. Users 125 may be one or more individuals, such as hospital employees and caregivers, associated with the patient. Users 125 may operate computer terminal 140, user devices 120, and/or another computer (not shown) to interact with system 100. Users 125 may be individuals located within and/or outside of the facility system 102.
For example, users 125 may include physicians and nurses within the facility responsible for transferring the patients to different units. Users 125 may also include one or more individuals who are responsible for responding to task requests, such as cleaning and transportation of the patients. Users 125 may also include individuals outside of facility system 102, such as people with personal relationships with the patients (e.g., family members) and referring individuals (e.g., outside physicians and medics). System 100 may be customizable and provide individualized access for each of the users 125. For example, only certain users 125, such as physicians and nurses, may be allowed to generate transfer requests. In some embodiments, one or more users 125, such as the patient's primary physician, may be required to authorize all requests. Users 125 who are solely responsible for specific tasks may have access limited to performing their responsibilities. It is also contemplated that some users 125, such as family members, may have read-only access. User device 120 may be a personal computing device such as, for example, a general purpose or notebook computer, a mobile device with computing ability, a tablet, smartphone, wearable device such as Google Glass™ or smart watches, or any combination of these computers and/or affiliated components. In some embodiments, a user device 120 may be a computer system or mobile computer device that is operated by user 125. In some embodiments, a user device 120 may be associated with a particular individual such as user 125, such that messages and/or task assignments directed toward user 125 are sent to user device 120. In some embodiments, user device 120 may communicate with facility server 130 and/or network server 160 via direct wireless communication links (not shown), or via a combination of one or more of local network 110 and/or network 150. Facility server 130 may be operated by a facility such as a hospital. Facility server 130 may enable communication within a computer-based system including computer system components such as desktop computers, workstations, tablets, hand held computing devices, memory devices, and/or internal network(s) connecting the components. Thus, in some embodiments facility server 130 may operate as a centralized hub or station for receiving and processing data associated with disclosed methods and techniques, and for generating and sending transmissions associated with disclosed methods and techniques. Network 150 may comprise any type of computer networking arrangement used to exchange data. For example, network 150 may be the Internet, a private data network, a virtual private network using a public network, and/or other suitable connection(s) that enables system 100 to send and receive information between the components of system 100. Network 150 may also include a public switched telephone network (“PSTN”) and/or a wireless cellular network. Local network 110 may comprise any type of computer networking arrangement used to exchange data in a localized area, such as WiFi, Bluetooth™, Ethernet, and other suitable short-range connections that enable computer terminal 140 and user device 120 to send and receive information between the components of system 100. In some embodiments, local network 110 may be excluded, and computer terminal 140 and user device 120 may communicate with system 100 components via network 150. In some embodiments, computer terminal 140 and/or user device 120 may communicate with one or more system 100 components via a direct wired or wireless connection.
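One minimal way the individualized access described above might be represented is sketched below in Python; the roles, actions, and table contents are hypothetical and purely illustrative.

# Hypothetical permission table for individualized access to system 100.
PERMISSIONS = {
    "physician":   {"generate_transfer_request", "authorize_request", "read"},
    "nurse":       {"generate_transfer_request", "read"},
    "transporter": {"respond_to_task", "read"},
    "family":      {"read"},   # read-only access
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if a user with the given role may perform the action."""
    return action in PERMISSIONS.get(role, set())

assert is_allowed("physician", "generate_transfer_request")
assert not is_allowed("family", "generate_transfer_request")   # read-only users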
Network server 160, third party server 170, and database 180 may be one or more servers or storage services provided by an entity such as a provider of networking, cloud, or backup services. For example, in some embodiments, network server 160 may be associated with a cloud computing service such as Microsoft Azure™ or Amazon Web Services™. In such embodiments, network server 160 may comprise a plurality of geographically distributed computing systems executing software for performing one or more functions of the disclosed methods. Additionally, in some embodiments, third party server 170 may be associated with a messaging service, such as, for example, Apple Push Notification Service, Azure Mobile Services, or Google Cloud Messaging. In such embodiments, third party server 170 may handle the delivery of messages and notifications related to functions of the disclosed embodiments, such as task creation, task assignment, task alerts, and task completion messages and notifications. In some embodiments, system 100 may include configurations that vary from the example shown in FIG. 1, which illustrates a facility system 102 working in concert with a cloud computing system including network server 160, third party server 170, and database 180. As a first variation, system 100 may include only facility system 102, and thus may exclude cloud computing components such as network server 160, third party server 170, and database 180. In such embodiments, facility system 102 may handle substantially all operations and functions of the present embodiments. As a second variation, system 100 may exclude components of facility system 102 such as facility server 130. In such embodiments, a cloud computing system including network server 160, third party server 170, and/or database 180 may handle some or all computing and message-related functions of the disclosed embodiments. FIG. 2 is a flowchart of an exemplary method 200 for automatically populating data in a data field within a conversation that is recorded as audio data or textual data, consistent with embodiments of the present disclosure. At step 201, a processor (e.g., processor 105) may be configured to receive a voice input. In some embodiments, the voice input may be natural language. In other embodiments, the voice input may be human-to-human conversation. A conversation may be captured by a microphone (e.g., microphone 101) and converted to an electrical signal. The microphone (e.g., microphone 101) can take a number of forms including, but not limited to, a microphone of a computer, a microphone of a portable smart device, a standalone microphone, or any other microphones suitable for receiving natural language audio. Furthermore, a filter may be applied to the electrical signal, the acquired natural language audio, or the conversation before it is converted to an electrical signal, in order to reduce noise. At step 202, a processor (e.g., processor 105) may be configured to analyze the voice input. Processor 105 may be configured to analyze the “timbre”, that is, the perceived quality of a sound, “pitch”, “loudness”, and “acoustic fingerprint” of a captured voice. Based on this analysis, processor 105 may identify the source of the voice. For example, a person's voice features, such as timbre, pitch, acoustic fingerprint, voice pattern, etc., may be stored in a memory and/or a database (e.g., database 109). Processor 105 may compare the voice input with the pre-stored voice features, and may identify a person's voice and the person's location. This may help build a more accurate model and reduce errors in the following steps.
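As an illustrative sketch of the voice analysis of step 202, the following Python code compares captured voice features against pre-stored features using cosine similarity; the feature vectors, names, and threshold are assumptions, and a real embodiment could use any suitable features and matching method.

import math

# Hypothetical pre-stored voice features keyed by person (e.g., in database 109).
VOICE_FEATURES = {
    "Dr. Smith": [0.62, 0.31, 0.88, 0.12],   # e.g., pitch, timbre, fingerprint statistics
    "Nurse Lee": [0.15, 0.72, 0.44, 0.90],
}

def cosine_similarity(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify_speaker(features, threshold=0.9):
    """Compare captured features with each stored profile; return the best match."""
    best_name, best_score = None, 0.0
    for name, stored in VOICE_FEATURES.items():
        score = cosine_similarity(features, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None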
After identifying the source of the voice input, a processor (e.g., processor 105) may identify relevant models in a database (e.g., database 109) and further rank the models in an order of relevance. At step 203, a processor (e.g., processor 105) may be configured to generate a text based on the voice input by using the models stored in a database (e.g., database 109). Processor 105 may compare the voice input against the models in a database in order and generate a text based on the comparison. Because processor 105 starts from the most relevant model, efficiency is increased and errors in the generated text may be reduced. At step 204, a processor (e.g., processor 105) may be configured to compare a text against the models stored in a database (e.g., database 109), to identify words in the text. The text may be the text generated based on the voice input or clinical notes, which will be further discussed in FIG. 4A. At step 205, a processor (e.g., processor 105) may be configured to identify a data field in the text. For example, when “John S. Doe” is in the text, processor 105 may identify “John” as the first name, “Doe” as the last name, and “S” as the middle name. In addition, processor 105 may identify name entities, such as names of medical facilities, names of diseases, diagnoses, and addresses, etc. Moreover, processor 105 may identify dates and times. For example, when the text contains “Oct. 17, 2017, 08:00 AM”, a processor (e.g., processor 105) may identify “Oct. 17, 2017” as a date and “08:00” as a time. This can be achieved using the models stored in a database. The models may provide the pattern of name entities, dates, and times. For example, a capitalized first letter of a word in a sentence may be identified as a pattern of names. And, a pattern of “_/_/__” may be identified as a pattern of dates. At step 206, a processor (e.g., processor 105) may be configured to select a form field in a hospital form based on the identified data field. For example, when a patient name is identified, then the field for patient name in a hospital form will be selected by a processor. Similarly, when a date is identified, then the field for date in a hospital form will be selected. At step 207, a processor (e.g., processor 105) may be configured to extract words from the generated text or the clinical notes. In human-to-human language, a complete sentence, instead of fragmented phrases, is used to convey an idea. Thus, after identifying the meaning of each word in a text, a processor may extract words that are relevant to the selected form fields. For example, when “Patient John S. Doe is transferred to our hospital.” is in the text and the form field “patient name” is selected, a processor (e.g., processor 105) may extract “John S. Doe” from the sentence for the patient name. At step 208, a processor (e.g., processor 105) may be configured to populate a text in the selected form field. The text may be the extracted words. For example, “John S. Doe” will be populated in the form field “patient name” in the hospital form. Alternatively and concurrently, a processor (e.g., processor 105) may also populate a text from algorithms, which will be further discussed in the following paragraphs. To identify words and the meaning of texts within a conversation, processor 105 may employ one or more computer models (e.g., lexical parser, gender recognizer, part of speech tagger, coreference resolution mapper, named entity recognizer, natural language processing model, word stemming, etc.)
stored in a database (e.g., database 109) for comparing sounds in the received audio information to interpret the audio and to generate text data corresponding to the audio. Different words or phrases with the same semantic meaning can be converted or may result in populating the same text in the corresponding form field. For example, English natural language phrases may be: “Patient John S. Doe has been admitted to Stone Oak from Methodist.” “Patient John S. Doe has been transferred from Methodist to Stone Oak.” “PT (P.T.) John S. Doe has been transferred from Methodist to Stone Oak.” “Stone Oak admitting PT. John S. Doe from Methodist.” “PT John S. Doe will arrive at Stone Oak from Methodist.” And, the corresponding results may be: {$patient_name=“John S. Doe”, $admitted_hospital=“Stone Oak”, $transferred_from=“Methodist”} A processor (e.g., processor 105) may also populate a text from algorithms using the stored models. For example, the text may include “PT John S. Doe is coming in tomorrow.” “PT John S. Doe was admitted last month.” The corresponding date will be generated by using the algorithm in the models. For example, if today is Jul. 17, 2017, then the corresponding result for the date may be: {$date_admitted=“Jul. 18, 2017”}, and {$date_admitted=“Jun. 17, 2017”}, respectively. In some embodiments, the processor (e.g., processor 105) may populate a text from stored models. The stored models may include, but are not limited to: speech tagger, coreference resolution mapper, named entity recognizer, natural language processing model, etc. There are several algorithms that may be used in conjunction to generate these models. A lexical parser may be a word tokenizer. Gender classification may be done using pronouns and nouns that associate with an entity. Identifying nouns, pronouns, prepositions, etc. may be done using a part of speech tagger. The part of speech tagger may utilize Hidden Markov Models, Dynamic Programming, Supervised Learning, and Transformation-Based learning, among others. Nouns can be classified into categories using a named entity recognizer (e.g., supervised learning). Coreference Resolution Mapping may be used to associate pronouns with the appropriate entities. Coreference Resolution Mapping may utilize Recurrent Neural Networks and Long Short-Term Memory Units. It is helpful in finding the association between entities and when those entities are being referenced. Other algorithms that may be implemented include: N-gram, TF-IDF, word to vector, pairwise ranking, and word stemming. Gender classification may be based on using an ensemble of the above methods and algorithms. To extract words from a text, a processor (e.g., processor 105) may compare each word in a text against the models in a database (e.g., database 109) and assign a value to each word. For example, a sentence in a conversation or clinical note may be “John S. Doe has been admitted to Stone Oak.” The sentence may then be tokenized into “John” “S” “.” “Doe” “has” “been” “admitted” “to” “Stone” “Oak” “.” And, values will be assigned to each token. “John” will be assigned “first name”. “S” and “.” will be assigned “middle name”. “Doe” will be assigned “last name”. “has”, “been”, “admitted”, and “to” will be assigned “0”. “Stone” and “Oak” will be assigned “hospital name”. According to the assigned values, each token with a value that is not 0 will be extracted and then populated in the corresponding form field. The sentence may be tokenized as described here using any of the algorithms or methods discussed herein.
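The token labeling and extraction just described might be sketched as follows in Python; the hard-coded rules stand in for the stored models and are purely illustrative.

import re

def label_tokens(tokens):
    """Assign a value to each token; "0" marks tokens that are not extracted."""
    labels = []
    for i, tok in enumerate(tokens):
        if tok == "John":
            labels.append("first name")
        elif tok == "Doe":
            labels.append("last name")
        elif re.fullmatch(r"[A-Z]", tok) and 0 < i < len(tokens) - 1:
            labels.append("middle name")   # single capital letter between names
        elif tok == "." and labels and labels[-1] == "middle name":
            labels.append("middle name")   # the period of a middle initial
        elif tok in ("Stone", "Oak"):
            labels.append("hospital name")
        else:
            labels.append("0")
    return labels

def populate_form(sentence):
    tokens = re.findall(r"[\w']+|\.", sentence)   # simple word tokenizer
    form = {}
    for tok, lab in zip(tokens, label_tokens(tokens)):
        if lab == "0":
            continue
        prev = form.get(lab, "")
        form[lab] = prev + tok if tok == "." else (prev + " " + tok).strip()
    return form

print(populate_form("John S. Doe has been admitted to Stone Oak."))
# {'first name': 'John', 'middle name': 'S.', 'last name': 'Doe', 'hospital name': 'Stone Oak'}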
Natural language input such as speech input may be converted to text using an application programming interface (API). Natural Language Processing may be applied to identify data fields in the text including, but not limited to: Patient Name, Location, Physician Name, Diagnoses, Facilities the Patient is being transferred from and to, and other fields discussed herein. Additionally, a machine learning system may be implemented and utilized to improve the accuracy of the text population into the form fields. In some embodiments, the machine learning system may improve the accuracy with which recognized data fields are matched to the form fields after the text is generated, using Natural Language Processing models that were trained by the machine learning system on healthcare text and speech data. The machine learning system may store a computer model that, when applied to the natural language input, populates the text into form fields. The model may be continuously improved as it encounters additional natural language inputs. Additionally, the model may have previously been exposed to an abundance of natural language inputs training the machine learning system and the associated model to match the natural language inputs with form fields based on the text characterization. The model may have been trained by supplying training data sets, validation data sets, and testing data sets. For example, the model may have been created using the training data sets and machine learning algorithms including a corpus of medical terminology including acronyms and short-hand notations for medical terms and form field cues. The generated model may be built using a neural network. FIG. 2A is a flowchart of an exemplary method 200 for automatically populating data in a data field within a conversation that is recorded as audio data or textual data, consistent with embodiments of the present disclosure. At step 221, a voice input may be obtained, such as natural language of a user. The voice input from step 221 may be transmitted to an API at step 222. The natural language input such as speech input may be converted to text by the API. The API may provide a text output at step 223. At step 224, Natural Language Processing may be applied to identify data fields in the text including, but not limited to: Patient Name, Location, Physician Name, Diagnoses, Facilities the Patient is being transferred from and to, and other fields discussed herein. Step 224 may apply a number of stored models to the text output from step 223. As discussed herein, the stored models may include, but are not limited to: speech tagger, coreference resolution mapper, named entity recognizer, natural language processing model, etc. There are several algorithms that may be used in conjunction to generate these models. A lexical parser may be a word tokenizer. Gender classification may be done using pronouns and nouns that associate with an entity. Identifying nouns, pronouns, prepositions, etc. may be done using a part of speech tagger. The part of speech tagger may utilize Hidden Markov Models, Dynamic Programming, Supervised Learning, and Transformation-Based learning, among others. Nouns can be classified into categories using a named entity recognizer (e.g., supervised learning). Coreference Resolution Mapping may be used to associate pronouns with the appropriate entities. Coreference Resolution Mapping may utilize Recurrent Neural Networks and Long Short-Term Memory Units. It is helpful in finding the association between entities and when those entities are being referenced.
Other algorithms that may be implemented include: N-gram, TF-IDF, word to vector, pairwise ranking, and word stemming. Gender classification may be based on using an ensemble of the above methods and algorithms. Based on the Natural Language Processing, the text from the text output may be populated into form fields at step 225. FIG. 3 illustrates an exemplary user interface, consistent with disclosed embodiments, for displaying the automatically populated patient information in a hospital form. In FIG. 3, an exemplary hospital form 300 may include a plurality of form fields. For example, form field 302 represents where the patient's first name should be filled in, form field 304 represents where the patient's last name should be filled in, form field 306 represents where the patient's gender should be filled in, and so on. FIG. 4A is an example of a clinical note. And, corresponding to the clinical note in FIG. 4A, FIG. 4B illustrates the automatically populated patient information in a hospital form. In FIG. 4A, the date, time, and patient information are written in English natural language in the clinical note. The clinical note is then read into a system for automatically populating data within a conversation, consistent with the disclosed embodiments. Using the method described above, a form in FIG. 4B is automatically filled out corresponding to the information provided in the clinical note. For example, in hospital form 400, the form field for “First name” 402 is filled with John, corresponding to the text in the clinical note in FIG. 4A; the form field for “Last name” 404 is filled with Doe, corresponding to the text in the clinical note in FIG. 4A. Additionally, the form field for “Date for discharge” is filled with Sep. 15, 2017, even though the “date for discharge” is not directly disclosed in the clinical note. Based on “The patient will be discharged tomorrow.”, which is written in the clinical note in FIG. 4A, a processor (e.g., processor 105) may identify “tomorrow” as a phrase for a date. After calculation, processor 105 may populate “Sep. 15, 2017” in the form field for “Date for discharge”, based on “Sep. 14, 2017”, which is the date written in the clinical note. FIG. 5 illustrates an exemplary method 500 for automatically populating data in a data field within a conversation that is recorded as audio data or textual data, consistent with embodiments of the present disclosure. At step 501, a voice input may be obtained, such as natural language of a user. The voice input from step 501 may be transmitted to an API at step 502. The natural language input such as speech input may be converted to text by the API. The API may provide a text output at step 503. Step 503 provides a non-limiting example of a text output from the API, “John S. Doe is going to XYZ facility from ABC facility. He is diagnosed with Whooping Cough”. This text was generated as a result of the natural language input to the API. At step 504, Natural Language Processing may be applied to identify data fields in the text including, but not limited to: Patient Name, Location, Physician Name, Diagnoses, Facilities the Patient is being transferred from and to, and other fields discussed herein. Step 504 may apply a number of stored models to the text output from step 503. As discussed herein, the stored models may include, but are not limited to: speech tagger, coreference resolution mapper, named entity recognizer, natural language processing model, etc. There are several algorithms that may be used in conjunction to generate these models. A lexical parser may be a word tokenizer.
Gender classification may be done using pronouns and nouns that associate with an entity. Identifying nouns, pronouns, prepositions, etc. may be done using a part of speech tagger. The part of speech tagger may utilize Hidden Markov Models, Dynamic Programming, Supervised Learning, and Transformation-Based learning, among others. Nouns can be classified into categories using a named entity recognizer (e.g., supervised learning). Coreference Resolution Mapping may be used to associate pronouns with the appropriate entities. Coreference Resolution Mapping may utilize Recurrent Neural Networks and Long Short-Term Memory Units. It is helpful in finding the association between entities and when those entities are being referenced. Other algorithms that may be implemented include: N-gram, TF-IDF, word to vector, pairwise ranking, and word stemming. Gender classification may be based on using an ensemble of the above methods and algorithms. Additionally, a machine learning system 505 may be implemented and utilized to improve the accuracy of the text population into the form fields. In some embodiments, the machine learning system may improve the accuracy with which recognized data fields are matched to the form fields after the text is generated, using Natural Language Processing models that were trained by the machine learning system on healthcare text and speech data. The machine learning system may store a computer model that, when applied to the natural language input, populates the text into form fields. The model may be continuously improved as it encounters additional natural language inputs. Additionally, the model may have previously been exposed to an abundance of natural language inputs training the machine learning system and the associated model to match the natural language inputs with form fields based on the text characterization. The model may have been trained by supplying training data sets, validation data sets, and testing data sets. For example, the model may have been created using the training data sets and machine learning algorithms including a corpus of medical terminology including acronyms and short-hand notations for medical terms and form field cues. The generated model may be built using a neural network. Based on the Natural Language Processing, the text from the text output may be populated into form fields such as the first name field 506 and the last name field 507. Additional fields may be filled depending on the text input provided. In this example, the Natural Language Processing may result in the gender field, the previous hospital field, the diagnosis field, and the destination facility field being filled. Based on the above, the disclosed system and method populate information in a hospital form automatically. A user may either speak to the disclosed system or enter a clinical note into the disclosed system, and the system may automatically populate information in the corresponding form fields in a hospital form. This may reduce the time and effort for a healthcare provider to fill in the patient information in hospital forms. And, in turn, a healthcare provider may concentrate on more important tasks, thus increasing the quality of service.
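As a minimal illustration of the relative-date calculation used in the “tomorrow” example of FIGS. 4A and 4B, the following Python sketch anchors a relative phrase to the date written in the clinical note; the phrase table is hypothetical and far from exhaustive.

from datetime import date, timedelta

# Illustrative phrase-to-offset rules for dates that are not written explicitly.
RELATIVE_DATES = {
    "today": timedelta(days=0),
    "tomorrow": timedelta(days=1),
    "yesterday": timedelta(days=-1),
}

def resolve_relative_date(phrase: str, note_date: date) -> date:
    """Turn a relative phrase into a concrete date anchored at the note's date."""
    return note_date + RELATIVE_DATES[phrase.lower()]

# The note is dated Sep. 14, 2017, so "tomorrow" populates the
# "Date for discharge" field with Sep. 15, 2017.
assert resolve_relative_date("tomorrow", date(2017, 9, 14)) == date(2017, 9, 15)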
11861299 | DETAILED DESCRIPTION OF THE DRAWINGS In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It will be appreciated, however, by those having skill in the art, that the embodiments of the invention may be practiced without these specific details or with an equivalent arrangement. In other cases, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments of the invention. It should also be noted that the methods and systems disclosed herein are also suitable for applications unrelated to mobile applications. FIG. 1 shows an illustrative example of an overlay application generating an icon during transitions between native applications, in accordance with one or more embodiments. For example, user interface 100 includes application 102 and application 104 as well as icon 106, which may be generated as an application overlay by an overlay application. As referred to herein, an “application overlay” may comprise a screen overlay generated by an overlay application (e.g., an application configured to generate a screen overlay). A screen overlay allows an application to display content over another application running on a device and/or using an operating system. Each overlay application may be specific to a given operating system and may generate the screen overlay using a permission specific to the device and/or operating system. For example, in some operating systems a screen overlay is generated using a “SYSTEM_ALERT_WINDOW” permission. For example, the permission allows an application to display content on the screen of the device in response to a triggering event. In some embodiments, a triggering event for the application overlay may be the detection of a third-party application for which supplemental content may be provided by the overlay application. For example, the system (e.g., the overlay application) may determine that a currently displayed third-party application (or currently displayed fields of the third-party application) may be automatically populated by the overlay application. For example, the system may detect that the third-party application corresponds to a third-party for which the overlay application may provide supplemental content (e.g., a virtual account number). In response, the system may generate an icon (e.g., icon 106) on user interface 100. Icon 106 may indicate that supplemental content and/or auto-population of application-specific data is available. In some embodiments, a triggering event for the application overlay may be the launching of the overlay application and/or the powering on of a device upon which the overlay application is implemented. For example, icon 106 may appear automatically upon a user powering on the device, launching the overlay application, and/or launching another application. The system may then wait for a user input (e.g., selecting icon 106). Upon receiving the user input, the system (e.g., the overlay application) may identify the currently displayed third-party application (or currently displayed fields of the third-party application) and may attempt to automatically populate information for the third-party application. For example, the system may detect that the third-party application corresponds to a third-party for which the overlay application may provide supplemental content (e.g., a virtual account number).
In response, the system may transmit a request for the supplemental content. Alternatively or additionally, the system may collect information about the third-party application and/or a currently displayed field and transmit that information to a remote source. The remote source may then determine whether or not supplemental content may be automatically populated. For example, the remote source may identify the third-party application and determine supplemental content for the application. In some embodiments, the system may detect one or more features or characteristics of application 102, application 104, or a device generating user interface 100. As described herein, a feature may include any visual characteristic, option, and/or functional capability provided to a user by software and/or hardware. For example, a feature may include a distinctive attribute or aspect of the software and/or hardware such as how data is displayed, where data is displayed, what type of data is displayed, what functions are available, etc. For example, in some embodiments, a feature may be an available function of an application, operating system, and/or device. In some embodiments, the feature may be provided as part of an application and/or may be provided as a plug-in, applet, browser extension, and/or other software component for an existing application. For example, the feature may be part of an application and/or other program that may be toggled on or off. In another example, the feature may be a software component that may be added and/or removed from an application. In some embodiments, the feature may be a conceptual data model of the application and/or one or more fields of the application (e.g., the fields currently displayed by the application). For example, the conceptual data model may be a representation of data objects, the associations between different data objects, and/or the rules of the application. In some embodiments, the system may determine a visual representation of the data and apply consistent naming conventions, default values, and semantics to one or more fields in the model. These naming conventions, default values, and semantics of the one or more fields in the model may then be used by the system to generate an application identification number for the application. The system may use the application identification number to identify the application. For example, the system may compare the application identification number to an application identification number database (e.g., a lookup table database listing application identification numbers and the entity (e.g., developer, source, content provider, app provider, etc.) corresponding to each) to identify the entity corresponding to the application. Each application may display particular information and/or information of a particular type. Alternatively or additionally, each application may provide a given function. This function may be a locally performed function (e.g., a function performed on a local device) or this function may be a remotely-executed function. In some embodiments, the application may include a link to additional information and/or other applications, which may be accessed and/or available locally or remotely. In some embodiments, the application may be represented by textual and/or graphical information.
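Purely by way of illustration, the following Python sketch shows one way an application identification number might be derived from an application's field naming conventions and matched against a lookup table; the hashing scheme, field names, and entity names are assumptions rather than details of any described embodiment.

import hashlib

def application_id(field_names) -> str:
    """Derive a stable identifier from an application's field naming conventions."""
    canonical = "|".join(sorted(name.strip().lower() for name in field_names))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical lookup table mapping identification numbers to entities.
APP_ID_TABLE = {
    application_id(["firstName", "lastName", "paymenttype"]): "ExampleShop, Inc.",
}

def identify_entity(observed_fields) -> str:
    return APP_ID_TABLE.get(application_id(observed_fields), "unknown")

print(identify_entity(["firstname", "lastname", "PaymentType"]))   # ExampleShop, Inc.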
For example, an application may comprise a purchasing function through which a user may enter information (e.g., user credentials and/or payment account information) that, when transmitted, may cause a purchase to occur. The system may identify these characteristics and application features for use in generating the conceptual data model. In some embodiments, the system may detect information about a feature of an application (e.g., metadata or other information that describes the feature). For example, the information may describe a purpose, functions, origin, creator, developer, a system requirement (including required formats and/or capabilities), author, recommended use, and/or approved user. The information may be expressed in a human and/or computer readable language or may not be perceivable to a user viewing user interface 100. The information may also include a reference or pointer to user profile information that may be relevant to the selection and/or use of the feature. The system may retrieve this information and/or compare it to the description in order to verify, select, and/or use the feature. For example, the description may indicate that the feature uses a particular format, relates to a particular user, user device, and/or user account. The system may access a user profile. The user profile may be stored locally on a user device (e.g., user device 322 (FIG. 3)) and/or a remote source (e.g., a component of system 300 (FIG. 3)). The user profile may include information about a user and/or device of a user. The information may be generated by actively and/or passively monitoring actions of the user. The user profile may also include information aggregated from one or more sources (including third-party sources). The information in the user profile may include personally identifiable information about a user and may be stored in a secure and/or encrypted manner. The information in the user profile may include information about user settings and/or preferences of the user, activity of the user, demographics of the user, and/or any other information used to target a feature towards a user and/or customize features for a user. In some embodiments, the system may pre-fetch supplemental content as a user navigates and/or uses one or more applications. The system may pre-fetch this information based on information in the user profile (e.g., a user preference or setting), a predetermined or standard application feature selection (e.g., by the application), a previously selected feature when the application was last used, and/or other criteria. For example, the system may continuously, and in real-time, pre-fetch (or request) supplemental content for automatically populating application-specific information. The system may continuously pre-fetch this information and/or may push this information to a local user device and/or edge server for immediate use if an application is activated. Accordingly, the system may minimize delays attributable to populating application-specific data and to processing time needed by a remote source. Icon 106 may include a first link. For example, the link may include a hyperlink. For example, the link may include a link from a hypertext file or document to another location or file, typically activated by clicking on a highlighted word or image on the screen. The link may be an inline link that displays remote content without the need for embedding the content.
For example, the inline link may display a modified version of accessible content (e.g., an image, a thumbnail, low resolution preview, cropped section, or magnified section of the accessible content). Alternatively, the link may be an anchor link and/or a fat link. In some embodiments, the first link may comprise a push notification. For example, the push notification may have been generated in real-time based on a determination by the system (e.g., by machine learning model 302 (FIG. 3)) that application-specific data may be needed. In response to a user selection of icon 106 in user interface 100, the system may transmit a request to a remote source (e.g., web server 310 (FIG. 3)). Alternatively or additionally, the system may generate a new icon or an icon with additional information as shown in FIG. 2. FIG. 2 shows an illustrative example of an application overlay generating a prompt for automatically populating application-specific information, in accordance with one or more embodiments. For example, user interface 200 may include icon 202 (which may have replaced icon 106 (FIG. 1)). Icon 202 may include user prompts for initiating a request (e.g., user prompt 204). In response to a selection of user prompt 204, the system may generate a request for application-specific data (e.g., to populate fields 206 and 208). Alternatively or additionally, in response to a user selection of prompt 204, the system may identify an application shown in user interface 200 and determine whether a field (e.g., field 206 or 208) currently displayed in the user interface corresponds to a predetermined field that is automatically populated by the first application. For example, the system may retrieve metadata used to determine a type of field and compare the type to a predetermined type of field that is automatically populated by an overlay application. In response to determining that the field corresponds to a predetermined field, the system may transmit, to a remote source (e.g., web server 310 (FIG. 3)), a request for supplemental content for populating the field. The request may comprise an API request (or call) from one application (e.g., an overlay application implemented on a local device) to an application on a server (e.g., a server implementing web server 310 (FIG. 3)). The request may include one or more types of information that may be used by the web server to respond to the request. For example, the request may include information used to select application-specific data, identify an application, and/or determine a field for populating. For example, in some embodiments, the application overlay may create a library to simplify communicating using API requests and managing user, application, and session data. The system may therefore support multiple data providers and federated routing development, including better management of application/sub-application routing, consistent capture of data, and/or identification of fields. For example, a third-party application may have a field called “paymenttype”, while the system may have data for populating payment type information in a record labeled “payTP”. Using the library, the API request may normalize the format in the request. FIG. 3 shows an illustrative system for populating application-specific information using overlay applications, in accordance with one or more embodiments. As shown in FIG. 3, system 300 may include user device 322, user device 324, and/or other components. Each user device may include any type of mobile terminal, fixed terminal, or other device.
For example, each of these devices may comprise one or more of the devices shown inFIG.1. Each of these devices may receive content and data via input/output (hereinafter "I/O") paths and may also include processors and/or control circuitry to send and receive commands, requests, and other suitable data using the I/O paths. The control circuitry may be comprised of any suitable processing circuitry. Each of these devices may also include a user input interface and/or display for use in receiving and displaying data (e.g., user interface100(FIG.1)). By way of example, user device322and user device324may include a desktop computer, a server, or other client device. Users may, for instance, utilize one or more of the user devices to interact with one another, one or more servers, or other components of system300. It should be noted that, while one or more operations are described herein as being performed by particular components of system300, those operations may, in some embodiments, be performed by other components of system300. As an example, while one or more operations are described herein as being performed by components of user device322, those operations may, in some embodiments, be performed by components of user device324. System300also includes machine learning model302, which may be implemented on user device322and user device324, or accessible by communication paths328and330, respectively. It should be noted that, although some embodiments are described herein with respect to machine learning models, other prediction models (e.g., statistical models or other analytics models) may be used in lieu of, or in addition to, machine learning models in other embodiments (e.g., a statistical model replacing a machine learning model and a non-statistical model replacing a non-machine learning model in one or more embodiments). Each of these devices may also include memory in the form of electronic storage. The electronic storage may include non-transitory storage media that electronically stores information. The electronic storage may include (i) system storage that is provided integrally (e.g., substantially non-removable) with servers or client devices and/or (ii) removable storage that is removably connectable to the servers or client devices via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storages may include optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storages may include virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). The electronic storage may store software algorithms, information determined by the processors, information obtained from servers, information obtained from client devices, or other information that enables the functionality as described herein. FIG.3also includes communication paths328,330, and332. Communication paths328,330, and332may include the Internet, a mobile phone network, a mobile voice or data network (e.g., a 4G or LTE network), a cable network, a public switched telephone network, or other types of communications networks or combinations of communications networks.
Communication paths328,330, and332may include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths. The computing devices may include additional communication paths linking a plurality of hardware, software, and/or firmware components operating together. For example, the computing devices may be implemented by a cloud of computing platforms operating together as the computing devices. As an example, with respect toFIG.3, machine learning model302may take inputs304and provide outputs306. The inputs may include multiple data sets such as a training data set and a test data set. In some embodiments, outputs306may be fed back to machine learning model302as input to train machine learning model302(e.g., alone or in conjunction with user indications of the accuracy of outputs306, labels associated with the inputs, or with other reference feedback information). In another embodiment, machine learning model302may update its configurations (e.g., weights, biases, or other parameters) based on the assessment of its prediction (e.g., outputs306) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information). In another embodiment, where machine learning model302is a neural network, connection weights may be adjusted to reconcile differences between the neural network's prediction and the reference feedback. In a further use case, one or more neurons (or nodes) of the neural network may require that their respective errors are sent backward through the neural network to them to facilitate the update process (e.g., backpropagation of error). Updates to the connection weights may, for example, be reflective of the magnitude of error propagated backward after a forward pass has been completed. In this way, for example, machine learning model302may be trained to generate better predictions. In some embodiments, machine learning model302may include an artificial neural network. In such embodiments, machine learning model302may include an input layer and one or more hidden layers. Each neural unit of machine learning model302may be connected with many other neural units of machine learning model302. Such connections can be enforcing or inhibitory in their effect on the activation state of connected neural units. In some embodiments, each individual neural unit may have a summation function which combines the values of all of its inputs together. In some embodiments, each connection (or the neural unit itself) may have a threshold function that the signal must surpass before it propagates to other neural units. Machine learning model302may be self-learning and trained, rather than explicitly programmed, and can perform significantly better in certain areas of problem solving, as compared to traditional computer programs. During training, an output layer of machine learning model302may correspond to a classification of machine learning model302and an input known to correspond to that classification may be input into an input layer of machine learning model302during training. During testing, an input without a known classification may be input into the input layer, and a determined classification may be output.
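The training and testing behavior described above for machine learning model302can be illustrated with a short sketch. The following Python fragment is an illustrative approximation only, not code from the disclosure; the layer sizes, the sigmoid activation, the learning rate, and the training data are assumptions chosen for brevity:

```python
# A minimal sketch of a feedforward network with error backpropagation,
# of the kind described above for machine learning model 302. All
# dimensions and hyperparameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer; each neural unit sums its weighted inputs
# (the "summation function") and applies a squashing activation.
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 2))   # hidden layer -> output layer

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    h = sigmoid(x @ W1)        # hidden activations
    return h, sigmoid(h @ W2)  # output classification scores

def train_step(x, target, lr=0.5):
    """One forward pass followed by backpropagation of error."""
    global W1, W2
    h, out = forward(x)
    err_out = (out - target) * out * (1 - out)   # output-layer error
    err_hid = (err_out @ W2.T) * h * (1 - h)     # error sent backward
    # Weight updates reflect the magnitude of the propagated error.
    W2 -= lr * np.outer(h, err_out)
    W1 -= lr * np.outer(x, err_hid)

# Training: an input with a known classification adjusts the weights.
x, label = rng.random(4), np.array([1.0, 0.0])
for _ in range(100):
    train_step(x, label)

# Testing: an input without a known classification yields a prediction.
_, prediction = forward(rng.random(4))
print(prediction.argmax())
```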
In some embodiments, machine learning model302may include multiple layers (e.g., where a signal path traverses from front layers to back layers). In some embodiments, back propagation techniques may be utilized by machine learning model302where forward stimulation is used to reset weights on the "front" neural units. In some embodiments, stimulation and inhibition for machine learning model302may be more free-flowing, with connections interacting in a more chaotic and complex fashion. During testing, an output layer of machine learning model302may indicate whether or not a given input corresponds to a classification of machine learning model302. Machine learning model302may be used for populating application-specific information using overlay applications or for determining whether or not an application corresponds to a current version of an application. FIG.4shows a flowchart of the steps involved in populating application-specific information using overlay applications, in accordance with one or more embodiments. For example, process400provides one embodiment of how to populate application-specific information using overlay applications, in accordance with one or more embodiments. At step402, process400generates (e.g., via one or more components ofFIGS.1-3) for display an application overlay. For example, the system may generate for display, on a mobile device, an application overlay, corresponding to a first application, wherein the application overlay overlays a user interface generated by a second application. For example, the system may generate an application overlay (e.g., icon106(FIG.1)) that overlays one or more applications (e.g., application102(FIG.1)). In some embodiments, the system may generate for display a selectable icon corresponding to the application overlay over the user interface. For example, as shown inFIGS.1-2, the system may comprise one or more icons that allow the user to manually request supplemental content that is application-specific to be automatically populated in the application. For example, the system may determine that the application being displayed includes one or more payment fields that require payment information (e.g., a credit card, address, etc.). In response to this determination (or to the launching and/or display of the application itself), the system may generate the overlay. In some embodiments, the system may determine a native overlay for an operating system for the mobile device. The system may then determine a display position for the native overlay and select a position for the selectable icon based on the display position. For example, the system may retrieve the native overlay settings for the operating system and/or current configuration of the device. The system may then place the application overlay (e.g., icon106(FIG.1)) in a location on the user interface (e.g., user interface100(FIG.1)) that does not conflict with a position or location in the native overlay. Accordingly, the system ensures that a user does not select the overlay application unintentionally and/or that the icon does not interfere with the native overlay of the device or operating system. At step404, process400receives (e.g., via one or more components ofFIGS.1-3) a user input selecting the application overlay. For example, the system may receive a user input selecting the application overlay while the user interface is displayed. In some embodiments, the system may automatically generate a request based on the selection.
Alternatively, the system may generate an additional icon that includes additional information and/or prompts (e.g., icon202(FIG.2)). At step406, process400identifies (e.g., via one or more components ofFIGS.1-3) an overlaid application. For example, the system may identify the second application in response to the user input. For example, the overlay application may identify the second application (e.g., an application currently displayed on a user interface (e.g., application102on user interface100(FIG.1)) in response to a user selection of the overlay application (e.g., icon106(FIG.1)). In some embodiments, identifying the second application may comprise determining an application identification number for the second application and comparing the application identification number to an application identification number database to identify the second application. For example, the system may store (e.g., at web server310(FIG.3)) a lookup table database of application identification numbers for all applications (or applications for which supplemental content is available). For example, the system may determine whether or not a virtual account number is available for the target application. If so, the system may retrieve the virtual account number and populate the application (or a field of the application) with the virtual account number. In some embodiments, identifying the second application may comprise determining an application identification number for the second application and comparing the application identification number to an application identification number database to identify an entity corresponding to the second application. The system may then query an entity for a current identification number for the second application, receive the current identification number for the second application from the entity, and compare the application identification number to the current identification number. In some embodiments, identifying the second application may comprise determining a mapping of the second application and comparing the mapping to an application mapping database to identify an entity corresponding to the second application. For example, as opposed to retrieving an application identification number, the system may retrieve a mapping of the fields currently displayed. The system may use this mapping as well as other metadata and other information to identify the application. For example, as opposed to retrieving an application identification number, which may be unauthorized, the overlay application may identify the application based on a mapping and/or other data (e.g., a conceptual data model). Accordingly, the system may identify fraudulent applications and/or out-of-date versions of the application, even if the application has a legitimate application identification number. At step408, process400determines (e.g., via one or more components ofFIGS.1-3) whether a field is automatically populated. For example, the system may determine whether a field currently displayed in the user interface corresponds to a predetermined field that is automatically populated by the first application in response to the user input. For example, the system may have a predetermined type and/or number of fields that may be populated (e.g., fields related to a virtual account number assigned by the overlay application provider to the application). The system may determine which fields, if any, correspond to one of these predetermined fields.
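The field check of step408 might be sketched in code as follows. This Python fragment is a hypothetical illustration only; the set of predetermined field types, the metadata keys, and the field identifiers are assumptions, not details taken from the disclosure:

```python
# A minimal sketch of step 408: compare fields currently displayed by
# the overlaid application against the predetermined fields that the
# overlay application can automatically populate. Names are hypothetical.
PREDETERMINED_FIELDS = {"card_number", "billing_address", "expiration"}

def fields_to_populate(displayed_fields: list[dict]) -> list[dict]:
    """Return the displayed fields whose type matches a predetermined,
    automatically populated field type."""
    matches = []
    for field in displayed_fields:
        field_type = field.get("type")  # derived from field metadata
        if field_type in PREDETERMINED_FIELDS:
            matches.append(field)
    return matches

# Example: two payment fields match; a free-text comment field does not.
displayed = [
    {"id": "field206", "type": "card_number"},
    {"id": "field208", "type": "expiration"},
    {"id": "field209", "type": "comment"},
]
for match in fields_to_populate(displayed):
    print("request supplemental content for", match["id"])
```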
In some embodiments, determining whether the field currently displayed in the user interface corresponds to the predetermined field that is automatically populated by the first application may further comprise determining a conceptual data model for the second application, wherein the conceptual data model includes semantic information, and determining the field based on the conceptual data model. For example, the system may determine the type and/or function of each field and the purposes for which values entered into the field are used. For example, in response to determining that the conceptual data model for the second application indicates that a field is related to transactions, the system may retrieve virtual account number information for that field. At step410, process400transmits (e.g., via one or more components ofFIGS.1-3) a request for supplemental content for populating the field. For example, the system may, in response to determining that the field corresponds to a predetermined field, transmit, to a remote source, a request for supplemental content for populating the field, wherein the supplemental content is selected from available supplemental content based on the second application and the field. For example, the system may transmit an API request that includes the identification of the application (or information used to identify the application) to a remote source (e.g., web server310(FIG.3)). For example, the supplemental content may be a virtual account number for a financial service provider. Additionally or alternatively, the virtual account number may correspond to an entity corresponding to the second application. For example, the virtual account number may be a unique credit card number that allows the application to transact on the user's main credit card account without using (or exposing) the main credit card account number to the application. The issuer of the virtual account number may allow the user to lock or delete a particular virtual account number (e.g., due to fraudulent activity) and generate a new virtual account number, without affecting the status of the main credit card account. At step412, process400receives (e.g., via one or more components ofFIGS.1-3) the supplemental content. For example, the system may receive, from the remote source, the supplemental content. The supplemental content may, in some embodiments, be a virtual account number for the user associated with the target application. In some embodiments, the virtual account number may have been uniquely generated for a particular transaction. The number may be linked to the transaction based on transaction details (e.g., price, date, time, etc.) and only used for that particular transaction. At step414, process400populates (e.g., via one or more components ofFIGS.1-3) the field with the supplemental content. For example, the system may populate the field in the second application with the supplemental content. The system may further populate the field while the values are obscured from a user (e.g., the virtual account number may not be perceivable by the user) in order to enhance security and prevent unauthorized users from observing the virtual account number in the user interface. It is contemplated that the steps or descriptions ofFIG.4may be used with any other embodiment of this disclosure. In addition, the steps and descriptions described in relation toFIG.4may be done in alternative orders or in parallel to further the purposes of this disclosure.
For example, each of these steps may be performed in any order or in parallel or substantially simultaneously to reduce lag or increase the speed of the system or method. Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-3could be used to perform one or more of the steps inFIG.4. FIG.5shows a flowchart of the steps involved in determining whether or not an application corresponds to a current version of an application, in accordance with one or more embodiments. At step502, process500determines (e.g., via one or more components ofFIGS.1-3) an application identification number. For example, the system may determine an application identification number for a second application. In some embodiments, the system may retrieve an application identification number and/or generate one based on retrieving mapping information from the application. At step504, process500determines (e.g., via one or more components ofFIGS.1-3) an entity corresponding to the application identification number. For example, the system may compare the application identification number to an application identification number database to identify an entity corresponding to the second application. In such cases, the system may compare the application identification number to a database listing application identification numbers unique to each entity. At step506, process500queries (e.g., via one or more components ofFIGS.1-3) an entity for a current identification number. For example, the system may query an entity for a current identification number for the second application. For example, the system may query an entity and/or a network location associated with the entity for the current identification number for the application. At step508, process500receives (e.g., via one or more components ofFIGS.1-3) the current identification number. For example, the system may receive the current identification number for the second application from the entity. For example, in response to the query, the system may receive the current identification number and/or a verification that the application identification number is correct and/or authorized. At step510, process500determines (e.g., via one or more components ofFIGS.1-3) whether the application identification number corresponds to the current identification number. For example, the system may compare the application identification number to the current identification number. For example, to ensure that the application identification number database is up to date, the system may query the entity to determine if the current application is the most up-to-date and/or is an authorized version of the application. If the entity responds indicating that the application identification number and/or the application is fraudulent, the system may cancel the request and/or alert the user. Such fraud prevention functionality is not available in conventional autocomplete features. It is contemplated that the steps or descriptions ofFIG.5may be used with any other embodiment of this disclosure. In addition, the steps and descriptions described in relation toFIG.5may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these steps may be performed in any order or in parallel or substantially simultaneously to reduce lag or increase the speed of the system or method.
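As an illustration of the verification flow of FIG.5 (steps502-510), consider the following Python sketch. It is a hypothetical rendering only; the lookup table, the entity-query function, and the identifier formats are assumptions introduced for illustration, not part of the disclosure:

```python
# A minimal sketch of the FIG. 5 check, under assumed data shapes.
ENTITY_DB = {"app-4412": "ExampleBank"}          # app id -> entity

def query_entity_for_current_id(entity: str, app_id: str) -> str:
    """Stand-in for querying the entity (or a network location
    associated with the entity) for the current identification number."""
    return "app-4413"                            # a newer version exists

def verify_application(app_id: str) -> bool:
    entity = ENTITY_DB.get(app_id)               # steps 502-504
    if entity is None:
        return False                             # unknown application
    current_id = query_entity_for_current_id(entity, app_id)  # 506-508
    if app_id != current_id:                     # step 510
        print("cancel request and alert user")   # possible fraud/stale app
        return False
    return True

print(verify_application("app-4412"))            # False: id is out of date
```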
Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-3could be used to perform one or more of the steps inFIG.5. The above-described embodiments of the present disclosure are presented for purposes of illustration and not of limitation, and the present disclosure is limited only by the claims which follow. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods. The present techniques will be better understood with reference to the following enumerated embodiments:1. A method for populating application-specific information using overlay applications, the method comprising: generating for display, on a mobile device, an application overlay, corresponding to a first application, wherein the application overlay overlays a user interface generated by a second application; receiving a user input selecting the application overlay while the user interface is displayed; in response to the user input: identifying the second application; and determining whether a field currently displayed in the user interface corresponds to a predetermined field that is automatically populated by the first application; in response to determining that the field corresponds to a predetermined field, transmitting, to a remote source, a request for supplemental content for populating the field, wherein the supplemental content is selected from available supplemental content based on the second application and the field; receiving, from the remote source, the supplemental content; and populating the field in the second application with the supplemental content.2. The method of embodiment 1, wherein the supplemental content is a virtual account number for a financial service provider.3. The method of embodiment 2, wherein the virtual account number corresponds to an entity corresponding to the second application.4. The method of any one of embodiments 1-3, wherein identifying the second application comprises: determining an application identification number for the second application; and comparing the application identification number to an application identification number database to identify the second application.5. The method of any one of embodiments 1-4, wherein identifying the second application comprises: determining an application identification number for the second application; and comparing the application identification number to an application identification number database to identify an entity corresponding to the second application.6. The method of any one of embodiments 1-5, further comprising: querying an entity for a current identification number for the second application; receiving the current identification number for the second application from the entity; comparing the application identification number to the current identification number.7. 
The method of any one of embodiments 1-6, wherein identifying the second application comprises: determining a mapping of the second application; and comparing the mapping to an application mapping database to identify an entity corresponding to the second application.8. The method of any one of embodiments 1-7, further comprising generating for display a selectable icon corresponding to the application overlay over the user interface.9. The method of any one of embodiments 1-8, further comprising: determining a native overlay for an operating system for the mobile device; determining a display position for the native overlay; and selecting a position for the selectable icon based on the display position.10. The method of any one of embodiments 1-9, wherein determining whether the field currently displayed in the user interface corresponds to the predetermined field that is automatically populated by the first application, further comprises: determining a conceptual data model for the second application, wherein the conceptual data model includes semantic information; and determining the field based on the conceptual data model.11. A tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-10.12. A system comprising: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-10.13. A system comprising means for performing any of embodiments 1-10. | 39,759
11861300 | DETAILED DESCRIPTION In systems configured to maintain multiple documents with various dependencies on each other, and particularly those with dozens of documents of different types, the accuracy of a report or displayed output that purports to capture a “snapshot” or “time slice” of the content of the documents may depend upon whether a change in one document has propagated to another document. In some scenarios, a user viewing several documents at the same time, but where those documents are only a subset of the entire set of documents, may not be able to view an accurate snapshot until the changes have been propagated across the entire set. As one example, when a cell in a spreadsheet is used as a “source” for content displayed in a “destination” 10-K financial document and also used in a destination Exhibit document, a change made to the source spreadsheet may be propagated to the 10-K document first (i.e., before the change has propagated to the Exhibit), so at a certain time slice, the 10-K document has been updated, but the Exhibit document has not yet been updated, and a user viewing both the 10-K document and the Exhibit document at the same time may become confused when entries between the destination documents, with purportedly the same values, do not match each other. Disclosed herein is a system for maintaining links and revisions for a plurality of documents. Various embodiments of the disclosure are implemented in a computer networking environment. The system is configured to receive requests that indicate revisions to be carried out on the plurality of documents where at least one of the requests corresponds to revisions for different documents of the plurality of documents. The plurality of documents may be referred to herein as a “workspace,” for example, a shared repository of a group of documents for a corporation or business unit. For each of the received requests, a workspace revision counter that is shared by the plurality of documents is incremented. The workspace revision counter indicates a revision state of the plurality of documents. In other words, the workspace revision counter indicates a revision state of the documents as an integral data unit, as opposed to separate data units for each document with respective document revision counters. A revision indicated by a request is caused to be performed on one or more documents that correspond to the request. In some scenarios, a single request indicates changes to multiple documents, for example, a request to update a link between a source element and a destination element. Turning toFIG.1, an example of a computer networking environment in which various embodiments of the disclosure may be implemented is shown. A first computing device100is communicatively linked to a network102. Possible implementations of the network102include a local-area network, a wide-area network, a private network, a public network (e.g., the Internet), or any combination of these. The network102may include both wired and wireless components. Also communicatively linked to the network102are a second computing device104a, a third computing device104b, a fourth computing device104c, and a fifth computing device106. The fifth computing device106is communicatively linked to a media storage device108(e.g., a redundant array of independent disks). 
For the sake of example, it is assumed that a first user120operates the second computing device104a, a second user122operates the third computing device104b, and a third user124operates the fourth computing device104c. Each of the computing devices104a,104b, and104cexecutes client software (reference numerals105a,105b, and105c, respectively). One possible implementation of the client software is a web browser. Residing within the media storage device108is a database108acontaining multiple documents, three of which are depicted inFIG.1: a first document114, a second document116, and a third document118. The first computing device100and the fifth computing device106are depicted as rack-mounted servers, while the second, third, and fourth computing devices104a,104b, and104care depicted as notebook computers. However, the computing devices depicted inFIG.1are merely representative. Other possible implementations of a computing device include a desktop computer, a tablet computer, and a smartphone. Furthermore, although the first, second, and third documents114,116, and118are depicted as being stored in a single device, they may, in fact, be stored on multiple storage devices (e.g., sharded into multiple physical chunks) of a cloud storage service. Finally, there may be more or fewer documents than the first, second, and third documents114,116, and118residing on the media storage device108. In various embodiments, at least some documents are stored using a suitable data structure configured to maintain links and references between cells, tables, paragraphs, sections, or other suitable portions of a document. In an embodiment, documents are stored using an RTree data structure. In another embodiment, documents are stored using a causal tree data structure. In an embodiment, the system includes a computing device that configures the computer memory according to a causal tree (a type of logic tree) representing a structure of a document. The computer memory may be internal to or external to the computing device. Causal tree structures are useful representations of how content and metadata associated with the content are organized. For example, a document may be represented by a single causal tree structure or a bounded set of causal tree structures. The causal tree structure is useful in efficiently tracking and storing changes made in the document. A typical causal tree structure includes nodes of the editing instructions in the document, and each editing instruction has a unique identifier or ID. The editing instructions include, for example, text characters, insertion of text characters, deletion of text characters, formatting instructions, copy and paste, cut and paste, etc. In other words, a causal tree structure is a representation of all the instructions (regardless of type) that compose a document. The causal tree structure starts with a root node and a collection of observation instances, from which all other instruction nodes branch. Except for the root node and observations, each editing instruction in the document is caused by whichever editing instruction came before it. Every editing instruction is aware of the ID of its parent instruction, i.e., the instruction that "caused" it. In an embodiment, each instruction (other than the root node and observations) in the document may be represented as a 3-tuple: ID (ID of the instruction), CauseID (ID of the parent instruction), and Value (value of the instruction).
Observations have a 3-tuple: ID (ID of the instruction), Start ID (ID of the first character in a range), and Stop ID (ID of the character immediately after the last character in a range, unless it is the same as the Start ID, which indicates that only a single character is to be observed). Additional instructions may be added to an observation to provide additional information or to modify the range being observed. Examples of observations are discussed in U.S. patent application Ser. No. 16/871,512. In an embodiment, the system includes a computing device that configures the computer memory according to an RTree (a type of logic tree) representing a structure of a spreadsheet or other document. The computer memory may be internal to or external to the computing device. In an embodiment, the RTree has a plurality of nodes, at least some of which contain one or more minimum bounding rectangles. Each minimum bounding rectangle ("MBR") encompasses cells of the spreadsheet from a different one of a plurality of columns of the spreadsheet, but does not encompass cells of any of the other columns of the plurality of columns. A node of the RTree may hold multiple MBRs or a single MBR. For convenient reference, the first computing device100will also be referred to as a "productivity server100" and the fifth computing device106will also be referred to as a "database server106." Although depicted inFIG.1as separate devices, in some embodiments, the functionality of the productivity server100and the database server106is on the same device. The productivity server100executes productivity software101to provide document collaboration services. The database server106executes Software-as-a-Service ("SaaS") platform software107to provide database services to the productivity software101, such as maintaining the contents of the database108aand providing a programming platform for various processes launched by the productivity software (e.g., to manipulate, store, and retrieve documents and other information from the database108a). Under the control of the productivity software101, the productivity server100interacts with the database server106(which operates under the control of the SaaS platform software107) and the computing devices104a,104b, and104c(also referred to as "client devices") to allow the computing devices to access the first document114, the second document116, and the third document118so that the first user120, the second user122, and the third user124can collaborate in editing the documents (e.g., moving sections around in a particular document). In an embodiment, documents maintained on the media storage device108may be organized into sections, with each section (e.g., the contents of the section) being maintained in its own separate data structure referred to as a "section entity." For example, the first document114inFIG.1has a first section represented by a first section entity130, a second section represented by a second section entity132, and a third section represented by a third section entity134. The productivity software101uses an outline entity136(also stored on the media storage device) to determine how the sections are organized. FIG.2is a block diagram of a computing device200, according to an embodiment. One or more of the computing devices ofFIG.1(including the media storage device108) have the general architecture shown inFIG.2, in various embodiments.
The device depicted inFIG.2includes a processor152(e.g., a microprocessor, controller, or application-specific integrated circuit), a primary memory154(e.g., volatile memory, random-access memory), a secondary memory156(e.g., non-volatile memory, solid state drive, hard disk drive), user input devices158(e.g., a keyboard, mouse, or touchscreen), a display160(e.g., an organic light-emitting diode display), and a network interface162(which may be wired or wireless). The memories154and156store instructions and data. The processor152executes the instructions and uses the data to carry out various procedures including, in some embodiments, the methods described herein. Each of the elements ofFIG.2is communicatively linked to one or more other elements via one or more data pathways163. Possible implementations of the data pathways163include wires, conductive pathways on a microchip, and wireless connections. In an embodiment, the processor152is one of multiple processors in the computing device, each of which is capable of executing one or more separate threads. In an embodiment, the processor152communicates with other processors external to the computing device in order to initiate the execution of different threads on those other processors. The term "local memory" as used herein refers to one or both of the memories154and156(i.e., memory accessible by the processor152within the computing device). In some embodiments, the secondary memory156is implemented as, or supplemented by, an external memory156A. The media storage device108is a possible implementation of the external memory156A. The processor152executes the instructions and uses the data to carry out various procedures including, in some embodiments, the methods described herein, including displaying a graphical user interface169. The graphical user interface169is, according to one embodiment, software that the processor152executes to display a report on the display160, and which permits a user to make inputs into the report via the user input devices158. The computing devices ofFIG.1(i.e., the processor152of each of the computing devices) are able to communicate with other devices ofFIG.1via the network interface162over the network102. In an embodiment, this communication takes place via a user interface that the productivity server100provides to the computing devices104a,104b, and104c. The specific nature of the user interface and what the user interface shows at any given time may vary depending on what the user has chosen to view. Also, multiple users may interact with different instances of the user interface on different devices. In some embodiments, the productivity server100carries out calculations to determine how content is to be rendered on a computing device, generates rendering instructions based on those calculations, and transmits those rendering instructions to the computing device. Using the received instructions, the computing device renders the content on a display. In other embodiments, the productivity server100transmits instructions regarding an asset to a computing device. In carrying out the received instructions, the computing device performs the appropriate calculations locally to render the content of the asset on a display. FIG.3is a block diagram of an example database300configured to store workspaces with separate revision counters using the computing device ofFIG.2.
In the embodiment shown inFIG.3, the database300generally corresponds to the database108aand includes the first document114, the second document116, and the third document118. In other embodiments, the database300includes one, two, four, or more documents. In various embodiments, the database300includes a first workspace310having a document table320, a workspace revision queue330, and a workspace revision counter340. The first workspace310represents a shared repository of a plurality of documents. In some scenarios, the repository is associated with a corporation, business unit, user group, or other entity. The plurality of documents may be of the same or different types in various embodiments, for example, spreadsheet documents, text documents, presentation documents, or other suitable document types. In an embodiment, the workspace310is configured to store the plurality of documents (i.e., documents114,116, and118), or suitable data structures associated with the documents, in the document table320. The workspace revision counter340(or "workspace level revision counter") is configured to be shared by the plurality of documents and indicates a revision state of the plurality of documents at any given point in time. In other words, the workspace revision counter340indicates a revision state of the plurality of documents as an integral data unit, as opposed to separate document revision counters for individual documents ("document level revision counters"). The workspace revision counter340is a workspace level revision counter for grouping the revisions of all workspace content at any given point in time within a workspace. By sharing the workspace revision counter340among the plurality of documents, a change or revision to any single document causes an increment to the workspace revision counter340. As an example, when a first change to a first document in the workspace310increments the workspace revision counter from 7 to 8, then a second change to a second document in the workspace310occurring after the first change increments the workspace revision counter340from 8 to 9. In a further example, the workspace revision counter340is incremented from 9 to 10 when a third change to the first document is requested. The workspace revision queue330is configured to store revisions to the plurality of documents, more specifically, requests for revisions. The workspace revision queue330is shared by the plurality of documents and stores revisions to different documents of the plurality of documents. In various embodiments, the workspace revision queue330is a queue for ordering requests for revisions in a linear fashion across the entire workspace. In the embodiment shown inFIG.3, using the above example, the first change to the first document, the second change to the second document, and the third change to the first document are queued as revisions332,334, and336. In an embodiment, the computing device200processes or performs the revisions in the workspace revision queue330in a first in, first out (FIFO) manner. In other embodiments, the computing device200prioritizes at least some of the revisions, for example, based on a priority level of the corresponding document to be revised, a priority level of a user that requested the revision, or other suitable criteria. In some embodiments, the computing device200groups at least some of the revisions in the workspace revision queue330, for example, according to whether the revisions can be performed in parallel.
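The shared counter and queue described above might be sketched as follows. The Python fragment below is an illustrative assumption, not the patented implementation; the class shape, method names, and in-memory structures were chosen only to make the counter-sharing behavior concrete:

```python
# A minimal sketch of a workspace that shares one revision counter and
# one FIFO revision queue across all of its documents. Names are
# hypothetical and introduced for illustration.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Revision:
    doc_id: str
    change: str

@dataclass
class Workspace:
    documents: dict = field(default_factory=dict)       # document table
    revision_queue: deque = field(default_factory=deque)
    revision_counter: int = 0                           # workspace level

    def request_revision(self, doc_id: str, change: str) -> int:
        """Queue a revision; a change to any single document increments
        the one shared workspace revision counter."""
        self.revision_queue.append(Revision(doc_id, change))
        self.revision_counter += 1
        return self.revision_counter

    def process_next(self):
        """Apply queued revisions first in, first out."""
        rev = self.revision_queue.popleft()
        self.documents.setdefault(rev.doc_id, []).append(rev.change)

ws = Workspace()
print(ws.request_revision("doc1", "set A1=1"))   # 1
print(ws.request_revision("doc2", "set B2=7"))   # 2: same shared counter
print(ws.request_revision("doc1", "set A2=3"))   # 3
while ws.revision_queue:
    ws.process_next()
```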
In the embodiment shown inFIG.3, the database300also includes a second workspace350having a document table370, a workspace revision queue380, and a workspace revision counter390(analogous to the document table320, the workspace revision queue330, and the workspace revision counter340). In some embodiments, the database300is configured to provide a separate workspace for different pluralities of documents, for example, for different corporations, business units, user groups, or other entities. In some embodiments, the database300includes a document revision queue for one or more of the plurality of documents. The document revision queue is configured to store temporary copies of revisions and is not shared among the plurality of documents, but is instead specific to a particular document. In an embodiment, for example, the first document114includes a document revision queue314. The document revision queue allows for separate versions or branches of a document to be maintained concurrently, as described herein. In an embodiment, the document revision queue is specific to a locked section of a document where the locked section is a section of the document that is restricted from editing by users outside of an editing group. FIGS.4A to4EandFIGS.5A to5Eare diagrams showing a sequence of timeslices for revisions to spreadsheets with cells having formula dependencies and linking dependencies using document revision counters. In the embodiment shown, the sequence shows revisions to a first spreadsheet document (referred to herein as "Sheet1") and a second spreadsheet document ("Sheet2") with versions indicated as "v1", "v2", and so on. Notably, the version numbers of the documents are independent of each other. For ease of description, only two columns ("A" and "B") and two rows ("1" and "2") are shown inFIGS.4A to4EandFIGS.5A to5E. FIG.4Ashows an initial state of the documents with both the first document and the second document at version 1 ("Sheet1_v1" and "Sheet2_v1") with empty cells. AtFIG.4B, Sheet1 has been modified and advances to version 2 ("v2") to include a formula in cell B1, specifically, a summation of the values in column A ("=SUM(A)=0"). Since cells A1 and A2 are empty, the value of cell B1 of Sheet1 inFIG.4Bis zero. AtFIG.4C, Sheet2 has been modified and advances to version 2 ("v2"), where cell A1 of Sheet2 contains a link to cell B1 of Sheet1 (the link is represented by "S1B1") and cell A2 contains a formula that relies upon cell A1 ("=A1*3=0"). The link indicates that cell B1 of Sheet1 is a source element for cell A1 of Sheet2, which is a destination element. AtFIG.4D, Sheet1 has been modified and advances to version 3 ("v3"), where cell B2 contains a link to cell A2 of Sheet2 (the link is represented by "S2A2"). In other words, cell A2 of Sheet2 is the source of the link, and cell B2 of Sheet1 is the destination of the link. As used herein, a link is a reference, pointer, or data structure that refers to linked content (or the location of the linked content), while linked content is a set of content, for example, a set of one or more characters or numbers, a set of one or more sentences, a set of one or more paragraphs, a set of one or more cells within a spreadsheet, a set of one or more images, or various combinations thereof. For example, inFIG.4C, the value 0 in cell A1 of Sheet2 is the linked content, and "S1B1" is a representation that indicates that cell A1 of Sheet2 contains a link.
Although “S1B1” and “52A2” are used to represent links inFIGS.4C to4E and5A to4E, the user interface may not display these representations. In various implementations, no visual indicator or different visual indicators (e.g., icons, underlining, different font color or font face, different background color, a box that surrounds the link, etc.) may be used to indicate the existence of a link, the source of a link, or the destination of a link. In other embodiments, a user may need to perform another gesture on the user interface (e.g., hover, right click, double click, etc.) to trigger the display of the source(s) or destination(s) of a link (e.g., via a pop-up panel or side panel). In an embodiment, the linked set of content contains a plurality of elements (i.e., characters, cells, paragraphs, etc.) that appear consecutively within a document, for example, cells A4 through A7 of a spreadsheet or sentences one through five of a text document. In another embodiment, the linked set of content contains a plurality of elements that do not appear consecutively, for example, cells B18:C20 of a spreadsheet (i.e., cells B18, B19, B20, C18, C19, and C20). AtFIG.4E, Sheet 1 has been modified and advances to version 4 (“v4”), where cell A1 has a value of 1 and cell B1, based on its formula, has its displayed value changed to 1. In some scenarios, the link of cell A1 in Sheet2 is not immediately updated, for example, due to processing delays associated with identifying when a source element has changed. Accordingly, at the timeslice shown inFIG.4E, Sheet2 has not yet been updated to a new version. AtFIG.5A, the link of cell A1 in Sheet2 has been updated to include the appropriate value from source element B1 of Sheet1 (“1”), cell A2 in Sheet2 is being processed to calculate its formula, and Sheet2 advanced to version 3. In some scenarios, the formula in cell A2 is relatively complex and may have a long processing time (e.g., several minutes or more) before its value has been determined. In other scenarios, the formula may refer to an external source (e.g., a document outside of the workspace310) that may have reduced availability or delayed updates, for example, by being stored on a remote computer. In still other scenarios, the formula may include a link to another “busy” document that is being used by many other users so that access to its data is delayed. AtFIG.5B, the formula in cell A2 of Sheet2 has been calculated, a value of “2” has been inserted in cell A2 of Sheet1, the formula in cell B1 of Sheet1 is updated to a value of 3, and Sheet1 has advanced to version 4 (“v5”), but the link in cell B2 of Sheet1 has not yet been updated with the result of the formula in cell A2 of Sheet2. At this timeslice, Sheet1 is inconsistent with itself because the value of cell A2 in Sheet2 has not propagated to cell B2 of Sheet1. Moreover, Sheet2 is not consistent with Sheet1 because cell A1 of Sheet2 has not been updated with the updated value (“3”) of cell B1 of Sheet1. AtFIG.5C, cell B2 of Sheet1 has been updated to the most recent confirmed value of its link to cell A2 of Sheet2 and Sheet1 advances to version 6 (“v6”). Additionally, cell A1 of Sheet2 is updated to the most recent value of source cell B1 and Sheet2 advances to version 4 (“v4”). AtFIG.5D, cell A2 of Sheet2 has been calculated, but the value is not propagated to cell B2 of Sheet1 untilFIG.5E. One solution to the problem of propagating values, either through formulas or links, is to utilize the workspace revision counter340. 
Although the workspace revision counter340may be incremented more often and more quickly than individual document revision counters, the workspace revision counter340provides a single value that can be referenced to identify a single timeslice, for all documents in the workspace310, at which all values have been propagated. FIG.6is a flow diagram showing a sequence600of revisions to documents using a workspace revision counter, for example, the workspace revision counter340, according to an embodiment. In the embodiment shown inFIG.6, first and second documents ("Doc1" and "Doc2") are provided for editing to various clients (including Users 1, 2, 3, and 4) by a frontend user interface ("frontend"). In some embodiments, the frontend user interface is provided by the first computing device100, the fifth computing device106, or another suitable computing device. In some embodiments, the clients utilize respective ones of the computing devices104a,104b, and104c. In the embodiment shown inFIG.6, User1 and User3 modify the first document, while User2 and User4 modify the second document, via respective user interfaces. Although only two documents and four clients are shown, in other embodiments, the frontend may provide hundreds of documents to hundreds of clients concurrently. During block610, User1 sends a request for a revision to the first document ("EditDoc(doc1, . . . )") and the request is received by the frontend. In some scenarios, the request includes one, two, three, or more revisions. The frontend causes the revision to be performed on the first document, for example, by updating the first document within the database108a, and increments a document revision counter ("Doc1.revision+1"). The frontend provides the updated document revision counter ("2") to User1. During block615, the frontend increments the workspace revision counter340, resulting in a new value of "75". Although the most recent revision incremented the document revision counter of the first document to "2", the workspace revision counter340is utilized for each document in the workspace310, so its value is higher than the document revision counter. During block620, User2 sends a request for a revision to the second document ("EditDoc(doc2, . . . )") and the request is received by the frontend. The frontend causes the revision to be performed on the second document, for example, by updating the second document within the database108a, and increments a document revision counter ("Doc2.revision+1"). The frontend provides the updated document revision counter ("12") to User2. During block625, the frontend increments the workspace revision counter340, resulting in a new value of "76". Notably, revisions to both the first document and the second document result in updates to the same counter, specifically, the workspace revision counter340. Subsequent revisions to the first document at block630and to the second document at block640include increments to the respective document revision counters and are also followed by updates to the workspace revision counter340at blocks635and645. In another embodiment, if a first document contains the source element of a link and a second document contains the destination element of the link, then when a user sends a request to edit the source element of the link (e.g., linked content or other properties of the link) in the first document, the request will also trigger a request to edit the destination element of the link in the second document.
In other words, when a user makes a revision to the source element of the link in the first document, the revision is propagated to the destination element of the link in the second document. In this instance, the document revision counter of the first document will increment by 1, the document revision counter of the second document will increment by 1, and the workspace level counter will also increment by 1. Cloud-based document collaboration platforms tend to be fully open and collaborative. That is, all users who are invited to edit a document (e.g., text document, graphics-based document, spreadsheet, or a hybrid of one or more of the foregoing) are able to see one another's edits in real time or nearly real time. However, there are many scenarios in which one or more users would prefer not to share their draft work product with other collaborators. In these scenarios, the user (or group of users) may create a branch of the document, or a branch of a portion thereof (e.g., a section of a document), where read and/or write access to the branch is limited to themselves only (a "private user") or to themselves and any additional users (a "private group"). Once a section becomes private, users other than the private user or those within the private group will not be able to see additional edits being made but will only see the state of the section as it was just prior to being taken private. The private user or a user within the private group (assuming they have sufficient permission) can choose to make the edits public, which unlocks the private section and allows the rest of the collaborators to view the changes and to make their own edits to the section if desired. In an embodiment, edits to the document are managed through the use of a causal tree or causal graph, and when a section of the document is taken private, the document collaboration system creates a copy of the relevant segment or segments of the causal tree or causal graph, uses the segment or segments to keep track of the edits and, when the section is subsequently made public, merges the segment or segments into the original causal graph. In another embodiment, edits to the document are managed through the use of an Rtree (also referred to herein as "R-Tree"), and when a section of the document is taken private, the document collaboration system creates a copy of the relevant segment or segments of the Rtree, uses the segment or segments to keep track of the edits and, when the section is subsequently made public, merges the segment or segments into the original Rtree. FIG.7is a flow diagram showing a sequence700of revisions to documents having separate branches using a workspace revision counter and document revision counters, for example, the workspace revision counter340. The embodiment shown inFIG.7is similar to that ofFIG.6, where first and second documents ("Doc1" and "Doc2") are provided for editing to various clients (including Users 1, 2, 3, and 4) by a frontend user interface ("frontend"). In the embodiment ofFIG.7, the revisions to the first and second documents are initially stored in a separate branch that may be combined with a main branch at a later time, discarded, or maintained separately from one or more other branches. As an example, a secondary branch of the first document114may be edited and reviewed by a user and changes by the user may be stored and managed in the document revision queue314without affecting a main branch of the first document114.
When the changes from the user are to be finalized and incorporated into the main branch (e.g., to publish an update to a publicly available document), the changes to the document may be incorporated into the main branch, for example, by merging or rebasing. In various embodiments, the main branch and any secondary branches are identified by respective branch identifiers ("branch IDs"), for example, a unique identifier, that allow revisions in a secondary branch to be incorporated into a main branch, revisions in a main branch to be incorporated into a secondary branch, etc. Merging generally corresponds to a process of comparing a secondary branch to a main branch and making any needed changes to the main branch to be consistent with the secondary branch. Rebasing generally corresponds to a process of making the changes that were made on the secondary branch (relative to a common earlier base), but instead using a "sibling" branch as the new base to be modified. In other words, rebasing effectively "replays" changes from the secondary branch (e.g., stored in the document revision queue314) onto another branch sequentially in the order they were introduced, whereas merging takes the endpoints of the branches and simply merges them together. In the embodiment shown inFIG.7, the first document and the second document have their own respective secondary branches ("Doc1 Draft Branch" and "Doc2 Draft Branch"). However, in other embodiments, two or more documents within a workspace are part of a same branch. In some embodiments, a branch for an entire workspace is created and later merged or rebased with another branch, or maintained separately. At block710and block730, respectively, User1 and User3 request revisions to the first document, analogously to blocks610and630. Similarly, at blocks720and740, User2 and User4 request revisions to the second document, analogously to blocks620and640. The revisions corresponding to the first document are stored in the document revision queue314, in an embodiment, and the revisions corresponding to the second document are stored in a corresponding document revision queue (not shown). In some other embodiments, the document revisions for the first document and the second document are stored in a same database or central repository, but are flagged as being limited to a particular branch, for example, using a branch identifier that uniquely identifies the branch. At block750, User1 requests a merge of the secondary branch of the first document with the main branch and the revisions stored in the document revision queue314are merged or rebased with those in the main branch. At block755, the frontend increments the workspace revision counter340. In this embodiment, the separate revisions of the first document at blocks710and730are combined into a same request for a revision and correspond to a same revision number ("75") for the workspace310. Similarly, the separate revisions of the second document at blocks720and740are combined into a same request (block760) for a revision and correspond to a same revision number ("76", block765) for the workspace310. The requests at blocks750and760identify the revisions to be incorporated into the main branch by using a branch identifier that corresponds to the branch. FIG.8is a flow diagram showing a sequence800of revisions to documents and integration of those revisions into other branches using the workspace revision counter340, according to an embodiment.
In the embodiment shown inFIG.8, a first document ("Doc1") is provided for editing to various clients (including Users 1 and 2) by a frontend user interface ("frontend"). In some embodiments, the frontend user interface is provided by the first computing device100, the fifth computing device106, or another suitable computing device. In some embodiments, the clients utilize respective ones of the computing devices104a. In the embodiment shown inFIG.8, User1 and User2 modify the first document via respective user interfaces. Although only one document and two clients are shown, in other embodiments, the frontend may provide hundreds of documents to hundreds of clients concurrently. At block810, the first user (User1) makes revisions to a secondary branch of the first document (e.g., a "private" branch) that are stored separately from other revisions by the second user (User2), which are performed at block820. At block830, the first user requests that the changes from their secondary branch be incorporated into the main branch in a manner similar to that described above with respect to block750. At block840, the frontend increments the workspace revision counter340. In contrast to the merging of a secondary branch into the main branch (e.g., a "fan-in" action), at block850, the revisions to the main branch that were fanned in are "fanned out" to the secondary draft of the second user. In various embodiments, the fanning out process is a merge process or a rebase process, as described above. At block860, the second user (User2) makes revisions to a secondary branch of the first document that are stored separately from the revisions by the first user. At block870, the second user incorporates the changes from their secondary branch into the main branch in a manner similar to that described above with respect to block830. At block880, the frontend increments the workspace revision counter340. FIG.9is a flow diagram showing a sequence900of revisions to documents using a workspace revision counter and workspace revision queue where temporary revisions are displayed, according to an embodiment. In some scenarios, utilization of the workspace revision queue330reduces performance (e.g., longer processing times, longer queue times before a revision is performed) due to higher memory requirements for data structures associated with the workspace310. In an embodiment, for example, a single RTree or causal tree is shared for the plurality of documents in the workspace310and has a larger size than separate RTrees for the documents. Additionally, contention for access to the RTree by different documents being revised at the same time may increase the queue times for a revision to be processed. In the embodiment shown inFIG.9, the computing device200is configured to perform "optimistic" revisions at the document level, but identify those revisions as being "inconsistent" within the user interface until the revision has been processed and determined to be consistent at the workspace level. The optimistic revisions are revisions that are received from a user for a displayed document (e.g., a secondary branch displayed on the computing device104a), performed for the displayed document and updated on the user interface, but without fully updating formulas or links in the displayed document that refer to other documents, other sections of documents, or external sources.
Optimistic revisions provide improved feedback to the user (i.e., near real-time, without having to wait for changes to propagate through the workspace revision queue), but may be incorrect if they rely on the results of a formula calculation or link that has not completed. As one example, a cell B1 in a first sheet (S1B1) and a cell B3 of a second sheet (S2B3) contain formulas as follows:

S1B1=SUM(S1A1, S1A2, S2B3)
S2B3=S1A1*3

where S1A1 corresponds to a cell A1 of the first sheet with an initial value of "2", and S1A2 corresponds to a cell A2 of the first sheet having an initial value of "5". In this example, the cell S2B3 has an initial value of "6" (2*3) and the cell S1B1 has an initial value of "13" (2+5+6). When the user revises cell S1A1 to a value of "4", an optimistic revision indicates a new value of "15" (4+5+6), using the updated value of cell S1A1 but without an update to the value referenced in the second sheet (S2B3). In this example, the value of "15" is shown, but with a temporary identification on the displayed document that indicates that the value is a temporary revision, not a final revision (i.e., with an updated value from cell S2B3). Once the final revision has been propagated, where S2B3 is updated to "12" (4*3) and S1B1 is updated to "21" (4+5+12), the temporary identification is removed. Examples of a temporary identification include a different font color or font face, a different background color, a box that surrounds the value, underlining, or other suitable visual indication. At blocks910,920,930, and940, various users revise first and second documents and send requests for the revisions to the frontend, in a manner similar to that described above with respect to blocks710,720,730, and740. In the embodiment ofFIG.9, however, the revisions at blocks910,920,930, and940are optimistic or temporary until the computing device200has finalized the revisions, for example, by updating formulas and links contained within an RTree for the workspace310. At blocks910,920,930, and940, the temporary revisions are marked as "inconsistent," as discussed above. Moreover, updates to the workspace revision queue330are marked as inconsistent until the revisions have been finalized. In some embodiments, a separate process is performed for finalizing the revisions using the workspace revision queue, for example, a write-behind consistency process. The write-behind consistency process traverses the entirety of the RTree for the workspace310and updates formulas, links, or both formulas and links. In an embodiment, the frontend is provided by the productivity server100and the write-behind consistency process is performed by the database server106. When the write-behind process is complete, the database server106marks the workspace revision queue330, or a particular revision therein, as being consistent. In the embodiment shown inFIG.9, the write-behind consistency process is shown performing separate final revisions for blocks910,920,930, and940at blocks950,960,970, and980, respectively. In some embodiments, causing the revision to be performed includes queuing a temporary copy of the revision in a document revision queue that is specific to the document corresponding to the revision. In an embodiment, for example, the document revision queue corresponds to the document revision queue314.
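The arithmetic of this example can be checked with a short, illustrative Python sketch (the cell names follow the example above; the optimistic/final distinction is modeled with simple recomputation and is not the actual write-behind implementation):

cells = {"S1A1": 2, "S1A2": 5}
cells["S2B3"] = cells["S1A1"] * 3                              # 6
cells["S1B1"] = cells["S1A1"] + cells["S1A2"] + cells["S2B3"]  # 13

# The user revises S1A1 to 4. The optimistic revision recomputes S1B1 with
# the new S1A1 but the stale S2B3, and is flagged as inconsistent in the UI.
cells["S1A1"] = 4
optimistic = cells["S1A1"] + cells["S1A2"] + cells["S2B3"]
print(optimistic, "(temporary, marked inconsistent)")          # 15

# The write-behind consistency process later propagates the dependency,
# after which the temporary identification is removed.
cells["S2B3"] = cells["S1A1"] * 3                              # 12
cells["S1B1"] = cells["S1A1"] + cells["S1A2"] + cells["S2B3"]
print(cells["S1B1"], "(final, consistent)")                    # 21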
A temporary revision is performed on a computing device that displays a secondary branch of the document corresponding to the revision, without performing a revision on a corresponding main branch of the document. In an embodiment, for example, the productivity server100performs the temporary revision on a branch of the first document at block910, without performing a final revision at block950(i.e., before the final revision has been performed). In other embodiments, the temporary revision corresponds to the blocks920,930, or940ofFIG.9. The revision is queued as a final revision in the workspace revision queue330and performed on the main branch, for example, corresponding to blocks950,960,970, or980ofFIG.9. In some embodiments, a received request for a revision indicates a revision to two or more documents. In an embodiment, for example, the request is for a revision to a link where the revision corresponds to a source element within a first document and a destination element within a second document. The link revision is initially queued in the first document revision queue that is specific to the document containing the source element of the link (e.g., the document being edited by the user that makes the request). In an embodiment, this document revision queue is processed by the frontend provided by the productivity server100. The link revision is initially identified as being "inconsistent" until the write-behind consistency process, performed by the database server106, further processes the revision and determines that the revision is consistent with other revisions, links, and/or formulas. In an embodiment, the link revision is queued in the workspace revision queue, the write-behind consistency process traverses the RTree for the workspace310for the link revision, and queues the link revision in a document revision queue that is specific to the second document containing the destination element. In some embodiments, revisions or updates to the workspace310that originate outside of the workspace310are also handled using the write-behind consistency process. In this way, an update to an external document (e.g., outside of the workspace310) that is relied upon by a document within the workspace310is associated with a final revision and reference number for the workspace revision counter340. In various embodiments, the external document is located on a remote server, cloud service, in a different workspace (e.g., in the workspace350), or other suitable location. As discussed above, in some embodiments, the computing device200utilizes an RTree as a data structure to store electronic documents of the workspace310. In an embodiment, the computing device200utilizes the RTree for maintaining formulas that reference different cells. In another embodiment, the computing device200utilizes the RTree for maintaining both formulas and links to different cells. In this embodiment, a single RTree is utilized for maintaining formulas and links throughout the plurality of documents of the workspace310. This approach improves detection of circular references across all documents within the workspace310and also improves the flow of values from one document to another document over links and formulas. In some embodiments, the computing device200maintains separate RTrees (e.g., one or more RTrees per document), but links the RTrees by utilizing a common reference time.
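As a hedged sketch of the write-behind pattern described above (the queue entries, dependency index, and function names are invented for illustration), a revision might be queued as inconsistent and marked consistent only after its dependent links and formulas are updated:

workspace_revision_queue = []

def queue_revision(revision_id):
    entry = {"revision": revision_id, "consistent": False}
    workspace_revision_queue.append(entry)
    return entry

def write_behind_pass(dependency_index):
    # Traverse the workspace-level structure (e.g., an RTree of links and
    # formulas) and finalize each pending revision.
    for entry in workspace_revision_queue:
        for update in dependency_index.get(entry["revision"], []):
            update()  # refresh one dependent link or formula
        entry["consistent"] = True

entry = queue_revision("edit-S1A1")
write_behind_pass({"edit-S1A1": [lambda: None]})
print(entry["consistent"])   # True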
FIG.10is a flowchart illustrating an example method, implemented on a server, for maintaining links and revisions for a plurality of documents, according to an embodiment. In some embodiments, the method1000is implemented by the productivity server100ofFIG.1, which interacts with the database server106and the client devices104.FIG.10is described with reference toFIG.1for explanatory purposes. In other embodiments, however, the method1000is implemented by another suitable computing device. At block1002, requests are received that indicate revisions to be carried out on the plurality of documents. In an embodiment, the plurality of documents corresponds to the plurality of documents in the document table320(FIG.3). In some embodiments, at least one of the requests corresponds to revisions for different documents of the plurality of documents, for example, the first document114and the second document116. In various embodiments, the requests correspond to blocks610,620,630, or640ofFIG.6, blocks710,720,730,740,750, or760ofFIG.7, blocks810,820,830,850,860, or870ofFIG.8, or blocks910,920,930, or940ofFIG.9. At block1004, a workspace revision counter that is shared by the plurality of documents is incremented. In an embodiment, the workspace revision counter indicates a revision state of the plurality of documents. In some embodiments, the workspace revision counter corresponds to the workspace revision counter340. In various embodiments, incrementing the workspace revision counter340corresponds to blocks615,625,635, or645ofFIG.6, blocks755or765ofFIG.7, blocks840or880ofFIG.8, or blocks915,925,935, or945ofFIG.9. At block1006, the revision is queued in a workspace revision queue that is shared by the plurality of documents. In an embodiment, the workspace revision queue corresponds to the workspace revision queue330. At block1008, the revision indicated by the request is caused to be performed on one or more documents of the plurality of documents that correspond to the request. In some embodiments, the method1000further includes displaying a temporary identification that corresponds to the temporary revision on the displayed document and indicates that the temporary revision is not the final revision. The temporary identification is removed from the displayed document when the final revision has been performed. In an embodiment, for example, a temporary revision is shown on a computing device using a different font color or font face, a different background color, a box that surrounds the value, underlining, or other suitable visual indication as the temporary identification at block910, and the temporary identification is removed at block950. In some embodiments, at least some user interface features of a user interface on which the document is displayed are disabled while at least some temporary identifications are displayed. In an embodiment, for example, user interface features such as generating a report based on the plurality of documents, exporting the plurality of documents, or other actions are temporarily disabled until the revisions have been finalized. In an embodiment, the method1000further includes receiving a revision for data that is external to the plurality of documents and linked from at least one of the plurality of documents. In an embodiment, the external data corresponds to data from an external workspace, for example, the workspace350. In another embodiment, the external data corresponds to data from a remote server, cloud service, or other suitable location.
The workspace revision counter is incremented based on the revision for the external data. The revision for the external data is queued in the workspace revision queue, i.e., the workspace revision queue330. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. For the purposes of promoting an understanding of the principles of the disclosure, reference has been made to the embodiments illustrated in the drawings, and specific language has been used to describe these embodiments. However, no limitation of the scope of the disclosure is intended by this specific language, and the disclosure should be construed to encompass all embodiments that would normally occur to one of ordinary skill in the art. The terminology used herein is for the purpose of describing the particular embodiments and is not intended to be limiting of exemplary embodiments of the disclosure. In the description of the embodiments, certain detailed explanations of related art are omitted when it is deemed that they may unnecessarily obscure the essence of the disclosure. The apparatus described herein may comprise a processor, a memory for storing program data to be executed by the processor, a permanent storage such as a disk drive, a communications port for handling communications with external devices, and user interface devices, including a display, touch panel, keys, buttons, etc. When software modules are involved, these software modules may be stored as program instructions or computer readable code executable by the processor on a non-transitory computer-readable media such as magnetic storage media (e.g., magnetic tapes, hard disks, floppy disks), optical recording media (e.g., CD-ROMs, Digital Versatile Discs (DVDs), etc.), and solid state memory (e.g., random-access memory (RAM), read-only memory (ROM), static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, thumb drives, solid state drives, etc.). The computer readable recording media may also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. This computer readable recording media may be read by the computer, stored in the memory, and executed by the processor. Also, using the disclosure herein, programmers of ordinary skill in the art to which the disclosure pertains may easily implement functional programs, codes, and code segments for making and using the disclosure. The disclosure may be described in terms of functional block components and various processing steps. Such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions. For example, the disclosure may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. 
Similarly, where the elements of the disclosure are implemented using software programming or software elements, the disclosure may be implemented with any programming or scripting language such as C, C++, JAVA®, assembler, or the like, with the various algorithms being implemented with any combination of data structures, objects, processes, routines or other programming elements. Functional aspects may be implemented in algorithms that execute on one or more processors. Furthermore, the disclosure may employ any number of conventional techniques for electronics configuration, signal processing and/or control, data processing and the like. Finally, the steps of all methods described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. For the sake of brevity, conventional electronics, control systems, software development and other functional aspects of the systems (and components of the individual operating components of the systems) may not be described in detail. Furthermore, the connecting lines, or connectors shown in the various figures presented are intended to represent exemplary functional relationships and/or physical or logical couplings between the various elements. It should be noted that many alternative or additional functional relationships, physical connections or logical connections may be present in a practical device. The words "mechanism", "element", "unit", "structure", "means", and "construction" are used broadly and are not limited to mechanical or physical embodiments, but may include software routines in conjunction with processors, etc. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. Numerous modifications and adaptations will be readily apparent to those of ordinary skill in this art without departing from the spirit and scope of the disclosure as defined by the following claims. Therefore, the scope of the disclosure is defined not by the detailed description of the disclosure but by the following claims, and all differences within the scope will be construed as being included in the disclosure. No item or component is essential to the practice of the disclosure unless the element is specifically described as "essential" or "critical". It will also be recognized that the terms "comprises", "comprising", "includes", "including", "has", and "having", as used herein, are specifically intended to be read as open-ended terms of art. The use of the terms "a" and "an" and "the" and similar referents in the context of describing the disclosure (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless the context clearly indicates otherwise. In addition, it should be understood that although the terms "first", "second", etc. may be used herein to describe various elements, these elements should not be limited by these terms, which are only used to distinguish one element from another. Furthermore, recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. | 55,203 |
11861301 | DETAILED DESCRIPTION The illustrative embodiments recognize and take into account one or more different considerations. For example, with hundreds of thousands of parts, sorting these parts into groups is a tedious and time-consuming process that is currently performed by human operators. A human operator can review the computer-aided designs (CAD) for each part in attempting to sort or group parts. However, this process is subjective and can result in different groupings based on the subjective decisions of the human operators. The illustrative embodiments recognize and take into account that another approach involves sorting parts into groups based on a similarity of part designs. This similarity can be determined using a part list. By analyzing part lists, automated sorting or categorizing of parts into groups can be performed. A cost analysis and other analyses can be made with respect to the grouping of parts. However, grouping parts based on information from part lists can be tedious and time-consuming. The raw text in the part lists contains information that is not useful for grouping parts. With millions of part lists, the analysis and grouping of parts can use more processing resources and time than desired. Thus, the illustrative embodiments provide a method, apparatus, system, and computer program product for grouping parts. In one illustrative example, sets of unigrams are generated from text in part lists for parts in a family of parts using a natural language processing method. A set of unigrams in the sets of unigrams represents components for a part in the parts. A document term matrix using the sets of unigrams is created. The document term matrix describes a presence of components in the parts. A number of unigrams that have occurrences in the document term matrix greater than a common design threshold is removed from the document term matrix. Removing the number of unigrams from the document term matrix forms a processed document term matrix. The common design threshold identifies a level of occurrence not useful in differentiating the parts from each other. As used herein, "a number of" when used with reference to items means one or more items. For example, a number of unigrams is one or more unigrams. Additionally, as used herein, "a set of" when used with reference to items means one or more items. For example, a set of unigrams is one or more unigrams. The parts are clustered into groups using the processed document term matrix. Each of these groups can represent one or more parts in a family of parts. Each group of parts can be grouped based on the components that form the parts. For example, when the family of parts is valves, then each group can represent particular types of valves that are grouped using a clustering process. The grouping of the valves into groups is performed based on the components that form the valves. The components for analysis can be selected as components that represent design features that can distinguish different types of valves from each other. This process can be performed for each family of parts. For example, the grouping can be performed for valves. The process can then be performed again for display panels. With this ability to more quickly and accurately group parts, groupings can be displayed and analyzed. The analysis can be performed to reduce costs in managing parts. This reduction can include managing inventories and purchasing parts based on the analysis of the grouping of parts in a family of parts.
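As a hedged, end-to-end sketch of this pipeline (Python with scikit-learn; the sample part lists, threshold value, and group count below are illustrative assumptions, not the patented implementation):

from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import CountVectorizer

part_lists = [
    "housing spring o-ring poppet",    # check valve
    "housing spring o-ring poppet",    # check valve
    "housing o-ring gate",             # gate valve
    "housing o-ring gate solenoid",    # gate valve
]

# Unigrams -> binary document term matrix (rows are parts, columns are components).
vectorizer = CountVectorizer(binary=True, token_pattern=r"[\w-]+")
dtm = vectorizer.fit_transform(part_lists).toarray()

# Remove unigrams whose occurrence exceeds a common design threshold (e.g.,
# present in more than 95% of parts), since they cannot differentiate parts.
keep = dtm.mean(axis=0) <= 0.95
processed_dtm = dtm[:, keep]

# Cluster the remaining design differentiating features into groups.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(processed_dtm)
print(labels)   # e.g., [0, 0, 1, 1] -- check valves versus gate valves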
With reference now to the figures and, in particular, with reference toFIG.1, a pictorial representation of a network of data processing systems is depicted in which illustrative embodiments may be implemented. Network data processing system100is a network of computers in which the illustrative embodiments may be implemented. Network data processing system100contains network102, which is the medium used to provide communications links between various devices and computers connected together within network data processing system100. Network102may include connections, such as wire, wireless communication links, or fiber optic cables. In the depicted example, server computer104and server computer106connect to network102along with storage unit108. In addition, client devices110connect to network102. As depicted, client devices110include client computer112, client computer114, and client computer116. Client devices110can be, for example, computers, workstations, or network computers. In the depicted example, server computer104provides information, such as boot files, operating system images, and applications to client devices110. Further, client devices110can also include other types of client devices such as mobile phone118, airplane120, and airplane122. In this illustrative example, server computer104, server computer106, storage unit108, and client devices110are network devices that connect to network102in which network102is the communications media for these network devices. Some or all of client devices110may form an Internet of things (IoT) in which these physical devices can connect to network102and exchange information with each other over network102. Client devices110are clients to server computer104in this example. Network data processing system100may include additional server computers, client computers, and other devices not shown. Client devices110connect to network102utilizing at least one of wired, optical fiber, or wireless connections. Program instructions located in network data processing system100can be stored on a computer-recordable storage medium and downloaded to a data processing system or other device for use. For example, program instructions can be stored on a computer-recordable storage medium on server computer104and downloaded to client devices110over network102for use on client devices110. In the depicted example, network data processing system100is the Internet with network102representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers consisting of thousands of commercial, governmental, educational, and other computer systems that route data and messages. Of course, network data processing system100also may be implemented using a number of different types of networks. For example, network102can be comprised of at least one of the Internet, an intranet, a local area network (LAN), a metropolitan area network (MAN), or a wide area network (WAN).FIG.1is intended as an example, and not as an architectural limitation for the different illustrative embodiments. As used herein, “a number of” when used with reference to items, means one or more items. For example, “a number of different types of networks” is one or more different types of networks. 
Further, the phrase "at least one of," when used with a list of items, means different combinations of one or more of the listed items can be used, and only one of each item in the list may be needed. In other words, "at least one of" means any combination of items and number of items may be used from the list, but not all of the items in the list are required. The item can be a particular object, a thing, or a category. For example, without limitation, "at least one of item A, item B, or item C" may include item A, item A and item B, or item B. This example also may include item A, item B, and item C or item B and item C. Of course, any combinations of these items can be present. In some illustrative examples, "at least one of" can be, for example, without limitation, two of item A, one of item B, and ten of item C; four of item B and seven of item C; or other suitable combinations. In this illustrative example, human operator130can sort parts for aircraft such as airplane120and airplane122. In these examples, airplane120and airplane122are commercial airplanes and can have millions of parts. As depicted, the sorting can be performed by human operator130using client computer112. As depicted, human operator130can send request132to part analyzer134located in server computer104over network102from client computer112to group parts. In this example, request132can be a request to sort all parts in airplane120into groups. In another example, request132can be to sort a particular family of parts used in airplane120into groups. For example, a family of parts can be a valve, a flight entertainment system, a pump, or some other family of parts. In yet another illustrative example, request132can be a request to sort parts for a particular model of aircraft into groups. In yet another illustrative example, request132can be a request to group parts for aircraft operated by an airline. As depicted in this example, part analyzer134uses part lists136in response to receiving request132. A part list in part lists136can be, for example, a bill of materials. These part lists can be obtained from a database or other repository of part lists such as storage unit108. In other illustrative examples, these part lists can be located in distributed locations across different storage units at locations such as part supplier150and maintenance facility152. These part lists can be created and managed by at least one of an aircraft design system, an inventory management system, a materials requirement planning system, a parts ordering system, or other system that can maintain part lists for use in grouping parts for aircraft. As depicted, part analyzer134generates sets of unigrams138from the text in part lists136. These unigrams can be processed to create consistency and reduce unnecessary information. In other words, unigrams138can be processed to refer to the same components using the same words or acronyms. Further, the consistency can also be obtained by removing spelling errors and incorrect concatenations. Additionally, unigrams138can be processed to remove words from part lists136that do not contribute to distinguishing one part from another part. In other words, unigrams138can be processed to remove common words such as "the", "a", "and", and other terms that do not describe components in part lists136. In this illustrative example, part analyzer134creates document term matrices140from unigrams138. A document term matrix is created for each family of parts in this example.
For example, a document term matrix can be created for valves. Another document term matrix can be created for pumps, and yet another document term matrix can be created for actuators. In this example, each part list in part lists136identifies one or more parts. The part lists can also identify which family of parts a part belongs to. The components represented by these unigrams are considered design features for the parts. However, many of these design features may be so common that they are not considered to be sufficiently important to compare parts to each other in determining whether the parts should be placed in the same group. Thus, some components representing design features may not be design differentiating features. In this example, part analyzer134can remove unigrams in document term matrices140that have an occurrence that is greater than a common design threshold. This threshold is selected to indicate when the level of occurrence is high enough that the unigrams for a particular component are not useful in distinguishing parts from each other. As a result, the processing of document term matrices140by part analyzer134results in document term matrices140that contain unigrams138that have differentiating design features. The removal of unigrams138that do not have differentiating design features can serve to remove noise from unigrams138. This removal of noise can reduce the amount of processing resources and time needed to sort parts into groups. As a result, part analyzer134in server computer104causes server computer104to operate as an improved computer as compared to other computers that do not use part analyzer134. After removing unigrams from document term matrices140, part analyzer134can cluster the parts into groups142using document term matrices140. Part analyzer134can return result144to client computer112over network102for display to human operator130on client computer112. In this example, the display of result144can be in a graphical form on a graphical user interface146for client computer112. In this example, human operator130can use result144to perform various analyses to manage parts for airplane120as well as other airplanes. For example, human operator130can use result144to perform part cost modeling for the parts in airplanes such as airplane120in a fleet of aircraft. Result144can be used for analysis and analytics related to procurement cost avoidance by maintenance facility152. Additionally, result144can also be used to determine when more diversity in vendors for a particular part group may be needed. For example, a particular model of a check valve may be supplied by only a single vendor. The grouping of parts can also be used to identify other vendors that may manufacture or supply check valves. A request can be made to these vendors to also supply that model of the check valve such that a more diverse group of vendors is available to supply that particular part. With reference now toFIG.2, a block diagram of a parts environment is depicted in accordance with an illustrative embodiment. In this illustrative example, part environment200includes components that can be implemented in hardware such as the hardware shown in network data processing system100inFIG.1. In this illustrative example, part management system202can be used to manage parts204for platform206. In this example, platform206is aircraft208. As depicted, part management system202comprises computer system212and part analyzer214.
Part analyzer214can be implemented in software, hardware, firmware, or a combination thereof. When software is used, the operations performed by part analyzer214can be implemented in program instructions configured to run on hardware, such as a processor unit. When firmware is used, the operations performed by part analyzer214can be implemented in program instructions and data and stored in persistent memory to run on a processor unit. When hardware is employed, the hardware may include circuits that operate to perform the operations in part analyzer214. In the illustrative examples, the hardware may take a form selected from at least one of a circuit system, an integrated circuit, an application specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware configured to perform a number of operations. With a programmable logic device, the device can be configured to perform the number of operations. The device can be reconfigured at a later time or can be permanently configured to perform the number of operations. Programmable logic devices include, for example, a programmable logic array, a programmable array logic, a field programmable logic array, a field programmable gate array, and other suitable hardware devices. Additionally, the processes can be implemented in organic components integrated with inorganic components and can be comprised entirely of organic components excluding a human being. For example, the processes can be implemented as circuits in organic semiconductors. Computer system212is a physical hardware system and includes one or more data processing systems. When more than one data processing system is present in computer system212, those data processing systems are in communication with each other using a communications medium. The communications medium may be a network. The data processing systems can be selected from at least one of a computer, a server computer, a tablet, or some other suitable data processing system. As depicted, computer system212includes a number of processor units216that is capable of executing program instructions218implementing processes in the illustrative examples. In other words, program instructions218are computer readable program instructions. As used herein, a processor unit in the number of processor units216is a hardware device and is comprised of hardware circuits such as those on an integrated circuit that respond to and process instructions and program code that operate a computer. When the number of processor units216executes program instructions218for a process, the number of processor units216can be one or more processor units that are on the same computer or on different computers. In other words, the process can be distributed between processor units216on the same or different computers in computer system212. Further, the number of processor units216can be of the same type or different types of processor units. For example, a number of processor units216can be selected from at least one of a single core processor, a dual-core processor, a multi-processor core, a general-purpose central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), or some other type of processor unit. In one illustrative example, part analyzer214can group parts204for family of parts215in families of parts217. In this illustrative example, a family of parts is a collection of similar parts that can have different values for different design features.
The identification of a family of parts can be based on function. For example, parts in a family of parts can be parts that have the same function. In addition, these parts in the family of parts can also have similar features. Further, similar features can be features that provide the same function in parts in the same family of parts. For example, all of the parts in a family may have a housing although the housing may be of different sizes and materials. In one illustrative example, family of parts215excludes structural parts such as a fuselage, a fuselage barrel, a wing box, or other structural parts for aircraft208. Family of parts215can be, for example, line replaceable units, environmental control units, displays, antennas, actuators, pumps, valves, and other types of parts. As depicted, part220in parts204is comprised of components222. These components are assembled in assembly224to form part220. In one example, family of parts215can be valves. With this example, parts204in family of parts215are identified as valves, in which the types of valves can be selected from at least one of a check valve, a flapper valve, a pressure relief valve, a gate valve, a fuel valve, a butterfly valve, a piston engine valve, an exhaust valve, a relief valve, or other suitable type of valve. With this example, when part220is a check valve in valves for family of parts215, part220has two or more components. As depicted, components222for part220in the form of a check valve can comprise a poppet, a spring, an o-ring, and a housing. Other check valves in parts204can have one or more of these features that distinguish check valves from other types of valves in the same family of parts. For example, another check valve can have two poppets, a spring, an o-ring, and a housing. As depicted, part analyzer214can cluster or group parts204into groups226based on families of parts217. Parts204in each family of parts in families of parts217can be grouped within that family of parts using components230for parts204. In the illustrative examples, components230in a part represent design features228for the part. These components can be used to group parts204into groups226. In other words, the grouping can be performed using design features228based on components230in parts204. This classification or grouping of parts204into groups226can be performed by part analyzer214in a number of different ways. In one illustrative example, part analyzer214can determine groups226for parts204using parts list232for parts204. As depicted, parts list232contains text234. Text234contains text identifying components230in parts204. Text234can also include other text such as use instructions, assembler instructions, operating conditions, and other types of information relating to parts204. In one illustrative example, parts list232can take the form of a bill of materials (BOM)236. Bill of materials236can be a list of raw materials, components, and instructions that are used to construct, manufacture, repair, or use a part. In one illustrative example, the processes performed by part analyzer214to determine groups226for parts204are described with respect to a single family of parts, such as family of parts215. This process can be performed for multiple families or all families in families of parts217in other examples. In this illustrative example, part analyzer214generates sets of unigrams238from text234in parts list232for parts204in a family of parts215using natural language processing240.
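A minimal sketch of this unigram generation step, including the kind of dictionary-based cleanup described further below (the dictionary entries and sample part list text are invented for illustration; the embodiments leave the exact natural language processing method open):

import re

ACRONYMS = {"hsg": "housing"}             # acronyms and concatenations dictionary
SYNONYMS = {"oring": "o-ring"}            # synonyms and spelling dictionary
STOP_WORDS = {"the", "a", "and", "assy"}  # stop word dictionary

def unigrams(text):
    tokens = re.findall(r"[\w-]+", text.lower())    # strip non-text symbols
    tokens = [ACRONYMS.get(token, token) for token in tokens]
    tokens = [SYNONYMS.get(token, token) for token in tokens]
    return {token for token in tokens if token not in STOP_WORDS}

print(unigrams("HSG assy, the spring and ORING"))
# {'housing', 'spring', 'o-ring'} (set order may vary)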
A unigram is an n-gram having a size of 1. An n-gram is a sequence of n words. For example, "housing" is a unigram. Each set of unigrams238in the sets of unigrams238is derived from text234for a part in parts204. In this example, a set of unigrams238in the sets of unigrams238represents components230for part220in parts204. Natural language processing240can be performed using at least one of a method, model, or system that can operate to extract unigrams238from text234. For example, natural language processing240can be performed using natural language processing model242such as a unigram language model. A unigram language model can discard all conditioning context and estimate each term independently. Natural language processing model242can be implemented using a machine learning model, a deep neural network machine learning model, or other types of models. As depicted, part analyzer214creates document term matrix244using the sets of unigrams238. In this example, document term matrix244describes a presence of components230in parts204. In this example, document term matrix244is comprised of columns246for unigrams238and rows248for parts204. A row is for a part and has values for the presence of components corresponding to unigrams238that form that part. In this depicted example, the value is a binary value indicating whether a component is present. This type of document term matrix is a binary document term matrix. However, the number of unigrams238present in document term matrix244can be very large, such that the amount of processing resources used in computer system212becomes greater than desired. The use of these processing resources for grouping parts204into groups226can result in an unavailability of computer system212to perform tasks with a desired level of performance. This desired level of performance can be defined using performance level metrics. These metrics can include, for example, response time, resource usage, and other metrics that can be measured for computer system212when performing various tasks. In the illustrative examples, not all of design features228are necessary or useful to distinguish or group parts204into groups226. Part analyzer214can identify design differentiating features229in design features228. Design features228that are not design differentiating features229can be removed. In other words, unigrams238for components230that are not design differentiating features229in design features228can be removed. This removal of design features228that are not helpful in distinguishing parts204is a removal of noise. As a result, the analysis of the remaining unigrams can be performed using fewer processing resources in computer system212. In this illustrative example, part analyzer214removes a number of unigrams238from document term matrix244that have occurrences in document term matrix244that are greater than a common design threshold250. Removing the number of unigrams238from document term matrix244forms processed document term matrix252. In this example, common design threshold250identifies a level of occurrence not useful in differentiating parts204from each other. For example, if a unigram representing a component occurs in 95% of rows248for parts204in family of parts215being analyzed, the column in columns246for that unigram can be removed. That level of occurrence is considered to be sufficiently high that the use of the unigram in that column would not be useful or helpful in grouping parts204in rows248into groups226.
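A minimal numpy sketch of this removal step (the matrix values are illustrative; the threshold is expressed as the fraction of rows in which a column's unigram occurs):

import numpy as np

def prune_common(dtm, threshold=0.95):
    # Fraction of parts (rows) in which each unigram (column) occurs.
    occurrence = dtm.mean(axis=0)
    return dtm[:, occurrence <= threshold]

dtm = np.array([[1, 1, 1, 1],    # rows are parts
                [1, 0, 1, 1],    # columns are unigrams
                [1, 0, 1, 0],
                [1, 1, 1, 0]])
print(prune_common(dtm).shape)   # (4, 2) -- the two 100%-occurrence columns are removed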
In other illustrative examples, other values can be used rather than 95%. For example, 90% or 87% can be used to determine whether a unigram should be removed from document term matrix244. In this illustrative example, part analyzer214clusters parts204into groups226using processed document term matrix252. This clustering can be performed using clustering algorithm254. Clustering algorithm254can be selected from a group comprising an agglomerative hierarchical clustering algorithm, a density-based spatial clustering of applications with noise (DBSCAN) algorithm, a K-means algorithm, a Gaussian mixture model (GMM) algorithm, an ordering points to identify the clustering structure (OPTICS) algorithm, or other suitable type of clustering algorithm. Other processing in addition to the operations depicted above can be performed in clustering parts into groups226. For example, non-text symbols266can be removed from parts list232prior to generating sets of unigrams238. Further, unigrams238can be processed for consistency prior to creating document term matrix244. This processing can be performed using one or more of dictionaries267. In this example, dictionaries267comprise acronyms and concatenations dictionary268, synonyms and spelling dictionary270, and stop word dictionary272. In this illustrative example, dictionaries267can be domain specific to the particular family of parts being processed. For example, the dictionaries can include a dictionary specific to valves when the family of parts is valves. If the family of parts is flight entertainment systems, the dictionaries used are specific to flight entertainment systems. In this example, acronyms and concatenations dictionary268can be used by part analyzer214to expand acronyms such that the unigrams use words rather than acronyms. In another example, the words can be changed into acronyms such that all the unigrams are consistent for a particular component. The concatenations can be used to remove incorrect concatenations or create concatenations that are correct for the unigrams based on the usage in the dictionary. As another example, synonyms and spelling dictionary270can be used by part analyzer214to use the same wording for the same components in a family of parts. Further, part analyzer214can use stop word dictionary272to remove non-technical unigrams. These non-technical unigrams are not associated with the design features for parts204in the families of parts217. For example, words such as o-ring, nut, housing, and body are examples of unigrams representing components in the parts such as valves. However, these unigrams can represent components that are sufficiently common that the unigrams do not help differentiate parts204in family of parts215from each other. Additionally, part analyzer214can apply matrix correction276based on the number of groups selected for parts204. In this example, matrix correction276is a process that removes unigrams238from document term matrix244based on the frequency of unigrams238in document term matrix244. Matrix correction276is a sparse matrix correction when applied to document term matrix244in the form of a sparse document term matrix, which has a large number of zero values. In removing unigrams238, part analyzer214can remove more unigrams from unigrams238when fewer groups are desired for groups226.
For example, a lower number of groupings can be obtained in which less design differentiation occurs by removing unigrams that occur with a frequency in document term matrix244that is less than 15%. When larger numbers of groups are desired, part analyzer214can remove fewer unigrams. For example, matrix correction can be performed such that unigrams are removed when the unigrams have a frequency of 10% or less within document term matrix244. In another example, matrix correction276can be applied to unigrams238in document term matrix244such that the minimum document frequency is from about 10% to 15%. In another illustrative example, part analyzer214can reduce the number of dimensions in processed document term matrix252using feature reducer251. Part analyzer214can use feature reducer251to replace unigrams238with features249in processed document term matrix252. In other words, part analyzer214can determine features249from unigrams238in processed document term matrix252. In this illustrative example, features249are a combination of unigrams238in processed document term matrix252. In other words, each feature in features249can be a combination of two or more of unigrams238. With the reduction in the number of dimensions, processed document term matrix252is an embedding matrix in which high-dimensional data is converted to low-dimensional data. Feature reducer251can reduce the number of dimensions by combining unigrams238into features249using a number of different techniques. For example, feature reducer251can use multiple correspondence analysis (MCA). With the use of feature reducer251, the clustering of parts204into groups226is performed using features249in processed document term matrix252instead of unigrams238. In one illustrative example, one or more technical solutions are present that overcome a technical problem with grouping parts using part lists in a more efficient manner. As a result, one or more technical solutions may provide a technical effect in enabling increased efficiency and speed in grouping parts into groups. In the illustrative examples, the part lists are processed in a manner that reduces the information in part lists that is not useful in grouping parts. For example, one or more illustrative examples can identify unigrams that are useful in distinguishing parts to group the parts. In one or more illustrative examples, unigrams can be removed when those unigrams do not represent components that have design differentiating features that can be used to group parts. The different operations performed by part analyzer214to remove unigrams238that are not useful can increase the efficiency at which computer system212operates to group parts into groups. These different techniques employed by part analyzer214are not present in currently used processes. As a result, part analyzer214can sort or group parts into groups more efficiently as compared to current grouping processes. Further, the illustrative examples can also reduce the number of dimensions of unigrams to simplify the grouping of parts. Computer system212can be configured to perform at least one of the steps, operations, or actions described in the different illustrative examples using software, hardware, firmware, or a combination thereof. As a result, computer system212operates as a special purpose computer system in which part analyzer214in computer system212enables grouping parts.
In particular, part analyzer214transforms computer system212into a special purpose computer system as compared to currently available general computer systems that do not have part analyzer214. In other words, other general-purpose computers without part analyzer214are unable to process parts list232to create unigrams238and process those unigrams for processed document term matrix252that can be analyzed in a manner that has increased performance as compared to current techniques. For example, by removing the information that is not useful in distinguishing parts, processed document term matrix252can be processed more quickly by part analyzer214in computer system212. The different operations are performed automatically by part analyzer214without needing user input. Further, a lower use of resources in computer system212occurs through the removal of design features228that are not design differentiating features229. Further, the reduction of dimensions by creating features249from unigrams238for processed document term matrix252can also increase the performance in grouping parts204into groups226. With reference next toFIG.3, an illustration of a binary document term matrix is depicted in accordance with an illustrative embodiment. In this illustrative example, binary document term matrix300is an example of an implementation for document term matrix244inFIG.2. As depicted, binary document term matrix300is for a family of parts in the form of valves. Binary document term matrix300comprises rows301and columns303. Rows301represent parts and columns303represent components. In this example, the components are represented as unigrams. Each row comprises a binary indication as to whether a unigram for the column is present for that part. These indications of whether unigrams are present are determined based on the unigrams created from the part list for each of the parts in rows301. As depicted, row302and row304each represent check valves307in which each row contains unigrams created from a part list for a check valve. Row306and row308represent gate valves305in which each row contains unigrams created from a part list for a gate valve. In this illustrative example, column318represents a part identifier. This part identifier can be a unique identifier obtained from the parts. This unique identifier can be used to determine from which part list the information for a particular part was derived. As depicted, columns303for unigrams are housing320, spring322, o-ring324, poppet326, gate328, and solenoid330. In this example, binary document term matrix300can be processed to reduce the number of unigrams for clustering. For example, unigrams that occur more than some threshold level can be removed from binary document term matrix300. As the occurrence of the unigrams increases in the parts in rows301, the usefulness of those unigrams to distinguish parts from each other is reduced. For example, housing320is a unigram that has a 100% occurrence within binary document term matrix300. As a result, this unigram can be removed because the housing is a component that is not a design differentiating feature that can be used to differentiate valves from each other. As another example, o-ring324has a 100% frequency in binary document term matrix300. This unigram also does not help in distinguishing different valves from each other. In other words, housing320and o-ring324are unigrams for components that are not design differentiating features.
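Under these assumptions, the FIG.3 example can be reproduced as a small table (pandas; the exact row values are illustrative guesses consistent with the description) in which housing320and o-ring324occur in 100% of the rows and are therefore removed:

import pandas as pd

dtm = pd.DataFrame(
    [[1, 1, 1, 1, 0, 0],    # check valve
     [1, 1, 1, 1, 0, 0],    # check valve
     [1, 0, 1, 0, 1, 0],    # gate valve
     [1, 0, 1, 0, 1, 1]],   # gate valve
    columns=["housing", "spring", "o-ring", "poppet", "gate", "solenoid"])

occurrence = dtm.mean()
print(occurrence["housing"], occurrence["o-ring"])   # 1.0 1.0
processed = dtm.loc[:, occurrence <= 0.95]
print(list(processed.columns))   # ['spring', 'poppet', 'gate', 'solenoid']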
As a result, processing these unigrams increases the amount of processing resources and time needed to group the parts in binary document term matrix300. However, the use of these unigrams does not aid in differentiating check valves307from gate valves305. The illustration of binary document term matrix300is presented as a simplified example of document term matrix244. This illustration is not meant to limit the manner in which other document term matrices can be implemented. Only 6 unigrams for 4 parts are depicted in this example to illustrate how binary document term matrix300represents parts for grouping and how some of the unigrams in the columns can be processed. Actual implementations can include tens of thousands, hundreds of thousands, or millions of rows representing individual parts for a single family of parts. Additionally, hundreds or thousands components can be present that are used to create unigrams for populating columns in a binary document term matrix. Further, in other illustrative examples, the document term matrix can take forms other than a binary document term matrix. For example, the number of components in a part can be identified through an integer rather than a binary number for unigrams corresponding to the components. InFIG.4, an illustration of a block diagram of analysis and actions using grouping of parts is depicted in accordance with an illustrative embodiment. In this example, with the determination of groups226, part analyzer214can display graphically display groups226in human machine interface400. In this example, human machine interface400comprises display system402and input system404. Display system402is a physical hardware system and includes one or more display devices on which graphical user interface406can be displayed. The display devices can include at least one of a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a computer monitor, a projector, a flat panel display, a heads-up display (HUD), a head-mounted display (HMD), smart glasses, augmented reality glasses, or some other suitable device that can output information for the visual presentation of information. Human operator409is a person that can interact with graphical user interface406through user input408generated by input system404for computer system212. In this example, user input408is received by part analyzer214in computer system212. Input system404is a physical hardware system and can be selected from at least one of a mouse, a keyboard, a touch pad, a trackball, a touchscreen, a stylus, a motion sensing input device, a gesture detection device, a data glove, a cyber glove a haptic feedback device, or some other suitable type of input device. As depicted, part analyzer214can create graph410and display graph410on graphical user interface406to provide human operator409a visualization of groups226. Graph410can comprise graphical elements412that represent parts204in groups226. Each graphical element can be formed from one or more of graphical indicators414. A graphical indicator can be selected from a group comprising an icon, a pictogram, an ideogram, a graphic, an image, text, animation, bolding, a color, a line, an arrow, or other suitable graphic. In other words, graphical indicator can be a single graphical element or a combination of graphical elements. In one example, a graphical element representing a part can comprise a graphical indicator in the form of a dot. 
Additionally, the dot can have a particular color to indicate the group in which the part belongs in groups226. In one illustrative example, groups226can be clustered into groups that are result of hierarchical clustering. In other words, the clustering can include information about how similar different parts are to each other in groups226. With this type of clustering, graph410can be a graph that can be used to provide visualization of hierarchies occurring in the clustering parts204into groups226. For example, a constellation graph, a dendrogram, a scatterplot, or other graphs that can indicate how closely related different parts are to each other can be used. Thus, human operator409can be provided with a visualization of groups226of parts204. This visualization can aid human operator409in analyzing parts for a family of parts. This analysis can be performed for parts in inventory, parts in an aircraft, parts in a fleet of aircraft, or parts in some other location. Further, human operator409can use part analyzer214to perform analysis420on groups226to identify actions422that can be performed with respect to parts204. In this illustrative example, analysis420can take a number of different forms. For example, analysis420can be a cost analysis, design analysis, supply chain analysis, and inventory analysis, or other suitable type of analysis that can be performed for groups226of parts204. In this example, actions422can be selected to meet objectives selected from at least one of optimizing part costs, increasing supply chain diversity, reducing supply chain costs, increasing parts availability, reducing the number of parts in models of aircraft manufactured by manufacturer, and other suitable objectives. For example, analysis420can be used to determine that check valves in different models of aircraft are sufficiently similar that these check valves can be considered substitute parts. With this situation, both check valves meet tolerances and requirements for use in both models of aircraft. With this determination through analysis420, actions422can include designating the two check valves as substitutes for each other when check valves are replaced during maintenance. As another example, one pump may be currently used in one model of aircraft. In designing a new model of an aircraft, the analysis can indicate that the part used for that new design is sufficiently similar to the current part in use. In this case, the design for the new model of an aircraft can be modified to use the same part as in the current aircraft being manufactured. As result, the number of different parts can be reduced through this engineering analysis in comparison. Further, with respect to supply chain management, the determination can be made that valve A from Supplier AA is similar to valve B from Supplier BB. To reduce costs, supplier a may be asked to supply both valve A and valve B. When increased availability is desired, both Supplier AA can be asked to produce valve B and Supplier BB can pass to produce valve A. This increase in diversity of suppliers can be useful when some suppliers are located in geographic locations where supply chain issues can occur. Further this analysis of parts204can be made for different business units. In other words, the analysis of parts204can be performed for more uses as other than aircraft. 
For example, analysis of valves can be performed across commercial airplanes, military aircraft, spacecraft, manufacturing plants, and other types of platforms that may be manufactured by different business units in a company or by different companies. With reference now toFIG.5, illustration of a constellation diagram is depicted in accordance with an illustrative embodiment. Constellation diagram500is an example of graph410inFIG.4. This graph depicts one manner in which groups226can be displayed graphical user interface406inFIG.4. This graph shows a hierarchical relationship between parts. In this illustrative example, the parts are represented by graphical elements. In this example, the graphical element comprises graphical indicators in the form of a point and a shape for the point. The point represents the presence of a part. The shape of the point indicates which group the part belongs to in constellation diagram500. The positions of the black dots are based on the hierarchies of similarity. As depicted, the points are connected to each other by edges. In this example, the length of edge connecting two points indicates how similar the two parts represented by the points are to each other. The illustration of constellation diagram500is not meant to limit the manner in which other parts can be displayed in other illustrative examples. For example, points can be shown in clusters or groupings without connectors if the hierarchical relationship indicating the similarity of parts is not shown. In other illustrative examples, the graphical indicators for the graphic elements can take the form of a point and a color in place of the shape. Further, other types of graphs can be used in place of constellation diagram500when displaying groups to a human operator. These other types of graphs include, for example, dendrogram, a scatterplot, and other suitable types of graphs that can be used to indicate groups of parts. The illustration indicates the similarity between of part environment200in the different components in part environment200inFIGS.2-5is not meant to imply physical or architectural limitations to the manner in which an illustrative embodiment may be implemented. Other components in addition to or in place of the ones illustrated may be used. Some components may be unnecessary. Also, the blocks are presented to illustrate some functional components. One or more of these blocks may be combined, divided, or combined and divided into different blocks when implemented in an illustrative embodiment. Although the illustrative examples are described with respect to aircraft208, another illustrative example can be applied to other types of platforms. For example, platform206can be, for example, a mobile platform, a stationary platform, a land-based structure, an aquatic-based structure, and a space-based structure. More specifically, platform206can be a surface ship, a tank, a personnel carrier, a train, a spacecraft, a space station, a satellite, a submarine, an automobile, a power plant, a bridge, a dam, a house, a manufacturing facility, a building, and other suitable platforms. Turning next toFIG.6, a flowchart of a process for grouping parts using part lists is depicted in accordance with an illustrative embodiment. The process inFIG.6can be implemented in hardware, software, or both. When implemented in software, the process can take the form of program instructions that are run by one of more processor units located in one or more hardware devices in one or more computer systems. 
For example, the process can be implemented in part analyzer214in computer system212inFIG.2. The process begins by collecting a part list for each assembly in the family of parts (operation600). In operation600, a part list can take the form of a bill of materials (BOM). The process creates machine readable text from the collection of part lists (operation602). The process removes numbers and punctuation from the machine readable text (operation604). The process processes text into sets of unigrams having 3-characters length (operation606). In other examples, other character lengths can be used. The character length can be selected to not create unigrams for words that are too short to be useful in distinguishing parts from each other, such as “a”, “is”, and “of”. The process applies dictionaries to the set of unigrams (operation608). In this example, three dictionaries are used in the following order: acronyms and concatenations, synonyms and spelling, and stop words. The dictionaries in this illustrative example are domain specific to the family of parts being process. The process creates a binary weighted document term matrix (DTM) from the remaining unigrams after applying the dictionaries (operation610). The process removes unigrams from the binary weighted document term matrix that are common to nearly every part (operation612). In operation612, a unigram can be considered to be common when unigrams occur a number of times in the document term matrix that is greater than a threshold. For example, a threshold of 90% can be used to remove the unigrams. This removal of unigrams can be performed using a matrix correction when many zeros are present in the document term matrix. The process applies the sparse matrix correction a second time based on the number of groupings desired (operation614). In step614, the binary weighted document term matrix can be a sparse matrix based on the number of 0s in the matrix. The process performs dimensionality reduction on the remaining unigrams in the document term matrix to create features for clustering (operation618). Each feature in operation618is derived from two or more unigrams for the components. The process then clusters the features using agglomerative hierarchical clustering (operation620). The process terminates thereafter. In operation620, other types of clustering can be used in place of agglomerative hierarchical clustering in other illustrative examples. For example, a density based spatial clustering of applications with noise (DBSCAN) algorithm, K means algorithm, a Gaussian mixture models (GMM) algorithm, Ordering points to identify the clustering structure (OPTICS), or other suitable type of clustering algorithm can be used in other illustrative examples. Turning next toFIG.7, an illustration of a flowchart of a process for grouping parts is depicted in accordance with an illustrative embodiment. The process inFIG.7can be implemented in hardware, software, or both. When implemented in software, the process can take the form of program instructions that are run by one of more processor units located in one or more hardware devices in one or more computer systems. For example, the process can be implemented in part analyzer214in computer system212inFIG.2. The process begins by generating sets of unigrams from text in part lists for parts in a family of parts using natural language processing (operation700). In operation700, a set of unigrams in the sets of unigrams represent components for a part in the parts. 
The process creates a document term matrix using the sets of unigrams (operation702). In this operation, the document term matrix describes a presence of components in the parts. The process removes a number of unigrams from the document term matrix that has occurrences in the document term matrix that are greater than a common design threshold (operation704). In operation704, removing the number of unigrams from the document term matrix forms a processed document term matrix and wherein the common design threshold identifies a level of occurrence not useful in differentiating the parts from each other. The process clusters the parts into groups using the processed document term matrix (operation706). The process terminates thereafter. With reference toFIG.8, an illustration a flowchart of a process for displaying a result from grouping parts is depicted in accordance with an illustrative embodiment. The process inFIG.8is an example of an additional operation that can be performed with the operations inFIG.7. The process graphically displays the groups in a graphical user interface on a display system (operation800). The process terminates thereafter. In operation800, The groups can be displayed using a number of different types of graphical displays to provide visualization of the groups to a human operator. For example, the groups can be displayed in a constellation diagram in which points connected by edges, wherein the points represent parts and graphical indicators associated with the points represent groupings of the parts and lengths of edges indicates a similarity of nodes connected by the edges. Next inFIG.9, an illustration of a flowchart of a process for processing text in part lists is depicted in accordance with an illustrative embodiment. The process in this figure is an example of an additional operation that can form with the operations inFIG.7. The process removes non-text symbols from the part lists prior to generating the sets of unigrams (operation900). The process terminates thereafter. In operation900, the non-text symbols can be, for example, punctuation, numbers, and other non-text symbols. With reference now toFIG.10, an illustration of a flowchart of a process for processing unigrams is depicted in accordance with an illustrative embodiment. The process illustrated inFIG.10is an example of an additional operation that can be performed with the operations inFIG.7. The process processes the unigrams to create a consistency in the unigrams (operation1000). The process terminates thereafter. This operation can be performed on the unigrams prior to the unigrams being used to create a document term matrix in operation702. This consistency can include, for example, removing spelling errors, removing concatenation errors, using consistent terms for the same components, and other operations to increase the consistency the unigrams used to reference the components in the parts. Next inFIG.11, an illustration of a flowchart of a process for applying dictionaries to unigrams is depicted in accordance with an illustrative embodiment. The operations in this figure are an example of an implementation of operation1000inFIG.10. The process begins by applying an acronyms and concatenations dictionary to the unigrams (operation1100). The dictionary in operation1100can be domain specific to the particular family of parts. This process can be used to expand acronyms such as the unigrams that use words rather than acronyms. 
In another example, the words can be changed into acronyms such that all the unigrams are consistent for a particular component. The concatenations can be used to remove incorrect concatenations or create concatenations that are correct for the unigrams based on the usage in the dictionary. The process applies a synonyms and spelling dictionary to the unigrams (operation1102). The process terminates thereafter. This dictionary can also be a domain specific dictionary for synonyms used for a particular family of parts. This dictionary can be applied such the unigrams use the same wording for the same components in a family of parts. With reference next toFIG.12, an illustration a flowchart of a process for removing unigrams is depicted in accordance with an illustrative embodiment. The process illustrated in this figure is an example of an additional operation that can be performed with the operations inFIG.7. This operation can be performed prior to creating a document term matrix using the unigrams operation702. The process applies a stop word dictionary to remove unigrams that are not useful in differentiating the parts from each other (operation1200). The process terminates thereafter. In this operation, the stop word dictionary can be used to remove non-technical unigrams. These non-technical unigrams are not associated with the design features for parts in the family of parts. For example, words such as sealant, paint, fasteners, o-ring, or other unigrams represent components in the parts. However, the unigrams are components that are known to be so common that they do not help differentiate parts from each other. With reference now toFIG.13, an illustration flowchart for a process to remove unigrams based on the number groupings desired is depicted in accordance with an illustrative embodiment. The process inFIG.13is an example of an additional operation that can be performed with the operations inFIG.7. In one example, this process can be applied to the document term matrix prior to or after removing unigrams in operation704. The process applies a matrix correction that removes a set of unigrams from the processed document term matrix based on a number of groups selected for the parts (operation1300). The process terminates thereafter. InFIG.14, an illustration of a flowchart of a process for reducing dimensions in a document term matrix is depicted in accordance with an illustrative embodiment. The process inFIG.14is an example of an additional operation that can be performed with the operations inFIG.7. In this example, this operation can be performed after operation704using the process document term matrix created in operation704. The process determines features from the unigrams in the processed document term matrix (operation1400). In this operation, the features are formed from a combination of the unigrams. The process replaces the unigrams with the features in the processed document term matrix (operation1402). The process terminates thereafter. Turning now toFIG.15, an illustration of a flowchart for clustering a document term matrix is depicted in accordance with an illustrative embodiment. The process illustrated in this flowchart is an example of an implementation for operation706inFIG.7. The process clusters the parts into groups using the features in the processed document term matrix (operation1500). The process terminates thereafter. 
With reference now toFIG.16, an illustration of a flowchart of a process for managing parts using groups of parts is depicted in accordance with an illustrative embodiment. The process inFIG.7can be implemented in hardware, software, or both. When implemented in software, the process can take the form of program instructions that are run by one of more processor units located in one or more hardware devices in one or more computer systems. For example, the process can be implemented in part analyzer214in computer system212inFIG.2. The process begins by analyzing the groups of parts for a family of parts to form an analysis (operation1600). The process identifies actions that can be performed based on the analysis (operation1602). The process performs a number of actions in the set of actions (operation1604). The process terminates thereafter. The flowcharts and block diagrams in the different depicted embodiments illustrate the architecture, functionality, and operation of some possible implementations of apparatuses and methods in an illustrative embodiment. In this regard, each block in the flowcharts or block diagrams can represent at least one of a module, a segment, a function, or a portion of an operation or step. For example, one or more of the blocks can be implemented as program instructions, hardware, or a combination of the program instructions and hardware. When implemented in hardware, the hardware can, for example, take the form of integrated circuits that are manufactured or configured to perform one or more operations in the flowcharts or block diagrams. When implemented as a combination of program instructions and hardware, the implementation may take the form of firmware. Each block in the flowcharts or the block diagrams can be implemented using special purpose hardware systems that perform the different operations or combinations of special purpose hardware and program instructions run by the special purpose hardware. In some alternative implementations of an illustrative embodiment, the function or functions noted in the blocks may occur out of the order noted in the figures. For example, in some cases, two blocks shown in succession may be performed substantially concurrently, or the blocks may sometimes be performed in the reverse order, depending upon the functionality involved. Also, other blocks may be added in addition to the illustrated blocks in a flowchart or block diagram. Turning now toFIG.17, a block diagram of a data processing system is depicted in accordance with an illustrative embodiment. Data processing system1700can be used to implement server computer104, server computer106, client devices110, inFIG.1. Data processing system1700can also be used to implement computer system212inFIG.2. In this illustrative example, data processing system1700includes communications framework1702, which provides communications between processor unit1704, memory1706, persistent storage1708, communications unit1710, input/output (I/O) unit1712, and display1714. In this example, communications framework1702takes the form of a bus system. Processor unit1704serves to execute instructions for software that can be loaded into memory1706. Processor unit1704includes one or more processors. For example, processor unit1704can be selected from at least one of a multicore processor, a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a network processor, or some other suitable type of processor. 
Further, processor unit1704can may be implemented using one or more heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, processor unit1704can be a symmetric multi-processor system containing multiple processors of the same type on a single chip. Memory1706and persistent storage1708are examples of storage devices1716. A storage device is any piece of hardware that is capable of storing information, such as, for example, without limitation, at least one of data, program instructions in functional form, or other suitable information either on a temporary basis, a permanent basis, or both on a temporary basis and a permanent basis. Storage devices1716may also be referred to as computer readable storage devices in these illustrative examples. Memory1706, in these examples, can be, for example, a random-access memory or any other suitable volatile or non-volatile storage device. Persistent storage1708may take various forms, depending on the particular implementation. For example, persistent storage1708may contain one or more components or devices. For example, persistent storage1708can be a hard drive, a solid-state drive (SSD), a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by persistent storage1708also can be removable. For example, a removable hard drive can be used for persistent storage1708. Communications unit1710, in these illustrative examples, provides for communications with other data processing systems or devices. In these illustrative examples, communications unit1710is a network interface card. Input/output unit1712allows for input and output of data with other devices that can be connected to data processing system1700. For example, input/output unit1712may provide a connection for user input through at least one of a keyboard, a mouse, or some other suitable input device. Further, input/output unit1712may send output to a printer. Display1714provides a mechanism to display information to a user. Instructions for at least one of the operating system, applications, or programs can be located in storage devices1716, which are in communication with processor unit1704through communications framework1702. The processes of the different embodiments can be performed by processor unit1704using computer-implemented instructions, which may be located in a memory, such as memory1706. These instructions are referred to as program instructions, computer usable program instructions, or computer readable program instructions that can be read and executed by a processor in processor unit1704. The program instructions in the different embodiments can be embodied on different physical or computer readable storage media, such as memory1706or persistent storage1708. Program instructions1718are located in a functional form on computer readable media1720that is selectively removable and can be loaded onto or transferred to data processing system1700for execution by processor unit1704. Program instructions1718and computer readable media1720form computer program product1722in these illustrative examples. In the illustrative example, computer readable media1720is computer readable storage media1724. Computer readable storage media1724is a physical or tangible storage device used to store program instructions1718rather than a medium that propagates or transmits program instructions1718. 
Computer readable storage media1724can be at least one of an electronic storage medium, a magnetic storage medium, an optical storage medium, an electromagnetic storage medium, a semiconductor storage medium, a mechanical storage medium, or other physical storage medium. Some known types of storage devices that include these mediums include: a diskette, a hard disk, a random access memory (RAN), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SPA), a compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device, such as punch cards or pits/lands formed in a major surface of a disc, or any suitable combination thereof. Computer readable storage media1724, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se, such as at least one of radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, or other transmission media. Further, data can be moved at some occasional points in time during normal operations of a storage device. These normal operations include access, de-fragmentation or garbage collection. However, these operations do not render the storage device as transitory because the data is not transitory while the data is stored in the storage device. Alternatively, program instructions1718can be transferred to data processing system1700using a computer readable signal media. The computer readable signal media are signals and can be, for example, a propagated data signal containing program instructions1718. For example, the computer readable signal media can be at least one of an electromagnetic signal, an optical signal, or any other suitable type of signal. These signals can be transmitted over connections, such as wireless connections, optical fiber cable, coaxial cable, a wire, or any other suitable type of connection. Further, as used herein, “computer readable media1720” can be singular or plural. For example, program instructions1718can be located in computer readable media1720in the form of a single storage device or system. In another example, program instructions1718can be located in computer readable media1720that is distributed in multiple data processing systems. In other words, some instructions in program instructions1718can be located in one data processing system while other instructions in program instructions1718can be located in one data processing system. For example, a portion of program instructions1718can be located in computer readable media1720in a server computer while another portion of program instructions1718can be located in computer readable media1720located in a set of client computers. The different components illustrated for data processing system1700are not meant to provide architectural limitations to the manner in which different embodiments can be implemented. In some illustrative examples, one or more of the components may be incorporated in or otherwise form a portion of, another component. For example, memory1706, or portions thereof, may be incorporated in processor unit1704in some illustrative examples. 
The different illustrative embodiments can be implemented in a data processing system including components in addition to or in place of those illustrated for data processing system1700. Other components shown inFIG.17can be varied from the illustrative examples shown. The different embodiments can be implemented using any hardware device or system capable of running program instructions1718. Illustrative embodiments of the disclosure may be described in the context of aircraft manufacturing and service method1800as shown inFIG.18and aircraft1900as shown inFIG.19. Turning first toFIG.18, an illustration of an aircraft manufacturing and service method is depicted in accordance with an illustrative embodiment. During pre-production, aircraft manufacturing and service method1800may include specification and design1802of aircraft1900inFIG.19and material procurement1804. During production, component and subassembly manufacturing1806and system integration1808of aircraft1900inFIG.19takes place. Thereafter, aircraft1900inFIG.19can go through certification and delivery1810in order to be placed in service1812. While in service1812by a customer, aircraft1900inFIG.19is scheduled for routine maintenance and service1814, which may include modification, reconfiguration, refurbishment, and other maintenance or service. Each of the processes of aircraft manufacturing and service method1800may be performed or carried out by a system integrator, a third party, an operator, or some combination thereof. In these examples, the operator may be a customer. For the purposes of this description, a system integrator may include, without limitation, any number of aircraft manufacturers and major-system subcontractors; a third party may include, without limitation, any number of vendors, subcontractors, and suppliers; and an operator may be an airline, a leasing company, a military entity, a service organization, and so on. With reference now toFIG.19, an illustration of an aircraft is depicted in which an illustrative embodiment may be implemented. In this example, aircraft1900is produced by aircraft manufacturing and service method1800inFIG.18and may include airframe1902with plurality of systems1904and interior1906. Examples of systems1904include one or more of propulsion system1908, electrical system1910, hydraulic system1912, and environmental system1914. Any number of other systems may be included. Although an aerospace example is shown, different illustrative embodiments may be applied to other industries, such as the automotive industry. Apparatuses and methods embodied herein may be employed during at least one of the stages of aircraft manufacturing and service method1800inFIG.18. In one illustrative example, components or subassemblies produced in component and subassembly manufacturing1806inFIG.18can be fabricated or manufactured in a manner similar to components or subassemblies produced while aircraft1900is in service1812inFIG.18. As yet another example, one or more apparatus embodiments, method embodiments, or a combination thereof can be utilized during production stages, such as component and subassembly manufacturing1806and system integration1808inFIG.18. One or more apparatus embodiments, method embodiments, or a combination thereof may be utilized while aircraft1900is in service1812, during maintenance and service1814inFIG.18, or both. 
The use of a number of the different illustrative embodiments may substantially expedite the assembly of aircraft1900, reduce the cost of aircraft1900, or both expedite the assembly of aircraft1900and reduce the cost of aircraft1900. For example, part analyzer214inFIG.2can be used during material procurement1804to identify cost reductions and manage supply chains based on identifying groups of parts throughout different product lines. Additionally, the groupings identified by part analyzer214can be used to identify substitute or equivalent parts that may be used during maintenance and service1814, which may include modification, reconfiguration, refurbishment, and other maintenance or service. As another example, the groupings of parts made by part analyzer214can be used to make adjustments or design changes such that fewer number of parts are used throughout a product line. Some features of the illustrative examples are described in the following clauses. These clauses are examples of features and are not intended to limit other illustrative examples. Clause 1 A method for grouping parts, the method comprising:generating, by a computer system, sets of unigrams from text in part lists for parts in a family of parts using natural language processing, wherein a set of unigrams in the sets of unigrams represent components for a part in the parts;creating, by the computer system, a document term matrix using the sets of unigrams, wherein the document term matrix describes a presence of components in the parts;removing, by the computer system, a number of unigrams from the document term matrix that has occurrences in the document term matrix that are greater than a common design threshold, wherein removing the number of unigrams from the document term matrix forms a processed document term matrix and wherein the common design threshold identifies a level of occurrence not useful in differentiating the parts from each other; andclustering, by the computer system, the parts into groups using the processed document term matrix. Clause 2 The method according to clause 1 further comprising:graphically displaying, by the computer system, the groups in a graphical user interface on a display system. Clause 3 The method according to clause 2, wherein the groups are displayed in a constellation diagram in which points connected by edges, wherein the points represent parts and graphical indicators associated with the points represent groupings of the parts and lengths of edges indicates a similarity of nodes connected by the edges. Clause 4 The method according to one of clauses 1, 2, or 3 further comprising:removing, by the computer system, non-text symbols from the part lists prior to generating the sets of unigrams. Clause 5 The method according to one of clauses 1, 2, 3, or 4 further comprising:processing, by the computer system, the unigrams to create a consistency in the unigrams. Clause 6 The method according to clause 5, wherein processing, by the computer system, the unigrams to create the consistency in the unigrams comprises:applying, by the computer system, an acronyms and concatenations dictionary to the unigrams; andapplying, by the computer system, a synonyms and spelling dictionary to the unigrams. Clause 7 The method according to one of clauses 1, 2, 3, 4, or 6 further comprising:applying, by the computer system, a stop word dictionary to remove unigrams that are not useful in differentiating the parts from each other. 
Clause 8 The method according to one of clauses 1, 2, 3, 4, 6, or 7 further comprising:applying, by the computer system, a matrix correction that removes a set of unigrams from the processed document term matrix based on a number of groups selected for the parts. Clause 9 The method according to one of clauses 1, 2, 3, 4, 6, 7, or 8 further comprising:determining, by the computer system, features from the unigrams in the processed document term matrix, wherein the features are formed from a combination of the unigrams; andreplacing, by the computer system, the unigrams with the features in the processed document term matrix. Clause 10 The according to clause 9, wherein clustering, by the computer system, the parts into groups using the processed document term matrix comprises:clustering, by the computer system, the parts into groups using the features in the processed document term matrix. Clause 11 The method according to one of clauses 1, 2, 3, 4, 6, 7, 8, 9, or 10, wherein the document term matrix comprises columns for the unigrams and rows for the parts. Clause 12 A method for grouping parts, the method comprising:generating, by a computer system, sets of unigrams from text in part lists for parts in a family of parts using natural language processing, wherein a set of unigrams in the sets of unigrams represent components for a part in the parts;creating, by the computer system, a document term matrix using the sets of unigrams, wherein the document term matrix describes a presence of components in the parts;removing, by the computer system, a number of unigrams from the document term matrix that has occurrences in the document term matrix that are greater than a common design threshold, wherein removing the number of unigrams from the document term matrix forms a processed document term matrix and wherein the common design threshold identifies a level of occurrence not useful in differentiating the parts from each other;determining, by the computer system, features from the unigrams in the processed document term matrix, wherein the features are formed from a combination of the unigrams;replacing, by the computer system, the unigrams with the features in the processed document term matrix; andclustering, by the computer system, the parts into groups using the processed document term matrix. Clause 13 The method according to one of clause 12 further comprising:removing, by the computer system, non-text symbols from the part lists prior to generating the sets of unigrams. Clause 14 The method according to one of clauses 12 or 13 further comprising:processing, by the computer system, the unigrams to create a consistency in the unigrams. Clause 15 The method according to clause 14, wherein processing, by the computer system, the unigrams to create the consistency in the unigrams comprises:applying, by the computer system, a first dictionary of acronyms and concatenations to the unigrams; andapplying, by the computer system, a second dictionary of synonyms and spelling. Clause 16 The method according to one of clauses 12, 13, 14, or 15 further comprising:applying, by the computer system, a stop word dictionary to remove unigrams that are not useful in differentiating the parts from each other. 
Clause 17 A part analysis system comprising:a computer system; anda part analyzer in the computer system, wherein the part analyzer is configured to:generate sets of unigrams from text in part lists for parts in a family of parts using natural language processing, wherein a set of unigrams in the sets of unigrams represent components for a part in the parts;create a document term matrix using the sets of unigrams, wherein the document term matrix describes a presence of components in the parts;remove a number of unigrams from the document term matrix that has occurrences in the document term matrix that are greater than a common design threshold, wherein removing the number of unigrams from the document term matrix forms a processed document term matrix and wherein the common design threshold identifies a level of occurrence not useful in differentiating the parts from each other; andcluster the parts into groups using the processed document term matrix. Clause 18 The part analysis system according to clause 17, wherein the part analyzer is configured to:graphically display the groups in a graphical user interface on a display system. Clause 19 The part analysis system according to clause 18, wherein the groups are displayed in a constellation diagram in which points connected by edges, wherein the points represent parts and graphical indicators associated with the points represent groupings of the parts and lengths of edges indicates a similarity of nodes connected by the edges. Clause 20 The part analysis system according to one of clauses 17, 18, or 19, wherein the part analyzer is configured to:remove non-text symbols from the part lists prior to generating the sets of unigrams. Clause 21 The part analysis system according to one of clauses 17, 18, 19, or 20, wherein the part analyzer is configured to:process the unigrams to create a consistency in the unigrams. Clause 22 22. The part analysis system according to clause 21, wherein in processing the unigrams to create the consistency in the unigrams comprises, the part analyzer is configured to:apply a first dictionary of acronyms and concatenations to the unigrams; andapply a second dictionary of synonyms and spelling. Clause 23 The part analysis system according to one of clauses 17, 18, 19, 20, or 21, wherein the part analyzer is configured to:applying a stop word dictionary to remove unigrams that are not useful in differentiating the parts from each other. Clause 24 The part analysis system according to one of clauses 17, 18, 19, 20, 21, 22, or 23, wherein the part analyzer is configured to:apply a matrix correction that removes a set of unigrams from the processed document term matrix based on a number of groups selected for the parts. Clause 25 The part analysis system according to one of clauses 17, 18, 19, 20, 21, 22, 23, or 24, wherein the part analyzer is configured to:determine features from the unigrams in the processed document term matrix, wherein the features are formed from a combination of the unigrams; andreplacing the unigrams with the features in the processed document term matrix. 26. The part analysis system according to clause 15, wherein in clustering the parts into groups using the processed document term matrix, the part analyzer is configured to:cluster the parts into groups using the features in the processed document term matrix. Clause 27 27. 
The part analysis system according to one of clauses 17, 18, 19, 20, 21, 22, 23, 24, 25, or 26, wherein the document term matrix comprises columns for the unigrams and rows for the parts. Clause 28 A computer program product for grouping parts, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer system to cause the computer system to perform a method of:generating sets of unigrams from text in part lists for parts in a family of parts using a natural language processing method, wherein a set of unigrams in the sets of unigrams represent components for a part in the parts; creating a document term matrix using the sets of unigrams, wherein the document term matrix describes a presence of components in the parts;removing a number of unigrams from the document term matrix that has occurrences in the document term matrix that are greater than a common design threshold, wherein removing the number of unigrams from the document term matrix forms a processed document term matrix and wherein the common design threshold identifies a level of occurrence not useful in differentiating the parts from each other; andclustering the parts into groups using the processed document term matrix. As result, processing part lists using combinations of the various techniques described for part analyzer214increases performance in grouping or sorting parts into groups. Further, the different techniques remove unigrams for components that do not represent design differentiating features in a document term matrix. As a result, less unigrams are processed resulting in increased performance that can lead to a lower use of resources in the computer system. This removal of unigrams without design differentiating features reduces noise in processing data, enabling increasing performance in grouping parts. Additionally, the part analyzer can also preprocess the part to remove unnecessary information prior to generating unigrams from the part list. Further, dimension reduction can occur by creating features from unigrams to further reduce the amounts of processing needed to cluster parts into groups. The description of the different illustrative embodiments has been presented for purposes of illustration and description and is not intended to be exhaustive or limited to the embodiments in the form disclosed. The different illustrative examples describe components that perform actions or operations. In an illustrative embodiment, a component can be configured to perform the action or operation described. For example, the component can have a configuration or design for a structure that provides the component an ability to perform the action or operation that is described in the illustrative examples as being performed by the component. Further, to the extent that terms “includes”, “including”, “has”, “contains”, and variants thereof are used herein, such terms are intended to be inclusive in a manner similar to the term “comprises” as an open transition word without precluding any additional or other elements. Many modifications and variations will be apparent to those of ordinary skill in the art. Further, different illustrative embodiments may provide different features as compared to other desirable embodiments. 
The embodiment or embodiments selected are chosen and described in order to best explain the principles of the embodiments, the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated. | 83,481 |
11861302 | DETAILED DESCRIPTION OF THE INVENTION The present invention relates to automating the process of creating and maintaining computerized forms, which in the preferred embodiment is used to prepare tax returns and reports, for example payroll tax forms, but the invention is not so limited. A number of technological obstacles need to be overcome in order to solve problems addressed by the present invention, and which are described in detail herein-below, but in general the obstacles relate to making improvements to computer related technology, primarily through the implementation of software techniques to improve the functioning thereof. Additionally, overcoming these obstacles involves solving a number of problems that did not exist outside of the context of computer related technological solutions also as described herein-below. One such technological hurdle involves the fact that the forms must be updated frequently, and in some embodiments the forms are provided to the appropriate governmental entity for approval and must be edited to each agency's strict requirements at least annually. These and other technological hurdles are described in greater detail below. Broadly, the invention is part of a programing construct and physical computer apparatus used to create and distribute forms, and can be divided into several general components, three of which are helpful to discuss in some detail herein, these being: a form file; a data file; and a forms viewer, all as described in additional detail below. The foregoing work in partnership with, and receive data from a plurality of third party software programs, preferably (but not limited to) accounting software programs. The form file is a programing construct that includes information that allows the computer system to create or duplicate a form. The form file is not a file that contains an image of the form (an image file), but instead contains computer code by which a computer-generated form is created so that when completed, it will match the original form (whether a paper or electronic form—such as a pdf) within the level of precision required. This is a process that cannot be completed by merely scanning or digitally capturing an original form (if an original form even exists). The form file will include computer executable instructions and/or information for placing text that appears on the form such as the form title, instructions for filing out and/or mailing the forms, names of various fields on the form, and the like. The form file also includes computer executable instructions and/or information for placing drawn elements such as lines, shapes like circles, triangles, boxes, shading, and other non-text items that appear on the form. The form file can be manipulated by a user (or form designer) as necessary to duplicate the original, or create a new form, such as for example shifting the positions of text and drawn elements, changing characteristics such as font, color, size, line widths, formatting, and other changes. A forms designer program is provided for this purpose, which also includes features and capabilities for further editing, designing, modifying, and use of the forms created—for example, forms can be linked in the case of multiple page forms, or one form can be linked to another form in the event they share related data, form values can be updated in real time as data is entered, and mathematical operations can be performed on form fields. 
All of this information is captured in the form file in the form of programing constructs, that can be executed by the computer to create the form. Any image of the original form if used as an aid to create the form file, is discarded after the form file is created. The form file as stated is not an image file. The form (and its form file) can also be created from scratch without the use of an image file, but can instead be designed from scratch by allowing a user to simply place any desired text and drawn elements, such as lines, boxes, circles or triangles, or other items so as to duplicate the original form—or create a wholly original form. In this embodiment, a user can create a form file without the aid of any underlying image, background, or template form of any kind. The user can piece together the elements, which are then converted to programming and other elements stored in the form file for use as described herein. In other settings, the form may already exist in an electronic form, or semi-electronic form. Many agencies, or other sources of forms, supply forms in formats that are already somewhat amenable to computer usage, such as pdf files (or other similar formats). In this case, the existing electronic form can be used in connection with various programs and interfaces to create a form file from the information in these types of forms; however, the form file remains a distinct element from the original source. Thus, the form file includes all the computer instructions and programming, as well as data such as text and drawn elements, to duplicate an existing form or create from scratch a new computer form. This file is then available for use by the rest of the system as described herein. The second general component referenced above is the data file, which includes the data that is processed by and placed on the forms. The present invention operates on data that is typically provided from a variety of proprietary third party software applications, such as accounting: software and payroll programs, and the like. Each program can have its own internal structure for the data, which commonly is proprietary to the software provider. In order for the present invention to operate, the data needed for the forms needed to be in a standard format, but no standard existed, and so one was created. The standard is known as an AUF file (Aatrix Universal File). The AUF file contains the data that will populate the fields of the forms represented by the form files, Fields are areas on a form that need to be filled in with alphanumeric or other information, either automatically taken from another program, such as the payroll application, calculated in response to other information on the form, or manually filled in by the user. The AUF file contains the information from the accounting and payroll applications. In its preferred embodiment, data from a user application is placed into the AUF file, which has been prepared in accord with an IT data file specification. The AUF file can then be used to auto-populate information appearing in the forms, instead of having the information entered manually by a user. Typical data in the AUF file can include, without limitation, data such as: Company Name, Address, Federal and State ID Numbers, Phone Numbers, Contact Name, Contact Address, and Contact Phone Number. 
Also, the AUF file can include data than can be used to fill in an employee's information, including the Employee's Name, Address, Phone Number, Wages, Pay Rates, and individual Paycheck Amounts including Deductions, Employer Paid Items, and Types of Income. All the paycheck information can be split out to individual days, or added together to get many different totals for Weekly, Biweekly, Monthly Semi Monthly, Quarterly, Semi-Annually, or Annually to mention a few. The third main component is the viewer program. The viewer program operates on a data file (such as the AUF file) and the form file to perform calculations or operations, to allow the user of the data processing system to review and change the data, and to create/complete the forms. Rules and calculations can be performed in connection with the data in the AUF file. In real time the numbers on the form are updated or changed based on the data that is entered in the form, by programing statements acting thereon. Calculations and rule conditions are performed immediately and dynamically in response to user input or data calculations. This has the benefit of allowing the user to immediately see the results of his or her data entry, and assisting in the production of accurate and complete forms. Additionally, “If . . . Then . . . Else” rules can be applied to manipulate data on the form, and other rules can be applied across fields of a form. Scanlines or barcodes can be analyzed and acted on, or created, as well. In addition, the program has the ability to securely (using encryption algorithms) transmit forms/reports with data, as well as other critical company information, across the Internet to an e-file Server. With the above structure in mind, the present invention has implemented a solution to technological problems related to maintaining the AUF file that is used by the computer system to duplicate and fill out forms. The nature of the AUF file, and some of the problems solved by using it, are described above; however, there are additional issues that arise from the use of a standard data structure in the context of forms that change over time. The additional obstacles have to do with the various record and data types used with the forms and the data—namely, they change over time too. In particular, one of the challenges of implementing new form types is being able to incorporate new record and data types into the existing AUF and form file structure, and allowing those new record and data types to be processed by the remainder of the program without having to code the change into the main program architecture and source code. These type of changes happen frequently, and without a solution to this problem a great deal of time is needed to make code changes, to verify and test the code changes, and the level of expertise needed for these tasks is relatively high. For clarification purposes, a data type, described in detail below, generally refers to a classification that specifies which type of value a variable has and therefore what type of mathematical, relational, or logical operations can be applied to it without causing an error. A record type is a collection of data types that together represent the information necessary to perform a function or task, or completely capture a group or set of data. 
For example, a data type needs to be defined to allow data to be read into a data field defined in a form file, so that the program manipulating the data knows what can and cannot be done with the data and how to interpret it. Creating new data types and record types used by the AUF file and form files previously required the following steps:
1) identify the nature of the data needed and its characteristics based on its use and function (for example, determine whether data used in a form is text, numeric, or a date, each being an example of a different data type used on a form);
2) create and document a field name for this new data to be used for data mapping purposes;
3) modify all form files that require the new data with the new correct field name, and update any other attributes relating thereto;
4) have a programmer modify all references in the forms viewer program code to properly interpret and render the new data type, and identify any enhancement or processing request for the new data that needs to be supported in the forms viewer;
5) program the forms viewer application code to read the new data point, validate it, and process it on the form in the proper location or in accord with other appropriate instructions;
6) once the new data point is implemented in the forms viewer, manually update the online documentation and the AUF specification file (XML file) to reflect the new data and its characteristics so that third party applications can properly export the data;
7) conduct testing and quality assurance processes to ensure that all changes made with regard to the data point are correct and that no other problems occur;
8) once the new forms viewer code is certified for release, inform the third party accounting software platforms that provide the AUF data files of the change to the AUF specification, so they can support the new data point, and release the updated forms viewer to the general public.
A number of problems result from the above approach: the process is reliant on relatively highly skilled programmer involvement, since adding new data points requires the attention of a software engineer capable of not only modifying the source code, but who also has a high level of knowledge of the entire system to understand the scope of the changes needed; modifying the forms viewer code carries the risk of introducing errors or bugs into the application code; changes to the code require all users to obtain and install the new update; the time between the identification of a new data point and full roll out is lengthy, a minimum of several weeks but more likely a number of months; and third party providers of accounting software need to implement the new data point, even if they do not use the affected forms, or they fall out of compliance with the AUF specification. Of course, this needs to be repeated every time there is a change, and changes are constantly required. These and other problems and technological obstacles are substantially overcome with the present invention. As described above, the present invention utilizes the AUF data specification, which allows third party developer partners to provide data in a format that can be used to populate the data fields in the various forms represented in the form files with data from the third party partner's application. 
As new forms are released, or existing forms are updated with new data fields, if they require or allow for new data types, the AUF specification must be updated, published, and implemented, and in addition the forms designer and forms viewer programs must be taught how to handle the new data type. In other words, the programs have no way to handle data in a data field unless they know its data type and understand the rules applicable thereto, which previously required updating the source code in every location that might have to process that data in some manner. More specifically, in the past, if a new data type was required for a data record used in a form, the user who was designing the form would identify the appropriate field name; specify what the data point was (giving it a name; a data type, i.e., Char, Integer, Decimal, or Date; a minimum and/or maximum size; and a description); update the AUF spec with the new information; submit a change request to the engineers that maintain the forms viewer and forms designer to update the code in the programs to handle the new data type; and, once it was implemented in code and released, publish the new AUF spec. This process could take many months to accomplish, as explained above. The present invention substantially eliminates these drawbacks. The present invention handles at least three possible changes that might be made to the AUF spec relating to record types and data types:
1) The addition of one or more columns, representing new data points, to an existing record type (as an example, adding a column for "Foreign address" to the existing "CMP" (company data) record type). Again, the new data point has to have a data type associated therewith.
2) The addition of a completely new record type, having several associated data types (as an example, a "TAX" record type for data that is necessary for sales tax reporting forms). In this instance, a new record type is created and one or more columns, representing data points within the record, are added. Each data point is of a certain data type.
3) Deprecating an existing record type. If the data in a column is no longer needed or has been supplanted by another column, the data is deprecated, or marked to be ignored. For compatibility, columns are never deleted; they are simply marked as being deprecated, and those data points are ignored in future releases of the forms viewer; however, that requires updating the forms viewer program with this new information.
One of the necessary steps in implementing new record types, or adding data types to an existing record type, is to provide a way for the elements of the program that need to use the new data to get the data out of the AUF file, after the record is created and/or updated. If the change is a single data point added to an existing record type, as in example 1 above, the code that needs to be inserted into various spots in the forms viewer (or elsewhere) is along the lines of:
ReturnAUFChunk(theLine, 4, utilStr);
g_formsToFill->SetFieldContents("\pDPhone", utilStr, 0);
In this code snippet, the AUF data file includes a record, referenced through the variable "theLine," which stores data of a certain type, and the function "ReturnAUFChunk" returns the contents of the fourth column of the data record into the variable "utilStr." The second line allows the forms viewer to place the contents of "utilStr" into a form field called "DPhone". 
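The body of ReturnAUFChunk is not disclosed, but its use above implies a routine that extracts the Nth column of a delimited AUF record line. A minimal C++ sketch under that assumption follows; the comma delimiter and the std::string signature are assumptions, standing in for the Pascal-style strings of the original snippet:

#include <sstream>
#include <string>

// Hypothetical reconstruction: copy the contents of the given 1-based
// column of a delimited AUF record line into out. The comma delimiter
// is an assumption; the real AUF format may differ.
bool ReturnAUFChunk(const std::string& theLine, int column, std::string& out)
{
    std::istringstream in(theLine);
    std::string field;
    for (int i = 1; std::getline(in, field, ','); ++i) {
        if (i == column) {
            out = field;
            return true;
        }
    }
    out.clear();
    return false; // the line has fewer columns than requested
}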
If the change is a new record type, as in example 2 above, a new function of the type shown below needs to be added to the forms viewer, which would be called when the AUF parser encountered the new record type in an AUF data file:
HandleTAXLine(*theLine)
{
    Str255 utilStr;
    ReturnAUFChunk(theLine, 2, utilStr);
    g_formsToFill->SetFieldContents("\pTAXLocality", utilStr, 0);
    ReturnAUFChunk(theLine, 3, utilStr);
    g_formsToFill->SetFieldContents("\pTAXRate", utilStr, 0);
    ReturnAUFChunk(theLine, 4, utilStr);
    g_formsToFill->SetFieldContents("\pTAXGross", utilStr, 0);
    ReturnAUFChunk(theLine, 5, utilStr);
    g_formsToFill->SetFieldContents("\pTAXTaxable", utilStr, 0);
    return true;
}
As can be seen in that code snippet, the function is a series of calls that go through all of the columns of the new record type and place the data in the appropriate fields on the form. The above code example is added to every program module in the forms viewer (or elsewhere) responsible for reading in the AUF file and placing the data onto the appropriate forms. The programming code above, of course, only works if the data types and record types are already defined, and the AUF files contain the actual data. Thus, the above is a necessary but not sufficient set of changes to the program code needed to implement any of the three types of changes to record types described above. The actual data records need to be created, or updated, to support the new data records and data types. In the past this required the painstaking and time-consuming process described above of editing the program to create new structures, or of finding every instance of the old structures and amending them. Simplifying this process required generalized support for new data types and record types. Tools were developed that allowed the previous "find and fill" process to be centralized and generalized, pulled away from the forms viewer and designer programs, and more efficiently implemented with unique, specific tools. As part of this process, two new attributes were added to essentially all AUF form fields: a record type attribute and a column attribute. These attributes can be used by the new tools to build new data types and record types into the AUF structure. Both attributes are selected from drop-down menus that are populated by a thin client application that accesses a database of such attributes stored on a server. This functionality allows the user of the forms designer tool, at the time that the forms are designed and the form file created, to define the specific data and record types that are used to populate form fields when the forms viewer merges the form with the AUF data file. An additional tool, the AUF builder, is used to create new, or modify existing, record types within the AUF database file by providing a way to add to, and modify, a library of available AUF record types and data point descriptions available for use in the form file creation process. These record types, once added or modified, are then automatically available for use in constructing a form file, and upon merger with the AUF file hold data used in the forms viewer for filling out the forms; the previous process of editing the program code to create new structures, or of finding every instance of the old structures and amending them, is no longer required. A screen shot of the AUF builder is shown inFIG.1. 
In the window shown inFIG.1, the user enters the record type tag (AUF tag), a three-letter code (TAX in this example); the maximum number of records with this type that can appear in a form file (0 indicating no maximum, 1 being the only other commonly used value for "Count"); an indicator of whether the record is required (meaning that the forms viewer will reject the AUF data file if this record is missing); and a description of what the AUF tag represents. Once the AUF tag has been created, the user may use the next AUF Builder tool, shown inFIG.2, to further specify the structure of the created record type having this AUF tag, which comprises a plurality of columns where each column is designed to hold data points of the specified data type. In the window shown inFIG.2, the user creates the record type associated with the AUF tag (TAX in this case), column by column. The user first selects the AUF tag specifying a record type from the list shown in the background (the newly created TAX tag having been selected inFIG.2), and provides the following information:
1) AUF column: the physical column number associated with each data type in a data record, where the data will go.
2) Short field name: limited in size to eight characters, for use in databases and other instances where field size is limited.
3) Regular field name: long field name.
4) Field/Data type: CHAR (for text), DATE, INTEGER, DECIMAL, or N/A.
5) Formatting: only applies to CHAR fields; options are none, FEIN (Federal Employer Identification Number), PHONE, SSN (Social Security Number), or ZIP (Zip code). This allows the forms viewer to correctly format the field when it is populated.
6) Min Chars/Max Chars: only applies to CHAR fields; allows the user to specify field content lengths.
7) Min Value/Max Value: only applies to INTEGER and DECIMAL fields; allows for range checking.
8) Precision: only applies to DECIMAL fields; forces decimal precision (for example, with a precision of 2, data that was "1.23456" would be entered on the form as "1.23").
9) Required: flag to indicate that this data point must be populated or the AUF data file will be rejected by the forms viewer.
10) Published: this field is part of a previously released AUF specification.
11) Unused: field column that should be left blank.
12) Deprecated: this field is no longer used by the forms viewer and will be ignored.
13) Description: long description of the field, for use when generating documentation.
The process can be repeated for multiple columns for the particular AUF tag as needed to fully define the record type. When the record type creation or modification of the AUF database is complete, the user clicks the "Publish . . . " button on the AUF Items window (FIG.2), displaying the screen shown inFIG.3. As shown inFIG.3, the user provides the version number and has the opportunity to modify the two dates that are embedded in the published AUF specification files (spec.xml). After reviewing and clicking publish, the tool generates the XML file for the new AUF specification and the HTML files that are published for third party partners' reference. Once the AUF database has been updated, the new or updated AUF record type is available to be assigned to a field of a form in the forms designer program, and then becomes part of a form file, which is used by the forms viewer program to construct the form, wherein once the form file is merged with the AUF data, the fields of the form will be filled with data as appropriate. 
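The column attributes enumerated for the FIG.2 window map naturally onto a descriptor structure. The following C++ sketch is illustrative only (the names and the checks shown are assumptions, not the patent's code); it suggests how a viewer could validate a data point purely from the published descriptor, with no type-specific code changes:

#include <string>

// Illustrative descriptor for one AUF column, mirroring the fields
// entered in the AUF Builder window (names are hypothetical).
struct ColumnDescriptor {
    int         column = 0;     // physical column number in the record
    std::string fieldName;      // regular (long) field name
    std::string dataType;       // "CHAR", "DATE", "INTEGER", "DECIMAL"
    int         minChars = 0;   // CHAR fields only
    int         maxChars = 0;   // CHAR fields only (0 = no limit)
    int         precision = 0;  // DECIMAL fields only
    bool        required = false;
    bool        deprecated = false;
};

// Validate one data point against its descriptor; deprecated columns
// are ignored, as described above.
bool Validate(const ColumnDescriptor& d, const std::string& value)
{
    if (d.deprecated) return true;          // ignored in future releases
    if (value.empty()) return !d.required;  // enforce the Required flag
    if (d.dataType == "CHAR") {
        int n = static_cast<int>(value.size());
        if (n < d.minChars) return false;
        if (d.maxChars > 0 && n > d.maxChars) return false;
    }
    return true; // numeric range and precision checks omitted for brevity
}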
For example, the field assigned the AUF tag TAX is shown inFIG.4as it is displayed in the forms designer program. In the window shown inFIG.4, the user can select a record tag from the "AUF Tag" dropdown menu ("TAX" is selected, in this instance). Once the tag has been chosen, the column popup menu configures itself to list the columns that are applicable for the data types associated with the selected AUF tag data record, as shown inFIG.5, each column having been configured as described in reference toFIG.2above. When a column is selected from the menu, the forms designer uses the column description established in the AUF Builder as described above (such as data type, maximum or minimum characters, etc.) to configure some of the selections in the Field Characteristics editor. After correctly assigning AUF tags and columns, the user saves the configuration to the applicable form file, and the normal testing and release procedures are followed. Any third party partner who chooses to support the new data points simply adds them to its existing AUF data file per the AUF specification, and the fields are automatically populated correctly, without any changes made to the forms viewer or forms designer applications. This system involves the following components: personal computers and network servers; the AUF builder program, used to view, edit, and publish AUF record type tags and columns of data points; the AUF builder thin client support module, which is a bridge between the AUF database file and the forms designer program that avoids making changes directly to the database; the forms designer program, which assigns the AUF tags created or amended by the AUF builder to fields on a form; and the forms viewer program, which merges the form file created by the forms designer with the AUF data file to populate the form automatically. The benefits of the present invention are numerous and described herein. Substantial technological problems are overcome, including relieving software engineers of the burden of implementing code changes across a wide array of programming modules in order to create new records, or new data types and fields in existing records, all associated with the task of forms creation. A process that previously took weeks or even months and required manual manipulation of source code and databases can now be accomplished within a matter of minutes by individuals with relatively lower skill levels than before. Previously, data points were mapped to field names, and field names assigned to forms; a change thereto required hunting through a myriad of code in the forms designer, forms viewer, and other supporting modules to make changes. The present invention creates a specialized independent tool to create and amend record types, which can then be referenced simply by referring to an AUF tag. The forms viewer and forms designer can then utilize any record type by its AUF tag, essentially without any other information. Modifications to the AUF specification do not impact existing functionality, limiting testing to a simple verification that fields are being properly filled out with data from the AUF data file. The above specification and accompanying Figures are for illustrative use only. The scope of the present invention is defined by the following claims. 
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it is therefore desired that the present embodiment be considered in all respects as illustrative and not restrictive, reference being made to the appended claims rather than to the foregoing description to indicate the scope of the invention. Those of ordinary skill in the art that have the disclosure before them will be able to make modifications and variations therein without departing from the scope of the invention. | 27,728 |
11861303 | DETAILED DESCRIPTION General terms that are currently widely used are selected as terms used in example embodiments in consideration of functions in the present disclosure but may be changed depending on the intention of those skilled in the art or a judicial precedent, the emergence of a new technique, and the like. In addition, in specific cases, there may be terms arbitrarily chosen by the applicant. In this case, the meaning of such a term will be described in detail in a corresponding description portion. Therefore, the terms used in the present disclosure should be defined on the basis of the meanings of the terms and the content throughout the present disclosure rather than the simple names of the terms. Throughout the specification, unless otherwise specified, “including” any component means that other components may be further included rather than excluding other components. The expression “at least one of a, b, and c” described throughout the specification may include “a alone,” “b alone,” “c alone,” “a and b,” “a and c,” “b and c,” or “all of a, b, and c.” An “electronic apparatus” described below may also be referred to as an electronic device and may be implemented as a computer or a portable terminal capable of accessing a server or another electronic apparatus through a network. Here, computers may include, for example, a notebook, a desktop, a laptop, and the like, which are equipped with a web browser, and portable terminals are wireless communication devices that ensure portability and mobility, and may include, for example, International Mobile Telecommunications (IMT), code division multiple access (CDMA), W-code division multiple access (W-CDMA), and Long Term Evolution (LTE) terminals, and all kinds of handheld-based wireless communication devices, such as a smartphone and a tablet PC. In the following description, example embodiments of the present disclosure will be described in detail with reference to the drawings so that those skilled in the art can easily carry out the present disclosure. However, the present disclosure may be implemented in various different forms but it is not limited to the example embodiments described herein. Hereinafter, example embodiments of the present disclosure will be described with reference to the drawings. FIG.1is a diagram illustrating a relationship between an electronic apparatus for providing information, which is related to a fulfillment center, and another apparatus according to an example embodiment. Referring toFIG.1, the electronic apparatus100may be connected to an apparatus related to each of one or more fulfillment centers111and113and one or more other apparatuses121,123, and125including data. In an example embodiment, the apparatus related to the first fulfillment center111may include at least one of an apparatus for management of the first fulfillment center111, an apparatus used by the administrator of the first fulfillment center111, and an apparatus for requesting information related to the first fulfillment center111. The apparatus related to the first fulfillment center111may transmit data request information for requesting data to the electronic apparatus100. When the electronic apparatus100confirms the data request information, the electronic apparatus100may acquire data corresponding to the data request information from one or more other apparatuses121,123, and125according to the data request information, and transmit the acquired data to the apparatus related to the first fulfillment center111. 
The data request information is information on or regarding data to be acquired by the apparatus related to the first fulfillment center111and may include, for example, at least one of a time at which data is to be received, a kind of data, and a type of data. When the electronic apparatus100confirms the data request information, the electronic apparatus100may acquire data from one or more other apparatuses121,123, and125to provide data according to the data request to the apparatus for requesting data, that is, the apparatus related to the first fulfillment center111. As a result, the electronic apparatus100may provide the data corresponding to the data request information to the apparatus related to the first fulfillment center111. In an example embodiment, upon confirming the data request information, the electronic apparatus100may confirm the apparatus related to the data request information among the other apparatuses121,123, and125. For example, the electronic apparatus100may confirm the type of data request information in response to the confirmation of the data request information. When the type of data request information is confirmed as the first type, an apparatus related to the first type of data among the other apparatuses121,123, and125may be confirmed. The electronic apparatus100may acquire the data corresponding to the data request information by requesting that data from the confirmed apparatus. Here, the type of data request information may include at least one of, for example, a type related to a sale of items, a type related to work progress of the first fulfillment center111, and a type related to manpower, but is not limited thereto. In an example embodiment, the electronic apparatus100may include template information corresponding to the data request information. Upon confirming the data request information, the electronic apparatus100may process data acquired from one or more other apparatuses121,123, and125based on the template information corresponding to the data request information. The electronic apparatus100may provide the processed data to the apparatus related to the first fulfillment center111. This will be described in detail below. Referring toFIG.1, the electronic apparatus100may be connected to the apparatus related to the second fulfillment center113. In this case, the electronic apparatus100may apply the operation on the apparatus related to the first fulfillment center111to the apparatus related to the second fulfillment center113. Meanwhile,FIG.1is only exemplary, and the electronic apparatus100may be connected to apparatuses related to a larger or smaller number of fulfillment centers and other apparatuses. Meanwhile, throughout the example embodiments, the description is based on acquiring information related to the fulfillment center, but the present disclosure is not limited thereto. In the example embodiments of the present specification, a method is provided for providing data of a desired field, for data stored in a specific server or a database, to a user according to a template. FIG.2is a functional block diagram of the electronic apparatus according to the example embodiment. Although components related to the present example embodiment are illustrated inFIG.2, the present disclosure is not limited thereto, and other general-purpose components may be further provided in addition to the components illustrated inFIG.2. 
Referring toFIG.2, an electronic apparatus200may include a memory210and a processor220. Each of the memory210and the processor220is a unit that processes at least one function or operation, which may be implemented by hardware, software, or a combination of hardware and software. According to an example embodiment, the electronic apparatus200ofFIG.1may be implemented as a server, a computer, or a terminal, and the present specification is not limited by the implementation method of the electronic apparatus200. The memory210may store various types of data related to the electronic apparatus200. For example, the memory210may store at least one instruction for an operation of the electronic apparatus200. In this case, the processor220to be described below may perform various operations based on the instruction stored in the memory210. The processor220may be connected to the memory210to perform various operations of the electronic apparatus100. The processor220may confirm data request information for requesting data related to the fulfillment center. The data request information may be acquired from other apparatuses but is not limited thereto. In an example embodiment, the data request information may include a time condition. For example, the data request information may be a request for data on the inventory status of the first fulfillment center at 9:00 am. As another example, the data request information may be a request for data on the inventory status around the first fulfillment center at 3:00 pm. As another example, the data request information may be a request for data on a work processing rate of the first fulfillment center at 6:00 pm on that day. In an example embodiment, the data request information may be confirmed based on a user input of a first apparatus connected to the electronic apparatus200. For example, the data request information may be generated by the first apparatus in response to inputting data conditions required by the user of the first apparatus to the first apparatus. For example, the data request information may be generated in response to the time when the required data is to be received and the input of the information on or regarding the required data to the first apparatus. The processor220may acquire first data corresponding to the data request information from at least one other apparatus connected to the electronic apparatus200. In an example embodiment, upon confirming the data request information, the processor220may confirm the apparatus related to the data request information among one or more other apparatuses. The processor220may acquire the first data corresponding to the data request information from the confirmed apparatus. For example, when the data request information is data related to a work rate of the first fulfillment center, the processor220may confirm the apparatus related to the work rate of the first fulfillment center among one or more other apparatuses and acquire the first data from the confirmed apparatus. In this case, an apparatus corresponding to each piece of data request information (or type of data request information) may be designated in advance. In an example embodiment, the acquired first data may include raw data corresponding to the data request information. The raw data may include data that has not yet been processed or handled. For example, when the raw data is related to a sensor, the sensing value itself may be included. 
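Because an apparatus corresponding to each piece (or type) of data request information may be designated in advance, as noted above, the routing step can be as simple as a table lookup. A minimal C++ sketch follows, in which the type names and the mapping are illustrative assumptions rather than anything disclosed:

#include <map>
#include <string>

// Hypothetical pre-designated mapping from the type of data request
// information to the apparatus that stores the corresponding data.
const std::map<std::string, std::string> kApparatusForType = {
    {"work_rate",  "apparatus_121"},  // work progress data
    {"item_sales", "apparatus_123"},  // sales data
    {"manpower",   "apparatus_125"},  // manpower data
};

// Confirm the apparatus related to the request; empty if unknown.
std::string ConfirmApparatus(const std::string& requestType)
{
    auto it = kApparatusForType.find(requestType);
    return it != kApparatusForType.end() ? it->second : std::string();
}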
For example, the first data may include data indicated by the data request information, that is, requested data. When the data request information is information for requesting the work rate of the first fulfillment center, the first data may include work rate data of the first fulfillment center. In an example embodiment, when the data request information includes a time condition, the processor220may acquire the first data from at least one other apparatus according to the time condition. For example, when the time condition is 9:00 am, the processor220may acquire the first data from at least one other apparatus at 9:00 am. In an example embodiment, when at least one other apparatus connected to the electronic apparatus200includes a plurality of apparatuses, the processor220may confirm the apparatus related to the first data among the plurality of apparatuses based on the information on or regarding the plurality of apparatuses. In some cases, the apparatus related to the first data may be designated in advance, and the processor220may confirm the apparatus related to the first data based thereon. The processor220may acquire the first data corresponding to the data request information from the confirmed apparatus. In an example embodiment, at least one other apparatus may include a first server and a second server. When the first server and the second server are apparatuses related to the first data, the processor220may acquire data corresponding to the data request information from each of the first server and the second server in parallel. The processor220may confirm the first data based on data acquired from each of the first server and the second server. However, the present disclosure is not limited thereto, and in some cases, the processor220may obtain data from the first server and the second server regardless of the order or sequentially. The processor220may generate second data by processing the first data based on the template information corresponding to the data request information. The template information may include information for processing the first data to acquire the processed data like the second data. The template information may include, for example, information on or regarding a method of displaying the first data to provide the first data to the user. In this case, the processor220may generate the second data by changing at least one of an arrangement, a font size, a font color, and a layout in which the first data is displayed based on the template information. An example of the second data processed based on the template information can be seen with reference toFIG.5or6. In an example embodiment, the processor220may transmit the second data to the first apparatus connected to the electronic apparatus200. The first apparatus may include the apparatus corresponding to the data request information. For example, the first apparatus may correspond to the apparatus that generates the data request information but is not limited thereto. In an example embodiment, the data request information may include at least one condition. Specifically, the data request information may include a first condition and a second condition. For example, the first condition may include a time condition, and the second condition may include a condition related to a shipping rate or a work rate of the fulfillment center. Specifically, for example, the first condition may include at least one of a specific time of day (for example, 9:00 am) and a specific time interval. 
The second condition may include, for example, at least one of a shipping rate, a shipping amount, a shipping speed, a workload, a work rate, and a work speed of the first fulfillment center. However, the present disclosure is not limited thereto, and conditions for various types of information requested by a user may be included. In this case, the processor220may transmit the second data to the first apparatus connected to the electronic apparatus in response to satisfying at least one of the first condition and the second condition. For example, when the first condition is 9:00 am and the second condition includes a case where the shipping rate is less than 30%, the processor220may transmit the second data to the first apparatus when at least one of 9:00 am and the shipping rate of less than 30% is satisfied. In an example embodiment, when the transmission condition of the second data is related to the shipping rate or the work rate of the first fulfillment center, the processor220acquires data from at least one other apparatus at a specific time interval (for example, 1 minute) to confirm whether the shipping rate satisfies the conditions. The processor220may generate the second data based on the acquired data when the shipping rate satisfies the conditions. When the second data is generated, the processor220may transmit the second data to the first apparatus connected to the electronic apparatus200. In an example embodiment, the data request information may be typified. For example, the data request information may correspond to one of a plurality of predetermined types. The type of data request information may correspond to, for example, a first type in which data related to the fulfillment center includes data of a certain size or more, a second type in which data related to the fulfillment center includes a certain amount of text or more, or a third type in which data related to the fulfillment center includes a certain number of images or more. Here, the first type may correspond to the case where the data request information is information for requesting accumulated data, the second type may correspond to the case where the data request information is information for requesting data (which may be designated in advance) indicated by numbers, and the third type may correspond to the case where the data request information is information for requesting an image. In some cases, the type of data request information may include the first type in which the time condition is included in the first time range, and the second type in which the time condition is included in the second time range in response to the data request information including a time condition. For example, the first type may include a type that corresponds to daytime hours (for example, 9:00 am to before 5:00 pm) and the second type may include a type that corresponds to nighttime hours (for example, 5:00 pm to before 9:00 am). Each of the first time range and the second time range may be designated in advance, and the processor220may confirm the type of time condition corresponding to the data request information based on confirming the data request information. In an example embodiment, the processor220may confirm the template information for processing the second data based on the type of data request information. 
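As an aside, the transmission logic of the example above (transmit when at least one of the two conditions is satisfied) reduces to a simple disjunction. The following C++ sketch is an illustration under assumed names, not the patent's implementation:

// Illustrative check for the example above: the first condition is a
// time of day (9:00 am) and the second is a shipping rate below 30%.
struct TransmitConditions {
    int    hour = 9;                 // first condition: 9:00 am
    double shippingRateLimit = 0.30; // second condition: rate < 30%
};

bool ShouldTransmitSecondData(const TransmitConditions& c,
                              int currentHour, double shippingRate)
{
    bool timeMet = (currentHour == c.hour);
    bool rateMet = (shippingRate < c.shippingRateLimit);
    return timeMet || rateMet; // "at least one of" the conditions
}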
For example, the electronic apparatus200may include a plurality of pieces of template information, and the processor220may generate the second data by processing the first data using a template corresponding to the type of data request information. For example, the template information corresponding to the first type may include information for setting brightness of an area in which the second data is displayed to be greater than or equal to a predetermined value. The template information corresponding to the second type may include information for setting the brightness of the area in which the second data is displayed to be less than a predetermined value. As another example, the template information corresponding to the first type may include the template information that is displayed in black text (or dark text, text whose brightness is less than a predetermined value) on a white background (or a light background, a background having brightness greater than or equal to a predetermined value). The template information corresponding to the second type may include template information displayed as a white background (or a bright background) on black text (or dark text). In an example embodiment, the template information for processing the first data may be acquired based on a user input. For example, in response to acquiring the data request information, the processor220may display a screen for requesting the user input for selecting the template information. The processor220may confirm the user input on the displayed screen, and confirm the template information indicated by the confirmed user input as the template information for processing the first data. In an example embodiment, the screen for requesting the user input for selecting the template information may be provided through the electronic apparatus200, but is not limited thereto, and may be provided to the first apparatus connected to the electronic apparatus, for example, the apparatus corresponding to the data request information. In this case, the user input may be input by a user of the first apparatus, that is, a user requesting data. In an example embodiment, the template information may include information on or regarding a template type (for example, a first type and a second type). In this case, the selection of the template information described above may correspond to the selection of the template type. In an example embodiment, the processor220may confirm the template type based on the template information. For example, the processor220may confirm the template type based on the template information. When the processor220confirms the template type, the processor220may identify a candidate apparatus that is expected to provide the second data. The processor220may transmit the second data to the selected apparatus based on the selection of one of the candidate apparatuses. In this case, the candidate apparatus that is expected to provide the second data for each template type may be designated in advance. In addition, the selected candidate apparatus may include the electronic apparatus corresponding to the data request information, for example, the apparatus for requesting data, but is not limited thereto. In an example embodiment, the processor220may store the information related to the generation of the second data. 
The information related to the generation of the second data may include at least one of: the apparatus to be provided with the second data, details of the information included in the second data, the template information corresponding to the second data, the information representing the first data, a generation time of the second data, and information on the apparatus corresponding to the data request information corresponding to the second data. In an example embodiment, the processor220may change the information related to the generation of the second data in response to an input to the stored information. When the input to the stored information is confirmed, the processor220may change at least some of the information related to the generation of the second data. For example, the processor220may change the apparatus that provides the second data or the template information. In an example embodiment, the second data may include the information on or regarding the time when the processor220requests the first data from at least one other apparatus and the information on the time when the processor220acquires the first data from at least one other apparatus. In an example embodiment, the processor220may change the information on the data request time included in the data request information based on the information on the difference between the time when the first data is requested and the time when the first data is acquired. Specifically, the processor220may confirm information that the time when the first data is requested is 13:34:05 and the time when the first data is acquired is 13:34:59. The processor220may determine, based on that information, that the difference between the time when the first data is requested and the time when the first data is acquired is 54 seconds. Accordingly, the next time the first data is requested, the processor220may request the first data at a time 54 seconds earlier than the time when the first data was last requested. For example, the processor220may request the first data at 13:34:05 on December 15, and when the difference is confirmed to be 54 seconds, the processor220may request the first data on December 16 at 13:33:11, which is 54 seconds earlier than the existing 13:34:05. In this case, the processor220may provide more accurate and up-to-date information by minimizing the data error that may occur due to the difference in the data transmission speed. FIG.3is a flowchart illustrating a flow of each operation of a method of providing information by an electronic apparatus according to an example embodiment. Each operation of the method illustrated inFIG.3may be performed in a different order from that illustrated in the drawings in some cases. Hereinafter, content overlapping that described above may be omitted. Referring toFIG.3, in operation310, the electronic apparatus may confirm the data request information for requesting the data related to the fulfillment center. In an example embodiment, the data request information may be received from the first apparatus connected to the electronic apparatus. In this case, the data request information is the information requested by the first apparatus, and may include, for example, the information selected by the user of the first apparatus among various types of information on or regarding the fulfillment center in which the first apparatus is installed, such as the workload, the work rate, the shipping rate, the shipping amount, or the manpower. 
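The 54-second adjustment described above amounts to simple clock arithmetic: the next request time is the previous request time shifted earlier by the observed request-to-acquisition delay. A minimal sketch follows, with illustrative names; only the time-of-day shift is shown, not the day-to-day scheduling:

#include <ctime>

// If the last request went out at 13:34:05 and the data arrived at
// 13:34:59, the observed delay is 54 seconds, so the next request is
// scheduled for 13:33:11, 54 seconds earlier than before.
std::time_t NextRequestTime(std::time_t lastRequested, std::time_t acquired)
{
    std::time_t delay = acquired - lastRequested; // e.g., 54 seconds
    return lastRequested - delay;                 // shift earlier by the delay
}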
In an example embodiment, the data request information may include a time condition related to the time when the first apparatus intends to receive data. For example, the data request information may include the time information of 9:00 am. In this case, the data corresponding to the data request information may be provided to the first apparatus at 9:00 am every day. When the information on the reception period is further set, data reflecting the reception period may be provided to the first apparatus. For example, when the reception period is one week, the electronic apparatus may provide data to the first apparatus at 9:00 am at one week intervals. In operation320, the electronic apparatus may acquire the first data corresponding to the data request information from at least one other apparatus connected to the electronic apparatus. In an example embodiment, the electronic apparatus may be connected to at least one other apparatus. Each of the one or more other apparatuses may store different data. In this case, when the electronic apparatus acquires the data request information, the electronic apparatus may confirm other apparatuses related to the first data to be acquired according to the data request information, and acquire the first data from the confirmed other apparatuses. In some cases, the data corresponding to the data request information may be divided and stored in a plurality of apparatuses. In this case, the electronic apparatus may acquire the first data based on acquiring data from each of the plurality of apparatuses. In an example embodiment, when the data request information includes the time condition, the electronic apparatus may acquire the first data from at least one other apparatus according to the time condition. For example, when the time condition is 9:00 am, the electronic apparatus may acquire the first data from at least one other apparatus at 9:00 am. In an example embodiment, the data request information may include the plurality of conditions. In this case, the electronic apparatus may acquire the first data from at least one other apparatus in response to satisfying at least one of the plurality of conditions. In an example embodiment, the electronic apparatus may confirm information on a request time and an acquisition time of the first data. In this case, the electronic apparatus may confirm the difference between the request time and the acquisition time of the first data, and update the next acquisition time of the first data based on the confirmed difference. In operation330, the electronic apparatus may generate the second data by processing the first data based on the template information corresponding to the data request information. The template information is information for displaying the first data, and the electronic apparatus may generate the second data by changing at least one of an arrangement, a font size, a font color, and a layout in which the first data is displayed using the template information. In an example embodiment, the template information may be designated in advance by the data request information. In this case, upon acquiring the first data, the second apparatus may generate the second data by processing the first data based on the template information according to the data request information. In an example embodiment, upon acquiring the first data, the electronic apparatus may confirm the template information corresponding to the type of first data. 
The electronic apparatus may generate the second data by processing the first data based on the confirmed template information. Here, the type of first data may be distinguished according to, for example, content indicated by data. For example, the type of first data may be distinguished according to whether the first data is related to sales, the work of the fulfillment center, or the manpower. In some cases, the type of first data may be distinguished according to the type of data included in the first data. For example, the type of first data may be distinguished according to whether an image is a type that includes a predetermined ratio or more, or text is a type that includes a predetermined ratio or more. In another case, the type of first data may be distinguished based on the condition corresponding to the data request information. For example, the type of first data may be distinguished according to a case where the time condition corresponding to the data request information is in a morning time zone and a case where the time condition corresponding to the data request information is in an afternoon time zone. FIG.4is a diagram illustrating an example of a data providing list generated by the electronic apparatus according to the example embodiment. Specifically,FIG.4illustrates an example of the data providing list generated based on receiving a plurality of pieces of data request information. The electronic apparatus may confirm the mutually distinct data request information. For example, the electronic apparatus may receive the information on or regarding the first data request from the first apparatus and the information on the second data request from the second apparatus. At least a part of the first data request information and the second data request information may be distinguished. For example, even if the same data is requested, the time conditions for receiving data may be different. As another example, the first data request information and the second data request information may be information for requesting different data according to different time conditions. In this case, the electronic apparatus may provide a list representing each piece of data request information as illustrated inFIG.4. Referring toFIG.4, a list indicating two data request information items may be provided in response to the confirmation of three pieces of data request information. Referring toFIG.4, the electronic apparatus may display template information410and corresponding apparatus information420related to each piece of data request information. In an example embodiment, such information may be displayed on a management page. The management page may be managed by an electronic apparatus, and subscribers may input additional template information through the management page and subscribe to related information through the input template information. In addition, in an example embodiment, a user may create an additional template based on at least some of the templates input by another user. To this end, the management page may provide a user with an example of the type of information subscribed to by another user and the template information for subscription. The template information410may be designated in advance by the data request information, but is not limited thereto, and may be designated by the electronic apparatus according to data content indicated by the data request information. 
The corresponding apparatus information420is information on the apparatus that transmits the data request information and may include, for example, the name of the apparatus or the user information of the apparatus. Meanwhile, in an example embodiment, the user may add the template information, and the template information may include information on or regarding the data information to be received and a format for receiving data. By providing such information, each user may adaptively subscribe to specific information. In addition, in an example embodiment, when the same user subscribes to the same information through another template, the electronic apparatus provides the information on the template to the user so that the user may select the information without receiving the duplicated information through another template. Accordingly, when new subscription information and a template for the new subscription information are input, information related thereto may be provided to the user through comparison with information and templates subscribed to by the existing corresponding user. In an example embodiment, when an input for selecting one of the plurality of pieces of data request information appearing in the list is received, the electronic apparatus may provide a screen for modifying the data request information corresponding to the input. The data request information may be modified based on a user input on a screen for modifying the data request information. FIG.5is a diagram illustrating an example of the data provided by the electronic apparatus according to the example embodiment. Specifically,FIG.5illustrates an example of second data processed according to the template information and provided to another apparatus. The electronic apparatus may generate the second data by determining the template type corresponding to the first data and processing the first data according to the determined template type. Referring toFIG.5, the template type may include a first template type in which data is displayed at specific brightness or higher. The first template type may include, for example, a template type in which data is displayed in a form in which a background is bright and text is dark in response to the background appearing at a first brightness or higher and text appearing at less than a second brightness. The first data to which the first template type is applied may correspond to a case where the acquisition of the first data or the provision of the second data falls within the first time range. When the first time range is daytime, the first template type may improve the usability for the user because it is easier to secure visibility when the surrounding environment is bright. Meanwhile, in an example embodiment, a time when corresponding information is additionally acquired and time information collected and reported by the corresponding information may be additionally provided to the user. In addition, the electronic apparatus may determine a specific data collection start time based on the information on a time required to collect data for a previous report to a subscriber who takes time to collect the information. In this way, by determining the time to collect the data provided to the user who subscribes through the existing report information, it is possible to provide information to which the user is subscribed at a certain time. FIG.6is a diagram illustrating another example of the data provided by the electronic apparatus according to the example embodiment. 
Specifically,FIG.6illustrates another example of second data processed according to the template information and provided to another apparatus. Referring toFIG.6, the template type may include a second template type in which data is displayed at less than a specific brightness. The second template type may include, for example, the template type in which data is displayed in a form in which the background is bright and the text is dark in response to the background appearing at less than a first brightness and text appearing at a second brightness or higher. The first data to which the second template type is applied may correspond to a case where the acquisition of the first data or the provision of the second data falls within the second time range. When the second time range is nighttime, the second template type may improve the usability for the user because it is easier to secure visibility when the surrounding environment is dark. In connection with the application of the template type in the case ofFIGS.5and6, the electronic apparatus may determine the template type to be applied to the first data based on confirming the time condition corresponding to the data request information. The electronic apparatus may provide the information in a form adaptive to the user's surrounding environment by processing the first data according to the determined template type. However, the template type is not limited to the above-described example, and various types may exist according to the content of data or the type of data. For example, when data includes a certain amount of text, there may be a template type in which the size, font, spacing, and the like of the text are adjusted to improve visibility, or a template type in which the arrangement of data is adjusted. FIG.7is a diagram illustrating an example in which data is transmitted by the electronic apparatus according to the example embodiment and is displayed on another apparatus. Specifically,FIG.7illustrates an example in which the second data is displayed by the apparatus that requests the data when the second data is provided to the apparatus that requests the data. Referring toFIG.7, the electronic apparatus may provide the second data in the form of mail to the apparatus corresponding to the data request information. In this case, as illustrated, the second data may be displayed in the form of the information provided in the mail. The electronic apparatus or terminal according to the above-described example embodiments may include a processor, a memory that stores and executes program data, a permanent storage such as a disk drive, a communication port that communicates with an external device, a touch panel, a key, a user interface device such as a button, and the like. Methods implemented as software modules or algorithms may be stored on a computer-readable recording medium as computer-readable codes or program instructions executable on the processor. Here, examples of the computer-readable recording medium may include magnetic storage media (for example, a read-only memory (ROM), a random-access memory (RAM), a floppy disk, a hard disk, etc.), optical reading media (for example, a compact disc (CD)-ROM or a digital versatile disc (DVD)), and the like. The computer-readable recording medium may be distributed in computer systems connected to each other through a network, and as a result, the computer-readable codes may be stored and executed in a distributed scheme. 
The medium may be readable by a computer, stored in a memory, and executed on a processor. The present example embodiment may be represented by functional block configurations and various processing operations. These functional blocks may be implemented by various numbers of hardware and/or software components that execute specific functions. For example, the example embodiment may employ integrated circuit configurations, such as a memory, processing, logic, and a look-up table, capable of executing various functions by control of one or more microprocessors or other control devices. Similar to executing the components in software programming or software elements, the present example embodiment can be implemented in programming or scripting languages such as python, C, C++, Java, and assembler, including various algorithms implemented by a combination of data structures, processes, routines or other programming configurations. Functional aspects may be implemented in algorithms executed on one or more processors. In addition, the present example embodiment may employ a conventional technology for electronic environment setting, signal processing, and/or data processing, and the like. Terms such as “mechanism,” “element,” “means,” and “configuration” may be used broadly and are not limited to mechanical and physical configurations. The terms may include the meaning of a series of routines of software in connection with a processor or the like. The above-described example embodiments are merely exemplary, and other example embodiments may be implemented within the scope of the following claims. | 39,926 |
11861304 | The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. Descriptors “first,” “second,” “third,” etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.

DETAILED DESCRIPTION

As used herein, a regular expression (also referred to as a regex) is defined to be a text string corresponding to a pattern of text and/or other data. Regular expressions are useful in programming methods for pattern matching. For example, the regular expression “\$\d” identifies a string of text that has a $ before a digit (e.g., 0-9). As used herein, a string of text, sometimes referred to as “a string,” is any ordered series of alphanumeric characters. Regular expressions may be used as a signature in anti-spam engines. For example, known spam messages (emails, texts, chat interface(s), etc. that have been marked, tagged, and/or identified as spam messages by a user, processor, and/or other device) and/or parts of spam messages (e.g., the title, subject, body, uniform resource locator (URL), title of attachments, etc.) may be analyzed to determine patterns exhibited by such spam messages. In such an example, one or more regular expression(s) corresponding to the patterns can be combined to generate a signature that identifies the patterns in subsequent messages to determine whether the subsequent messages should be tagged as spam. As used herein, a signature, anti-spam signature, regular expression signature, and/or regex signature have the same meaning and are defined to include one or more regular expressions that may be used to identify spam. The signature, anti-spam signature, regular expression signature, and/or regex signature may be generated by analyzing known and/or suspected spam and/or malicious emails. The generated signatures can be transmitted and/or released to devices periodically, aperiodically, and/or based on a trigger to enable the devices to filter out spam messages. Prior regex signature generation techniques require team(s) of researchers to analyze many (e.g., millions of) messages per day. In such prior techniques, the team(s) of researchers attempt to find group(s) of messages (e.g., tagged messages that are similar) and to generate a signature corresponding to one or more regular expressions based on pattern(s) found in a given group. However, human-based signature generation is error-prone, manually intensive, expensive, and slow. Accordingly, humans may not be able to generate signatures based on a large number (e.g., hundreds, thousands and/or millions) of messages in a timely fashion. In another regex signature generation technique, tools have been implemented to generate anti-spam signatures.
However, such tools are slow and are limited in the number of strings that can be analyzed. For example, such tools can take over eight minutes to generate a signature based on two strings. Additionally, such tools can only analyze a maximum of fifteen strings at a time before crashing. Examples disclosed herein can generate an accurate anti-spam signature based on thousands of strings within seconds. Examples disclosed herein generate an anti-spam signature by generating a token graph representative of different combinations of ordered sub-strings of messages of interest. As used herein, a sub-string is any portion (less than all) of a string. A token graph includes nodes representative of the sub-strings of the messages. A sub-string of a string is referred to herein as a token. Examples disclosed herein identify pivots or pivot nodes of the token graph from messages which have been grouped based on similarity. The messages may be grouped before processing based on various criteria such as length, common sub-strings, format, etc. Grouping the messages prior to building the token graph leads to better results. For example, a token graph may be generated for each group of similar messages identified as spam. In some examples, a device (e.g., a processor) groups known spam messages (e.g., messages that have been marked, tagged, and/or identified as spam messages by a user, processor, and/or other device) based on the various criteria prior to the generation of the token graph. As used herein, pivots represent the most common substrings (e.g., substrings occurring at more than a threshold frequency) included in the grouped messages. The “most common” substrings are application specific; for example, they may be those occurring in more than X % of the messages. For example, if 1,000 emails in a group all start with “Hello there,” then “Hello,” “there,” and/or “Hello there” may be defined as pivot(s). Examples disclosed herein generate an anti-spam signature based on the token graph and identified pivots for a corresponding group of messages. For example, individual pivots of the token graph are respectively converted into corresponding single regular expressions (a single regex). Multiple non-pivots (e.g., substrings that do not occur more than the threshold number of times in the group of messages) between two pivots are merged and converted into a single regular expression (e.g., a single, merged regex). Accordingly, the resulting anti-spam signatures include the regular expressions for pivots representative of the most common substrings of the group of messages, interleaved with regular expressions for the random context surrounding the pivots (e.g., the non-pivots). Such anti-spam signatures represent 90% or more of the corresponding group of messages. Using examples disclosed herein, an accurate anti-spam signature can be generated based on analysis of thousands of strings within seconds (e.g., in less than one minute).

FIG.1is a block diagram of an example implementation of an example regex engine100to automatically generate and/or deploy an anti-spam signature corresponding to a group of messages. In the example ofFIG.1, the example regex engine100includes an example interface102, an example string converter104, an example token graph generator106, an example counter108, an example pivot engine110, an example regex signature generator118, and an example deployment interface124.
The example pivot engine110includes an example comparator112, an example filter114, and an example pivot graph generator116. The example regex signature generator118includes an example pivot applicator120and an example regex converter122. The example interface102ofFIG.1obtains a group of messages (herein referred to as a cluster of strings or string cluster) from another device and/or component. The group of messages may be one of a plurality of groups of messages. The group may be defined using any criteria of interest. For example, the messages may be grouped based on subject, points of origin, destination, characteristics of the recipient, text, number of characters, links, tags, length of text, etc. A group of messages may include thousands of messages. Typically, the messages in the group are known to be spam messages. For example, the known messages may have been previously sent to a user device and tagged as spam by the user and/or the user device. The device and/or component processes messages to identify similar messages and to place the similar messages into groups. For example, when a user tags a message as spam, the user device may transmit the tagged message to the device and/or component, and the device and/or component may then separate the tagged messages into groups of similar messages. The device and/or component transmits all the messages of a group to the example interface102. Accordingly, the example interface102obtains a group of known spam messages that are similar based on a characteristic. The example string converter104ofFIG.1converts the obtained cluster of strings (sometimes referred to herein as a string cluster) into tokens (e.g., a first string is converted into one or more first tokens, a second string is converted into one or more second tokens, etc.) by performing a tokenization technique. The example string converter104tokenizes the cluster of strings to separate the string(s) into sub-strings (e.g., tokens). The string converter104may separate a string into two or more sub-strings based on any criteria, such as by identifying spaces and/or special characters. For example, the string converter104may break the string “Is this the right-time for working” into the sub-strings (e.g., tokens) “Is” “this” “the” “right” “time” “for” and “working” because each of these words is separated from adjacent words by a space or a special character (e.g., “-”, a dash). The example token graph generator106ofFIG.1generates a token graph corresponding to the string cluster. The token graph includes a node for the unique token(s) at each position of the string cluster. For example, the first node(s) at the first position of the graph corresponds to the first unique word(s) used in the first position of the string cluster, the second node(s) at the second position of the graph corresponds to the second unique word(s) used in the second position of the string cluster, etc. For example, for a string cluster including “Welcome to . . . ” and “Welcome everyone . . . ”, the token graph generator106generates a first node at a first position for the substring “Welcome” and second nodes at a second position for the substrings “to” and “everyone.” An example of a token graph that may be generated by the example token graph generator106based on a cluster of strings is further described below in conjunction with the example token graph204ofFIGS.2A-2B.
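To make the tokenization and token graph construction described above concrete, the following is a minimal Python sketch rather than the patented implementation; the function names tokenize and build_token_graph, the choice of splitting on runs of non-alphanumeric characters, and the dictionary-of-sets representation are all illustrative assumptions consistent with the behavior attributed to the string converter104and the token graph generator106.

    import re
    from collections import defaultdict

    def tokenize(string):
        # Hypothetical tokenizer: split on spaces and special characters,
        # keeping only the resulting sub-strings (tokens).
        return [token for token in re.split(r"[^A-Za-z0-9]+", string) if token]

    def build_token_graph(cluster):
        # Hypothetical token graph: one set of unique tokens (nodes) per position.
        graph = defaultdict(set)
        for string in cluster:
            for position, token in enumerate(tokenize(string)):
                graph[position].add(token)
        return graph

    cluster = ["Welcome to the show", "Welcome everyone to the show"]
    graph = build_token_graph(cluster)
    # graph[0] == {"Welcome"}; graph[1] == {"to", "everyone"}; ...

Under this representation a node is simply a unique token at a given position; an implementation could equally store counts on the nodes or explicit edges between consecutive positions.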
The token graph generator106provides the generated token graph to the regex signature generator118to generate a regex signature, as further described below. The example counter108ofFIG.1generates a word bag count by generating a count of each unique token in the string cluster being processed. For example, if the word “Hello” appears 1,000 times in the string cluster, the counter108will, based on the tokenization of the string cluster, count the 1,000 “Hello” occurrences and tag the 1,000 count to the “Hello” token. As used herein, a word bag count corresponds to a set of unique tokens tagged with their respective counts. The example counter108provides the word bag count (e.g., the tokens tagged with the corresponding counts representative of how many times the tokens appear in the cluster of strings) to the example pivot engine110. The word bag count may be, for example, a two-column array. A first column of the array may be populated by the tokens. A second column of the array may be populated with the corresponding counts for the respective tokens. The counts and their corresponding tokens may be mapped to one another by being in the same row of the array. The example pivot engine110ofFIG.1utilizes the word bag count and the string cluster to generate a pivot graph. The pivot engine110ofFIG.1includes the example comparator112, the example filter114, and the example pivot graph generator116. The pivot graph identifies particular tokens as pivots (e.g., pivot tokens). The example pivot engine110selects a token to be a pivot when the token appears more than a threshold number of times in the cluster of strings (e.g., the token is tagged with a count over the threshold number). In contrast, a non-pivot token is a token that occurs less than the threshold number of times (e.g., the token is tagged with a count below the threshold number). As an alternative to comparing to a specific threshold, pivots may instead correspond to the X (e.g., 20) most frequently occurring tokens in the cluster of strings. X may be based on user and/or manufacturer preferences. X may be any number based on the circumstances or application. The pivot engine110ofFIG.1orders the pivots to correspond to the most common order in which the pivots occur in the cluster by identifying where the pivots occur within the cluster of strings. Accordingly, the example pivot engine110generates a pivot association that identifies which tokens are pivots and an order corresponding to where the pivots occur most frequently within the cluster of strings. The pivot association may be represented by any data structure. In the illustrated example, the association is represented by a pivot graph or a pivot map. To identify the pivots, the comparator112of the pivot engine110ofFIG.1compares the counts of tokens (e.g., using the word bag count) to a threshold. The threshold may be based on user and/or manufacturer preferences. In some examples, the comparator112may compare the counts of the tokens to each other. For example, the comparator112may sort the counts in the word bag count to identify the top X number of tokens based on the respective counts, where X is based on user and/or manufacturer preferences and/or is based on the characteristics of the cluster of strings. X may be dependent on the data being processed.
For example, the comparator112may identify (A) the top two most frequently occurring tokens when the cluster of strings includes less than ten substrings (e.g., X=2), (B) the top three most frequently occurring tokens when the cluster of strings includes between ten and twenty substrings (e.g., X=3), etc. The comparator112outputs the result of the comparisons to the example filter114. The example filter114of the pivot engine110ofFIG.1selects pivots (e.g., pivot tokens) of the string cluster based on the output of the comparator112. For example, the filter114filters out (e.g., removes) any tokens with counts that do not satisfy the threshold. Accordingly, the tokens remaining after filtering correspond to the tokens which are most common in the string cluster. In some examples, the filter114may additionally filter out pivot(s) whose length(s) is/are below a threshold. For example, smaller words (e.g., words of three letters or less) may not be accurate indicators of spam when used as a pivot. Accordingly, the example filter114may filter out such smaller words (e.g., when the length of the token does not satisfy a threshold) to remove such words from the group of pivots. The threshold length may be application specific and/or based on user and/or manufacturer preferences. In some examples, the filter114may filter out the tokens (e.g., tokenized substrings) with lengths below a threshold prior to the comparator112comparing the tokens to the threshold. The example pivot graph generator116ofFIG.1generates a pivot graph based on the pivots identified by the example comparator112and the example filter114. The pivot graph reflects the identified pivots in the order in which they most commonly occur in the cluster of strings. For example, assume the comparator112and filter114identify a first word, “A,” a second word, “B,” and a third word, “C,” as pivots of the string cluster of interest. In such an example, if B follows A and C follows B in most instances of the strings in the string cluster, the pivot graph generator116generates a pivot graph with the tokens arranged in the order A-B-C, where A is first in the graph, B is second in the graph, and C is third in the graph. In another example, the example pivot graph generator116may generate an association by tagging A as a first pivot, tagging B as a second pivot, and tagging C as a third pivot. An example of a pivot graph that may be generated by the example pivot graph generator116is further described below in conjunction with the example pivot graph206ofFIGS.2A-2B. The pivot graph generator116transmits the pivot graph to the example regex signature generator118. The example regex signature generator118ofFIG.1generates a regex signature corresponding to the string cluster based on the token graph generated by the token graph generator106and the pivot graph generated by the pivot engine110. For example, the regex signature generator118tags nodes of the token graph as pivot nodes based on specified pivot tokens of the pivot graph to generate a tagged token graph. The regex signature generator118leaves the non-pivot nodes as untagged nodes in the tagged token graph. As used herein, tagged nodes are pivot nodes of the token graph. As used herein, untagged nodes are nodes of the token graph that are not pivot nodes. An example of a tagged token graph is described below in conjunction with the example token graph207ofFIGS.2A-2B.
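The word bag count, the count threshold, and the length filtering described above can be sketched in a few lines of Python. This is a hedged illustration rather than the patented implementation: the name select_pivots, the default thresholds, and the first-occurrence ordering rule (a simple stand-in for the most-common-order determination) are all assumptions.

    from collections import Counter

    def select_pivots(tokenized_strings, min_count=3, min_length=4):
        # Word bag count: each unique token tagged with its occurrence count.
        counts = Counter(token for tokens in tokenized_strings for token in tokens)
        # Keep tokens that occur often enough and are long enough to be pivots;
        # both thresholds are illustrative stand-ins for the preferences above.
        pivots = {token for token, count in counts.items()
                  if count > min_count and len(token) >= min_length}
        # Order the surviving pivots by first appearance within the cluster,
        # approximating the most-common-order rule described above.
        ordered = []
        for tokens in tokenized_strings:
            for token in tokens:
                if token in pivots and token not in ordered:
                    ordered.append(token)
        return ordered

Applied to the four tokenized strings ofFIGS.2A-2B(described below), such a routine would be expected to return ["Hello", "today"].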
The regex signature generator118converts multiple untagged nodes (e.g., non-pivot nodes) of the tagged token graph that are before a first pivot node, after a last pivot node, and/or between any two pivot nodes into single (e.g., merged or combined) regex expression(s), each representing two or more non-pivots. The regex signature generator118additionally converts each of the pivot node(s) into a respective (e.g., one) regex expression. In response to converting the pivots (e.g., tagged nodes) and non-pivots (e.g., untagged nodes) into regex expressions, the regex signature generator118generates a regex signature including the generated regex expressions. An example of a regex signature is described below in conjunction with the example regex signature208ofFIGS.2A-2B. In some examples, the regex signature may be a data string that a processor may be programmed to match against a target message to determine if it is spam and/or may be an executable that may be executed by a processor to identify spam by finding messages that match the characteristics of the cluster of strings (e.g., a search pattern for messages that match the characteristics of the cluster of strings). In other examples, the regex expression is data that is used as, for example, an argument in performing comparisons to attempt to identify spam messages. Initially, the example pivot applicator120of the example regex signature generator118ofFIG.1identifies pivots specified in the pivot graph and tags nodes of the token graph corresponding to the identified pivots based on the order of the pivot graph. For example, the pivot applicator120traverses through the positions of the token graph until a node corresponding to the first pivot is found and tags the node of the token graph as a pivot or a pivot node (e.g., by applying metadata to the node). After tagging the first pivot, the pivot applicator120continues to traverse through the positions of the token graph until a second node corresponding to the second pivot is found and tags the second node of the token graph as a second pivot. This process may continue until the pivot applicator120has tagged the remaining pivots in the token graph as pivot nodes, resulting in a tagged token graph, as further described below in conjunction withFIGS.2A-2B. Once the nodes of the token graph have been processed (e.g., tagged or left untagged), the example regex converter122of the example regex signature generator118ofFIG.1generates a regex signature based on the tagged token graph. For example, the regex converter122converts each tagged pivot node of the tagged token graph to a single regex expression and converts the multiple untagged nodes (e.g., the nodes of the tagged token graph that have not been tagged as pivots) positioned between two pivot nodes into a single, merged regular expression. Additionally, if there are non-pivot node(s) (e.g., untagged nodes preceding a first pivot, between pivot nodes, or following a last pivot), the example regex converter122respectively converts the groups of non-pivot node(s) (e.g., untagged nodes of the tagged token graph) into corresponding single, merged regex expression(s). A group of non-pivot nodes (e.g., multiple untagged nodes of the tagged token graph) is any of: (a) non-pivot nodes between two pivot nodes, (b) non-pivot nodes preceding the first pivot node, or (c) non-pivot nodes after the last pivot node.
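As one hedged sketch of this conversion step, the tagged token graph can be represented as an ordered list of (token set, pivot flag) pairs, with each pivot becoming a literal expression and each run of untagged positions collapsing into one merged alternation. The function name to_signature and the rendering of the patent's {\A\B}-style notation in standard Python regex alternation syntax are assumptions; the prose example that follows walks through the same conversion on concrete tokens.

    import re

    def to_signature(positions):
        # positions: ordered list of (token_set, is_pivot) pairs, one per
        # position of the tagged token graph.
        parts, run = [], []

        def flush():
            # Merge a whole run of non-pivot positions into a single regex:
            # any observed word, one or more times.
            words = sorted({re.escape(word) for tokens in run for word in tokens})
            parts.append(r"(?:(?:%s)\s*)+" % "|".join(words))
            run.clear()

        for tokens, is_pivot in positions:
            if is_pivot:
                if run:
                    flush()
                # A pivot position holds a single token in this representation.
                parts.append("(%s)" % re.escape(next(iter(tokens))))
            else:
                run.append(tokens)
        if run:
            flush()
        return r"\s*".join(parts)

    signature = to_signature([({"A", "B"}, False), ({"X"}, True),
                              ({"C", "D"}, False), ({"Y"}, True)])
    # re.search(signature, "B X D Y") matches: the non-pivot slots accept
    # any of the merged alternatives, while the pivots must appear literally.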
For example, if a tagged token graph includes non-pivots “A” and “B” followed by pivot “X” followed by non-pivots “C” and “D” followed by pivot “Y,” the regex converter122converts “A” and “B” into a single regex expression (e.g., {\A\B}), “X” into a single regex expression (e.g., X), “C” and “D” into a single specific regex expression (e.g., {\C\D}), and “Y” into a single regex expression (e.g., Y) to generate the regex signature “({\A\B})(X)({\C\D})(Y)”. In this expression, “\” indicates one of the strings that may be found to satisfy a search, and “{ }” groups all of the possible strings that may be found to satisfy the search. As described above, the regex signature may be a data string that a processor may be programmed to match against a target message to determine if it is spam (e.g., messages that start with a first string A or B followed by a second string X followed by a third string C or D, followed by a fourth string Y). Alternatively, if there are more than a threshold number of non-pivots between the two pivots, the example regex converter122may convert the non-pivots into a generic regex based on the length of the strings, as further described below in conjunction withFIGS.2A-2B. A generic regex does not cause a processor to look for the exact non-pivot string, but for a string that has a similar length to the non-pivot string. If the regex converter122generates a regex expression corresponding to the cluster of strings that corresponds to known spam, a processor can utilize the regex expression to find one or more messages that match the cluster and tag such message(s) as potential spam. Because, by definition created by the threshold test above, non-pivot nodes occur less often in the string cluster than pivot nodes, non-pivots correspond to some degree of randomness in the string cluster. Accordingly, by representing non-pivot nodes with a single, compound regex expression, the regex converter122generates a regex signature focused on the pivots that accounts for randomization between different pivots. For example, the creator of a spam message may attempt to create messages that have been slightly adjusted to avoid being filtered. Accordingly, although the spam messages created may be very similar, the creator may create four different spam messages with the first word being different (e.g., the first word being one of “A” “B” “C” or “D”) and the rest of the words being the same. The regex converter122may generate a regex signature based on the four messages focused on the pivots (e.g., the words of the spam messages that are the same) and accounting for the randomization of the first word among the four messages by generating a single regex of {\A\B\C\D} at a first position of a signature. In this manner, any message that includes any one of “A” “B” “C” or “D” in the first position of the message followed by the pivots can be identified as spam. As such, the regex signature corresponds to a larger percentage of the strings in the cluster. The example deployment interface124ofFIG.1deploys generated regex signatures to devices. For example, a generated regex signature can be transmitted to a device of an end user (e.g., via a network, such as the Internet) in response to a user-initiated request of spam filtering software, as part of a software update, and/or to a storage unit so that a package of regex signatures can later be generated. Additionally or alternatively, the deployment interface124may deploy the generated regex signatures to a server or other device.
In this manner, the server or other device may utilize the regex signatures to filter out spam messages before they are transmitted to a user device. In some examples, the deployment interface124deploys regex signatures in response to a new regex signature being generated. In some examples, the deployment interface124transmits a group of regex signatures for multiple different string clusters at a set period of time (e.g., hourly, daily, weekly, etc.), based on a trigger (e.g., a request from a device), and/or after a threshold number of regex signatures have been created. As described above, the regex signatures may be used by the processors to identify spam by identifying messages that include a pattern of alphanumeric text corresponding to one or more of the regex signatures. Accordingly, transmitting the regex signatures to devices enables those devices to block spam and reduce (e.g., eliminate) these potential vehicles for transferring malware that can damage, misuse, or even destroy a computing device.

FIGS.2A-2Brepresent an example regex signature generation process200performed by the example regex engine100ofFIG.1. The example ofFIGS.2A-2Bincludes an example cluster of strings202, an example token graph204, an example pivot graph206, an example tagged token graph207, and an example regex signature208. As described above, the cluster of strings202are strings from different messages that have been identified as similar by another device or component. The messages may be, for example, known spam messages. Although the example ofFIGS.2A-2Bincludes four messages, a cluster of strings may include thousands of strings. The example string converter104ofFIG.1converts the four strings of the cluster of strings202into tokens (e.g., sub-strings) by tokenizing the four strings based on, for example, spaces and/or special characters. For example, the string converter104converts “Hello sir how are things today?” to the tokens “Hello” “sir” “how” “are” “things” “today”. The token graph generator106(FIG.1) converts the tokens for the strings of the cluster into the example token graph204ofFIGS.2A-2B. For example, the token graph generator106determines that the first token of each of the strings is “Hello”. Accordingly, the token graph generator106generates a node for the “Hello” token at a first position. Subsequently, the token graph generator106determines the second tokens for each of the strings and generates a node for each unique token at the second position. The positions are shown by dotted vertical lines in the example ofFIGS.2A-2B. As shown in the example token graph204, the positions of respective nodes of the token graph204correspond to positions of respective substrings of the cluster of strings202. For example, the substring “Hello” is in the first position of each string of the cluster of strings202and the respective node “Hello” is in the first position of the example token graph204, the substrings “sir” “madam” “there” and “pal” are in the second position of each string of the cluster of strings202and the respective nodes “sir” “madam” “there” and “pal” are in the second position of the example token graph204, etc. In this manner, the first order of tokens of the token graph204is the same as the second order of substrings in the cluster of strings202. The example token graph generator106continues to convert the tokens into nodes at the respective positions until all of the tokens have been implemented in a node.
Accordingly, the token graph204represents the possible combinations of the cluster of strings202. The example counter108(FIG.1) counts the number of occurrences of each token from the cluster of strings202and tags each token with the corresponding count. The counts are represented in the nodes of the token graph204ofFIG.2by a number in parentheses. The example filter114(FIG.1) filters out tokens that are below a threshold length. For example, in the example regex signature generation process200ofFIGS.2A-2B, the threshold is three characters long. Accordingly, the example filter114filters out tokens that are three characters or less (e.g., “sir” “pal” “how” “are” and “you”) to generate the example filtered token graph205. Once filtered, the example comparator112(FIG.1) compares the counts to a threshold to identify pivots. For example, in the example regex signature generation process200ofFIGS.2A-2B, the threshold is three instances. Accordingly, the example comparator112identifies tokens that appear more than three times in the cluster of strings (e.g., in the same position or at any position, based on user and/or manufacturer preferences). Additionally or alternatively, the example comparator112may determine the X number of tokens with the highest counts. The example filter114filters out the tokens that do not satisfy the threshold, resulting in the pivots (e.g., “Hello” and “today”) of the cluster of strings202. The example pivot graph generator116generates the example pivot graph206based on the identified pivots. Because “Hello” occurs before “today” in all of the strings of the cluster of strings202, the example pivot graph generator116generates the example pivot graph206to include the pivot “Hello” before the pivot “today.” The example pivot applicator120(FIG.1) of the regex signature generator118(FIG.1) generates the example tagged token graph207by tagging the pivots from the pivot graph206in the example token graph204and leaving non-pivots as untagged nodes. In other examples, non-pivots may be affirmatively labeled as non-pivot nodes, rather than being left untagged. Once the pivots are tagged, the example regex converter122converts the pivots into single regexes and converts any non-pivots between the pivots into a corresponding single, merged regex. Accordingly, the example regex converter122converts the pivot “Hello” to the regex “Hello”, converts the non-pivots “sir” “madam” “there” “pal” “how” “are” “things” and “you” between the “Hello” and “today” pivots to a single generic regex or a single specific regex, and converts the pivot “today” to the regex “today,” resulting in the example regex signature208. A specific regex corresponds to a search identifying the specific words of the non-pivots. For example, the example regex converter122converts the non-pivots “sir” “madam” “there” “pal” “how” “are” “things” and “you” between the “Hello” and “today” pivots to a single specific regex of “({\‘sir’\‘madam’\‘there’\‘pal’\‘how’\‘are’\‘things’\‘you’}).” In such an example, the single specific regex corresponds to a search that identifies messages that include any one of “sir” “madam” “there” “pal” “how” “are” “things” and “you” in one or more positions (e.g., between the two pivots). A generic regex corresponds to a search identifying any word that has a similar length.
For example, because the first four non-pivots at the second position of the example token graph204are words varying from 3 characters to 5 characters, the example regex converter122creates the first part of the regex to be {{a-z}{3-8}}, which searches for any word that includes 3 to 8 letters (e.g., giving a three letter cushion from the 5 character maximum of the non-pivots in the second position). For example, the {a-z} part of the regex identifies any string with letters a-z and the {3-8} part of the regex identifies any string with a character length of 3 to 8 characters. Accordingly, the regex {{a-z}{3-8}} searches for any word that includes 3 to 8 letters. The regex converter122converts the non-pivot nodes into a single generic regex or a single specific regex based on the number of non-pivots between the two pivots. A user and/or manufacturer may define a threshold number of non-pivots that defines whether the regex conversion should be generic or specific. The amount of cushion added to the minimum characters and/or maximum characters may be based on user and/or manufacturer preferences. Additionally, different cushions may be added to different maximum/minimum word lengths (e.g., a cushion of 2 characters may be added to words of less than 5 characters and a cushion of 3 characters may be added to words with 5 or more characters). In the example ofFIGS.2A-2B, the number of non-pivots between the “Hello” pivot and the “today” pivot is above the threshold. Accordingly, the example regex converter122converts the non-pivots into the single regex “({{a-z}{3-8}}{{a-z}{3-5}}{{a-z}{3-5}}{{a-z}{3-9}})”. The example deployment interface124deploys the example regex signature208to devices. In this manner, the devices can execute a search of messages using the regex signature208to identify messages that start with “Hello,” followed by a word with a character length between 3 and 8 characters, followed by a word with a character length between 3 and 5 characters, followed by a word with a character length between 3 and 5 characters, followed by a word with a character length between 3 and 9 characters, followed by “today.”

While an example manner of implementing the example regex engine100is illustrated inFIG.1, one or more of the elements, processes and/or devices illustrated inFIG.1may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example interface102, the example string converter104, the example token graph generator106, the example counter108, the example pivot engine110, the example comparator112, the example filter114, the example pivot graph generator116, the example regex signature generator118, the example pivot applicator120, the example regex converter122, and the example deployment interface124, and/or, more generally, the example regex engine100ofFIG.1may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware.
Thus, for example, any of the example interface102, the example string converter104, the example token graph generator106, the example counter108, the example pivot engine110, the example comparator112, the example filter114, the example pivot graph generator116, the example regex signature generator118, the example pivot applicator120, the example regex converter122, and the example deployment interface124, and/or, more generally, the example regex engine100ofFIG.1could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example interface102, the example string converter104, the example token graph generator106, the example counter108, the example pivot engine110, the example comparator112, the example filter114, the example pivot graph generator116, the example regex signature generator118, the example pivot applicator120, the example regex converter122, and the example deployment interface124, and/or, more generally, the example regex engine100ofFIG.1is and/or are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the example regex engine100may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated inFIG.1, and/or may include more than one of any or all of the illustrated elements, processes and devices. As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.

Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the example regex engine100ofFIG.1are shown inFIGS.3-6. The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor such as the processor712shown in the example processor platform700discussed below in connection withFIG.7. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor712, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor712and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated inFIGS.3-6, many other methods of implementing the example regex engine100ofFIG.1may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a packaged format, etc. Machine readable instructions as described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, etc. in order to make them directly readable and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement a program such as that described herein. In another example, the machine readable instructions may be stored in a state in which they may be read by a computer, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, the disclosed machine readable instructions and/or corresponding program(s) are intended to encompass such machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit. As mentioned above, the example processes ofFIGS.3-6may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. “Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.)
as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the terms “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.

FIG.3is an example flowchart300representative of example machine readable instructions that may be executed by the example regex engine100ofFIG.1to generate a regex signature based on a cluster of similar strings. Although the flowchart300ofFIG.3is described in conjunction with the example regex engine100ofFIG.1, other type(s) of engine(s), and/or other type(s) of processor(s) may be utilized instead. At block302, the example string converter104and the example pivot engine110obtain a cluster of strings via the example interface102. The cluster of strings corresponds to similar messages (e.g., as identified by another device or component) that have been identified as spam messages. At block304, the example string converter104tokenizes the cluster of strings. For example, the string converter104converts the strings of the cluster of strings into tokens (e.g., sub-strings) based on spaces and/or special characters. At block306, the example token graph generator106generates a token graph (e.g., such as the token graph204ofFIGS.2A-2B) based on the tokenized cluster of strings, as further described below in conjunction withFIG.4. At block308, the example counter108generates word counts of tokens of the cluster of strings corresponding to how many times the tokens occurred in the cluster of strings. For example, the counter108counts the number of occurrences of each unique token in the cluster of strings. At block310, the example filter114determines if there are one or more tokens shorter than a threshold length. If the example filter114determines that there are not one or more tokens shorter than a threshold length (block310: NO), the process continues to block314.
If the example filter114determines that there are one or more tokens shorter than a threshold length (block310: YES), the example filter114filters out the token(s) shorter than the threshold length (block312). At block314, the example comparator112compares the word counts of the remaining tokens to a threshold (e.g., based on user and/or manufacturer preferences). In some examples, the threshold is a number preset by a user and/or manufacturer. In some examples, the threshold corresponds to the X most common words in the cluster of strings. In some examples, the threshold corresponds to a percentage of strings. In such examples, the comparator112may set the threshold based on the number of strings in the string cluster (e.g., if the threshold is set to be 80% of the number of strings in the cluster and the cluster includes 200 strings, then the threshold will be set to 160). At block316, the example filter114filters out token(s) with word count(s) that do(es) not satisfy the threshold (e.g., based on the output of the comparator112). The tokens remaining after the filtering are the pivot(s). At block318, the example pivot graph generator116generates a pivot graph based on the pivot(s) and the order of the pivots within the cluster of strings. For example, once the pivot(s) are determined, the example pivot graph generator116determines the order of the pivots based on the most common order within the cluster of strings. The example pivot graph generator116generates the pivot graph to represent the selected pivots and the corresponding order with respect to the cluster of strings. At block320, the example regex signature generator118generates the regex signature based on the pivot graph and the token graph, as further described below in conjunction withFIG.5. At block322, the example deployment interface124transmits the generated regex signature to one or more devices to filter spam. For example, the deployment interface124may transmit the generated regex signature to one or more devices via a network (e.g., the Internet). As described above, the example deployment interface124may transmit the generated regex signature periodically, aperiodically, based on a trigger, alone, and/or as a package of multiple regex signatures. In some examples, the deployment interface124stores the generated regex signature temporarily (e.g., in a register) until transmission of the regex signature or a bundle of signatures is triggered. The devices execute the regex signatures to search through messages to tag potential spam messages that are similar to the cluster of strings. Executing the regex signature results in filtering out or otherwise warning the user of potential spam messages.

FIG.4is an example flowchart400representative of example machine readable instructions that may be executed to implement the example regex engine100ofFIG.1to generate a token graph based on the tokenized cluster of strings, as described above in conjunction with block306ofFIG.3. Although the flowchart400ofFIG.4is described in conjunction with the example regex engine100ofFIG.1, other type(s) of engine(s), and/or other type(s) of processor(s) may be utilized instead. At block402, the example token graph generator106selects a first position of the tokenized strings. The token graph includes nodes representative of ordered tokens of the cluster of strings. Accordingly, the token graph generator106selects a first position of the cluster of strings to initiate the token graph.
At block404, the example token graph generator106identifies the unique token(s) in the selected position. For example, if every tokenized string of the cluster of strings begins with either “Hello” or “Hi,” the token graph generator106will identify “Hello” and “Hi” as the unique tokens at the first position. At block406, the example token graph generator106generates one or more nodes for the one or more unique tokens at the selected position. Using the above example, the token graph generator106would generate two nodes (e.g., one for “Hello” and one for “Hi”) in the first position. At block408, the example token graph generator106determines if there are subsequent token(s) of the tokenized strings in subsequent position(s). If the example token graph generator106determines that there are no subsequent tokens in subsequent positions (block408: NO), the process returns to block308ofFIG.3. If the example token graph generator106determines that there are subsequent token(s) in subsequent position(s) (block408: YES), the token graph generator106selects the subsequent position of the cluster of strings (block410) and the process returns to block404to generate additional nodes for the token graph at the subsequent position(s).

FIG.5is an example flowchart500representative of example machine readable instructions that may be executed to implement the example regex engine100ofFIG.1to generate a regex signature based on the pivot graph and the token graph, as described above in conjunction with block320ofFIG.3. Although the flowchart500ofFIG.5is described in conjunction with the example regex engine100ofFIG.1, other type(s) of engine(s), and/or other type(s) of processor(s) may be utilized instead. At block501, the example pivot applicator120tags node(s) in the token graph as pivot(s) or pivot node(s) based on the pivots and pivot order of the pivot graph. For example, if the pivot graph identifies a first pivot of “A” and a second subsequent pivot of “B,” the pivot applicator120traverses through the token graph in order until it finds a first “A” node and tags the first “A” node as a pivot node. After the first “A” node is tagged, the pivot applicator120continues to traverse the token graph until it finds a first “B” node located after the first “A” node and tags the first “B” node as a pivot node. At block502, the example regex converter122determines if the first position of the token graph corresponds to a pivot node (e.g., the node at the first position has been tagged as a pivot). If the example regex converter122determines that the first position of the token graph does correspond to a pivot (e.g., is tagged as a pivot) (block502: YES), the process continues to block506. If the example regex converter122determines that the first position of the token graph does not correspond to a pivot (block502: NO), the example regex converter122converts the non-pivot nodes preceding the first pivot into a single regex (block504), as further described below in conjunction withFIG.6. At block506, the example regex converter122selects the first pivot of the token graph and selects the following pivot of the token graph. At block508, the example regex converter122converts the first pivot into a single regex. When a processor executes a search based on the single regex, the processor looks for a word corresponding to the pivot in the position identified in the regex for the pivot.
At block510, the example regex converter122converts all the non-pivot nodes between the selected pivots to generate a single regex, in a manner similar to block504. At block512, the example regex converter122converts the following pivot node to a single regex. At block514, the example regex converter122determines if there is a subsequent pivot in a subsequent position. If the example regex converter122determines that there is a subsequent pivot in a subsequent position (block514: YES), the example regex converter122replaces the first pivot with the following pivot and selects the subsequent pivot of the token graph as the new following pivot (block516) and the process returns to block510to continue to convert subsequent pivot node(s) and/or non-pivot node(s) of the token graph to regexes. If the example regex converter122determines that there is not a subsequent pivot in a subsequent position (block514: NO), the example regex converter122determines if there are one or more non-pivot nodes in any subsequent positions (block518) (e.g., any non-pivot after the last pivot of the token graph). If the example regex converter122determines that there are not one or more non-pivot nodes in any subsequent positions (block518: NO), the process returns to block322ofFIG.3. If the example regex converter122determines that there are one or more non-pivot nodes in any subsequent positions (block518: YES), the example regex converter122converts the remaining non-pivot node(s) to a single regex (block520), in a manner similar to block504, and the process returns to block322ofFIG.3.

FIG.6is an example flowchart600representative of example machine readable instructions that may be executed to implement the example regex engine100ofFIG.1to convert non-pivot node(s) between two pivots, non-pivot node(s) before a first pivot, or non-pivot node(s) after a final pivot, as described above in conjunction with blocks504,510, and520ofFIG.5. Although the flowchart600ofFIG.6is described in conjunction with the example regex engine100ofFIG.1, other type(s) of engine(s), and/or other type(s) of processor(s) may be utilized instead. At block601, the example regex converter122determines if the number of non-pivot node(s) before (e.g., corresponding to block504ofFIG.5), between (e.g., corresponding to block510ofFIG.5), or after the pivot(s) (e.g., corresponding to block520ofFIG.5) is above a threshold. As described above in conjunction withFIGS.2A-2B, the threshold identifies whether the non-pivot node(s) are to be converted into a single generic regex or a single specific regex. The threshold may be based on user and/or manufacturer preferences. If the example regex converter122determines that the number of non-pivot node(s) is above the threshold (block601: YES), the example regex converter122converts the non-pivot node(s) into a single generic regex based on the length of the non-pivot node(s) (block602). For example, if there are two non-pivots, “how” and “why,” at the first position, three non-pivots, “is,” “are,” and “do,” at the second position, and one non-pivot, “things,” at the third position, the regex converter122determines that the length of the strings at the first position is three, the length of the strings at the second position is between 2 and 3, and the length of the string in the third position is 6. Accordingly, the regex converter122converts the strings at each position to reflect the character lengths.
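A brief Python sketch of this length-based conversion may help before the numeric example that follows. It is an assumed illustration rather than the patented implementation: the name generic_regex and the one-character default cushion are hypothetical choices, and it uses Python's {m,n} repetition syntax in place of the patent's {m-n} notation.

    def generic_regex(non_pivot_positions, cushion=1):
        # non_pivot_positions: one set of observed tokens per position.
        # Each position becomes a search for any lowercase word whose length
        # falls within the observed minimum/maximum, widened by the cushion.
        parts = []
        for tokens in non_pivot_positions:
            lengths = [len(token) for token in tokens]
            low = max(1, min(lengths) - cushion)
            high = max(lengths) + cushion
            parts.append("[a-z]{%d,%d}" % (low, high))
        return r"\s+".join(parts)

    # generic_regex([{"how", "why"}, {"is", "are", "do"}]) yields
    # "[a-z]{2,4}\s+[a-z]{1,4}": any 2-4 letter word, then any 1-4 letter word.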
For example, the regex converter122may generate the single generic regex of ({{a-z}{3}}{{a-z}{2-3}}{{a-z}{6}}), where {{a-z}{3}} corresponds to the non-pivots in the first position, {{a-z}{2-3}} corresponds to the non-pivots in the second position, and {{a-z}{6}} corresponds to the non-pivot in the third position. In some examples, the regex converter122may add a cushion to expand the search to words that go beyond the observed character lengths. For example, the regex converter122may add a one character cushion to the minimum and maximum lengths of each position based on user and/or manufacturer preferences. Using the above example regex, the regex converter122adds a 1 character cushion to the minimum and maximum length of each position, corresponding to the regex of ({{a-z}{2-4}}{{a-z}{1-4}}{{a-z}{5-7}}). In this manner, when a processor executes a search based on the above regex, it will pull and/or flag messages that include a word with a 2-4 character length in a first position, a word with a 1-4 character length in a second position, and a word with a 5-7 character length in a third position. If the example regex converter122determines that the number of non-pivot node(s) is not above the threshold (block601: NO), the example regex converter122converts the non-pivot node(s) into a single specific regex based on the non-pivot node(s) (block604). For example, if there are two non-pivots (e.g., “A” and “B”) in a first position preceding a pivot (e.g., “C”) in a second subsequent position of the tagged token graph, the regex converter122converts the two non-pivots into the single, merged regex {\‘A’\‘B’}. In this manner, when a processor executes a search based on the above regex, it will pull and/or flag messages that include either “A” or “B” followed by “C.”

FIG.7is a block diagram of an example processor platform700structured to execute the instructions ofFIGS.3-6to implement the example regex engine100ofFIG.1. The processor platform700can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), or any other type of computing device. The processor platform700of the illustrated example includes a processor712. The processor712of the illustrated example is hardware. For example, the processor712can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example interface102, the example string converter104, the example token graph generator106, the example counter108, the example pivot engine110, the example comparator112, the example filter114, the example pivot graph generator116, the example regex signature generator118, the example pivot applicator120, the example regex converter122, and the example deployment interface124. The processor712of the illustrated example includes a local memory713(e.g., a cache). The processor712of the illustrated example is in communication with a main memory715including a volatile memory714and a non-volatile memory716via a bus718. The volatile memory714may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device.
The non-volatile memory716may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory715is controlled by a memory controller.

The processor platform700of the illustrated example also includes an interface circuit720. The interface circuit720may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.

In the illustrated example, one or more input devices722are connected to the interface circuit720. The input device(s)722permit(s) a user to enter data and/or commands into the processor712. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.

One or more output devices724are also connected to the interface circuit720of the illustrated example. The output devices724can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or a speaker. The interface circuit720of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor.

The interface circuit720of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network726. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.

The processor platform700of the illustrated example also includes one or more mass storage devices728for storing software and/or data. Examples of such mass storage devices728include floppy disk drives, hard disk drives, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. The machine executable instructions732ofFIGS.3-6may be stored in the one or more mass storage devices728, in the volatile memory714, in the non-volatile memory716, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.

Example methods, apparatus, systems, and articles of manufacture to generate regex and detect data similarity are disclosed herein. Further examples and combinations thereof include the following:

Example 1 includes an apparatus to generate an anti-spam signature, the apparatus comprising a token graph generator to generate a token graph including nodes based on a cluster of strings corresponding to a group of messages that are known to be spam, a pivot engine to identify pivot nodes in the cluster of strings, a pivot applicator to tag corresponding ones of the nodes of the token graph as the pivot nodes, and a regex converter to generate the anti-spam signature based on (a) the tagged nodes and (b) at least one of the nodes of the token graph that is not tagged as a pivot node.
Example 2 includes the apparatus of example 1, further including a deployment interface to transmit the anti-spam signature to a device via a network to enable the device to identify spam messages based on the anti-spam signature.
Example 3 includes the apparatus of example 1, wherein the nodes of the token graph correspond to substrings of the cluster of strings.
Example 4 includes the apparatus of example 3, wherein positions of nodes of the token graph respectively correspond to positions of respective substrings of the cluster of strings.
Example 5 includes the apparatus of example 1, wherein a first node of the nodes corresponds to first substrings at a first position of the cluster of strings and a second node of the nodes corresponds to second substrings at the first position of the cluster of strings.
Example 6 includes the apparatus of example 1, further including a string converter to convert strings of the cluster of strings into substrings, the token graph generator to generate the token graph based on the substrings, the nodes of the token graph corresponding to the substrings.
Example 7 includes the apparatus of example 3, wherein the substrings are first substrings, and further including a filter to filter out second substrings with lengths that do not satisfy a threshold.
Example 8 includes the apparatus of example 1, wherein the regex converter is to generate the anti-spam signature by converting a first tagged node into a first single regular expression, converting a second tagged node into a second single regular expression, and converting multiple untagged nodes between the first tagged node and the second tagged node into a third single regular expression, the anti-spam signature including the first single regular expression, the second single regular expression, and the third single regular expression.
Example 9 includes a non-transitory computer readable storage medium comprising instructions which, when executed, cause a machine to at least generate a token graph including nodes based on a cluster of strings corresponding to a group of messages that are known to be spam, identify pivot nodes in the cluster of strings, tag corresponding ones of the nodes of the token graph as pivot nodes, and generate an anti-spam signature based on (a) the tagged nodes and (b) at least one of the nodes of the token graph that is not tagged as a pivot node.
Example 10 includes the computer readable storage medium of example 9, wherein the instructions cause the machine to transmit the anti-spam signature to a device via a network to enable the device to identify spam messages based on the anti-spam signature.
Example 11 includes the computer readable storage medium of example 9, wherein the nodes of the token graph correspond to substrings of the cluster of strings.
Example 12 includes the computer readable storage medium of example 11, wherein positions of nodes of the token graph respectively correspond to positions of respective substrings of the cluster of strings.
Example 13 includes the computer readable storage medium of example 9, wherein a first node of the nodes corresponds to first substrings at a first position of the cluster of strings and a second node of the nodes corresponds to second substrings at the first position of the cluster of strings.
Example 14 includes the computer readable storage medium of example 9, wherein the instructions cause the machine to convert strings of the cluster of strings into substrings and generate the token graph based on the substrings, the nodes of the token graph corresponding to the substrings.
Example 15 includes the computer readable storage medium of example 14, wherein the substrings are first substrings, and wherein the instructions cause the machine to filter out second substrings with lengths that do not satisfy a threshold.
Example 16 includes the computer readable storage medium of example 9, wherein the instructions cause the machine to generate the anti-spam signature by converting a first tagged node into a first single regular expression, converting a second tagged node into a second single regular expression, and converting multiple untagged nodes between the first tagged node and the second tagged node into a third single regular expression, the anti-spam signature including the first single regular expression, the second single regular expression, and the third single regular expression.
Example 17 includes a method to generate an anti-spam signature, the method comprising generating, by executing an instruction with a processor, a token graph including nodes based on a cluster of strings corresponding to a group of messages that are known to be spam, identifying, by executing an instruction with the processor, pivot nodes in the cluster of strings, tagging, by executing an instruction with the processor, corresponding ones of the nodes of the token graph as the pivot nodes, and generating, by executing an instruction with the processor, the anti-spam signature based on (a) the tagged nodes and (b) at least one of the nodes of the token graph that is not tagged as a pivot node.
Example 18 includes the method of example 17, further including transmitting the anti-spam signature to a device via a network to enable the device to identify spam messages based on the anti-spam signature.
Example 19 includes the method of example 17, wherein the nodes of the token graph correspond to substrings of the cluster of strings.
Example 20 includes the method of example 19, wherein positions of nodes of the token graph respectively correspond to positions of respective substrings of the cluster of strings.
Example 21 includes the method of example 17, wherein a first node of the nodes corresponds to first substrings at a first position of the cluster of strings and a second node of the nodes corresponds to second substrings at the first position of the cluster of strings.
Example 22 includes the method of example 17, further including converting strings of the cluster of strings into substrings, the generating of the token graph based on the substrings, the nodes of the token graph corresponding to the substrings.
Example 23 includes the method of example 22, wherein the substrings are first substrings, and further including filtering out second substrings with lengths that do not satisfy a threshold.
Example 24 includes the method of example 17, wherein the generating of the anti-spam signature includes converting a first tagged node into a first single regular expression, converting a second tagged node into a second single regular expression, and converting multiple untagged nodes between the first tagged node and the second tagged node into a third single regular expression, the anti-spam signature including the first single regular expression, the second single regular expression, and the third single regular expression.

From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed herein to generate regex and detect data similarity. Disclosed methods, apparatus and articles of manufacture generate a token graph representative of different combinations of ordered sub-strings of the group of messages. Additionally, examples disclosed herein identify pivots or pivot nodes of the token graph from a group of messages (e.g., messages grouped based on similarity of subject, points of origin, destination, characteristics of the recipient, text, number of characters, links, tags, length of text, etc.). Examples disclosed herein generate an anti-spam signature based on the token graph and the identified pivots. For example, individual pivots of the token graph are converted into single regex expressions, and multiple non-pivots (e.g., substrings that do not occur more than the threshold number of times in the group of messages) between two pivots are represented by a single, merged regex expression. Accordingly, the anti-spam signature includes the regular expressions for pivots representative of the most common substrings of the group of messages and regular expressions for the random context surrounding the pivots (e.g., the non-pivots), representing 90% or more of the group of messages. Using examples disclosed herein, an accurate anti-spam signature can be automatically generated based on thousands of strings within seconds. Filtering out spam messages can reduce network traffic and eliminate potential vehicles for transferring malware that can damage, misuse, or even destroy a computing device. Disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer by creating signature(s) that help device(s) automatically identify and tag spam.

It is noted that this patent claims priority from Indian Patent Application Serial Number 201911019039, which was filed on May 13, 2019, and is hereby incorporated by reference in its entirety.

Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent. | 71,208 |
11861305 | DESCRIPTION OF EMBODIMENTS

(1) First Embodiment

An embodiment of the present invention is now explained in detail. The present invention, however, is not limited to the following embodiment.

The word processing system of this embodiment automatically extracts, from a plurality of sentences, a paraphrasing rule including paraphrased expressions (a first expression, and a second expression as a paraphrase of the first expression). Here, the first expression and the second expression are different expressions with the same meaning. The paraphrasing rule is data retaining expressions before and after the paraphrasing. For example, the word processing system acquires a pair of similar hierarchy data from a corpus, and extracts a difference between the acquired hierarchy data as the paraphrasing rule data. The hierarchy data is data (syntax tree data) indicating a syntax tree in which each sentence of the corpus has been divided into each hierarchy. According to the foregoing configuration, by using the syntax tree data divided into each hierarchy upon acquiring a pair of hierarchy data having a similar meaning, modifier parts and other clauses can be eliminated, and the similarity of the hierarchy data can be properly determined.

Moreover, for example, the word processing system extracts the paraphrasing rule data which satisfies the condition (paraphrasing condition) for selecting the paraphrasing rule data desired by the user. According to the foregoing configuration, it is possible to select the user's intended paraphrasing rule data even when a corpus is used. Moreover, for example, the word processing system extracts the paraphrasing rule data in which the frequency of appearance after the paraphrasing exceeds a threshold. According to the foregoing configuration, it is possible to avoid registering paraphrasing rule data in which the meanings are not similar.

Moreover, the word processing system generates a plurality of relation extraction rules from example sentences using the paraphrasing rule data. Here, the relation extraction rule is data indicating the rule expressing the grammatical structure for extracting the relation between phrases from the text (target sentence). According to the foregoing configuration, since the relation extraction rules can be easily generated, the user can more easily use the relation extraction system.

An embodiment of the present invention is now explained with reference to the appended drawings. The following descriptions and drawings are illustrations for explaining the present invention, and have been omitted or simplified as needed to clarify the explanation of the present invention. The present invention can also be worked in other various modes. Unless otherwise provided for herein, each constituent element may be singular or plural.

Note that, in the following explanation, the same number is assigned to the same elements in the drawings and the explanation thereof will be omitted as appropriate. Moreover, when the same types of elements are explained without being differentiated, the common part (part excluding the branch number) of the reference code including the branch number will be used, and when the same types of elements are explained by being differentiated, the reference code including the branch number may be used.
For example, when the expression data are explained without any particular differentiation, they will be indicated as "expression data410", and when the individual expression data are explained by being differentiated, they may be indicated as "expression data410-1", "expression data410-2" and so on.

FIG.1is a diagram showing an example of the configuration of the word processing system100. The word processing system100comprises a word processing device101, an input device102, and an output device103. The word processing device101is a computer such as a personal computer, a server device, or a tablet terminal. The word processing device101comprises a processor110, a primary storage device120, an auxiliary storage device130, and a communication device140.

The processor110is a device that performs arithmetic processing. The processor110is, for example, a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a GPU (Graphics Processing Unit), or an AI (Artificial Intelligence) chip.

The primary storage device120is a device which stores programs, data and the like. The primary storage device120is, for example, a ROM (Read Only Memory), a RAM (Random Access Memory) or the like. The ROM is an SRAM (Static Random Access Memory), an NVRAM (Non Volatile RAM), a mask ROM (Mask Read Only Memory), a PROM (Programmable ROM) or the like. The RAM is a DRAM (Dynamic Random Access Memory) or the like.

The auxiliary storage device130is a hard disk drive (Hard Disk Drive), a flash memory (Flash Memory), an SSD (Solid State Drive), an optical storage device or the like. The optical storage device is a CD (Compact Disc), a DVD (Digital Versatile Disc) or the like. The programs, data and the like stored in the auxiliary storage device130are read into the primary storage device120as needed. The auxiliary storage device130stores corpus information131, hierarchy information132, paraphrasing rule information133, relation extraction rule information134and the like.

The communication device140is a communication interface which communicates with other devices. The communication device140is, for example, an NIC (Network Interface Card), a wireless communication module, a USB (Universal Serial Bus) module, a serial communication module or the like. The communication device140can also function as an input device which receives information from other devices that are communicably connected. Moreover, the communication device140can also function as an output device which sends information to other devices that are communicably connected.

The functions (first generation unit121, second generation unit122, third generation unit123, fourth generation unit124, storage unit125, output unit126and the like) of the word processing device101, for example, may be realized by the processor110reading the programs stored in the auxiliary storage device130into the primary storage device120and executing the programs (software), or realized with hardware such as a dedicated circuit or the like, or realized by combining the software and the hardware. Moreover, the word processing device101may additionally comprise, in addition to the foregoing functions, for example, the functions of an operating system, a device driver, a file system, a DBMS (Data Base Management System) and the like.

The first generation unit121generates the hierarchy information132based on the corpus information131. The second generation unit122generates the paraphrasing rule information133based on the hierarchy information132.
More specifically, the second generation unit122comprises a search unit122A, an extraction unit122B, a selection unit122C, and a validation unit122D. The search unit122A searches for second hierarchy data, which is similar to first hierarchy data, from the hierarchy information132. The extraction unit122B extracts a difference between the first hierarchy data and the second hierarchy data as the paraphrasing rule data. The selection unit122C selects the paraphrasing rule data desired by the user from the paraphrasing rule data extracted by the extraction unit122B. The validation unit122D validates the paraphrasing rule data selected by the selection unit122C.

The third generation unit123generates the relation extraction rule information134based on the paraphrasing rule information133. More specifically, the third generation unit123comprises an input unit123A, a morphological parsing unit123B, a dependency parsing unit123C, a modification unit123D, and a conversion unit123E. The input unit123A inputs a target sentence (text) according to an operation of the input device102. The morphological parsing unit123B divides the target sentence input from the input unit123A into the minimum units of language that have their own meaning (morphemes). The dependency parsing unit123C parses the modification relation between the clauses based on the morphemes divided by the morphological parsing unit123B, and thereby generates syntax tree data. The modification unit123D modifies the syntax tree data generated by the morphological parsing unit123B and the dependency parsing unit123C into syntax tree data in which the target of extraction has been set (hereinafter sometimes referred to as "paraphrasing rules"). The conversion unit123E converts paraphrasing rule data into relation extraction rules data by using the syntax tree data modified by the modification unit123D.

The fourth generation unit124generates paraphrasing rule data based on the paraphrasing rule data stored in the paraphrasing rule information133. The storage unit125stores, in the auxiliary storage device130, the hierarchy information132generated by the first generation unit121, the paraphrasing rule information133generated by the second generation unit122, the relation extraction rule information134generated by the third generation unit123, and the paraphrasing rule information133generated by the fourth generation unit124. The output unit126outputs, to the output device103, information of all or a part of the paraphrasing rule information133, and information of all or a part of the relation extraction rule information134.

Note that one function of the word processing device101may be divided into a plurality of functions, and a plurality of functions may be consolidated into one function. Moreover, a part of the functions of the word processing device101may be provided as a separate function, or may be included in another function. Moreover, a part of the functions of the word processing device101may also be realized with another computer that is able to communicate with the word processing device101.

The input device102is a user interface which accepts information from the user. The input device102is, for example, a keyboard, a mouse, a card reader, a touch panel, a tablet terminal, a laptop computer or the like. The output device103is a user interface which outputs various types of information (display output, sound output, print output or the like).
The output device103is, for example, a display device, a sound output device (speaker), a printing device or the like which visualizes the various types of information. The display device is an LCD (Liquid Crystal Display), a graphic card or the like.

The word processing device101and the input device102are communicably connected via wired or wireless connection. The word processing device101and the input device102may be connected directly, or connected indirectly (for example, via a network). Moreover, the word processing device101and the input device102may be provided integrally, or provided separately. The word processing device101and the output device103are communicably connected via wired or wireless connection. The word processing device101and the output device103may be connected directly, or connected indirectly (for example, via a network). Moreover, the word processing device101and the output device103may be provided integrally, or provided separately.

FIG.2is a diagram showing an example (corpus table200) of the corpus information131. Note that one or more pieces of corpus information131are stored in the auxiliary storage device130by the user or the system administrator via the input device102before the operation of the word processing system100(for example, at the time of introduction thereof). The corpus table200stores extensive data of sentences (sentence data) used in texts, sounds, videos and the like. For example, the corpus table200stores various types of sentence data collected from WEB (World Wide Web) sites, theses, newspaper articles and the like.

FIG.3is a diagram showing an example (hierarchy table300) of the hierarchy information132. The data (syntax tree data310) indicating a syntax tree for each hierarchy generated from the sentence data stored in the corpus table200and the vector (syntax tree vector320) indicating that syntax tree are associated and stored in the hierarchy table300. The syntax tree data310retains, in an XML (Extensible Markup Language) format, a tree structure for each hierarchy. The syntax tree vector320retains data in a binary format.

Here, the tag "<node . . . >" in the syntax tree of the word processing system100indicates a node. For example, in a record330of the hierarchy table300, the node "have" in the first line indicates a parent node. Moreover, the node "interest rates will" in the second line, the node "stock prices on" in the third line, and the node "impact" in the fourth line indicate child nodes. Moreover, "<attribute>=<attribute value>" in the tag indicates the attribute and the attribute value that can be set in the node. For example, the attribute "lemma" indicates a lemma. Note that, in a lemma, the past form "had" is deemed the present form "have". Moreover, for example, the attribute "case" indicates a postpositional particle. Note that the format of retaining data is not limited to the foregoing formats, and may be other formats. Moreover, the method of generating the syntax tree data310and the syntax tree vector320will be explained later with reference toFIG.7A,FIG.7Band so on.

FIG.4is a diagram showing an example (paraphrasing rule table400) of the paraphrasing rule information133. A first expression (expression data410-1) and a second expression (expression data410-2), which is a paraphrased expression of the first expression, are associated and stored as the paraphrasing rule data in the paraphrasing rule table400. The expression data410retains a tree structure in an XML format.
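As an illustration of the XML form described above, syntax tree data resembling record330might be built and serialized as in the following minimal sketch. The exact tag and attribute schema is an assumption inferred from the description of record330, not the patent's actual format:

```python
import xml.etree.ElementTree as ET

def make_node(text, lemma=None, case=None):
    # A <node> element; "lemma" and "case" mirror the attributes described
    # above, with "case" holding the postpositional particle, if any.
    node = ET.Element("node")
    if lemma is not None:
        node.set("lemma", lemma)
    if case is not None:
        node.set("case", case)
    node.text = text
    return node

# A parent node "have" with three child nodes, loosely mirroring record 330.
root = make_node("have", lemma="have")
root.append(make_node("interest rates will", case="will"))
root.append(make_node("stock prices on", case="on"))
root.append(make_node("impact", lemma="impact"))
print(ET.tostring(root, encoding="unicode"))
```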
Note that the method of generating the expression data410will be explained later with reference toFIG.8and so on.

FIG.5is a diagram showing an example (relation extraction rule table500) of the relation extraction rule information134. The relation (relational data510) set by the user and the relation extraction rule (relation extraction rules data520) generated based on the paraphrasing rule table400are associated and stored in the relation extraction rule table500. The relation extraction rules data520retains a tree structure. Here, in "(condition of node1(condition of node2) (condition of node3) . . . )" of the relation extraction rules data520, the node1indicates a parent node, and the node2, the node3, . . . indicate the child nodes. "<attribute>=<attribute value>" in the relation extraction rules data520indicates the definition of the attribute and the attribute value that can be set in the node. "#a<number>" in the relation extraction rules data520indicates the target (phrase) to be extracted when the relation extraction rule is a match.

For example, the relation extraction rules data521indicates that the relation extraction rule matches the following syntax tree (a sketch of this matching behavior appears below):
- The lemma of the parent node is "make", and there is no postpositional particle
- The lemma of the first child node is arbitrary, and the postpositional particle is "will"
- The lemma of the second child node is arbitrary, and the postpositional particle is "on"
- The lemma of the third child node is "impact", and there is no postpositional particle

Note that the method of generating the relation extraction rules data520will be explained later with reference toFIG.10and so on.

FIG.6is a diagram showing an example of the processing performed by the word processing device101. The first generation unit121of the word processing device101performs hierarchy information generation processing621of generating the hierarchy information132from the corpus information131. More specifically, in the hierarchy information generation processing621, the first generation unit121generates hierarchy data in which each sentence data of the corpus information131is divided into each hierarchy of the syntax tree. The hierarchy information generation processing621will be explained later with reference toFIG.7AandFIG.7B.

The second generation unit122of the word processing device101performs paraphrasing rule information generation processing622of generating the paraphrasing rule information133from the hierarchy information132. The paraphrasing rule information generation processing622is configured by including search processing622A, extraction processing622B, selection processing622C, and validation processing622D. The search processing622A is, for example, processing to be performed by the search unit122A. The search processing622A will be explained later with reference to S801and S802ofFIG.8. The extraction processing622B is, for example, processing to be performed by the extraction unit122B. The extraction processing622B will be explained later with reference to S803and S804ofFIG.8. The selection processing622C is, for example, processing to be performed by the selection unit122C. The selection processing622C will be explained later with reference to S805ofFIG.8. The validation processing622D is, for example, processing to be performed by the validation unit122D. The validation processing622D will be explained later with reference to S806to S809ofFIG.8.
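To make the matching behavior of a rule such as the relation extraction rules data521concrete, the following minimal sketch (an assumption about data shapes, not the patent's matcher) checks a parent node and its immediate child nodes against lemma and postpositional-particle conditions, where None stands for an arbitrary lemma or for the absence of a particle:

```python
def node_matches(node, cond):
    # None for "lemma" means the lemma is arbitrary;
    # None for "case" means there is no postpositional particle.
    if cond["lemma"] is not None and node["lemma"] != cond["lemma"]:
        return False
    return node["case"] == cond["case"]

def rule_matches(tree, rule):
    parent_cond, child_conds = rule
    if not node_matches(tree, parent_cond):
        return False
    children = tree["children"]
    if len(children) != len(child_conds):
        return False
    return all(node_matches(c, cc) for c, cc in zip(children, child_conds))

# Conditions paraphrasing relation extraction rules data 521.
rule_521 = (
    {"lemma": "make", "case": None},
    [
        {"lemma": None, "case": "will"},
        {"lemma": None, "case": "on"},
        {"lemma": "impact", "case": None},
    ],
)

tree = {"lemma": "make", "case": None, "children": [
    {"lemma": "interest rate", "case": "will", "children": []},
    {"lemma": "stock price", "case": "on", "children": []},
    {"lemma": "impact", "case": None, "children": []},
]}
print(rule_matches(tree, rule_521))  # True
```

A fuller matcher would additionally record the phrases marked by "#a<number>" when a rule matches; that bookkeeping is omitted here for brevity.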
The third generation unit123of the word processing device101performs relation extraction rule information generation processing623of generating the relation extraction rule information134from the paraphrasing rule information133and the example sentence610including the relation. The relation extraction rule information generation processing623is configured by including input processing623A, morphological parsing processing623B, dependency parsing processing623C, modification processing623D, and conversion processing623E. The input processing623A is, for example, processing to be performed by the input unit123A. The input processing623A will be explained later with reference to S1001ofFIG.10. The morphological parsing processing623B is, for example, processing to be performed by the morphological parsing unit123B. The morphological parsing processing623B will be explained later with reference to S1002ofFIG.10. The dependency parsing processing623C is, for example, processing to be performed by the dependency parsing unit123C. The dependency parsing processing623C will be explained later with reference to S1002ofFIG.10. The modification processing623D is, for example, processing to be performed by the modification unit123D. The modification processing623D will be explained later with reference to S1003ofFIG.10. The conversion processing623E is, for example, processing to be performed by the conversion unit123E. The conversion processing623E will be explained later with reference to S1004to S1006ofFIG.10.

The fourth generation unit124of the word processing device101performs addition processing624of generating new paraphrasing rule data from the paraphrasing rule information133. The addition processing624is, for example, processing to be performed by the fourth generation unit124. The addition processing624will be explained later with reference toFIG.9.

FIG.7Ais a diagram showing an example of the hierarchy information generation processing621. The hierarchy information generation processing621is started, for example, at the timing instructed by the user via the input device102. In the hierarchy information generation processing621, the processing of S701to S705is performed for each sentence data included in the corpus information131. In the following explanation, the processing of S701to S705is explained with reference toFIG.7Bas needed.FIG.7Bis a diagram showing an image of the generation of the hierarchy data.

In S701, the word processing device101acquires one unprocessed sentence data from the corpus information131. For example, the word processing device101acquires the sentence data711"Interest rates will have an impact on stock prices in Japan" shown inFIG.7B.

In S702, the word processing device101performs syntax parsing (morphological parsing and dependency parsing) regarding the sentence data acquired in S701, and thereby generates syntax tree data. For example, the word processing device101generates the syntax tree data721from the sentence data711.

In S703, the word processing device101generates syntax tree data for each hierarchy, as sketched below. For example, the word processing device101generates the syntax tree data731and the syntax tree data732for each hierarchy from the syntax tree data721. As a result of the word processing device101dividing the syntax tree for each hierarchy as described above, it is possible to eliminate modifier parts and other unneeded clauses, and facilitate the acquisition of similar expression data.
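A rough sketch of the per-hierarchy division of S703(the data shapes here are assumptions; the patent does not prescribe an implementation) could keep, for every parent clause, only its immediate child clauses and then recurse into the children:

```python
def split_by_hierarchy(node):
    # node: {"clause": str, "children": [node, ...]}
    # Returns one flat parent/children record per hierarchy level,
    # discarding anything deeper than the immediate children.
    subtrees = []
    if node["children"]:
        subtrees.append({
            "parent": node["clause"],
            "children": [c["clause"] for c in node["children"]],
        })
        for child in node["children"]:
            subtrees.extend(split_by_hierarchy(child))
    return subtrees

tree = {"clause": "will have", "children": [
    {"clause": "interest rates", "children": []},
    {"clause": "on stock prices", "children": [
        {"clause": "in Japan", "children": []},
    ]},
    {"clause": "an impact", "children": []},
]}
for sub in split_by_hierarchy(tree):
    print(sub)
# {'parent': 'will have', 'children': ['interest rates', 'on stock prices', 'an impact']}
# {'parent': 'on stock prices', 'children': ['in Japan']}
```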
In S704, the word processing device101generates a vector of the syntax tree data for each hierarchy. For example, the word processing device101generates a vector741of the syntax tree data731and a vector742of the syntax tree data732. Since the clauses "will have", "interest rates", "on stock prices", and "an impact" are each included once in the syntax tree data731, the word processing device101sets the frequency "1" at the position corresponding to each clause in the vector741. Note that, by listing all clauses and fixing the positions of the clauses, the word processing device101can compare the vectors without having to retain information for identifying the clauses.

In S705, the word processing device101stores hierarchy data. For example, the word processing device101stores the syntax tree data731, and the vector741of the syntax tree data731, as the hierarchy data751in the hierarchy information132.

FIG.8is a diagram showing an example of the paraphrasing rule information generation processing622. The paraphrasing rule information generation processing622is performed subsequent to the hierarchy information generation processing621. In the paraphrasing rule information generation processing622, the processing of S801to S809is performed with regard to each hierarchy data included in the hierarchy information132.

In S801, the word processing device101acquires one unprocessed hierarchy data from the hierarchy information132. In the following explanation, the hierarchy data acquired by the word processing device101in S801is referred to as the "original data".

In S802, the word processing device101acquires, from the hierarchy information132, hierarchy data whose vector is similar to the vector of the original data (this data is hereinafter referred to as the "similar data"). For example, the word processing device101calculates the similarity between the original data and all hierarchy data, and uses the most similar hierarchy data as the similar data. The similarity may be a cosine similarity or a Euclidean distance, or a value calculated based on other calculation methods.

In S803, the word processing device101compares the syntax tree data of the original data and the similar data, and acquires the difference of both data (the original data and the similar data). More specifically, the word processing device101deletes the same nodes existing in both data. For example, in a case where the parent node of the original data is "A" and the child nodes of the original data are "B", "C" and "D", and the parent node of the similar data is "E" and the child nodes of the similar data are "B" and "C", the common nodes "B" and "C" are deleted from both data.

In S804, the word processing device101extracts the difference between both data as the paraphrasing rule data. For example, in the foregoing case, the expression data as the difference in the original data (parent node "A" and child node "D" of the original data) and the expression data as the difference in the similar data (parent node "E" of the similar data) are extracted as the paraphrasing rule data. A sketch of S802to S804follows.
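The similarity search of S802and the difference extraction of S803and S804might be sketched as follows (a toy illustration; the clause-vector layout and the string representation of nodes are assumptions, not the patent's implementation):

```python
import math

def cosine_similarity(u, v):
    # S802: compare the clause-frequency vectors of two hierarchy data.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def extract_rule(original_nodes, similar_nodes):
    # S803/S804: delete the nodes common to both data and keep the
    # remainder of each side as the paraphrasing rule data.
    common = set(original_nodes) & set(similar_nodes)
    first_expression = [n for n in original_nodes if n not in common]
    second_expression = [n for n in similar_nodes if n not in common]
    return first_expression, second_expression

# The example from S803: nodes "A", "B", "C", "D" versus "E", "B", "C"
# yield the rule (["A", "D"], ["E"]).
print(extract_rule(["A", "B", "C", "D"], ["E", "B", "C"]))
```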
In S805, the word processing device101determines whether the paraphrasing rule data extracted in S804satisfies the paraphrasing condition. Upon determining that the paraphrasing rule data satisfies the paraphrasing condition, the word processing device101proceeds to the processing of S806. Upon determining that the paraphrasing rule data does not satisfy the paraphrasing condition, the word processing device101proceeds to the processing of S801if there is unprocessed hierarchy data, and ends the paraphrasing rule information generation processing622if there is no unprocessed hierarchy data.

As the paraphrasing condition, the paraphrasing of a particular part of speech, such as the paraphrasing of a verb or the paraphrasing of an adjective, may be considered. For example, when the paraphrasing of a verb has been set by the user via the input device102, the word processing device101determines that the paraphrasing condition is satisfied when the parent nodes of both data are verbs and the parent nodes of both data are different. Moreover, for example, when the paraphrasing of an adjective has been set by the user via the input device102, the word processing device101determines that the paraphrasing condition is satisfied when the parent nodes of both data are adjectives and the parent nodes of both data are different.

In S806, the word processing device101acquires, from the hierarchy information132, all hierarchy data using one expression data of the paraphrasing rule data determined as satisfying the paraphrasing condition in S805. In the following explanation, described is a case where the word processing device101acquires the expression data of the original data as the one expression data, when the paraphrasing rule data determined as satisfying the paraphrasing condition in S805is the paraphrasing rule data in which the expression data of the original data is (parent node "will have" and child node "an impact") and the expression data of the similar data is (parent node "impacts").

For example, when the first hierarchy data (parent node "will have" and child nodes "strong yen" "an impact" "on stock prices") and the second hierarchy data (parent node "will have" and child nodes "an impact" "on one's life") are included in the hierarchy information132, the word processing device101acquires, from the hierarchy information132, the first hierarchy data and the second hierarchy data using the expression data of the original data. Note that, while the frequency of appearance is calculated in S807in relation to one expression data, the frequency of appearance will be of the same value whether the expression data of the original data or the expression data of the similar data is used, and either may therefore be used as the one expression data.

In S807, the word processing device101paraphrases the syntax tree data of the hierarchy data extracted in S806and calculates the frequency of appearance. For example, the word processing device101confirms whether the expression data (parent node "impacts" and child nodes "strong yen" "on stock prices") obtained by paraphrasing the syntax tree data (parent node "will have" and child nodes "strong yen" "an impact" "on stock prices") of the extracted first hierarchy data using the paraphrasing rule data (parent node "will have" and child node "an impact"-parent node "impacts") determined as satisfying the paraphrasing condition in S805is included in the hierarchy information132.
Moreover, for example, the word processing device101confirms whether the expression data (parent node "impacts" and child node "on one's life") obtained by paraphrasing the syntax tree data (parent node "will have" and child nodes "an impact" "on one's life") of the extracted second hierarchy data using the paraphrasing rule data (parent node "will have" and child node "an impact"-parent node "impacts") determined as satisfying the paraphrasing condition in S805is included in the hierarchy information132. Subsequently, the word processing device101counts the number of expression data, obtained by paraphrasing the syntax tree data of the first hierarchy data using the paraphrasing rule data, that are included in the hierarchy information132, and the number of expression data, obtained by paraphrasing the syntax tree data of the second hierarchy data using the paraphrasing rule data, that are included in the hierarchy information132, and uses the result as the frequency of appearance.

In S808, the word processing device101determines whether the frequency of appearance is equal to or greater than a threshold. Upon determining that the frequency of appearance is equal to or greater than the threshold, the word processing device101proceeds to the processing of S809. Upon determining that the frequency of appearance is less than the threshold, the word processing device101proceeds to the processing of S801if there is unprocessed hierarchy data, and ends the paraphrasing rule information generation processing622if there is no unprocessed hierarchy data. Note that the threshold is set by the user via the input device102before the processing of S808is performed.

In S809, the word processing device101stores, in the paraphrasing rule information133, the paraphrasing rule data determined as satisfying the paraphrasing condition in S805, proceeds to the processing of S801when there is unprocessed hierarchy data, and ends the paraphrasing rule information generation processing622when there is no unprocessed hierarchy data.

FIG.9is a diagram showing an example of the addition processing624. The addition processing624is performed at a suitable timing. A suitable timing may be the timing at which the paraphrasing rule information generation processing622has ended, a timing designated by the user, a periodic timing, a timing designated in advance, or any other timing. In the addition processing624, the processing of S902to S907is performed with regard to each paraphrasing rule data included in the paraphrasing rule information133.

In S901, the word processing device101acquires one unprocessed paraphrasing rule data from the paraphrasing rule information133. In the following explanation, described is a case where the paraphrasing rule data "A-B" is acquired in S901, and the paraphrasing rule data "A-C" and the paraphrasing rule data "B-D" have previously been stored in the paraphrasing rule information133.

When the paraphrasing rule data "A-B" is acquired in S901, since the expression data "A" and the expression data "B" have a similar meaning, and the expression data "A" and the expression data "C" have a similar meaning, there is a possibility that the expression data "B" and the expression data "C" are similar. In S902to S904, whether the paraphrasing rule data "B-C" has a similar meaning is validated, and, when it is determined as being similar as a result of the validation, the paraphrasing rule data "B-C" is stored in the paraphrasing rule information133. A sketch of this candidate generation follows.
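The candidate generation behind S902to S907can be illustrated with the following toy sketch (an assumption; rules are shown as plain expression pairs, and the function name is hypothetical). Given the new rule "A-B" and the stored rules "A-C" and "B-D", it proposes "B-C" and "A-D" for validation:

```python
def propose_candidates(new_rule, stored_rules):
    # new_rule: ("A", "B"); stored_rules: previously stored pairs.
    a, b = new_rule
    candidates = []
    for x, y in stored_rules:
        # A-B and A-C suggest B-C (S902/S903).
        if x == a and y != b:
            candidates.append((b, y))
        # A-B and B-D suggest A-D (S905/S906).
        if x == b and y != a:
            candidates.append((a, y))
    return candidates

print(propose_candidates(("A", "B"), [("A", "C"), ("B", "D")]))
# [('B', 'C'), ('A', 'D')] -- each candidate still goes through the
# validation of S806 to S809 before being stored.
```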
In S902, the word processing device101acquires, from the paraphrasing rule information133, the paraphrasing rule data including the expression data "A" of the paraphrasing rule data acquired in S901. The word processing device101performs the processing of S903and S904with regard to each of the acquired paraphrasing rule data.

In S903, the word processing device101generates the paraphrasing rule data "B-C" by combining the expression data "C", which is the expression on the side other than the expression data "A" of the paraphrasing rule data acquired in S902, with the expression data "B".

In S904, the word processing device101validates the paraphrasing rule data "B-C" generated in S903. More specifically, the word processing device101performs the processing of S806to S809.

Moreover, when the paraphrasing rule data "A-B" is acquired in S901, since the expression data "A" and the expression data "B" have a similar meaning, and the expression data "B" and the expression data "D" have a similar meaning, there is a possibility that the expression data "A" and the expression data "D" are similar. In S905to S907, whether the paraphrasing rule data "A-D" has a similar meaning is validated, and, when it is determined as being similar as a result of the validation, the paraphrasing rule data "A-D" is stored in the paraphrasing rule information133.

In S905, the word processing device101acquires, from the paraphrasing rule information133, the paraphrasing rule data including the expression data "B" of the paraphrasing rule data acquired in S901. The word processing device101performs the processing of S906and S907with regard to each of the acquired paraphrasing rule data.

In S906, the word processing device101generates the paraphrasing rule data "A-D" by combining the expression data "D", which is the expression on the side other than the expression data "B" of the paraphrasing rule data acquired in S905, with the expression data "A".

In S907, the word processing device101validates the paraphrasing rule data "A-D" generated in S906. More specifically, the word processing device101performs the processing of S806to S809.

FIG.10is a diagram showing an example of the relation extraction rule information generation processing623. The relation extraction rule information generation processing623is started, for example, at the timing designated by the user via the input device102.

In S1001, the word processing device101accepts an example sentence from the user. For example, the example sentence1010is input by the user via the input device102. In the example sentence1010, a mark (underline in this example) is affixed to the phrase that the user wishes to extract.

In S1002, the word processing device101performs syntax parsing, and generates syntax tree data of the example sentence accepted in S1001. For example, the word processing device101generates the syntax tree data1020of the example sentence1010.

In S1003, the word processing device101generates syntax tree data (paraphrasing rules data) in which the underlined part of the syntax tree data generated in S1002is set as a wild card. For example, the word processing device101sets the underlined part of the syntax tree data1020as a wild card "[noun]", and generates the paraphrasing rules data1030.

In S1004, the word processing device101acquires the paraphrasing rule data from the paraphrasing rule information133.
The word processing device101acquires, from the paraphrasing rule information133, the paraphrasing rule data1040that can be applied to the paraphrasing rules data1030generated from the example sentence1010. For example, the word processing device101acquires the paraphrasing rule data1040whose expression data includes the node "impacts", for which a wild card has not been set in the paraphrasing rules data1030.

In S1005, the word processing device101applies the paraphrasing rules data generated in S1003to the paraphrasing rule data acquired in S1004, and thereby generates relation extraction rules data. For example, the word processing device101applies the paraphrasing rules data1030to the paraphrasing rule data1040, and thereby generates the relation extraction rules data1050.

In S1006, the word processing device101stores, in the relation extraction rule information134, the relation extraction rules data generated in S1005.

FIG.11is a diagram showing an example (screen1100) of the screen for generating the paraphrasing rule data. The screen1100is displayed on the output device103according to the user's operation of the input device102. The screen1100is configured by including a selection part1110, a selection part1120, a setting part1130, a start button1140, and a cancel button1150. The selection part1110is an example of the user interface for the user to select, from a plurality of pieces of corpus information131, the corpus information131as the target for which the paraphrasing rule data is to be generated. The selection part1120is an example of the user interface for the user to select, from a plurality of paraphrasing conditions, the paraphrasing condition to be used for limiting the paraphrasing rule data that the user wishes to extract. The setting part1130is an example of the user interface for the user to set a threshold of the frequency of appearance. The start button1140is an example of the user interface for the user to instruct the start of the generation of the paraphrasing rule data. When the start button1140is pressed by the user, the hierarchy information generation processing621is started. The cancel button1150is an example of the user interface for the user to instruct the cancellation of the generation of the paraphrasing rule data.

FIG.12is a diagram showing an example (screen1200) of the screen for displaying the paraphrasing rule data. The screen1200is displayed on the output device103according to the user's operation of the input device102. The screen1200comprises a display part1210, a file output button1220, and an end button1230. The display part1210is an example of the user interface for the user to display the paraphrasing rule data stored in the paraphrasing rule information133. The file output button1220is an example of the user interface for the user to output the paraphrasing rule data stored in the paraphrasing rule information133as a file. The end button1230is an example of the user interface for the user to close the screen1200. According to the screen1200, the user can confirm all or a part of the paraphrasing rule data stored in the paraphrasing rule information133, or output the paraphrasing rule data as a file.

FIG.13is a diagram showing an example (screen1300) of the screen for generating the relation extraction rules data. The screen1300is displayed on the output device103according to the user's operation of the input device102.
The screen1300comprises input parts1310to1340, an input addition button1350, a start button1360, and a cancel button1370. The input part1310is an example of the user interface for the user to input the relation used for the classification of the relation extraction rules data. The input part1320is an example of the user interface for the user to input an example sentence. The input part1330is an example of the user interface for the user to input a first phrase extracted from the example sentence input to the input part1320. The input part1340is an example of the user interface for the user to input a second phrase extracted from the example sentence input to the input part1320. The input addition button1350is an example of the user interface for the user to add the column to which the extracted phrase is input. The start button1360is an example of the user interface for the user to instruct the start of the generation of the relation extraction rules data. When the start button1360is pressed by the user, the relation extraction rule information generation processing623is started. The cancel button1370is an example of the user interface for the user to instruct the cancellation of the generation of the relation extraction rules data.

FIG.14is a diagram showing an example (screen1400) of the screen for displaying the relation extraction rules data. The screen1400is displayed on the output device103according to the user's operation of the input device102. The screen1400comprises a display part1410, a file output button1420, and an end button1430. The display part1410is an example of the user interface for the user to display, for each relation input by the user, the relation extraction rules data stored in the relation extraction rule information134. The file output button1420is an example of the user interface for the user to output the relation extraction rules data stored in the relation extraction rule information134as a file. The end button1430is an example of the user interface for the user to close the screen1400. According to the screen1400, the user can confirm the relation extraction rules data stored in the relation extraction rule information134or output the relation extraction rules data as a file for each input relation.

FIG.15is a diagram showing an example of the method of using the paraphrasing rule information133and the relation extraction rule information134. The paraphrasing rule information133can be used for information search1510. For example, the word processing device101creates a search query (for example, "send an email") as a paraphrase of a search query (for example, "transmit an email"). According to this configuration, since information is searched using a plurality of search queries, the user can easily obtain one's intended information. Moreover, the paraphrasing rule information133can be used for relation extraction1520by generating the relation extraction rule information134as described above. In the relation extraction1520, the relation extraction system1521matches (compares) the syntax tree data of the target sentence1522and the relation extraction rules data, and extracts the matched phrase1523. Note that, as the relation extraction system1521, the sentence generation system described in, for example, Japanese Unexamined Patent Application Publication No. 2019-83040 may be adopted. Moreover, the relation extraction system1521may be included in the word processing system100, or connected communicably with the word processing device101.
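The information search use ofFIG.15can be sketched as follows (a simplified illustration; real paraphrasing rules operate on syntax trees rather than raw substrings, and the function shown here is hypothetical):

```python
def expand_query(query, rules):
    # Apply each stored paraphrasing rule in both directions to produce
    # paraphrased search queries alongside the original query.
    variants = {query}
    for first, second in rules:
        if first in query:
            variants.add(query.replace(first, second))
        if second in query:
            variants.add(query.replace(second, first))
    return variants

print(expand_query("transmit an email", [("transmit", "send")]))
# {'transmit an email', 'send an email'}
```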
Moreover, the method of using the paraphrasing rule information133is not limited to the method of use explained above. For example, the paraphrasing rule information133may also be used for simplifying the expression data. In the foregoing case, the word processing device101paraphrases abstruse expression data used in a medium such as a newspaper article or news (for example, "the minister will be removed from office") into plain expression data (for example, "the minister will be forced to quit") for children or foreigners. According to this configuration, since abstruse expression data will be paraphrased into simple expression data, the user will be able to more easily understand the subject matter.

According to this embodiment, it is possible to provide a highly convenient word processing system.

(2) Supplementary Notes

The foregoing embodiment includes, for example, the following subject matter.

While the foregoing embodiment explained a case of applying the present invention to a word processing system, the present invention is not limited thereto, and may be broadly applied to other various systems, devices, methods, and programs.

Moreover, while the foregoing embodiment explained a case where, in S704, the value of the vector is the frequency of a clause, the present invention is not limited thereto, and the value of the vector may also be the existence of a clause.

Moreover, while the foregoing embodiment explained a case where, in S802, the most similar data is used as the similar data, the present invention is not limited thereto, and data whose similarity is higher than a predetermined threshold may also be used as the similar data. In the foregoing case, the processing (addition processing624) of S901may be omitted. Note that the predetermined threshold is set by the user before the processing of S802is performed.

Moreover, while the foregoing embodiment explained a case where the addition processing624is performed to all paraphrasing rule data after the paraphrasing rule information generation processing622(validation processing622D of all hierarchy data) is completed, the present invention is not limited thereto, and the addition processing624(regarding the stored paraphrasing rule data) may also be performed subsequent to the validation processing622D (S809) with regard to each of the hierarchy data.

Moreover, in the foregoing embodiment, the configuration of each table is an example; one table may be divided into two or more tables, or all or a part of two or more tables may be one table.

Moreover, in the foregoing embodiment, while various types of data were explained using the expression "XX table" for the sake of convenience in explaining the present invention, the data structure is not limited thereto, and an expression such as "XX information" may also be used.

Moreover, in the foregoing embodiment, the illustrated and explained screens are examples, and may be of any design so long as the accepted information is the same.

Moreover, in the foregoing embodiment, the output of information is not limited to an indication on a display. The output of information may be a sound output from a speaker, an output to a file, printing on a paper medium by a printing device, projection on a screen or the like by a projector, or an output of any other mode.
Moreover, in the foregoing explanation, information of programs, tables, files and the like which realize the respective functions may be stored in a memory, a storage device such as a hard disk or an SSD (Solid State Drive), or in a recording medium such as an IC card, an SD card, or a DVD. The foregoing embodiment includes, for example, the following characteristic configuration. A word processing system (for example, word processing system100) comprises a first generation unit (for example, first generation unit121, word processing device101, circuit) which generates, based on sentence information (for example, corpus information131, a plurality of sentence data) including a plurality of sentences, hierarchy data (for example, hierarchy information132, hierarchy data) indicating a syntax tree for each hierarchy with regard to each sentence, a second generation unit (for example, second generation unit122, word processing device101, circuit) which acquires, from a plurality of hierarchy data generated by the first generation unit, hierarchy data of a second sentence similar to hierarchy data of a first sentence generated by the first generation unit (for example, see S802), extracts a difference between the hierarchy data of the first sentence and the hierarchy data of the second sentence (for example, see S803), and generates, as paraphrasing rule data (for example, paraphrasing rule data), first expression data as a difference in the first sentence and second expression data as a difference in the second sentence, and a storage unit (for example, storage unit125, word processing device101, circuit) which stores the paraphrasing rule data generated by the second generation unit in a storage device (for example, auxiliary storage device130, or external storage device capable of communicating with the word processing system100). According to the foregoing configuration, since the difference between the hierarchy data of the first sentence and the hierarchy data of the second sentence, that is, the first expression data of the first sentence and the second expression data of the second sentence (the latter being a paraphrased expression of the former), is automatically generated as the paraphrasing rule data, the user can easily obtain paraphrased expressions. The foregoing word processing system additionally comprises a third generation unit (for example, third generation unit123, word processing device101, circuit) which generates syntax tree data of an example sentence (for example, example sentence610) in which a mark is affixed to a phrase desired by a user (for example, see S1002), modifies the generated syntax tree data into syntax tree data in which the phrase in the generated syntax tree data is set as a symbol (for example, a wild card) indicating a phrase which matches all phrases (for example, see S1003), acquires paraphrasing rule data including the modified syntax tree data as expression data from the paraphrasing rule data stored in the storage device (for example, see S1004), and generates extraction rules data (for example, relation extraction rule information134, relation extraction rules data) in which the modified syntax tree data has been applied to the acquired paraphrasing rule data. According to the foregoing configuration, for example, the extraction rules data for extracting the phrase desired by the user from arbitrary sentences can be easily generated from the paraphrasing rule data.
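The core of the second generation unit, extracting the difference between two similar sentences as a rule pair, can be sketched briefly. The flattening of hierarchy data to clause sequences and the function name are simplifying assumptions made for illustration; the S802/S803 processing in the embodiment operates on syntax tree data per hierarchy.

from difflib import SequenceMatcher

def extract_rule(first, second):
    """Return (first expression, second expression) pairs taken from the
    spans where the clause sequences of two similar sentences differ."""
    rules = []
    for op, i1, i2, j1, j2 in SequenceMatcher(a=first, b=second).get_opcodes():
        if op == "replace":  # a differing span yields a candidate rule pair
            rules.append((" ".join(first[i1:i2]), " ".join(second[j1:j2])))
    return rules

first = ["I", "transmit", "an email"]
second = ["I", "send", "an email"]
print(extract_rule(first, second))  # [('transmit', 'send')]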
The foregoing second generation unit determines whether the first expression data and the second expression data satisfy a condition (for example, paraphrasing condition) for selecting paraphrasing rule data desired by a user (for example, see S805), and generates the first expression data and the second expression data as the paraphrasing rule data upon determining that the first expression data and the second expression data satisfy the condition. With the foregoing configuration, for example, even when a corpus accumulated with a huge quantity of sentences included in newspapers, magazines, books and the like is used as the sentence information, there is no need to manually select the sentences that match the condition, and the paraphrasing rule data desired by the user can be appropriately generated. The foregoing second generation unit acquires hierarchy data using the first expression data from the plurality of hierarchy data (for example, see S806), paraphrases syntax tree data of the acquired hierarchy data with the second expression data, counts a number of the paraphrased syntax tree data included in the plurality of hierarchy data (for example, see S807), and generates the first expression data and the second expression data as the paraphrasing rule data when the counted number exceeds a threshold. According to the foregoing configuration, for example, it is possible to avoid a situation where paraphrasing rule data, in which the meanings of the two expression data are not similar, are registered. The foregoing word processing system additionally comprises a fourth generation unit (for example, fourth generation unit124, word processing device101, circuit) which acquires paraphrasing rule data including the first expression data from the paraphrasing rule data stored in the storage device (for example, see S902), uses third expression data on a side which differs from the first expression data in the acquired paraphrasing rule data, and the second expression data, as the paraphrasing rule data (for example, see S903), acquires hierarchy data using the second expression data from the plurality of hierarchy data (for example, see S806), paraphrases syntax tree data of the acquired hierarchy data with the third expression data, counts a number of the paraphrased syntax tree data included in the plurality of hierarchy data (for example, see S807), and generates the second expression data and the third expression data as the paraphrasing rule data when the counted number exceeds a threshold. With the foregoing configuration, for example, the paraphrasing rule data can be efficiently generated. The foregoing word processing system additionally comprises an output unit (for example, output unit126, word processing device101, circuit) which outputs all or a part of the paraphrasing rule data stored in the storage device. With the foregoing configuration, since the paraphrasing rule data is output, for example, the user can easily obtain the paraphrasing rule data. Moreover, the foregoing configurations may be changed, rearranged, combined or omitted as needed to the extent that such change, rearrangement, combination or omission does not exceed the subject matter of the present invention. REFERENCE SIGNS LIST 100. . . word processing system,101. . . word processing device,121. . . first generation unit,122. . . second generation unit.
11861306 | DETAILED DESCRIPTION In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without one or more of these specific details. Operational Overview A digital assistant request is a natural language input provided to a digital assistant so as to instigate an interaction with responders who can fulfill the request. A digital assistant interface accepts natural language inputs and in turn provides responses in a human consumable format. To provide precise and complete responses to the natural language request, a unique-name based framework is discussed in detail below. The unique-name based framework may use unique identifiers, e.g., domain names, for request types, requesting entities, responders, and target entities. Further, the framework enables interpreting natural language requests according to ontologies associated with different responders. An ontology operates to define the syntax for interacting with responders, defining the keywords that identify request types and the corresponding allowable values to be used for request parameters. Further, the ontology may operate to identify allowable values that can be returned in responses to requests. The unique-name based framework thus enables the digital assistant to interact with any responder that supports an ontology to generate precise and complete responses to natural language based requests. System Overview FIG.1illustrates a system100configured to implement one or more aspects of the invention. As shown, the system100includes, without limitation, a digital assistant110, a digital assistant back-end service130(also referred to as the “back-end service130”), and responders140(0)-140(N) (also referred to as “the responders140,” collectively, and “the responder140,” individually). In alternate embodiments, the system100may include any number of digital assistants110, back-end services130, and responders140, in any combination. A digital assistant110is an interactive assistant that exposes a conversational interface allowing users of the digital assistant110to perform natural language interactions. In one embodiment, a digital assistant may be a voice assistant that provides an audio user interface. In some embodiments, a digital assistant110may be implemented as a standalone device that is relatively simple in terms of functional capabilities with limited input/output components, memory, and processing capabilities. In some embodiments, the digital assistant110includes or is coupled to at least one microphone and includes or is coupled to at least one speaker in order to facilitate audio interactions with one or more users. In some embodiments, the digital assistant110is implemented without haptic input components (e.g., keyboard, keypad, touch screen, joystick, control buttons, etc.) or a display. In other embodiments, a limited set of one or more haptic input components may be included in the digital assistant110(e.g., a dedicated button to initiate a configuration, power on/off, etc.). The primary mode of user interaction with the digital assistant110as a voice assistant may be through voice input and audible output. In other embodiments, the digital assistant110may be a chatbot that provides a text-based interface that allows a user to type input to the digital assistant110and the chatbot provides responses to the input in a textual display.
Embodiments of digital assistants110as voice assistants are exemplary of processing capabilities that may be provided by other forms of digital assistants. Those with ordinary skill in the art are capable of applying the principles of the voice assistant embodiments to digital assistants that provide other user interface mechanisms, such as the text-based interface provided by a chatbot or a visual interface provided by a voice assistant that includes a touch-sensitive screen that serves as an input and output device. In operation, the digital assistant110implemented as a voice assistant, via an audio input device, e.g., a microphone, captures audio from an environment in which the digital assistant110is placed. Such audio may include spoken words, music, ambient noise, etc. The digital assistant110and/or the other components in the system100(such as the back-end service130) perform speech recognition on the captured audio to identify spoken words and interpret those spoken words into a user request. The requests may be for any type of operation. For example, the request may be for performing a specific action in the environment (e.g., “Turn living room lights off,” and “Shuffle nursery rhymes”) or may be targeted at an Internet-enabled service (e.g., “What is the weather in New York?,” and “What were the major league baseball scores from last night?”). Other examples of such requests include consuming entertainment (e.g., gaming, finding and playing music, movies or other content, etc.), personal management (e.g., calendaring, note taking, etc.), online shopping, financial transactions, telephone communication, and so forth. In an embodiment where the digital assistant110is a voice assistant, the digital assistant110, via an audio output device, e.g., a speaker, may output a response confirming that the user request has been detected. In another embodiment where a digital assistant is a chatbot, the digital assistant may output a text response confirming that the user request has been detected. In addition, the digital assistant110may transmit user requests to the back-end service130to further process the request. The digital assistant110communicates with the back-end service130over a network (not shown) using wired technologies (e.g., wires, USB, fiber optic cable, etc.), wireless technologies (e.g., RF, cellular, satellite, Bluetooth, etc.), the Internet, or via other connection technologies. In an embodiment where the digital assistant110is a voice assistant, the back-end service130processes audio captured by the digital assistant110to perform speech recognition operations in order to detect user requests and generate one or more responses or actions associated with the request. To generate the responses or actions, the back-end service130may interact with one or more responders140to process the user request. A responder140is a service, e.g., a networked service, that provides information or processing resources to its upstream clients, such as the user of the digital assistant110. In some embodiments, a responder140is a service that is accessed over a local network or the Internet and that performs request-specific processing needed to fulfill a request. In some embodiments, target entities are specific entities that a responder140considers when fulfilling a request. A responder140may be a news platform, a gaming platform, or a records management service.
Depending on the type of user request, the back-end service130may interact with several responders140to fulfill the user request and generate the associated responses. The back-end service130provides the responses to the digital assistant110, which, in turn, outputs the responses via the speaker and/or performs the actions. To perform the above functions, the back-end service130includes a data collection module132, a user data store134, a responder data store136, and a natural language (NL) request processing module138. The data collection module132collects information associated with users of the digital assistant110and stores the user information in the user data store134. User information includes but is not limited to a user's identity, history of requests made by the user, history of responses provided to the user and any related subsequent requests, and configuration information provided by the user or automatically learned by the data collection module132over time. The data collection module132also collects information associated with the responders140and stores the responder information in the responder data store136. Responder information includes but is not limited to the types of and details regarding the services and information that a responder140provides and mechanisms for connecting to and communicating with the responder140. Further details regarding the data collection module132are provided in conjunction withFIG.2and the description thereof. In an embodiment where the digital assistant110is a voice assistant, the NL request processing module138receives audio captured by the digital assistant110and performs speech recognition operations on the audio to identify a request spoken by the user. In an embodiment where the digital assistant110is a chatbot, the NL request processing module138receives textual input and may perform processing functions to normalize the request, such as correcting misspellings. The NL request processing module138determines one or more parameters associated with the request such as the request type, the identity of the user who spoke the request, services and/or responders identified in the request, and a type of action to be performed. The NL request processing module138determines the parameters based on user information stored in the user data store134and responder information stored in the responder data store136. In particular, the NL request processing module138identifies, based on the determined parameters, one or more responders140to which the user request is to be targeted (referred to as the “relevant responders140”) and the data to be provided to the relevant responders140in order to fulfill the request. The NL request processing module138generates a responder request based on the data to be provided to the relevant responders140. The responder request may be formatted as a uniform resource identifier (URI) or in some other format applicable to the communication protocol appropriate for interacting with a responder. Each relevant responder140may provide to the NL request processing module138a response to the responder request. The NL request processing module138may transmit the responses to the digital assistant110or may process the responses to generate a formatted response. The formatted response may be an aggregated response including responses from multiple targeted responders or may be organized in a manner that enables a user of the digital assistant to navigate the formatted response.
For example, in an embodiment where the digital assistant110is a voice assistant, audio output may be created that can be interactively spoken to a user one item at a time with a pause between items to provide an opportunity for user input. Further details regarding the NL request processing module138are provided in conjunction withFIG.4and the description thereof. In one embodiment, the digital assistant110locally performs some or all of the functions performed by the back-end service130. In such an embodiment, the digital assistant110may collect and locally store user and responder configuration information. Further, the digital assistant110may perform speech recognition operations on audio to identify requests spoken by users and process those requests in conjunction with the responders140without the back-end service130operating as an intermediary. The network150includes a plurality of network communications systems, such as routers and switches, configured to facilitate data communication between the back-end service130and the responders140. Persons skilled in the art will recognize that many technically feasible techniques exist for building the communications network150, including technologies practiced in deploying the well-known Internet communications network. Unique Name Based Framework for Interpreting and Processing Natural Language Requests To interpret and process a natural language request with precision, the digital assistant110in conjunction with the back-end service130may need to extract from the request or otherwise infer one or more parameters associated with the request. The parameters may include, but are not limited to, (i) the identity of the user who provided the request, (ii) the identity of the device owner, (iii) a type of the request, e.g., information retrieval or performing an action external or internal to an environment in which the digital assistant110is placed, (iv) the identities of one or more responders140to be interacted with to fulfill the request, (v) the identities of one or more target entities and aspects of those target entities to which the request applies, and (vi) how to programmatically handle the request. For instance, “Ted Somebody” may issue the following request to a digital assistant110, “ask Funky Stocks about Lunar Vacations Inc. stock price.” In this request, the type of request is “ask,” the identity of the device owner is “Ted Somebody,” and the responder is “Funky Stocks.” Further, the target entity is “Lunar Vacations Inc.,” and the specific aspect of the target entity is “stock price.” How to handle the request may be identified as “find latest stock price” based on a default processing function defined for “stock price” and the user not specifying a processing option. The parameters associated with a natural language request may be included in the request or may be semantically or structurally inferred by the back-end service130. The unique name based framework implemented by the back-end service130allows for these parameters to be semantically determined based on specific identifiers, i.e., domain names or some other unique identifier, that may be included in or may be mapped to keywords included in the natural language request.
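As an illustration of this parameterization, the toy parser below extracts the parameters from the “Funky Stocks” request. The regular expression, the parameter names, and the default handler value are assumptions made for this sketch; they are not the patent's prescribed request grammar.

import re

# Toy grammar: "ask <responder> about <target entity> <aspect>".
REQUEST_PATTERN = re.compile(r"ask (?P<responder>.+?) about (?P<entity>.+) (?P<aspect>stock price)")

def parameterize(user, utterance):
    """Extract request parameters from a natural language request."""
    m = REQUEST_PATTERN.match(utterance)
    if m is None:
        return None
    return {
        "user": user,
        "request_type": "ask",
        "responder": m.group("responder"),
        "target_entity": m.group("entity"),
        "aspect": m.group("aspect"),
        "handler": "find latest stock price",  # default processing for "stock price"
    }

print(parameterize("Ted Somebody", "ask Funky Stocks about Lunar Vacations Inc. stock price"))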
A unique name based framework is exemplary of how the use of unique identifiers operates as an interoperable mechanism for interpreting natural language requests, where the responders plug into the framework by providing processing capabilities defined by ontologies that map keywords and/or request parameters to unique names. In one embodiment, the unique name-based framework could be based on domain names and sub-domain names as the unique identifiers. In this case, the unique name based framework is referred to as a domain name based framework. Many of the embodiments of a unique name based framework found herein are based on a domain name based framework. Those with ordinary skill in the art are capable of applying the principles of the domain name based frameworks to other forms of unique name frameworks based on other types of unique names, such as Handle identifiers as defined in IETF RFC 3650, IETF RFC 3651 and IETF RFC 3652. In an embodiment of using unique identifiers as entity identifiers, domain names are unique identifiers that provide advantages versus other forms of unique identifiers that could be used to provide precision in identifying requesting entities (e.g., users of the digital assistant110), responders140, and target entities. First, domain names support explicit correspondence between unique names and unique entities such as requesting entities, responders140, and target entities. For example, the domain name “dunkmeister-berlin.tld” could identify a donut shop in Berlin, Germany with the name “dunkmeister” and the domain name “bigleague-dunkmeister.tld” could identify a basketball player with the nickname “dunkmeister”. Second, the Internet and the domain name system provide a global registration and resolution infrastructure to support using domain names as entity identifiers. Third, domain names are an inexpensive way for requesting entities, responders140, and target entities to have globally unique identifiers. For example, domain names are already in place for many potential responders140that are accessed over the Internet, so little or no additional cost or resources are expended in setting up domain names as unique identifiers. Fourth, a domain name can identify an Internet accessible server providing services that can be interacted with, for example as a responder or to provide services related to a specific entity. In some embodiments, entity identifiers may be handle identifiers that are supported by a Handle Registry system as defined in IETF RFC 3650, IETF RFC 3651, and IETF RFC 3652. In an example of using Handle identifiers, the handle identifier “999.9999/restaurants/Germany/Berlin/dunkmeister” could identify a donut shop in Berlin, Germany with the name “dunkmeister” and the handle identifier “999.9999/basketball/players/bigleague-dunkmeister” could identify a basketball player with the nickname “bigleague-dunkmeister”. In some embodiments, unique entity identifiers may be derived from identifiers found in a parent name space. For example, a parent identifier name space may be defined as the set of all domain names, and unique keywords within a domain name then provide a new name space based on the domain name space. In another example, a namespace might be derived based on user names within a social media site to derive globally unique identifiers that have an identifier for the social media site as the parent name space.
For example, if a social media site is referred to with the unique identifier “socialbiz” within the universe of social media sites and has user “someperson,” then the unique identifier “socialbiz:someperson” may be a unique global identifier for the user. A domain name based framework or other unique name based framework has two main components: Data Collection implemented by the data collection module132and Request Processing implemented by the NL request processing module138. FIG.2illustrates the data collection module132included in the back-end service130ofFIG.1, according to an embodiment. As discussed above, the data collection module132collects information associated with users of the digital assistant110and stores the user information in the user data store134. The data collection module132includes a user configuration engine202and may include an ontology engine204. The user configuration engine202is a software module that captures configuration information associated with one or more users of the digital assistant110. The configuration information associated with one or more users is stored in the user data store134. The user configuration engine202may automatically learn the configuration information associated with a user based on requests received from and/or responses provided to the user. The user configuration engine202may also manually receive user configurations from users of the digital assistant110via a configuration interface (not shown). User configuration information may include user identity and demographic information, authentication credentials associated with different responders140and/or target entities, contacts information, and configuration information that influences how requests are interpreted and how requests are to be processed. An example of user configuration information is an alias definition that maps an alias to a domain name or other unique identifier that identifies a responder140or a target entity. In one embodiment, an alias has a natural language format different from the domain name format of the corresponding domain name. Aliases for domain names allow a user of the digital assistant110to assign familiar identifiers to refer to a given entity, a group of entities, a given action, or a given request. In one embodiment, the user configuration engine202may automatically determine an alias based on a user's address book information. For example, a name of a person may be an alias for a domain name associated with the person when the domain name is provided in an address book belonging to the user of the digital assistant110. For instance, the alias “Biff” may be a nickname found in an address book entry that identifies domain name “biffblabbertooth.tld” as a unique identifier associated with “Biff”. In other examples, terms that classify the entities may be used as aliases. For example, configuration information for user “John Doe” may indicate that John Doe's sister is uniquely identified by the domain name “janedeer.tld” and identify her as his sister, thus creating the alias “sister” for the domain name “janedeer.tld”. In another example, John Doe may have indicated that certain people found in his address list are cousins, this then being the basis for defining the alias “cousins” to include the entire group of people indicated as cousins. In another embodiment, the user configuration engine202may automatically determine an alias based on well-known or trademarked terms.
For instance, the company “Gadgets-N-Widgets Inc.” with unique identifier “gadgetsNwidgets.tld” may have trademarked the term “Gadg Widz”. The user configuration engine202may interact with a service that retrieves trademarked terms associated with a domain name and discover the trademarked term “Gadg Widz” and then allow it to be used as an alias for “gadgetsNwidgets.tld”. In another embodiment, the user configuration engine202may automatically determine an alias based on analyzing past requests and responses to those requests handled by the digital assistant110. The user configuration engine202may determine that a user of the digital assistant110always or frequently refers to a given entity using a certain domain name and may configure a more user-friendly name of the entity as an alias for the domain name or an alias of a role associated with the entity. For example, a portion of a domain name may be used as an alias. For instance, the alias “NOAH” may be generated if a user frequently requests weather updates from “noaa.gov”. In some embodiments, a responder140may provide suggestions for aliases to be associated with specific entities, such as suggesting the alias “Capital” for “dc.gov” as a responder for requests about government activities. Another example of a request interpretation configuration is the preferred responder configuration. A preferred responder configuration specifies a preferred responder140for fulfilling requests having a given request type, target entity, or an aspect of a target entity. For example, a given responder140having a weather information platform may be the preferred responder for weather-related requests. The preferred responder configurations may be automatically determined or may be manually provided by the user of the digital assistant110. In some circumstances, a user may “follow” a specific responder140or a specific entity. For example, a user may follow a domain name that serves as a unique identifier for a responder140or entity. In the case of following a responder140, “follow” serves as a mechanism for setting a preferred responder configuration. With the follow mechanism, the user configuration engine202may receive configurations that are associated with (i) request types that are frequently issued and preferred responders140for those request types, (ii) actions that are frequently performed and the preferred responders140for those actions, (iii) indication that aliases are to be created for a responder or entity, (iv) default target entities for requests and/or responders, (v) the default aspects of followed target entities, (vi) default parameters to apply based on the requesting entity, the request type, the responder, or the target entity, and (vii) indication that the digital assistant110should proactively perform processing to pre-generate responses for anticipated future requests related to a followed responder or a followed entity. When following a specific entity, the user may specify the entity based on any of a number of identifiers or aliases associated with the entity that can be mapped back to a unique identifier for the entity. For instance, the name of a person in the user's address book could be used as an alias. When an entity is followed, a service identified by the unique domain name for the entity may have been configured to identify a responder140or group of responders140to interact with relative to specific request types that reference the entity.
In this case, when the user later issues one of the specific types of requests associated with a followed entity, the service identified by the unique domain name for the entity identifies the relevant responders140for the entity so that the request can be routed to one or more of the configured responders140. A user may request that the digital assistant110follow an entity using a command that specifies the unique identifier, such as a domain name, associated with the entity, such as “follow johndoe.tld”. The user may also use an alias that the digital assistant110and/or the back-end service130determine to be associated with the domain name based on user configurations stored in the user data store134. For instance, the user configuration engine202may process a user request to “follow John Doe” by searching the user data store134for an address book record for “John Doe”, and then after finding the record, determine the domain name associated with “John Doe” based on a domain name specified in the record. The user configuration engine202may then add the domain name to the user data store134and create “John Doe” as an alias for the domain name specified in the record. In the case of following an entity, the back-end service130may optimize requests related to a followed entity in a number of ways. The back-end service130may disambiguate aliases based on giving preference to aliases associated with followed entities. For instance, a user may have several acquaintances in their address book with the first name of “John,” but may only be following one of the acquaintances relative to a particular request type. For example, the user may be following only one person with the first name “John” relative to “social” requests. This follow configuration is stored in the user data store134, which links the unique identifying domain name for the followed “John” to social media pages. Based on these configurations, when the back-end service130receives a request for a social media update for “John,” the back-end service130determines that the user is referring to the followed acquaintance “John.” The back-end service130may use followed entities as defaults in situations where an entity is not specified in a request made to the digital assistant and a determination needs to be made as to which entity is associated with a request. For example, a user may specify that they want to follow business news about two entities, “somecorp.tld” and “anothercorp.tld”. The back-end service130may then proactively track ongoing activities related to the followed entities or may provide other optimizations and special processing functions related to the followed entities. The activities and other operations may be determined based on links associated with the domain name. Following might also be used to indicate to the back-end service130that generalized requests might be converted to multiple specific requests to responders140that provide capabilities related to an entity, with the responses from the responders aggregated and then provided to the requestor. In an example of following a responder140, a user action may configure the back-end service130, recording in the user data store134that the user is following news updates from responders “funkystocks.tld” and “hotmoneytipsnow.tld” as responders for stock tips requests.
When the user subsequently requests stock tips, the back-end service130, based on the configurations stored in the user data store134, determines that the request is to be fulfilled using the responders “funkystocks.tld” and “hotmoneytipsnow.tld”. In an example of following an entity, a user may follow the social media postings of “John Doe” and “Jane Deer”. When the user subsequently requests social updates without specifying any entity for which updates are requested, the back-end service130, based on the configurations stored in the user data store134, determines that the request is to be fulfilled by providing updates for “John Doe” and “Jane Deer” based on the user having previously followed those entities. In an example of following being used to generate aggregated responses for generalized requests about an entity, a user may follow “John Doe” on social media sites, “John Doe's” bicycling blog, and news releases about “John Doe” on the website of the company at which “John Doe” is employed. When the user subsequently requests updates about “John Doe”, the back-end service130, based on the configurations stored in the user data store134, determines that the user is following “John Doe” for multiple request types and multiple responders140including the social media sites, the blog website, and the company website. The back-end service130requests updates about “John Doe” from all of the identified responders140, receives responses back from the responders140, and generates an aggregated response that is presented to the user. A digital assistant110may provide other mechanisms for removing ambiguity relative to entities that a request applies to or for populating generated requests with parameter values that cannot be explicitly derived from the natural language request a user submitted. In one embodiment, the digital assistant110may identify characteristics of a service associated with a unique identifier. For example, a user may follow both “weatherinfo.tld” and “marineweather.tld” as responders for weather information. A digital assistant110may identify that “weatherinfo.tld” provides robust weather responses that do not include some information specific to marine weather forecasts, such as wave heights. The digital assistant110may also identify that “marineweather.tld” provides less robust weather forecasts, but ones that do include wave heights. The digital assistant110may then disambiguate which of the two followed weather responders to use based on whether or not the user wants a forecast that includes wave heights. Also, in the case where the digital assistant110is creating a request for use with responder “marineweather.tld”, the digital assistant110may include a parameter indicating that wave heights should be included in the response. When generating a request for use with responder “weatherinfo.tld”, the digital assistant110may determine that using a parameter to request wave heights is not applicable to “weatherinfo.tld” and thus not include a parameter requesting wave heights in the generated request. To aid in determining responders140associated with an entity, a “hub” service identified by a domain name or some other unique identifier associated with the entity may be used to identify the associated responders140. For instance, user “John Doe” may use domain name “johndoe.tld” as the name of a hub service that identifies a set of social media responders140that respond to social media requests about John Doe.
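The follow-based aggregation described above, in which a generalized request fans out to every responder followed for an entity and the responses are merged, can be sketched minimally as follows. The fetch stand-in, the joining format, and the responder names other than the social media site are assumptions made for illustration only.

def fetch_update(responder, entity):
    """Stand-in for sending a responder request and returning its response."""
    return f"update about {entity} from {responder}"

def aggregate_updates(entity, followed_responders):
    """Fan a generalized request out to each followed responder and merge
    the individual responses into one aggregated response."""
    return " | ".join(fetch_update(r, entity) for r in followed_responders)

followed = ["somesocialsite.tld", "bicycling-blog.tld", "employer-news.tld"]  # hypothetical
print(aggregate_updates("John Doe", followed))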
A hub configuration function is used to define the responders140to use for a particular request type. The hub configuration function may also allow a user to identify the responders140for a given entity. Automated mechanisms may also be used to discover and identify the responders140. For example, an automated tool could check various responders it is aware of to see if one or more of the responders are able to respond to requests about the user and then configure the hub to interact with responders that are able to respond. In some embodiments, a responder may provide a service that allows an automated tool or a digital assistant to retrieve a manifest that lists the various request types that the responder can respond to. The manifest may indicate optional elements or implementation-specific details relative to request processing supported by the responder. In some embodiments where requests and responses are subject to version updating over time as request and response formats evolve, a responder may indicate version information that identifies which specific request and response versions the responder supports. Configuring a responder140on a hub may also include a function for configuring information needed for interacting with the responders140, such as a user's account information for a social media platform. In some embodiments, hub configuration information may be captured and used by the back-end service130. In some embodiments, a hub service may provide hub configuration information to the back-end service130upon request. In some embodiments, a chain of services may be configured and subsequently interacted with to provide the information a back-end service130needs to identify and interact with configured responders. In an example of using a domain name to identify a service to serve as a hub, a user may set up a social media hub. The user may first purchase a domain name such as “bigleague-dunkmeister.tld” to serve as a unique social media identifier. The user may then set up a webserver to provide hub services for “bigleague-dunkmeister.tld”. In this example, the user is a member of the social website “somesocialsite.tld” with the user's identity being “dunkmeister” and the user is a member of “someothersocialsite.tld” with the user's identity being “bigleague-star”. To enable responders140interacting with the hub to identify the social sites that the user is a member of and determine the user's identity on those sites, the user may operate a hub configuration function for “bigleague-dunkmeister.tld” to define a link to “somesocialsite.tld” as a social media responder for the user identifier “dunkmeister”. The user may also configure a link to identify “someothersocialsite.tld” as a social media responder for the user identifier “bigleague-star”. Once configured, a standard service interface provided by the hub “bigleague-dunkmeister.tld” would respond to requests for the user's social identities with a response that identifies the two social media sites and the corresponding user identifiers.
For example, the standard service might be accessible via the HTTP REST endpoint “https://bigleague-dunkmeister.tld/social/membership.” Requests to this endpoint may return the following JSON response:

{
  "SiteMembership": [
    {"Site": "somesocialsite.tld", "ID": "dunkmeister"},
    {"Site": "someothersocialsite.tld", "ID": "bigleague-star"}
  ]
}

To support creation and configuration of hubs and data associated with hubs, a hub application may be provided by a digital assistant110or in association with a digital assistant110. Capabilities provided by a hub application could include hub creation, identification of hubs the digital assistant110should associate with a user, creation and updating of hub configuration data, and importing data from hubs into configuration data managed by the digital assistant. To support creation of a hub, a digital assistant110may detect that a user has not yet identified a hub to be associated with the user and then allow the user to invoke a hub application. A digital assistant110may also provide a hub application to a user in response to a request from the user. A hub application may interact with a user to determine if the user has an existing hub they would like to use with the digital assistant and have associated with the user. A hub application may also facilitate creation of a new hub to associate with the user. In the case where a user is identifying an existing hub to associate with the user, a hub application may update digital assistant configuration data associated with the user to indicate an association between the user and the unique identifier for the hub. In the case where a user is creating a hub, a hub application may direct the user to a provider or service that specializes in registering unique identifiers useable with a digital assistant, such as a domain name registrar when domain names are used as the unique identifiers. A hub application may then update digital assistant configuration data associated with the user to indicate the user is associated with the unique identifier for the hub. Creation and updating of hub configuration data could be performed through a number of mechanisms. In one mechanism, a user interacts with a hub application to explicitly define hub configuration data. In another mechanism, hub configuration data may be created or updated by importing data already associated with a user, where the data is provided by some other service. For example, imported data could consist of existing user configuration data managed by the digital assistant, which is provided to a hub application by the digital assistant. For example, a user may have previously provided a digital assistant with social configuration data that identifies that the user is a member of two social sites, “somesocialsite.tld” and “anothersocialsite.tld”, and also identifies the user's user name on each of the two social sites. Later, when configuring a hub associated with the user, the digital assistant's social configuration data for the user could be imported into the hub's configuration data. In another example, data to be imported could be provided by services explicitly identified by the user or identified by configuration data managed by the digital assistant. For example, a user could interactively identify social sites they are a member of and then data from the identified social sites could be imported into a hub's configuration data.
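A client-side sketch of consuming the membership endpoint shown above follows. The library choice and the absence of error handling are simplifications, and the hub domain is the fictional one from the example; none of this is prescribed by the patent.

import json
from urllib.request import urlopen

def get_social_identities(hub_domain):
    """Query a hub's membership endpoint and return (site, user id) pairs."""
    url = f"https://{hub_domain}/social/membership"
    with urlopen(url) as resp:  # assumes the hub service is reachable
        doc = json.load(resp)
    return [(m["Site"], m["ID"]) for m in doc["SiteMembership"]]

# Example (would require the hub to actually exist):
# print(get_social_identities("bigleague-dunkmeister.tld"))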
Once configuration data is imported into a hub, the configuration data is discoverable for use by other digital assistant users according to methods described herein. A digital assistant110may also provide a capability for importing hub configuration data and configuration data from external services into the user configuration data that is managed by the digital assistant110. In this case, the digital assistant110or an application associated with the digital assistant110would interact with services on a hub or other external services associated with the user to retrieve configuration data from the services. This data would then serve as the source for data that is created or updated by the digital assistant110and associated with the user. For example, a user may have previously configured a hub with data for their social configuration. Later, the user may begin using a new digital assistant110and wish to have their social configuration on the hub reflected in the user's configuration data that is managed by the digital assistant110. In this case, the user could provide the identity of the hub to the digital assistant110and the digital assistant110could then use a service interface provided by the hub to retrieve the user's social configuration data and then use that data to populate the user's social configuration within the configuration data managed by the digital assistant110. While the description and examples of a hub application and digital assistant hub configuration data import functions described herein focus on a user performing actions on their own behalf, in various embodiments other entities may perform these functions on behalf of a user. For example, a person with administrative power over a digital assistant may use a hub application on behalf of many users. In another example, a hub application may be periodically invoked by the provider of a digital assistant to perform hub configuration data import and export for a group of digital assistant users. The ontology engine204is a software module that provides ontology data to the back-end service130and may capture configuration information associated with ontology-defined categorizations of the responders140. The configuration information associated with the responders140may be stored in the responder data store136. An ontology associated with a responder140defines the keywords to be used for request types to be processed by the responder140, target entities that the responder140provides information regarding, aspects of target entities, request parameter types, and the allowable values for request parameters. An ontology may also define the elements of responses received from a responder140. In some embodiments, a given responder140may be associated with multiple ontologies. Ontologies provide a basis for standardizing keywords used for request types, entity aspects, and request parameters across multiple responders140. In some embodiments, ontologies may be provided by external services (not shown). In some embodiments, the ontology engine204may dynamically determine ontology configuration information based on information retrieved from an external ontology service. In the cases where an external ontology service is accessed by the ontology engine204, the external ontology service provides the benefit of ensuring consistency in ontology data and configuration.
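One way to picture the ontology data managed by the ontology engine204is as a record per class, listing the identifying keyword, the allowable request parameters, and the allowable response elements. The dictionary layout below is an assumption made for illustration, not a prescribed schema; the attribute names are taken from the weather example that follows.

# Assumed in-memory form of one ontology class for a weather responder.
WEATHER_ONTOLOGY = {
    "class": "weather",
    "keyword": "weather",  # keyword identifying the request type
    "request_parameters": ["location", "date", "time"],
    "response_elements": ["forecast", "temperature", "wind_speed",
                          "wind_direction", "relative_humidity",
                          "precipitation-type", "precipitation-rate",
                          "barometric-pressure"],
}

# A request parameter is allowable only if the ontology class lists it.
print("location" in WEATHER_ONTOLOGY["request_parameters"])  # True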
Compliance with ontologies by digital assistants, responders, and other services enables interoperability and provides consistency in both syntax and semantics for digital assistant request processing. In some embodiments, a standard service interface for a responder140may be derived from an ontology. For example, in the case where weather responder service “alltheweather.tld” is compliant with an HTTP REST service interface derived from the weather class of an ontology, a URI in the format of a URL endpoint “http://alltheweather.tld/service/weather” might be used for responding to weather requests, and the allowable query parameters that could be included in the URL for weather requests, as well as the allowable response values, might be based on the attributes of the weather class: “location”, “date”, “time”, “forecast”, “temperature”, “wind_speed”, “wind_direction”, “relative_humidity”, “precipitation-type”, “precipitation-rate”, “barometric-pressure”, “requestkeys:location,date,time”. In a further example of how a request might be translated to an HTTP request to a REST service based on the weather class, the user request “ask alltheweather.tld for the weather in Albany, New York” might be translated into a request consisting of a URI as the following URL to invoke a response from responder “alltheweather.tld”: “http://alltheweather.tld/service/weather?location=albany_new_york.” In another embodiment, the configuration information associated with a responder140is handler information specifying one or more access points for accessing the responder140. The handler information may identify a software handler included in an application programming interface or may be a web-based interface. The access points may be determined based on the ontologies associated with the responder140or a URL template associated with a request type with which the responder140is associated. FIG.3illustrates a detailed view of the NL request processing module138included in the back-end service130ofFIG.1, according to an embodiment. As discussed above, the NL request processing module138receives natural language requests captured by the digital assistant110, determines one or more parameters associated with the request, and generates a responder request based on the parameters to be provided to the relevant responders140. The NL request processing module138may include a parameterization engine302, a uniform resource locator (URL) mapping engine304, and a response formatting engine306. The parameterization engine302receives a natural language request captured by the digital assistant110and parses the request in order to extract and/or semantically infer parameters associated with the natural language request. The parameterization engine302generates a parameterized representation of the natural language request based on the extracted or inferred parameters. As discussed above, the parameters associated with a natural language request include, but are not limited to, (i) identity of the user who spoke the request, (ii) the identity of the device owner, (iii) type of request, e.g., information retrieval generally, information retrieval for specific types of content, or performing an action external or internal to an environment in which the digital assistant110is placed, (iv) the identities of one or more responders140that are to be interacted with to fulfill the request, (v) identities of one or more target entities and aspects of those target entities to which the request applies, and (vi) how to programmatically handle the request.
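The parameterized representation can be pictured as a simple record with one field per parameter class (i) through (vi) above. The field names and the dataclass form are assumptions made for illustration; the example values come from Example A given later in this description.

from dataclasses import dataclass, field

@dataclass
class ParameterizedRequest:
    user: str                  # (i) identity of the requesting user
    device_owner: str          # (ii) identity of the device owner
    request_type: str          # (iii) e.g., "inquiry" or an action type
    responders: list = field(default_factory=list)       # (iv) responder identifiers
    target_entities: list = field(default_factory=list)  # (v) entities and aspects
    handler: str = ""          # (vi) how to programmatically handle the request

req = ParameterizedRequest(
    user="Andy", device_owner="Andy", request_type="inquiry",
    responders=["bigleague.tld"], target_entities=["Baltimore Dunkers"],
    handler="askHandler",
)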
In some cases, parameters in a parameterized representation of the natural language request may be translated by the parameterization engine302to a canonical form or a unique identifier. In some embodiments, the canonical form or unique identifier for a parameter may be a domain name. The parameters may be determined by a number of means such as: being extracted directly from the request; based on user configuration and/or responder configuration; by using ontologies that define request elements; by inference based on the current location of the digital assistant110; and by associations and inferences based on past interactions with the digital assistant110. In one embodiment, the parameterization engine302parses the request and identifies strings of words that satisfy the domain name format, i.e., labels joined by dots. These are extracted as entity domain names. Based on the determined request type and request parameters, a semantic understanding of the request is formed. The parameterization engine302can then determine whether an extracted entity domain name identifies a responder140, a requesting user, or a target entity about which information is being requested or provided. For example, for the natural language request, “ask bigleague.tld for Baltimore Dunkers 2017 schedule,” the parameterization engine302identifies the request type as an “inquiry” request for the attribute “schedule” from a specified responder (bigleague.tld) for a specified entity (Baltimore Dunkers). The parameterization engine302may use a variety of algorithms to determine constituent elements in a request and to map them to parameters including: position within the request; matching of the request against request templates; parsing the request according to a request “compiler” that is informed by a formalized request grammar; and artificial intelligence mechanisms such as neural networks. For example, a location algorithm may determine that the term “bigleague.tld” is in a spot within the natural language request where a responder is expected to be specified and then determines that “bigleague.tld” is an entity domain name identifying a responder140. In another example, a natural language request grammar may be specified in Backus-Naur form (BNF). During request processing, the natural language request is parsed using a request compiler for the applicable natural language request grammar. The compiler first tokenizes the request into its constituent elements, determines if the request is in a valid format relative to the grammar, and generates a parameterized representation of the request based on the tokens and token types found during tokenization of the natural language request. In some cases, the parameterization engine302does not find elements in a received natural language request from which to derive some of the elements in a parameterized request. In this case, default values may be used for the elements that could not be derived from the request. The values for the default elements may be determined by a number of mechanisms. In one example, default values may be based on configuration data that was specified to the user configuration engine202. In another example, default values may be based on values that have been frequently specified by the user in requests of the identified request type.
In another example, an artificial intelligence capability may determine based on a series of requests from a particular user that the user is currently performing a particular type of task involving a particular set of entities and use this knowledge to identify default values. For words or strings of words that do not satisfy the domain name format, the parameterization engine302may determine parameters associated with those words based on the user configuration information stored in the user data store134, ontology data and ontology configuration provided by the ontology engine204, the responder configuration information stored in the responder data store136, and/or semantic inference techniques. For example, the parameterization engine302compares a word or a string of words against aliases to determine whether any of the words matches a previously determined alias or a dynamically identified alias. If a match is found, then a parameter associated with the request may be determined based on an entity's unique identifier, for example, a domain name that has been associated with the alias. As another example, the parameterization engine302compares a word or a string of words against target entities or related keywords specified in various ontologies. If any of the words matches a target entity handled by the responders140associated with the ontologies, then a parameter associated with the request may be based on the unique identifier associated with the entity, such as a domain name associated with the target entity. Further, if any of the words matches a keyword in an ontology, then the responder parameter associated with the request is set as the responder140associated with the request type associated with the keyword and ontology class it identifies. In one embodiment, the parameterization engine302, using configuration information associated with an ontology-defined categorization of responders140, may identify the responder140to be used for a request. The ontology engine204provides the ontology elements associated with the request and responder to enable syntax checking of the user's request by the parameterization engine302to compare keywords in the request and the structure of the request with the ontology applicable to the request. After determining that a request is conformant with the ontology and the ontology-defined responder capability, the parameterization engine302creates a parameterized representation derived from the user's request in a format suitable for interacting with the responder. For example, an ontology applied to processing a “weather” request might define a “weather” class identified by the keyword “weather” with attributes “location”, “date”, “time”, “forecast”, “temperature”, “wind speed”, “wind direction”, “relative humidity”, “precipitation type”, “precipitation rate” and “barometric pressure.” On receiving a request for weather information, the NL request processing module138compares the request with the ontology to determine if elements of the weather request conform to the ontology. The NL request processing module138also determines whether the identified responder140supports weather requests and may also determine whether or not the responder supports the requested weather attributes. Similarly, a responder140receiving a weather request could use similar ontology-related configuration and processing functions to determine if the request is compliant with the ontology definition of the weather class and weather attributes supported by the responder.
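A minimal sketch of the conformance check just described treats an ontology class as a keyword plus a set of allowable attributes; the representation and function name are assumptions made for illustration only.

WEATHER_CLASS = {
    "keyword": "weather",
    "attributes": {"location", "date", "time", "forecast", "temperature",
                   "wind speed", "wind direction", "relative humidity",
                   "precipitation type", "precipitation rate",
                   "barometric pressure"},
}

def conforms(request_keyword, requested_attributes, ontology_class):
    """Check a request's keyword and attributes against an ontology class."""
    return (request_keyword == ontology_class["keyword"]
            and set(requested_attributes) <= ontology_class["attributes"])

print(conforms("weather", ["location", "forecast"], WEATHER_CLASS))  # True
print(conforms("weather", ["wave heights"], WEATHER_CLASS))          # False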
The URL mapping engine304generates one or more responder requests as URLs based on the parameterized representation generated by the parameterization engine302. In one embodiment, the URL mapping engine304determines the syntax and elements of a generated URL based on the elements of the parameterized request and a template URL that applies to the type of request being generated. Other methods of generating a URL based on the parameterized request may be used, such as using the parameterized request as an input to a finite state machine to generate URLs. In one embodiment, for each relevant responder140, the URL mapping engine304maps the parameterized representation onto a URL syntax based on the ontology associated with the relevant responder140. The URL syntax includes sub-domain names, path segments, hash tags, query keywords, and search values. The URL mapping engine304may determine, based on the applicable ontology, the values that need to be provided in the URL and fills those values based on the parameterized representation. In other embodiments, an alternate mapping engine may be used to produce requests that are in a format appropriate for the protocol that is used in communicating request data to responders. The URL mapping engine304transmits a request to each relevant responder140based on the URL generated for the corresponding URL syntax applicable to that responder. The request may be transmitted based on the handler information provided by the relevant responder140. For example, the request may be transmitted to a software handler included in an application programming interface (API) provided by the responder140. The response formatting engine306receives responses or response data based on responses that were returned by responders140to which requests were transmitted. In one embodiment, a response includes information requested about a target entity specified by the request. The response formatting engine306generates response representations based on the received responses or response data in a format that reflects the content of received responses and is suitable for processing and presentation by the digital assistant110. The response formatting engine306may generate the response representations based on ontology information and ontology configuration associated with the responders140from which the corresponding responses were received. The response formatting engine306transmits the response representations to the digital assistant110for delivery in response to the natural language request. The response representations may be delivered over audio output for digital assistants that are voice assistants. In other embodiments where a digital assistant is a chatbot, the response representations may be delivered as text displayed on a video monitor. In one embodiment, responses from different responders140are aggregated. In some cases, a response or response data may be used for additional purposes, such as being stored in the responder data store136and subsequently being used to influence processing by the NL request processing module138. FIG.4is a flow diagram of method steps for processing a natural language request captured by a digital assistant, according to an embodiment. Although the method steps are described with reference to the systems ofFIGS.1-3, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention.
As shown, a method400begins at step402, where the NL request processing module138receives a natural language request captured at the digital assistant110. In the case where a digital assistant is a voice assistant, the natural language request is typically captured over audio and specifies information to be retrieved or an action to be performed in association with a target entity. At step404, the NL request processing module138parses the natural language request to generate a parameterized representation of the natural language request. The parameterized representation includes unique identifiers, for example, domain names associated with at least one responder. The parameterized representation may also include unique identifiers as domain names associated with one or more target entities. The unique identifiers may be embedded in the natural language request or may be semantically inferred based on the user configuration and/or the responder information. Below are some examples of parametrized representations of natural language requests as processed by the parameterization engine302according to the process400depicted inFIG.4. As can be seen from the examples below, the parameters associated with a natural language request may be included in the request or may be semantically inferred by the back-end service130. Example A: The User “Andy” issues the natural language request “ask bigleague.tld for Baltimore Dunkers 2017 schedule.” The parameterized representation of the request is determined to be: user→“Andy”, handler→“askHandler”, request type→“inquiry”, responder→“bigleague.tld”, target entity→“Baltimore Dunkers”, aspect→“schedule”, parameter→“year=2017.” Example B: The User “Andy” issues the natural language request “what is the Baltimore Dunkers 2017 schedule.” The parameterized representation of the request is determined to be: user→“Andy”, handler→semantically determined to be “askHandler”, request type→“inquiry”, responder→semantically determined to require a responder and since one was not specified, a default responder “bigleague.tld” is used, target entity→“Baltimore Dunkers”, aspect→“schedule”, parameter→“year=2017.” Example C: The User “Andy” issues the natural language request “what is tomorrow's weather forecast.” The parameterized representation of the request is: user→Andy, handler→semantically determined to be “inquiry router”, request type→“inquiry”, responder→semantically determined to require a responder and since one was not specified, a default responder “weather.tld” is used, target entity→defaults to user's current location “Reston Va.”, aspect→“weather”, parameter→“date=tomorrow.” At step406, for each relevant responder140, the URL mapping engine304maps the parameterized representation onto a URL syntax based on the ontology associated with the relevant responder140. The URL syntax includes sub-domain names, path segments, hash tags, query keywords, and search values. In operation, the URL mapping engine304determines, based on the domain ontology, the values that need to be provided in the URL and fills those values based on the parameterized representation. 
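A minimal sketch of this mapping is shown below; the template, slot names, and URL layout are assumptions introduced here for illustration, not a syntax defined by any particular responder, and the values are taken from Example C above after default filling.

```python
from string import Template
from urllib.parse import quote

# Hypothetical template for an "inquiry" request; a real template would be
# supplied by the ontology associated with the relevant responder.
INQUIRY_TEMPLATE = Template("https://$responder/retrieve?entity=$entity&aspect=$aspect")

def to_url(param_rep: dict) -> str:
    """Fill the template slots from a parameterized representation."""
    return INQUIRY_TEMPLATE.substitute(
        responder=param_rep["responder"],
        entity=quote(param_rep["target entity"]),
        aspect=quote(param_rep["aspect"]),
    )

print(to_url({"responder": "weather.tld",
              "target entity": "Reston Va.",
              "aspect": "weather"}))
# https://weather.tld/retrieve?entity=Reston%20Va.&aspect=weather
```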
For example, a parameterized representation of the natural language request “ask funkystocks.tld for lunarvacations.tld current stock price” is: user→“Andy”, handler→“retrieveEndpoint”, request type→“inquiry”, responder→“funkystocks.tld”, target entity→“lunarvacations.tld”, aspect→“current stock price.” The URL generated by the URL mapping engine304at step406is “https://funkystocks.tld/retrieve?entity=‘lunarvacations.tld’&aspect=‘current stock price’”. The domain name “funkystocks.tld” uniquely identifies the responder140in the parameterized representation of the natural language request, and the domain name “lunarvacations.tld” uniquely identifies the target entity in the parameterized representation. The aspect “current stock price” is defined by the ontology used by the responder for the “retrieve” class, and the mechanism for mapping those aspects onto the URL syntax is also informed by the ontology. At step408, the URL mapping engine304transmits a request to each relevant responder140based on the URLs generated for each responder140. The request may be transmitted based on the handler information provided for the relevant responder140. For example, the request may be transmitted to a software handler included in an application programming interface (API) provided by the responder140. At step410, the response formatting engine306receives responses from responders140to which requests were transmitted. In one embodiment, a response includes information requested about a target entity specified by the request. The response formatting engine306translates received responses to a format suitable for use by the digital assistant and transmits the responses to the digital assistant110to present as responses to the original natural language request. The responses may be delivered over audio output. In one embodiment, responses from different responders140are aggregated. In one embodiment, the response formatting engine306may format the responses to optimize the delivery of those responses over audio output. FIG.5is a flow diagram of method steps for processing a natural language request including an alias, according to an embodiment. Although the method steps are described with reference to the systems ofFIGS.1-3, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention. As shown, a method500begins at step502, where the NL request processing module138receives a natural language request captured at the digital assistant110. In an embodiment where the digital assistant is a voice assistant, the natural language request is typically captured over audio and specifies information to be retrieved or an action to be performed in association with a target entity. At step504, the NL request processing module138parses the natural language request to identify a set of words included in the request. At step506, the NL request processing module138determines that the set of words does not satisfy the format that the digital assistant uses for unique identifiers, for example the domain name format. At step508, the NL request processing module138analyzes user configuration information to determine that the set of words is an alias for a given entity and is mapped to a unique identifier associated with that entity.
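A minimal sketch of the determination at steps 506-508 might look as follows; the regular expression stands in for the digital assistant's unique-identifier (domain name) format, and the alias table contents are hypothetical user configuration data.

```python
import re

# Pattern standing in for the unique-identifier format: labels joined by dots.
UNIQUE_ID = re.compile(
    r"[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+", re.I)

# Hypothetical alias table from the user configuration.
ALIASES = {"my team": "baltimoredunkers.tld"}

def resolve(words: str) -> str:
    """Translate a set of words to a unique identifier when it is an alias."""
    if UNIQUE_ID.fullmatch(words):
        return words                          # already a unique identifier
    return ALIASES.get(words.lower(), words)  # alias lookup (step 508)

print(resolve("my team"))        # baltimoredunkers.tld
print(resolve("bigleague.tld"))  # bigleague.tld
```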
If the set of words does satisfy the format that the digital assistant uses for unique identifiers, then the set of words is not an alias and, therefore, the NL request processing module138processes the request without the translation from the alias to the unique identifier performed at step508. At step510, the NL request processing module138generates a parameterized representation of the natural language request based on the unique identifier mapped to the alias. At step512, the NL request processing module138generates a request, for example a URL encapsulating the parameterized representation based on a URL syntax associated with an ontology associated with a relevant responder140. At step514, the NL request processing module138transmits a responder request to the relevant responder140based on the generated URL. In sum, a digital assistant request is a natural language input provided to a digital assistant so as to instigate an interaction with responders who can fulfill the request. A digital assistant interface accepts natural language inputs and in turn provides responses in a human consumable format. To provide precise and complete responses to the natural language request, a unique-name based framework is used as described herein. The unique-name based framework uses domain names or some other set of unique identifiers as identifiers for request types, requesting entities, responders, and target entities. Further, the framework enables interpreting natural language requests according to ontologies associated with different responders and request types. An ontology operates as an enhanced keyword dictionary for one or more responders and optionally defines the keywords and parameter types to be used for request types and request parameters. The unique-name based framework thus enables the digital assistant to interact with any responder that supports an ontology for a desired request type and to generate precise and complete responses to natural language based requests. 1. In some embodiments, a method comprises identifying a request type and one or more entity identifiers associated with a first natural language request received at a digital assistant, identifying an ontology that includes a keyword associated with the request type, wherein the ontology identifies a syntax for accessing a set of responders over a network, generating one or more requests that encapsulate the first natural language request based on the one or more entity identifiers and the syntax, and transmitting the one or more requests to the set of responders. 2. The method of clause 1, further comprising identifying one or more responder domain names. 3. The method of clauses 1 or 2, wherein the one or more entity identifiers comprise one or more domain names. 4. The method of clauses 1-3, wherein the one or more requests comprise one or more uniform resource locators (URLs). 5. The method of clauses 1-4, wherein the syntax specifies a first parameter name to be included in the one or more requests, wherein at least one value for the first parameter name is associated with at least one of the entity identifiers. 6. The method of clauses 1-5, further comprising selecting the set of responders based on the ontology. 7. The method of clauses 1-6, further comprising selecting a target entity associated with the first natural language request based on a user configuration defined using a follow function. 8.
The method of clauses 1-7, further comprising selecting the set of responders based on one or more characteristics of prior responses received from the set of responders. 9. The method of clauses 1-8, further comprising, in response to a request transmitted to a hub service, receiving a response identifying the set of responders as associated with the request type. 10. The method of clauses 1-9, wherein the one or more requests are generated further based on at least one of the request type and the set of responders. 11. The method of clauses 1-10, further comprising translating the one or more entity identifiers into at least one target entity associated with the set of responders, wherein an entity identifier corresponding to the at least one target entity is included in the one or more requests. 12. The method of clauses 1-11, further comprising transmitting information received from at least one of the set of responders to the digital assistant for delivery via an output interface provided by the digital assistant. 13. In some embodiments, a computer readable medium stores instructions that, when executed by a processor, cause the processor to process natural language requests, by performing the steps of identifying a request type and one or more entity identifiers associated with a first natural language request received at a digital assistant, identifying an ontology that includes a keyword associated with the request type, wherein the ontology identifies a syntax for accessing a set of responders over a network, generating one or more requests that encapsulate the first natural language request based on the one or more entity identifiers and the syntax, and transmitting the one or more requests to the set of responders. 14. The computer readable medium of clause 13, further comprising identifying one or more responder domain names. 15. The computer readable medium of clauses 13-14, wherein the one or more entity identifiers comprise one or more domain names. 16. The computer readable medium of clauses 13-15, further comprising selecting the set of responders based on the ontology. 17. The computer readable medium of clauses 13-16, further comprising selecting the set of responders based on one or more characteristics of prior responses received from the set of responders. 18. The computer readable medium of clauses 13-17, further comprising, in response to a request transmitted to a hub service, receiving a response identifying the set of responders as associated with the request type. 19. The computer readable medium of clauses 13-18, wherein the syntax specifies a first parameter name to be included in the one or more requests, wherein at least one value for the first parameter name is associated with at least one of the entity identifiers. 20. In some embodiments, a system comprises a memory storing instructions, and a processor executing the instructions to perform the steps of identifying a request type and one or more entity identifiers associated with a first natural language request received at a digital assistant, identifying an ontology that includes a keyword associated with the request type, wherein the ontology identifies a syntax for accessing a set of responders over a network, generating one or more requests that encapsulate the first natural language request based on the one or more entity identifiers and the syntax, and transmitting the one or more requests to the set of responders. 
Any and all combinations of any of the claim elements recited in any of the claims and/or any elements described in this application, in any fashion, fall within the contemplated scope of the present invention and protection. The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Aspects of the present embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable processors or gate arrays. 
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. | 73,044 |
11861307 | DESCRIPTION OF EMBODIMENTS In the following description and in the drawings, the same reference characters represent the same or corresponding components. Therefore, detailed description thereof will not be repeated. First Embodiment <Configuration> «Overall Configuration» Referring toFIG.1, the overall question-answering system in accordance with the first embodiment of the present invention includes: a question-answering system58receiving a user input56and outputting an answer60as a response; a training unit62training a neural paraphrasing model94comprised of a deep neural network used in the question-answering system58and training a classification model98for classifying requests; and a training data storage unit50storing training data used for training by training unit62. Training data includes a plurality of training data items. Each training data item includes a combination of a first request, a second request having substantially the same meaning as the first request and having a higher probability of being answered by the question-answering system58than the first request, and a classification code indicating to which class the first request belongs. When the training data items are manually prepared, in order to standardize the results of paraphrasing as much as possible, it is recommendable to manually classify the first requests into “factoid questions,” “why questions” and the like and to designate, class by class, the formats after paraphrasing. The classification code mentioned above indicates this classification. The classifications used in the present embodiment and the formats of the second requests after paraphrasing used in each class are as follows:
(1) what question;
(2) what-if question;
(3) definition question;
(4) why question; and
(5) how-to question.
A “what question” is typically paraphrased to the format of “interrogative+particle” or “subject+interrogative.” The interrogative may be “what,” “who” and “where.” By way of example, a question “I'm going to Miyazaki, and where do you recommend to visit?” will be paraphrased to “Where to go in Miyazaki?” A “what-if” question is paraphrased to the format of “what happens if . . . ” For example, “What if I should have an accident?” is paraphrased to “What happens if I have an accident.” A definition question is paraphrased to the format of “what is . . . ?” For example, “What is the meaning of qi-stagnation?” is paraphrased to “What is qi-stagnation?” A why question is paraphrased to the format of “why does . . . ” For example, “I wonder the reason why Japan suffers from deflation?” is paraphrased to “Why does Japan suffer from deflation?” A how-to question is paraphrased to the format of “how . . . ?” For example, “Is there any measure to suppress the bitterness of bitter gourd?” is paraphrased to “How to suppress the bitterness of bitter gourd?” By utilizing these classifications as features to be input to the models, accuracy can be improved. At the time of manual paraphrasing, reference to operations of an existing question-answering device122would be helpful to determine what format of requests is suitable for question-answering device122to provide good responses, and by working in this manner, the quality of training data items can be improved. «Training Data Storage Unit50» Training data stored in training data storage unit50includes a large number of training data items.
Each training data item includes: a pre-paraphrase request in a natural language (request to be paraphrased); a request obtained by paraphrasing the pre-paraphrase request to a question sentence having a higher probability to get a right answer from question-answering device122than the pre-paraphrase request; and a classification code indicating a class to which the pre-paraphrase request belongs. «Training Unit62» Training unit62includes: a paraphrasing model training unit92for training a neural paraphrasing model94through machine learning using the training data stored in training data storage unit50; and a classification model training unit96for training a classification model98through machine learning such that a classification code, indicating one of the five classes to which an input request belongs, is output, using the first request and its classification code included in each training data item of the training data stored in training data storage unit50. «Question-Answering System58» Question-answering system58includes: the existing question-answering device122; and a request paraphrasing system120configured to receive a user input56for paraphrasing the user input56to a request to which question-answering device122has a higher probability of generating an answer than the user input56. Inputs to question-answering device122include two types of sentences, that is, sentences requesting some information and sentences requesting some action. In the following description, a “request” represents either of these. Request paraphrasing system120includes: a pre-processing unit130for performing a morphological analysis of user input56, converting each word to a word vector and thereby converting the user input56to a word vector sequence; a classification model98trained by training unit62for classifying, based on the word vector sequence output from the pre-processing unit130, to which of the five classes the request represented by user input56belongs and for outputting a classification code; and the afore-mentioned neural paraphrasing model94trained by training unit62such that, using the word vector sequence output from pre-processing unit130and the classification code output from classification model98as inputs, the user input56is paraphrased to a request having a higher probability of getting a right answer from question-answering device122than the user input56. In the present embodiment, neural paraphrasing model94is a so-called sequence-to-sequence model and it has a configuration similar to that of a neural machine translation model using GRU (Gated Recurrent Unit), which is a type of RNN. The present embodiment uses so-called word embedding vectors of a fixed length as the word vectors. A so-called one-hot vector may be used. Further, an LSTM (Long Short-Term Memory) may be used in place of GRU.
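A minimal sketch of the pre-processing that produces these word vectors is given below; a whitespace tokenizer and randomly initialized embeddings stand in for the morphological analyzer and the trained word embeddings, and all names and sizes are illustrative.

```python
import numpy as np

EMBED_DIM = 8                      # illustrative fixed embedding length
rng = np.random.default_rng(0)
vocab = {}                         # word -> fixed-length vector

def embed(token):
    # Stub: assign each word a fixed random vector on first sight.
    if token not in vocab:
        vocab[token] = rng.normal(size=EMBED_DIM)
    return vocab[token]

def to_word_vector_sequence(sentence):
    # Stand-in for morphological analysis; appends the end-of-sentence sign.
    tokens = sentence.lower().split() + ["<s>"]
    return np.stack([embed(t) for t in tokens])  # shape: (num_tokens, EMBED_DIM)

print(to_word_vector_sequence("what is qi-stagnation ?").shape)  # (5, 8)
```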
Referring toFIG.1, in the present embodiment, a training data item is generated by (1) a first training data generating unit64for the manual generation of a pre-paraphrase sentence and a paraphrased request; (2) a second training data generating unit66for the manual addition of a paraphrased request expressing substantially the same information as the question to a question to which question-answering device122could not give an answer, read from question-answering system log storage unit54storing requests to which question-answering device122was unable to give any answers; and (3) a third training data generating unit68for the manual addition of a paraphrased request to a request candidate stored in a request candidate storage unit52. The manual generation of training data should be carried out by inputting each paraphrased request candidate to question-answering device122and confirming whether or not an answer is obtained. The request candidates stored in request candidate storage unit52are prepared by extracting, by pattern matching, sentences from web pages that have “?” at the tail position and contain an interrogative such as “what” or “why” or a pattern from which the sentence can be recognized as a request, and further selecting those satisfying a condition that the number of words therein exceeds a certain threshold. Further, for each of the training data items, one of the five classes mentioned above to which it belongs is determined manually and a classification code is added. In paraphrasing, unnecessary portions may be deleted or a complex sentence may be replaced by a simple sentence, so that the paraphrased format becomes simpler and easier for the system to process. For instance, sentences such as “I have some eggs left at hand, and I wonder what can I do with them?” and “I have an ‘.ai’ file, but I don't know what I should use to open this?” including a conditional expression or an anaphora cannot easily be processed appropriately by an existing system. If these are converted to simple sentences such as “What can I make from eggs?” and “With what can one open an .ai file?”, it becomes possible for the question-answering system to provide answers. Therefore, in the process of paraphrasing, it is desirable that unnecessary expressions or colloquial or non-standard expressions are modified as much as possible and that the formats after paraphrasing are standardized. Further, the addition of new content words is avoided. Instead of simply answering “yes” or “no” to inputs such as “Is vaccination effective?” and “Is a smart speaker good?”, these may be paraphrased to different questions such as “What if I fail to get a vaccination?” or “What if I use a smart speaker?” Such paraphrasing may lead to answers as information that the user potentially desires. Neural Paraphrasing Model94 Neural paraphrasing model94shown inFIG.1is, in the present embodiment, a system including a network formed by a necessary number of copies of a common module, the module being an RNN, which is a type of neural network, or, more specifically, a GRU, which is a type of RNN. At the end of each input sentence, an end-of-sentence sign <s> is added.
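A minimal sketch, under assumed dimensions, of how such a network is formed from copies of one common module: a single shared GRU cell is applied step by step over the word vector sequence (end-of-sentence sign included), which is equivalent to replicating the module once per input word.

```python
import torch
import torch.nn as nn

gru_cell = nn.GRUCell(input_size=8, hidden_size=16)  # the one common module

def unroll(word_vectors):
    """word_vectors: tensor of shape (seq_len, 8), end-of-sentence sign included."""
    h = torch.zeros(1, 16)           # hidden state of the GRU in its initial state
    states = []
    for x in word_vectors:           # every step reuses the same cell (a "copy")
        h = gru_cell(x.unsqueeze(0), h)
        states.append(h)
    return torch.cat(states), h      # all hidden states, and the final one

states, final = unroll(torch.randn(5, 8))
print(states.shape, final.shape)     # torch.Size([5, 16]) torch.Size([1, 16])
```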
Referring toFIG.2, neural paraphrasing model94includes: an encoder144configured to receive a word vector sequence140representing the first request, for outputting an intermediate node vector152representing the meaning of the first request; a decoder154configured to receive the intermediate node vector152output from encoder144as an input, for outputting a word sequence156of the paraphrased request; and an attention layer160for calculating, from values referred to as attentions and the hidden states of the GRUs in encoder144, a context vector used by decoder154in calculating each word of word sequence156, and for applying it to decoder154. Encoder144includes a forward GRU sequence146arranged to receive word vectors of word vector sequence140in order from the first one, and a backward GRU sequence148arranged to receive word vectors of word vector sequence140in order from the last one. GRUs in the forward GRU sequence146are connected in the forward direction such that each receives a word vector of word vector sequence140and the hidden state of the immediately preceding GRU. Backward GRU sequence148similarly includes a plurality of GRUs connected to receive word vectors of word vector sequence140in the backward direction. To the GRUs at the heads of forward GRU sequence146and backward GRU sequence148, the hidden state of the GRU in the initial state is applied. The numbers of GRUs forming the forward and backward GRU sequences146and148are the same as the number of word vectors forming word vector sequence140. Forward GRU sequence146is formed in response to an input of word vector sequence140, by copying the same GRU as many times as there are word vectors. Similarly, backward GRU sequence148is formed in response to an input of word vector sequence140, by copying the same GRU as many times as there are word vectors. Encoder144further includes a combining unit150for combining and linearizing the output of the last GRU of the forward GRU sequence146, the last output of backward GRU sequence148, and a classification code from classification model98, and outputting the result as intermediate node vector152to decoder154. Decoder154includes a plurality of pairs170as components. A pair170includes a GRU172and an integrating unit174for integrating an output of GRU172and an output of attention layer160. Though a plurality of pairs are shown following the pair170inFIG.2, this shows the time sequence of the inputs and the outputs of pair170, and actually, one pair170is sufficient. These pairs170are arranged such that the input of GRU172of pair170at the head position receives the end-of-sentence sign <s> of the first request, and at other positions, each GRU172receives a word vector converted from a word output from integrating unit174of the immediately preceding pair170. The output of GRU172of pair170is connected to the input of integrating unit174of the same pair170. Further, GRU172is connected to attention layer160so that the attention is calculated using the hidden state of GRU172of pair170. The input of integrating unit174is connected to receive the output of GRU172. Further, integrating unit174is connected to receive a context vector168from attention layer160. Integrating unit174calculates, using the hidden state calculated from the output of GRU172and the context vector168received from attention layer160, probabilities of words to be output, and outputs the word having the highest probability.
This word is converted to a word vector and also applied to the input of GRU172of the next pair. Attention layer160includes: a coefficient calculating unit162for calculating, as the attention, a coefficient indicating a degree of importance of the hidden state of each GRU in encoder144in the hidden state of GRU172as the object of processing of decoder154; and a context vector generating unit164for calculating a context vector168as a weighted average of the hidden states of the respective GRUs using the coefficient calculated for each GRU by coefficient calculating unit162, and supplying it to integrating unit174of the pair170. In training neural paraphrasing model94, for example, words of the first request to be subjected to paraphrasing of the training data are used as the input to corresponding GRUs of encoder144, and intermediate node vector152is calculated further using the classification code of the first request. When the sign <s> indicating the end of the input sentence is given, the sign <s> is input to GRU172of pair170of decoder154, and its output and the context vector168calculated by attention layer160are used in integrating unit174to predict the word at the start of the request after paraphrasing. Using the difference between the prediction and the teacher signal of the word at the start of the second request after paraphrasing of the training data, a parameter set is trained by error back propagation. Thereafter, the parameter set is trained using a word in the second request as an input to GRU172of pair170, and the next word as well as the sign <s> indicating the end of the second request as teacher signals. This process is executed for each training data item. The object of training neural paraphrasing model94is to learn the values of a set of parameters defining the paraphrasing function realized by the basic GRU. Once the training is completed, words of the object of paraphrasing as the input sentence are successively input to neural paraphrasing model94followed by the sign <s>, which causes neural paraphrasing model94to output a word, which is the first word of the paraphrased request. Then, the word output from neural paraphrasing model94is input to neural paraphrasing model94as a next input and the word thus obtained from neural paraphrasing model94will be the next word of the request sentence. This process is repeated until the sign <s> is eventually obtained as the output of neural paraphrasing model94, when the paraphrased request is determined. The configuration of neural paraphrasing model94is not limited to that shown inFIG.2and other configurations may be used. «Classification Model98» FIG.3schematically shows a configuration of a convolutional neural network180implementing the classification model98shown inFIG.1. Referring toFIG.3, for the purpose of a clearer description, it is assumed that the convolutional neural network180of classification model98consists simply of an input layer190, a convolutional layer192and a pooling layer194, while the network may consist of a plurality of sets of these three layers. To the input layer190, a word vector sequence X1, X2, . . . , X|t|representing the words of the first request is input. The word vector sequence X1, X2, . . . , X|t|is represented as a matrix T=[X1, X2, . . . , X|t|]^T. To the matrix T, M filters are applied, producing M feature maps.
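In code, this convolution-and-pooling arrangement might look like the following sketch; all sizes here are illustrative, and the pooling and softmax stages correspond to the layers described next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RequestClassifier(nn.Module):
    """Illustrative stand-in for convolutional neural network180: M filters
    of width N over the word-vector matrix T, ReLU, max pooling over time,
    and a softmax over n_classes outputs."""
    def __init__(self, d=8, M=32, N=3, n_classes=5):
        super().__init__()
        self.conv = nn.Conv1d(d, M, kernel_size=N)  # one filter per feature map
        self.out = nn.Linear(M, n_classes)          # final layer feeding softmax

    def forward(self, T):                           # T: (batch, seq_len, d)
        x = F.relu(self.conv(T.transpose(1, 2)))    # (batch, M, seq_len - N + 1)
        x = x.max(dim=2).values                     # max pooling per feature map
        return F.softmax(self.out(x), dim=1)        # class probabilities

model = RequestClassifier()
print(model(torch.randn(1, 10, 8)).shape)           # torch.Size([1, 5])
```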
Each feature map is a vector, and each element O of a feature map is computed by applying a filter fj(1≤j≤M) to an N-gram200comprised of N consecutive word vectors, while shifting the N-gram200. N is an arbitrary natural number, while N=3 in this embodiment. Specifically, O is given by the equation below. O=f(Wfj·Xi:i+N−1+bij) (1) where Xi:i+N−1denotes the N-gram200starting at the i-th word vector, · represents elementwise multiplication followed by summation of the results, and f(x)=max(0, x) (rectified linear function). Further, if the number of elements of a word vector is d, weight Wfjis a real matrix of d×N dimensions, and bias bijis a real number. It is noted that N may be the same for all the feature maps or N may be different for some feature maps. The relevant value of N may be something like 2, 3, 4 or 5. Any filter may be used for the convolutional neural network180. A filter for image processing may be conveniently used. For each feature map, the subsequent pooling layer194performs so-called max pooling. Specifically, pooling layer194selects, from the elements of feature map fM, for example, the maximum element210and takes it out as an element220. By performing this process on each of the feature maps, elements220, . . . ,222are taken out, and these are concatenated in the order of f1to fMand output as a vector230to a final layer182. The final layer182applies the vector230to Softmax layer184. In the present embodiment, the number of outputs of classification model98is five, corresponding to the five classes, and the respective probabilities are obtained at these outputs. Regarding pooling layer194, one that performs max-pooling is said to have a higher accuracy than one that adopts average-pooling. It is possible, however, to adopt average-pooling, or other types of pooling techniques may be used if they well represent the characteristics of the lower layer. The training data item consists of a word vector sequence obtained from the first request mentioned above and the classification code indicating the class to which the first request belongs. During training, the word vector sequence as the object of classification is applied to the input of classification model98, the output of classification model98is compared with the classification code of its text, and the difference is calculated. Each of the weights and biases forming classification model98is adjusted to reduce the value of the error function by general back propagation. Program Structure FIG.4is a flowchart representing a control structure of a program realizing the function of training unit62shown inFIG.1in cooperation with a computer. Referring toFIG.4, the program includes: a step240of performing a training process242of training neural paraphrasing model94for each of the training data items stored in training data storage unit50; a step244, following step240, of determining whether or not a prescribed end condition is satisfied, and branching the control flow depending on the result of determination; and a step246, executed if the determination at step244is positive, of saving the parameters of neural paraphrasing model94in a prescribed storage device and ending the process. If the result of determination at step244is negative, the control returns to step240and the next training is executed.
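This training-and-verification loop can be sketched as follows; the per-item update and the evaluation routine are passed in as callables since they are not specified here, and the threshold values are illustrative.

```python
def train(model, items, verification_data, train_one_pass, evaluate,
          first_threshold=1e-3, second_threshold=100):
    """Repeat one training pass over all items (step 240), then check the
    end condition (step 244) and return once it is met (step 246)."""
    last_accuracy = 0.0
    for repetition in range(second_threshold):    # bound on repetitions
        for item in items:
            train_one_pass(model, item)           # error back propagation
        accuracy = evaluate(model, verification_data)
        if abs(accuracy - last_accuracy) <= first_threshold:
            break                                 # accuracy gain has flattened
        last_accuracy = accuracy
    return model
```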
Training process242includes: a step250of inputting the first request of a training data item as the object of processing and its classification code to neural paraphrasing model94; a step252of calculating a difference between the output resulting from neural paraphrasing model94and the second request of the training data item as the object of processing; and a step254of updating parameters of neural paraphrasing model94by error back propagation based on the difference obtained at step252. The end condition at step244may be any of the following:
(1) the accuracy of paraphrasing by neural paraphrasing model94has attained a prescribed threshold;
(2) the difference between the accuracy of paraphrasing by neural paraphrasing model94and the accuracy of the last verification becomes equal to or smaller than a prescribed threshold; or
(3) the number of repetitions of the training and verification exceeded (or reached) a prescribed threshold number.
Of these, in the present embodiment, the process ends if either of the following is satisfied: the difference between the accuracy of paraphrasing of the verification data by neural paraphrasing model94and the accuracy at the time of the last training becomes equal to or smaller than a prescribed threshold (first threshold), or the number of repetitions of the training and verification exceeds a prescribed number (second threshold). FIG.5is a flowchart representing a control structure of a program realizing classification model training unit96shown inFIG.1in cooperation with a computer. Referring toFIG.5, the program includes: a step260of executing a training process262of training classification model98for each training data item stored in training data storage unit50; a step264of determining whether or not an end condition of training classification model98is satisfied and branching the control flow; and a step266, executed if the determination at step264is positive, of saving the parameters of classification model98in a prescribed storage device and ending the execution of this program. If the determination at step264is negative, the control returns to step260and the next training cycle is executed. As to the determination at step264, in the present embodiment, the classification accuracy of classification model98is measured by using verification data separately prepared beforehand, and if a difference between the measurement and the value at the time of the last training becomes equal to or smaller than a prescribed threshold (third threshold), or if the number of repetitions of training exceeds a prescribed number (fourth threshold), it is determined that the end condition is satisfied. Training process262includes: a step270of inputting the first request of the training data item as the object of processing to classification model98; a step272of calculating a difference between a classification code output from classification model98and the classification code in the training data item as the object of processing; and a step274of updating parameters of classification model98by error back propagation based on the difference calculated at step272. FIG.6is a flowchart representing a control structure of a program causing a computer to function as the question-answering system58shown inFIG.1.
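Before the step-by-step description of this program, its overall flow can be sketched as follows; each stage is passed in as a callable standing in for the corresponding processing unit, and the step numbers refer to FIG.6.

```python
def answer_request(user_input, preprocess, classify, paraphrase,
                   qa_device, web_search):
    word_vectors = preprocess(user_input)           # steps 290-292
    class_code = classify(word_vectors)             # step 294
    request = paraphrase(word_vectors, class_code)  # step 296
    answer = qa_device(request)                     # step 298
    if answer is not None:                          # step 300
        return answer                               # step 302
    return web_search(user_input, request)          # step 304: fallback search
```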
Referring toFIG.6, the program includes: a step290of doing a morphological analysis on a natural language input sentence given as user input56and converting it to a morpheme sequence; a step292of converting words obtained by the morphological analysis at step290to word vectors and thereby converting the input sentence to a word vector sequence; a step294of applying the word vector sequence obtained by the process of step292to classification model98and thereby estimating a classification code of the request represented by user input56; a step296of inputting the word vector sequence obtained by the process of step292and the classification code estimated at step294to neural paraphrasing model94to be converted to a word sequence of a request to be output from neural paraphrasing model94; and a step298of inputting the paraphrased request obtained by the process at step296to question-answering device122. The program further includes: a step300of determining whether or not an answer is given from question-answering device122as a result of step298, and branching the control flow depending on the result of determination; a step302, executed if the determination at step300is positive, of outputting the answer of question-answering device122as an answer60(seeFIG.1) to the outside of question-answering system58and ending the execution of the program; and a step304, executed if the determination at step300is negative, of generating a list of web sites as web search results using the input sentence or the paraphrased request by a search engine, not shown, outputting the list as an answer60and ending the execution of the program. Operation The training unit62and question-answering system58in accordance with the first embodiment of the present invention operate as follows. First, training data items are generated and stored in training data storage unit50. Training data items are generated as follows. (1) Pairs of pre-paraphrasing requests and paraphrased requests are prepared manually to be used as training data items. (2) From logs of question-answering device122, any request to which no answer could be found by question-answering device122is collected. A paraphrased request expressing substantially the same information as such request and to which an answer can be obtained from the question-answering device122is manually prepared. This paraphrased request and the pre-paraphrasing request are paired to be a training data item. (3) A request candidate is extracted from the web, and stored in request candidate storage unit52. For the stored request candidate, a paraphrased request to which an answer can be given by question-answering device122is manually prepared. The stored request candidate and the paraphrased request are paired as first and second requests, respectively, to be used as a training data item. For each training data item prepared by any of the procedures above, one of the afore-mentioned five classes to which it belongs is manually determined and a corresponding classification code is added. Referring toFIG.1, training unit62trains neural paraphrasing model94using the training data items stored in training data storage unit50. In the present embodiment, once the training of neural paraphrasing model94by paraphrasing model training unit92ends (step240ofFIG.4), the accuracy of neural paraphrasing model94is calculated using separately prepared verification data.
If the difference between this accuracy and the accuracy of the last training becomes equal to or smaller than the first threshold, or if the number of trainings by training unit62exceeds the second threshold, the training of neural paraphrasing model94ends (YES at step244ofFIG.4), a group of parameters defining the function of neural paraphrasing model94is stored in the storage device, and the process ends (step246). If the difference of accuracy is larger than the threshold and the number of trainings by training unit62is equal to or smaller than the second threshold (NO at step244), the training of neural paraphrasing model94is again executed by training unit62(path from step244to step240). The above-described process is executed until the end condition is satisfied at step244and thereby the training of neural paraphrasing model94is completed. The trained neural paraphrasing model94consists of the program realizing the configuration shown inFIG.2and the parameters of the GRUs forming encoder144and decoder154. Classification model98shown inFIG.1is also trained in a similar manner. Specifically, once the training of classification model98by classification model training unit96ends (step260ofFIG.5), the accuracy of classification by classification model98is calculated using separately prepared verification data. If the difference between this accuracy and the accuracy of the last training is equal to or smaller than the third threshold or if the number of trainings by classification model training unit96exceeds the fourth threshold, training of classification model98ends (YES at step264ofFIG.5), a parameter group defining the function of classification model98is stored in the storage device, and the process ends (step266). If the difference of accuracy is larger than the threshold and the number of trainings by classification model training unit96is equal to or smaller than the fourth threshold (NO at step264), training of classification model98is again executed by classification model training unit96(path from step264to step260). The above-described process is executed until the end condition is satisfied at step264, and training of classification model98is completed. The trained classification model98consists of the program realizing the configuration shown inFIG.3and parameters defining the function represented by convolutional neural network180. In a running (test) phase after the end of training, question-answering system58operates as follows. Neural paraphrasing model94of request paraphrasing system120included in question-answering system58is the one trained by paraphrasing model training unit92. Similarly, classification model98included in request paraphrasing system120is the one trained by classification model training unit96. Referring toFIG.1, when a user input56is given, pre-processing unit130does a morphological analysis on user input56and converts it to a morpheme sequence (step290ofFIG.6). Further, pre-processing unit130converts the morpheme sequence to a word vector sequence (step292), and using classification model98, estimates the classification code of user input56(step294). Further, the word vector sequence obtained at step292and the classification code obtained at step294are input to neural paraphrasing model94(step296). At the end of the word vector sequence, the end-of-sentence sign <s> is added. Referring toFIG.2, each word forming this word vector sequence140is applied to encoder144of neural paraphrasing model94.
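A minimal sketch, under assumed dimensions and illustrative layer names, of the computation encoder144then performs on this sequence, with the bidirectional GRU's final states combined with the classification code:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, embed_dim=8, hidden=16, n_classes=5):
        super().__init__()
        self.gru = nn.GRU(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.combine = nn.Linear(2 * hidden + n_classes, hidden)  # combining unit

    def forward(self, word_vectors, class_code):
        # word_vectors: (batch, seq_len, embed_dim); class_code: (batch, n_classes)
        outputs, h_n = self.gru(word_vectors)    # h_n: (2, batch, hidden)
        combined = torch.cat([h_n[0], h_n[1], class_code], dim=1)
        return outputs, self.combine(combined)   # per-word states, intermediate vector

enc = Encoder()
states, vec = enc(torch.randn(1, 6, 8), torch.eye(5)[[2]])
print(states.shape, vec.shape)                   # (1, 6, 32) and (1, 16)
```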
By forward GRU sequence146and backward GRU sequence148, encoder144calculates respective final hidden states, which are combined with the classification code by combining unit150and linearized to be an intermediate node vector152and input to decoder input158. When the end-of-sentence sign <s> is applied to GRU172of pair170at the decoder input158, GRU172changes the hidden state in accordance with the intermediate node vector152and the end-of-sentence sign <s>, generates an output vector and applies it to integrating unit174of the same pair170. Further, the hidden state of GRU172is also applied as a hidden state vector166to coefficient calculating unit162of attention layer160. Coefficient calculating unit162calculates, as an attention, a coefficient indicating a degree of importance of the hidden state of each of the GRUs in encoder144, in the hidden state of GRU172as the object of processing of decoder154. Using the coefficients calculated for the respective GRUs by coefficient calculating unit162, context vector generating unit164calculates a weighted mean of the hidden states of the GRUs to provide context vector168, which is fed to integrating unit174of the pair170. Using the hidden state calculated from the output of GRU172and the context vector168received from attention layer160, integrating unit174calculates probabilities of words to be output, and outputs the word having the highest probability. This word is converted to a word vector and also fed to the input of GRU172of the next pair. Thereafter, the same process as for the end-of-sentence sign <s> is repeated by the next pair on the output of integrating unit174of the pair170of the preceding step, and thus the words of word sequence156are output successively from decoder154. As a result of such a process, at the time point when the end-of-sentence sign <s> is output from decoder154, the word sequence156of the paraphrased request is determined, and the output of request paraphrasing system120is obtained. The output is the paraphrased request, namely, the natural language request of user input56paraphrased to have substantially the same meaning but a higher probability of obtaining an answer from question-answering device122. The paraphrased request is input to question-answering device122(step298ofFIG.6). Question-answering device122generates an answer to the paraphrased request and outputs it as an answer60(step302). The request output from neural paraphrasing model94is the one paraphrased to have a higher probability of getting an answer from question-answering device122and, therefore, the probability of obtaining a right output as an answer to user input56from question-answering device122is higher than when the request is not paraphrased. If an answer is still not obtained, however, a search engine, not shown, is used to perform a web search using the paraphrased request as keywords, and the search results are generated and output (step304). Effects of the First Embodiment As described above, according to the first embodiment, a request input by the user is paraphrased, using neural paraphrasing model94, to a request having a higher probability of getting an answer from question-answering device122, and input to question-answering device122. Therefore, even when a user input includes a complex sentence, a colloquial expression or unnecessary information, the probability that question-answering device122outputs a right answer becomes higher.
Further, by well adjusting the training data for neural paraphrasing model94, it becomes more likely that question-answering device122provides an answer including such information that the user potentially desires, though not necessarily in a conscious way. The pre-paraphrasing request is classified by classification model98and used as a feature input to neural paraphrasing model94. Therefore, it is possible to paraphrase the user input56to a request in a right format in accordance with the type of the request and having a higher probability of getting an answer from question-answering device122. The probability of obtaining from question-answering device122a right answer to user input56in accordance with the type of the question becomes higher. Needless to say, such a classification is not necessarily used as a feature. Second Embodiment <Configuration> «Overall Configuration» The first embodiment described above relates to a question-answering system. Therefore, there is no problem in processing on the assumption that the input sentence is a request. In a more general dialogue system, however, an input may or may not be a request. It is generally unpredictable what type of input will be received. In such a situation, unconditional paraphrasing using neural paraphrasing model94as in the first embodiment may not be reasonable. It is necessary to apply the neural paraphrasing model only when the input is a request. In the second embodiment, this is determined by a determination model, which is implemented by a convolutional neural network, an example of a deep neural network, as is the case with classification model98used in the first embodiment, and only if the determination is positive (the input is some request), the input sentence is paraphrased by using the neural paraphrasing model and applied to the question-answering system. Referring toFIG.7, this system includes: a training data adding device320used by an operator for manually creating and adding training data items of a request determining model326; a training data storage unit322for storing training data items created by using training data adding device320; a request determining model training device324for training a request determining model326using the training data comprised of the training data items stored in training data storage unit322; and a dialogue system330that uses the request determining model326trained by request determining model training device324to output a right response as a response332both when a user input328is a request and when it is not. When the training data items are formed, sentences having request-like patterns may be extracted from web sites on the Internet and whether or not these sentences are truly requests may be manually determined. «Request Determining Model326» Request determining model326has a configuration similar to that of classification model98shown inFIG.3. It is noted that request determining model326and classification model98are different in that request determining model326provides two outputs indicating a probability of being a request and a probability of not being a request, respectively. Though request determining model326and classification model98have different numbers of layers, different numbers of feature maps and different arrangements thereof, basically, their configurations are the same. Further, the method of training is also the same except that different training data are used. Therefore, detailed description thereof will not be repeated here.
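Since the determination model shares the convolutional structure of classification model98, it can be sketched by reusing the illustrative RequestClassifier from the earlier sketch with two outputs, together with the routing it drives; the callables stand in for the question-answering system and the separate responding system described below.

```python
determiner = RequestClassifier(n_classes=2)   # (P(request), P(not a request))

def respond(word_vectors, qa_system, separate_responder):
    p_request, p_other = determiner(word_vectors)[0]
    if p_request >= p_other:                  # determination is positive
        return qa_system(word_vectors)        # paraphrase, then question answering
    return separate_responder(word_vectors)   # non-request handling
```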
The training data item used for training request determining model326consists of a text corresponding to a pre-paraphrasing first request, and a label indicating whether or not the text is a request. At the time of training, a word vector sequence as an object of request determination is given to an input of request determining model326, an output of request determining model326(probabilities of it being a request and not being a request) is compared with the label of the text (if it is a request, (1, 0), if not, (0, 1)), and a difference is calculated. By common error back propagation, various weights and bias values forming request determining model326are adjusted to make the error function value smaller. «Dialogue System330» Referring toFIG.7, dialogue system330includes: request determining model326trained by request determining model training device324; a request determining unit340configured to receive a user input328, determine whether or not user input328is a request using request determining model326and branch the user input328to either one of two outputs depending on the result of determination; a question-answering system342configured to receive, when the determination by request determining model326indicates a request, a word vector sequence from request determining unit340and to provide an answer to a question represented by the word vector sequence; a separate responding system344different from the question-answering system, configured to receive the word vector sequence from request determining unit340and to provide an appropriate response, when the user input328is determined by request determining model326not to be a request; and a selecting unit346for selecting either the answer output from question-answering system342or the response output from separate responding system344, whichever is appropriate in accordance with the result of determination by request determining unit340, as a response332. «Request Determining Unit340» Request determining unit340performs, as pre-processing, a morphological analysis on user input328and converts each word to a word vector, so that a word vector sequence is generated. Request determining unit340applies the word vector sequence to request determining model326and obtains an output of request determining model326. If the output of request determining model326is true (user input328is a request), request determining unit340applies the word vector sequence to question-answering system342. Otherwise, request determining unit340applies the word vector sequence to the separate responding system344. «Question-Answering System342» Question-answering system342includes: a request paraphrasing system350having a configuration similar to that of request paraphrasing system120in accordance with the first embodiment; and a question-answering device352configured to output, as a response, an answer to a request paraphrased by request paraphrasing system350, to selecting unit346. «Request Determining Model Training Device324» Referring toFIG.7, request determining model training device324is for training request determining model326using the training data stored in training data storage unit322. Each of the training data items stored in training data storage unit322includes a pair of a natural language sentence and a label indicating whether or not the sentence is a request. The training data items are mainly prepared manually. In the present embodiment, this function is realized by the cooperation of the computer hardware and a computer program.
The structure of this computer program is the same as those shown inFIGS.4and5. «Program Structure» FIG.8shows a control structure of a program causing a computer to function as dialogue system330shown inFIG.7. Referring toFIG.8, the program includes: a step360of doing a morphological analysis on user input328to convert it to a morpheme sequence; a step362of converting each word of the morpheme sequence obtained at step360to a word vector and thereby outputting a word vector sequence; a step364of applying the word vector sequence obtained at step362to request determining model326to obtain an output of request determining model326; and a step366of determining, based on the output of request determining model326at step364, whether or not the user input328is a request and branching the control flow depending on the result of determination. The program further includes: a step370, executed when the determination at step366is positive, of inputting the word vector sequence obtained at step362to request paraphrasing system350; a step372of inputting the word vector sequence output from request paraphrasing system350as a result of the process at step370, to question-answering device352; and a step374, responsive to the process of step372, of determining whether or not an answer is provided from question-answering device352and branching the control flow depending on the result of determination. The program further includes: a step376, executed when the determination at step374is positive, of selecting the answer of question-answering device352, outputting it as a response332and ending execution of the program; a step378, executed when the determination at step374is negative, of searching the web using the user input328as an input, outputting the search results and ending execution of the program; a step380, executed when the determination at step366is negative, of giving the word vector sequence as an input not to the question-answering system342but to the separate responding system344; and a step382of selecting and outputting the response output from separate responding system344as a result of step380, and ending execution of this program. <Operation> Assuming that the training of the request paraphrasing system350has already been completed, the second embodiment has two operation phases. The first is a training phase of training request determining model326by request determining model training device324, and the second is an interactive response phase of dialogue system330using the trained request determining model326. «Training Phase» Referring toFIG.7, in the training phase, request determining model training device324operates as follows. In training data storage unit322, the training data prepared manually in advance is stored. Request determining model326is prepared in a prescribed initial state. Specifically, a program for realizing the convolutional network is loaded to the memory, and an area for storing the parameters defining the function represented by the convolutional network is secured in the memory and initialized. Request determining model training device324trains request determining model326using the training data stored in training data storage unit322(corresponding to step260ofFIG.5). The scheme of training thereafter is the same as the training of classification model98shown inFIG.5. Therefore, detailed description thereof will not be repeated here. «Interactive Response Phase» In the interactive response phase, dialogue system330operates as follows.
Referring toFIG.7, request determining unit340of dialogue system330receives a user input328and converts it to a word vector sequence (steps360and362ofFIG.8). Request determining unit340inputs this word vector sequence to request determining model326for the purpose of request determination, and determines whether or not the user input328is a request (steps364and366). Request determining unit340sorts the user input328into either one of two outputs. Specifically, if the determination by request determining model326is positive (YES at step366), request determining unit340applies the word vector sequence to question-answering system342(step370). Request paraphrasing system350of question-answering system342paraphrases the word vector sequence applied from request determining unit340to a request having a higher probability of getting an answer from question-answering device352, and inputs the paraphrased request to question-answering device352(step372). Question-answering device352tries to generate an answer to the request. If there is an answer (YES at step374), question-answering device352generates the answer and outputs it to selecting unit346(step376). If there is no answer, question-answering device352performs a web search using the paraphrased request as keywords on a web search engine and outputs the search results (step378). By contrast, if the determination by request determining unit340is negative (NO at step366), request determining unit340applies the word vector sequence to separate responding system344(step380). Separate responding system344generates a response to the word vector sequence and outputs it to selecting unit346(step382). If the determination by request determining unit340is positive, selecting unit346selects the output of question-answering device352as response332; otherwise, it selects the output of separate responding system344. Effects of the Second Embodiment According to the second embodiment, not only in the question-answering system but also in the general dialogue system, requests and non-requests are sorted, and only those appropriate as requests to a question-answering system are provided as inputs to the question-answering system. Therefore, an answer appropriate for a dialogue can be generated. Further, as in the first embodiment, before being input to the question-answering system, a request is paraphrased so as to have a higher probability of obtaining an answer from the question-answering system than before paraphrasing. As a result, for a request included in a dialogue, as in the first embodiment, even if the user input includes a complex sentence or unnecessary information, the probability that an appropriate answer is output from the dialogue system can be improved. Further, by adjusting the training data for the neural paraphrasing model, it becomes more likely that the question-answering system provides information that the user potentially desires, though not necessarily in a conscious way.
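The interactive response flow just described (steps360through382ofFIG.8) can be condensed into the following Python-style sketch. Every helper here is a hypothetical stub standing in for the corresponding component of dialogue system330; the stubs exist only to make the control flow concrete and runnable.

# Hypothetical stubs for the components of dialogue system 330.
morphological_analysis = lambda text: text.split()
to_word_vectors = lambda morphemes: morphemes                  # placeholder
is_request = lambda vecs: bool(vecs) and vecs[0].lower() in (
    "what", "who", "when", "where", "how", "tell")             # model 326 stub
request_paraphrasing_system = lambda vecs: " ".join(vecs)      # system 350 stub
question_answering_device = lambda request: None               # device 352 stub
web_search = lambda text: "search results for: " + text
separate_responding_system = lambda vecs: "I see."             # system 344 stub

def respond(user_input):
    morphemes = morphological_analysis(user_input)             # step 360
    word_vectors = to_word_vectors(morphemes)                  # step 362
    if is_request(word_vectors):                               # steps 364-366
        paraphrased = request_paraphrasing_system(word_vectors)   # step 370
        answer = question_answering_device(paraphrased)           # step 372
        if answer is not None:                                    # step 374
            return answer                                         # step 376
        return web_search(user_input)                             # step 378
    return separate_responding_system(word_vectors)            # steps 380-382

print(respond("What is the capital of Japan"))   # falls through to web search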
Third Embodiment <Configuration> In the first embodiment, the training data items to be stored in training data storage unit50are generated by (1) first training data generating unit64manually generating a pre-paraphrase sentence and a paraphrased request; (2) second training data generating unit66manually adding a paraphrased request expressing substantially the same information to a request to which question-answering device122could not give an answer, read from question-answering system log storage unit54storing such requests; and (3) third training data generating unit68manually adding a paraphrased request to a request candidate stored in a request candidate storage unit52. Using the training data prepared in this manner, neural paraphrasing model94is trained without changing or adding to the training data themselves. The present invention, however, is not limited to such embodiments. Neural paraphrasing model94may be trained while adding training data to the above, as follows. FIG.9is a block diagram showing a request paraphrasing model training system400in accordance with a third embodiment. Referring toFIG.9, request paraphrasing model training system400includes a training data storage unit50, and a training unit62for training a request paraphrasing system412using training data storage unit50. In the present embodiment, request paraphrasing system412has a function of outputting not only one paraphrased request but the N best paraphrased request candidates. The N bests can be generated by combining, for each word forming word sequence156shown inFIG.2, not only the candidates having the highest scores (probabilities) but also word candidates having the second or third highest scores. Request paraphrasing model training system400further includes: a paraphrasing candidate storage unit414for storing, as paraphrasing candidates, the N bests obtained by inputting a request410to request paraphrasing system412; question-answering device122, which is the same as that described in the first embodiment, configured to receive each of the paraphrasing candidates stored in paraphrasing candidate storage unit414and to generate and output answers; an answer storage unit416for storing answers output from question-answering device122; an answer evaluating unit418configured to evaluate each of the answers stored in answer storage unit416by some means (for example, manually) and to calculate a score; and a training data generating unit420configured to combine, with request410, the paraphrasing candidate whose answer obtained a score equal to or higher than a prescribed threshold at answer evaluating unit418, to generate a training data item having that paraphrasing candidate as the second request and request410as the first request, and to add it to training data storage unit50to be stored. «Program Structure» FIG.10shows a control structure of a computer program realizing, in cooperation with a computer, the process of adding the training data in request paraphrasing model training system400shown inFIG.9.FIG.10shows a control flow for adding training data by using request paraphrasing system412whose training has once been completed.
Referring toFIG.10, the program includes: a step456of giving one or more requests (request410shown inFIG.9) to request paraphrasing system412; a step458of obtaining the N best paraphrased requests output from request paraphrasing system412for each request410and storing them in paraphrasing candidate storage unit414; and a step460of applying each of the N bests obtained for each request410at step458to question-answering device122, obtaining an output (an answer to the request) from question-answering device122for each, and saving them in answer storage unit416. The program further includes: a step462of evaluating, for example manually, the quality of each answer stored in answer storage unit416as an answer to request410; and a step464of repeating the following process466for each answer determined at step462to have a quality equal to or higher than a certain threshold, and ending execution of this program. The process466includes: a step480of generating a new training data item by combining a request as a source of the answer (request410ofFIG.9) as the first request, an output of request paraphrasing system412for the request as the second request and a classification code determined by request paraphrasing system412for the first request; and a step482of adding the training data item generated at step480to the training data in training data storage unit50. <Operation> The request paraphrasing model training system400in accordance with the third embodiment operates as follows. Initial training data is manually prepared and stored in training data storage unit50. Using the training data, training unit62trains request paraphrasing system412. By some means, for example manually, one or more requests410are prepared and each of them is input to request paraphrasing system412(step456ofFIG.10). From request paraphrasing system412, the N best paraphrasing candidates for each request are output and saved in paraphrasing candidate storage unit414(step458). Each of the N best request paraphrasing candidates is input to question-answering device122(step460). As a result, answers are obtained from question-answering device122and saved in answer storage unit416. Using answer evaluating unit418, the quality of the answer is evaluated manually for each combination of an answer and the request410as its source (step462). For each of the answers evaluated to be of a high quality, a new training data item is generated by combining, as one set, the source request410as the first request, the paraphrased request output from request paraphrasing system412for request410as the second request and the classification code based on the result of classification done in request paraphrasing system412for the first request (step480). This training data item is added to the training data stored in training data storage unit50(step482). By executing such a process, the new training data items are added to training data storage unit50. By training request paraphrasing system412using the training data with added data items, the accuracy of paraphrasing by request paraphrasing system412is expected to improve.
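Under the caveat that every helper name below is a placeholder (the embodiment performs the evaluation of step462manually, and fixes neither an N-best size nor a score threshold), the training-data addition loop ofFIG.10can be sketched in Python as:

def add_training_data(requests, training_data, paraphrase_n_best,
                      question_answering_device, evaluate_answer,
                      classification_code, threshold=0.8):
    # The callables are injected so the sketch stays self-contained:
    # paraphrase_n_best(request) -> list of paraphrasing candidates,
    # question_answering_device(candidate) -> answer,
    # evaluate_answer(request, answer) -> quality score,
    # classification_code(request) -> code for the first request.
    for request in requests:                                   # step 456
        for candidate in paraphrase_n_best(request):           # step 458
            answer = question_answering_device(candidate)      # step 460
            if evaluate_answer(request, answer) >= threshold:  # steps 462-464
                # Step 480: first request, second request, classification code.
                item = (request, candidate, classification_code(request))
                training_data.append(item)                     # step 482
    return training_data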
[Computer Implementation] Training data storage unit50, request candidate storage unit52, question-answering system log storage unit54, question-answering system58, training unit62, the first training data generating unit64, the second training data generating unit66, the third training data generating unit68, neural paraphrasing model94, classification model98, training data adding device320, training data storage unit322, request determining model training device324, request determining model326, dialogue system330, request paraphrasing model training system400and so on can each be realized by computer hardware and a computer program or programs executed by a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit) on the hardware.FIGS.11and12show computer hardware realizing the devices and systems mentioned above. A GPU is generally used for image processing, and a technique utilizing the GPU for general computing processes is referred to as GPGPU (General-purpose computing on graphics processing unit). A GPU is capable of executing a plurality of operations of the same type simultaneously in parallel. Given these properties, when a neural network is trained or tested, the calculations of weights for each node are simple product-sum operations that can often be executed simultaneously. Therefore, GPGPU is well suited to neural paraphrasing model94and classification model98shown inFIG.2, request determining model326shown inFIG.7and request paraphrasing system412shown inFIG.9. Referring toFIG.11, computer system530includes a computer540having a memory port552and a DVD (Digital Versatile Disc) drive550, a keyboard546, a mouse548and a monitor542. Referring toFIG.12, in addition to memory port552and DVD drive550, computer540includes a CPU556and GPU557, a bus566connected to CPU556, GPU557, memory port552and DVD drive550, a read-only memory (ROM)558for storing a boot program and the like, a random access memory (RAM)560, which is a computer readable storage, connected to bus566and storing program instructions, a system program and work data, and a hard disk554. Computer540further includes a network interface (I/F)544providing a connection to a network568, enabling communication with other terminals, and a speech I/F570for speech signal input from/output to the outside, all connected to bus566. The program causing computer system530to function as various functional units of the devices and systems of the embodiments above is stored in a DVD562or a removable memory564, both of which are computer readable storage media, loaded to DVD drive550or memory port552, and transferred to hard disk554. Alternatively, the program may be transmitted to computer540through network568and stored in hard disk554. The program is loaded to RAM560at the time of execution. The program may be directly loaded to RAM560from DVD562, removable memory564, or through network568. The data necessary for the process described above may be stored at a prescribed address of hard disk554, RAM560, or a register in CPU556or GPU557, processed by CPU556or GPU557, and stored at an address designated by the program. Parameters of neural paraphrasing model94, classification model98, request determining model326and request paraphrasing system412whose trainings are eventually completed may be stored, for example, in hard disk554, or stored in DVD562or removable memory564through DVD drive550and memory port552, respectively.
Alternatively, these may be transmitted through network I/F544to another computer or a storage device connected to network568. The program includes an instruction sequence of a plurality of instructions causing computer540to function as various devices and systems in accordance with the embodiments above. The numerical calculation processes in the various devices and systems described above are performed using CPU556and GPU557. Though the processing is possible using CPU556alone, GPU557realizes higher speed. Some of the basic functions necessary to cause the computer540to realize this operation are provided by the operating system running on computer540, by a third party program, or by various dynamically linkable programming tool kits or program libraries installed in computer540. Therefore, the program itself may not necessarily include all of the functions necessary to realize the devices and method of the present embodiments. The program need only include instructions that realize the functions of the above-described systems or devices by dynamically calling appropriate functions or appropriate program tools in a program tool kit or program library in a manner controlled to attain the desired results. Naturally, all the necessary functions may be provided by the program alone. Effects of the Embodiments The above-described embodiments expand the breadth of the acceptable inputs that can be addressed by existing question-answering systems or dialogue systems. Natural language inputs to the systems may be in various styles, including those comprised of only fragmentary keywords commonly used as inputs to search engines, and those with colloquial expressions used in chatting. By using the request paraphrasing system in accordance with the embodiments above as pre-processing for the question-answering systems and the dialogue systems, it becomes possible to absorb such differences in styles. As a result, the request paraphrasing system described above can be used directly without necessitating any change to existing systems. Since it is unnecessary to present the results of paraphrasing to the user, the user is unaware of the request paraphrasing system. The embodiments above do not limit input domains and accept natural language inputs of various styles including colloquial expressions. Therefore, it is particularly effective to use the request paraphrasing system and the request determining system in accordance with the embodiments above for daily-use dialogue systems, such as a dialogue system for common households and an in-vehicle dialogue system. Further, the power of the embodiments will be best exhibited when connected to a system that provides appropriate information and operates in cooperation with so-called IoT devices and other software or knowledge databases, rather than to a simple chatting system. Neural paraphrasing model94used in the embodiments above has a configuration similar to that of a neural machine translation model. The reason for this is that the lengths of the input and output sentences are not fixed. The neural paraphrasing model94, however, is not limited to such a model. Any machine learning model may be used provided that it accepts input and output sentences of unfixed length. Further, the convolutional neural network is used for classification model98of the first embodiment and for request determining model326of the second embodiment. The present invention, however, is not limited to such embodiments.
A model that is trained through machine learning to determine whether or not an input sentence is a request, for example an SVM (Support Vector Machine), may be used. Other than the above, any currently available model, or any model that will become available in the future, that can serve as the neural paraphrasing model, the classification model or the request determining model of the present invention may be used. The embodiments as have been described here are mere examples and should not be interpreted as restrictive. The scope of the present invention is determined by each of the claims with appropriate consideration of the written description of the embodiments and embraces modifications within the meaning of, and equivalent to, the languages in the claims. INDUSTRIAL APPLICABILITY The present invention is applicable to a question-answering system using a computer, which is a complicated system including combinations of questions and possible answers, allowing a user to navigate the question-answering system effectively. REFERENCE SIGNS LIST
50, 322 training data storage unit
52 request candidate storage unit
54 question-answering system log storage unit
56, 328 user input
58, 342 question-answering system
60 answer
62 training unit
64 first training data generating unit
66 second training data generating unit
68 third training data generating unit
92 paraphrasing model training unit
94 neural paraphrasing model
96 classification model training unit
98 classification model
120, 350, 412 request paraphrasing system
122, 352 question-answering device
130 pre-processing unit
140 word vector sequence
144 encoder
146 forward GRU sequence
148 backward GRU sequence
150 combining unit
152 intermediate node vector
154 decoder
156 word sequence
158 decoder input
160 attention layer
162 coefficient calculating unit
164 context vector generating unit
166 hidden state vector
168 context vector
170 pair
180 convolutional neural network
182 final layer
184 Softmax layer
190 input layer
192 convolutional layer
194 pooling layer
200 N gram
210 maximum element
220, 222 element
240, 244, 246, 250, 252, 254, 260, 264, 266, 270, 272, 274, 290, 292, 294, 296, 298, 300, 302, 304, 360, 362, 364, 366, 370, 372, 374, 376, 378, 380, 382, 450, 452, 454, 456, 458, 460, 462, 464, 470, 480, 482 step
242, 262 training process
320 training data adding device
324 request determining model training device
326 request determining model
330 dialogue system
332 response
340 request determining unit
344 separate responding system
346 selecting unit
400 request paraphrasing model training system
410 request
414 paraphrasing candidate storage unit
416 answer storage unit
418 answer evaluating unit
420 training data generating unit
466 process
| 62,618 |
11861308 | To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the drawings. It is contemplated that elements and features of one embodiment may be beneficially incorporated in other embodiments without further recitation. DETAILED DESCRIPTION Graph structures generally represent relationships between data and operations as connections between nodes in a graph, where the nodes represent data provided by a user of an application and/or operations performed by an application. These graph structures may be established as directed graphs, in which nodes representing inputs to a given node are connected to the given node using directional constructs, such as unidirectional arrows or connections that point from a source node to the given node. Because graphs can be used to define input and output relationships for a function in an application, graphs may be a useful mechanism by which an application can be defined. In some cases, an application may be defined using a knowledge graph structure. In a knowledge graph structure, nodes in the knowledge graph may encode various rules for performing an operation. For example, a node may include rules that define required and optional inputs for a function and specify the output generated based on the required and optional inputs. Further, in the knowledge graph, nodes may be connected in a continuous path from a root node, which may represent the entry point into an operation or a series of related operations in an application, to a terminating node representing the final actions to be performed for and the results generated from executing the operation or series of related operations. For example, in an accounting application, a knowledge graph may define an operation for tracking accounts payable as a series of connected nodes encoding rules that, when executed, results in a summation of amounts in unpaid invoices received during a given time period. In another example, in a time tracking application, a knowledge graph may define an operation for tracking overtime for any given week as a series of connected nodes encoding rules that, when executed, results in a summation of hours worked for each day of a week, less an amount of time expected to be worked during that week. Because knowledge graphs describe operations in terms of inputs and rules applied to those inputs (and any intermediate calculations) to generate a result, knowledge graphs may be used in various applications to allow users to request the result of an operation, given some set of inputs. Conversational user interfaces allow users to pose questions against a knowledge graph using natural language inputs. A conversational agent in a conversational user interface may use a natural language understanding model to answer received natural language questions. The natural language understanding model may be trained using a training data set of utterances to map words in various natural language utterances to various nodes in the knowledge graph. For example, the natural language understanding model can be trained using a corpus of questions commonly posed by users of a software application and information about the content associated with nodes in a knowledge graph to map various keywords in natural language utterances to the appropriate nodes in the knowledge graph. 
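As a minimal illustration of the structure described above, a knowledge graph can be modeled as named nodes that either hold user-provided data or encode a rule over input nodes. The Python sketch below is a simplification under stated assumptions (the nodes described here also distinguish required and optional inputs, which is omitted); it evaluates the overtime example by recursively resolving inputs.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Node:
    name: str
    inputs: List[str] = field(default_factory=list)   # names of input nodes
    rule: Optional[Callable[..., float]] = None       # operation on inputs
    value: Optional[float] = None                     # leaf data, if any

def evaluate(graph: Dict[str, Node], name: str) -> float:
    node = graph[name]
    if node.rule is None:
        return node.value                 # user-provided leaf data
    args = [evaluate(graph, n) for n in node.inputs]
    return node.rule(*args)               # apply the node's encoded rule

# Hypothetical overtime example: hours worked in a week, less the
# amount of time expected to be worked during that week.
graph = {
    "hours_worked": Node("hours_worked", value=45.0),
    "expected_hours": Node("expected_hours", value=40.0),
    "overtime": Node("overtime", ["hours_worked", "expected_hours"],
                     rule=lambda worked, expected: worked - expected),
}
print(evaluate(graph, "overtime"))        # 5.0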
Some questions received from a user of a software application may not be mapped by a natural language understanding model to specific nodes in the knowledge graph. These questions may be, for example, questions that are asked infrequently ("long-tail" queries), questions that involve performing operations using data from multiple nodes in the knowledge graph, and the like. When a conversational agent encounters such a question and determines that the natural language understanding model is unable to handle it (e.g., provide an answer), the agent may revert to other techniques for providing information to a user. For example, a conversational agent can use keyword searches to search a help repository and identify articles in the help repository that are potentially relevant to a received question. However, generating lists of potentially relevant articles by searching a help repository may not provide a satisfactory answer to a user's query. Aspects of the present disclosure provide techniques for processing "long-tail" queries by extracting operators and operands from a received natural language utterance and generating a result of such queries using the extracted operators and operands. By extracting operators and operands from a received natural language utterance using machine learning models trained to identify words in the received natural language utterance that are mapped to operations and nodes in the knowledge graph, respectively, various operations may be performed against data in the knowledge graph based on the extracted operators and operands. These machine learning models may be used to augment machine learning models trained to match an intent of a query to nodes in a knowledge graph in order to generate usable answers to received queries instead of suggested articles or other information that may be tangentially related to a received query or otherwise not provide a satisfactory answer to the received query. Beneficially, then, queries that are frequently encountered, and for which a machine learning model is able to identify a matching intent, may be answered using mappings learned between intents and nodes in a knowledge graph. Further, for "long-tail" queries that are infrequently encountered, an answer can be generated based on nodes in the knowledge graph rather than a list of suggested articles or other information that may be generated by a machine learning model trained to answer queries based on mappings learned between intents and nodes in a knowledge graph. Nodes need not be defined in the knowledge graph specifically to satisfy "long-tail" queries. Instead, such a query can be satisfied by applying the operations associated with the operators extracted from the utterance by a machine learning model trained to match words with operations to the data extracted from the knowledge graph by a machine learning model trained to match words with the nodes containing the relevant data. By processing queries in natural language utterances using extracted operands and operators, conversational user interfaces may be made highly scalable.
For example, in intent-based systems, nodes may need to be manually defined (e.g., in a 1-to-1 mapping of intent to knowledge graph node) for each specific operation or question that may be posed through a conversational user interface, which is not scalable, as different questions may be posed by different users to a conversational user interface and an exhaustive list of these questions (and the appropriate mappings to nodes in the knowledge graph) may be difficult to maintain. With 1-to-1 mappings of intents or utterances to specific operations over specific inputs, the number of combinatorial mappings may be impracticably large, making answering questions in a conversational user interface an intractable problem in practice. Further, questions of varying degrees of complexity may be answered with a result derived from user-provided data in a knowledge graph rather than a list of articles that may not be relevant to the question posed by a user and may not use data already provided by the user to the knowledge graph. Example Mapping of Natural Language Utterances to Operators and Operands in a Knowledge Graph for Answering a Query FIG.1illustrates an example computing environment100in which a conversational user interface in a query processor extracts operators and operands from a natural language utterance to answer a query. As illustrated, computing environment100includes a query processor110, a knowledge graph repository120, and an operation mapping repository130. Query processor110generally exposes a conversational user interface through which a query including a natural language utterance is received for processing. The natural language utterance may be received as a text string or as an audio file from which a text string can be extracted. The natural language utterance may be received, for example, from an application executing on a client device (e.g., a desktop computer, laptop computer, smartphone, tablet computer, etc.) or from an application instance executing on a server or cluster of servers. Generally, the natural language utterance includes a query about an operation performed by an application associated with query processor110or data generated by the application. To satisfy the query, query processor110can initially process the natural language utterance through an intent-based query resolver112. Intent-based query resolver112generally uses a natural language understanding model trained against a knowledge graph (e.g., a knowledge graph stored in knowledge graph repository120) to determine whether the query can be satisfied by retrieving data from a node in the knowledge graph. To do so, intent-based query resolver112can extract information from the natural language utterance, such as an intent of the natural language utterance, and attempt to match the extracted information to one or more nodes in the knowledge graph. The intent of the natural language utterance may be extracted based on a machine learning model that has been trained to recognize mappings between the meaning of a natural language utterance (e.g., the specific actions that a user has requested to be performed and/or the specific data that the user has requested to be retrieved) and specific nodes in a knowledge graph representing data that has been input into the knowledge graph or the results of calculations performed based on the data input into the knowledge graph. 
This machine learning model may be a natural language understanding model trained to extract information from a natural language utterance and match the extracted information to nodes in the knowledge graph. For example, to extract an intent, the machine learning model used by intent-based query resolver112can use techniques such as part-of-speech tagging to identify entities in the natural language utterance and match the extracted entities to entities specified as answerable by nodes in the knowledge graph. The machine learning model can identify these entities based, for example, on word matching techniques, generating embedding values for each word relative to the names of nodes in a knowledge graph, or other word-recognition/comparison techniques. If intent-based query resolver112identifies a match between an extracted intent from the natural language utterance and a node in the knowledge graph, intent-based query resolver112can return a value associated with the matching node as a response to the natural language utterance. In some cases, intent-based query resolver112can identify a match between an extracted intent and a node in the knowledge graph based on a match score. If the match score exceeds a threshold value, intent-based query resolver112can determine that a match exists between an intent of the natural language utterance and a node in the knowledge graph. Otherwise, intent-based query resolver112can determine that the received natural language utterance is a “long-tail” query and proceed to process the query based on extracting operators and operands from the natural language utterance. Generally, operands included in a natural language utterance may correspond to data nodes in a knowledge graph on which an operation is to be performed, and operators included in a natural language utterance may correspond to a function to be performed on the data from the data nodes in the knowledge graph associated with the operands. In some cases, intent-based query resolver112can determine that the received natural language utterance is a “long-tail” query based on an examination of a response database associated with a chatbot application from which the natural language utterance was received. If the database associated with the chatbot application does not include a response for the long-tail query, intent-based query resolver112can proceed to process the query based on extracting operands and operators from the natural language utterance using operand extractor114and operator extractor116, respectively. In some embodiments, query processor110can receive the natural language utterance from a chatbot application. If query processor110determines that a response database associated with the chatbot application does not include a response to a query included in the natural language utterance, query processor110can proceed to process the natural language response by extracting operands and operators from the natural language utterance. Query processor110can determine that the response database associated with the chatbot application does not include a response to the query included in the natural language utterance implicitly (e.g. based on receiving the natural language utterance from the chatbot application for processing) or by querying a database associated with the chatbot application. 
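The dispatch logic described above (answer from a matching node when the match score clears a threshold, otherwise fall back to operator/operand extraction) might be sketched as follows in Python. The nlu object, its method names and the 0.75 threshold are all assumptions introduced for illustration, not details given by the source.

def resolve(utterance, knowledge_graph, nlu, threshold=0.75):
    # Intent-based resolution: match the extracted intent to a node.
    node, score = nlu.best_match(utterance, knowledge_graph)
    if score >= threshold:
        return node.value        # frequently asked: answer from the node
    # Otherwise treat the query as "long-tail": extract operand values
    # and an operation, and compute the answer from knowledge graph data.
    operand_values = nlu.extract_operand_values(utterance, knowledge_graph)
    operation = nlu.extract_operator(utterance)
    return operation(*operand_values)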
In some embodiments, intent-based query resolver112may use the same models and databases used by a chatbot application to resolve a natural language utterance, and a determination by intent-based query resolver112that the query in the received natural language utterance is a "long-tail" query may serve as a determination that no response to the query exists in the response database associated with the chatbot application. To extract operands from a natural language utterance, operand extractor114can generate a similarity score for words or groupings of words to names of nodes in a knowledge graph. Nodes in the knowledge graph may generally represent values input into a knowledge graph or values calculated from data input into the knowledge graph. Operand extractor114can tokenize the received natural language utterance (or a text representation of the received natural language utterance, if received as an audio file) into a plurality of tokens for analysis. To determine whether a token (corresponding to a word or group of words in the received natural language utterance) identifies an operand against which an operation is to be performed, operand extractor114can generate embedding values for each token relative to the names of each node in the knowledge graph. The embedding values may be generated by operand extractor114as a vector having a length corresponding to the number of nodes in the knowledge graph, with each element in the vector corresponding to a specific node in the knowledge graph. The embedding values may be calculated, for example, as cosine similarity values between a word in the natural language utterance and nodes in the knowledge graph, frequency-based scores between the word in the natural language utterance and nodes in the knowledge graph, match probability scores, or the like. A natural language model used by operand extractor114may be trained using only information from the knowledge graph, such as the names of nodes in the knowledge graph or the like. The natural language model may be trained using, for example, word distance models, word embedding models, or other natural language models that can predict the likelihood that a word matches or otherwise references a specific node in the knowledge graph. Words that are not likely to correspond to nodes in the knowledge graph may have embedding values closer to a defined minimum value in a range of embedding values, while words that are likely to correspond to nodes in the knowledge graph may have embedding values closer to a defined maximum value in the range of embedding values. To determine whether a token corresponding to a word or series of words is likely to correspond to a node in the knowledge graph (and thus, to an operand on which an operation is to be performed), operand extractor114can determine whether an embedding value for a token and the name of a node in the knowledge graph exceeds a threshold value. If none of the embedding values in the vector generated for a token by operand extractor114exceed a threshold value, operand extractor114can determine that the word in the natural language utterance represented by the token does not represent an operand on which an operation is to be performed. If at least one embedding value in the vector generated for a token exceeds the threshold value, operand extractor114can determine that the word in the natural language utterance represented by the token corresponds to an operand on which an operation is to be performed.
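A minimal sketch of this per-token thresholding, assuming an injected similarity function and an illustrative threshold of 0.7 (the source fixes neither):

import numpy as np

def extract_operand_tokens(tokens, node_names, similarity, threshold=0.7):
    # similarity(token, node_name) -> score in [0, 1]; cosine similarity
    # over word embeddings is one option named in the text.
    operands = []
    for token in tokens:
        scores = np.array([similarity(token, name) for name in node_names])
        best = int(scores.argmax())
        if scores[best] >= threshold:   # at least one value over threshold
            operands.append((token, node_names[best]))
    return operands

# Toy usage with an exact word-overlap "similarity".
names = ["income", "wage income", "investment income"]
overlap = lambda t, n: 1.0 if t in n.split() else 0.0
print(extract_operand_tokens(["wage", "is"], names, overlap))
# [('wage', 'wage income')]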
Operand extractor114can extract a value associated with the node having an embedding value exceeding a threshold value, associate the word or series of words in the natural language utterance represented by the token with the extracted value, and identify the word-value pairing as an operand against which an operation is to be performed. If there are multiple nodes with embedding values that exceed a threshold value, operand extractor114can perform further processing to identify the node in the knowledge graph that is most likely to be an operand on which an operation is to be performed. For example, if multiple nodes have embedding values that are above a threshold value and within a given amount of each other, operand extractor114can use other contextual information to identify a specific node in the knowledge graph to be deemed an operand. For example, operand extractor114can examine embedding values for different groups of consecutive words including the word for which the previous embedding value was generated (e.g., the word and an immediately preceding word, the word and an immediately succeeding word, and the like) using the techniques discussed above. In some embodiments, for nodes associated with numerical values, operand extractor114can determine that a word in the natural language utterance corresponds to multiple nodes in the knowledge graph by performing mathematical operations with respect to embedding values generated for a word in the natural language utterance. For example, if a mean (average) word embedding value for a token (e.g., a word or series of words) in the natural language utterance is close in value to a sum of embedding values for a plurality of other graph nodes, operand extractor114can determine that an operand corresponds to a sum of values associated with the plurality of other graph nodes. Operator extractor116can execute sequentially or in parallel with operand extractor114to identify one or more operations to be performed with respect to the extracted operands in the natural language utterance. To identify the one or more operations included in the natural language utterance, operator extractor116can use a predefined mapping of words to various mathematical operations (e.g., stored in operation mapping repository130) to identify operations to perform in response to a received natural language utterance. These mathematical operations may include addition operations (i.e., to add values associated with multiple nodes), subtraction operations (i.e., to determine a difference between values associated with multiple nodes), multiplication operations (i.e., to generate a product of values associated with multiple nodes), division operations (i.e., to determine a quotient from values associated with multiple nodes), comparison operations, and the like.
For example, the mappings of words to mathematical operations may be defined based on Table 1 below:

TABLE 1. Word to Operation Mappings
Addition: sum, total, addition, add, plus
Subtraction: difference, decrease, deduct, subtract
Multiplication: product, multiply
Division: divide, percent
Equality: equal, same, identical
Greater than: greater, exceeds, more than, larger than
Less than: fewer, less than, smaller than
Union of sets: together, group
Intersection of sets: common to, member of both
Set difference: member of only one, in A but not in B
Set complement: not included in, not a member of

In some embodiments, operator extractor116can use various natural language processing techniques, such as word embeddings, to determine whether a word in a natural language utterance corresponds to a predefined operation. For example, each operation (e.g., addition, subtraction, multiplication, division, comparison operations, etc.) may be associated with various words that indicate that the operation is to be performed on identified data in the natural language utterance. Each operation may thus be treated as a vector over which natural language processing operations can be performed to determine whether a word in the natural language utterance implicates the operation. For example, embeddings can be generated for a token (e.g., word or series of words) to determine whether a word in the natural language utterance indicates that an operation is to be performed. A word in the natural language utterance that does not correspond to any of the words mapped to operations may thus result in the generation of a vector having embedding values below a threshold value for each word associated with the operations. In contrast, a word in the natural language utterance that corresponds to one or more of the words mapped to an operation may result in the generation of a vector having at least one embedding value above the threshold value. For example, assume that a received natural language utterance takes the form of "What percent of X is Y." The word percent may imply a division operation on data specified in the natural language utterance. Thus, at least one embedding value for the division operation may be over the threshold value, indicating that a division operation is to be performed, while embedding values for words associated with the other operations may be below the threshold value. In some cases, after identifying operators in the natural language utterance, operand extractor114can determine relationships between the operands in the natural language utterance to determine how an operation is to be performed. These relationships may be identified for operations that are sensitive to the order in which operands are input into the operation. For example, these relationships need not be identified where an ordering is irrelevant, such as addition or multiplication operations where A+B=B+A or A*B=B*A. However, where ordering affects the value generated by an operation, operand extractor114can map different operands to specific positions in an equation. For example, in a subtraction operation, operand extractor114can identify a minuend (the value from which other values are to be subtracted) and one or more subtrahends (the values that are to be subtracted from the minuend) based on contextual information in the natural language utterance (e.g., an ordering in which operands are specified in the natural language utterance).
In another example, in a division operation, operand extractor114can identify the dividend and the divisor based on contextual information in the natural language utterance. Query result generator118generally uses the extracted operands and operators to generate a result of the natural language query input by a user into query processor110. To do so, query result generator118performs one or more calculations defined by the extracted operators on the values associated with the extracted operands. As discussed, the extracted operators may be associated with an ordering in which operands are to be acted upon or an identity within a given operation that each operand assumes. Query result generator118can thus perform the one or more operations according to the order in which operands are to be acted upon to generate a result of an operation. In some embodiments, where a natural language utterance includes multiple operations, query result generator118can determine whether the results of one operation are an operand to another operation. If so, query result generator118can order the execution of operations so that intermediate operations whose results are operands to a higher-level operation are performed before these higher-level operations are performed. After query result generator118performs the one or more operations specified in the natural language utterance, query result generator118can output the results of the one or more operations to an application for display to a user of the application. In some embodiments, query result generator118can output the results as a natural language response to a natural language utterance. The natural language response may, for example, include the requested information as well as the data associated with the extracted operands. In some embodiments, the natural language response may further include information about how the result was generated. A level of detail included in the natural language response may be modified, for example, using one or more verbosity flags in a configuration message or in a request transmitted to the query processor110. A verbosity flag specifying a lower level of verbosity in the natural language response may include the calculated results of the operation(s) specified in the natural language utterance, while a verbosity flag specifying a higher level of verbosity may include the calculated results of the operation(s) specified in the natural language utterance, the raw data from which the results were calculated, and an explanation of how the results were generated. Additional information in the natural language response may be extracted from a knowledge graph that identifies how the values of the operands were calculated or other relevant information. By extracting operators and operands from a natural language utterance, query processor110can generate an answer to rarely encountered questions for which intent-based query resolver112is unable to generate an answer. For any given calculation to be performed on data associated with nodes in the knowledge graph, the calculation can be performed based on the extracted operators and operands rather than relying on an operation defined in the knowledge graph. Thus, chatbots or other support agents may be made more scalable, as various operations not defined in the knowledge graph may be answered by identifying an operation to perform on specified data in the knowledge graph.
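One way to realize the word-to-operation mapping of Table 1 is a simple lookup from trigger words to callable operators, with order-sensitive operations receiving their operands in the order inferred from the utterance. The Python slice below is illustrative only, covering the arithmetic rows of Table 1 but not the comparison and set rows:

import operator

OPERATOR_WORDS = {
    "sum": operator.add, "total": operator.add, "add": operator.add,
    "plus": operator.add, "difference": operator.sub, "deduct": operator.sub,
    "subtract": operator.sub, "product": operator.mul, "multiply": operator.mul,
    "divide": operator.truediv, "percent": operator.truediv,
}

def extract_operator(tokens):
    # Return the first mapped operation found in the utterance, if any.
    for token in tokens:
        op = OPERATOR_WORDS.get(token.lower())
        if op is not None:
            return op
    return None

# For order-sensitive operations, the caller passes the operands in the
# inferred order (e.g., dividend first for "percent of X is Y" -> Y / X).
op = extract_operator("what percent of my income is wage income".split())
print(op(80_000, 100_000))    # 0.8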
Example User Interface for Generating a Response to a Query in a Natural Language Utterance by Extracting Operators and Operands from the Natural Language Utterance FIG.2illustrates an example user interface200in which a query posed through a conversational user interface is answered by extracting operators and operands from a natural language utterance. As illustrated, user interface200includes a first field for entering a natural language utterance202and a second field for displaying a result204calculated by extracting operators and operands from the natural language utterance202. While natural language utterance202is illustrated as a text string, it should be recognized that the first field for entering the natural language utterance202may alternatively or additionally allow for a natural language utterance to be input into user interface200as an audio file. Where the first field allows for entry of a natural language utterance as an audio file or live audio stream, the audio file or live audio stream may be transcribed into a textual string displayed in the first field. In this example, the natural language utterance202is "What percent of my income is wage income and what percent of my income is investment income?" To process natural language utterance202and generate the result204, a query processor (e.g., query processor110illustrated inFIG.1) can first attempt to map the natural language utterance to a node in a knowledge graph containing an answer to the query in the natural language utterance202. If no node exists in the knowledge graph that maps to an intent of the natural language utterance, then the query processor proceeds to attempt to answer the query by extracting operators and operands from the natural language utterance based on mappings between words and operations and words and nodes in the knowledge graph, as discussed above. Based on extracting operators and operands from the natural language utterance, a query processor can perform one or more operations on data associated with nodes in a knowledge graph (e.g., numerical data calculated based on rules in a knowledge graph or elsewhere defining relationships between nodes in the knowledge graph representing different pieces of data used by an application to perform one or more operations). These operations may be output in result204, which may include the calculated result and, in some cases, additional detail illustrating how the result was generated, information about the data used to generate result204, and the like. Example Identification of Operators and Operands in a Natural Language Utterance FIG.3illustrates an example identification of operators and operands in a natural language utterance (e.g., natural language utterance202illustrated inFIG.2) to generate an answer to a "long-tail" query. To identify operators and operands in natural language utterance202, a query processor (e.g., query processor110inFIG.1) can tokenize the natural language utterance202to isolate individual words or sequences of words in the natural language utterance202for analysis. For example, natural language utterance202can be initially deconstructed into tokens of individual words. To identify operators in the received natural language utterance, the query processor can generate embedding values for each token against predefined sets of words associated with different operations.
In this example, tokens202A and202D may be mapped to a division operation, as the word "percent" (or a permutation of the word "percent") may be included in a set of words associated with a division operation. That is, the word "percent" may result in the generation of embedding values that are below a threshold value for words associated with addition, subtraction, multiplication, and comparison operations, but may result in the generation of an embedding value above the threshold value for at least one word associated with a division operation. To identify operands on which the extracted operations (represented by tokens202A and202D) are to be executed, the query processor can generate a vector including embedding values for each token relative to the names of nodes in a knowledge graph300to identify nodes that are related to relevant data to be used in performing one or more operations in the natural language utterance. Knowledge graph300, as illustrated, defines nodes corresponding to inputs that can be used to calculate total income. In this example, the value of income node302represents a total income calculated as a sum of the value of wage income node304and the value of investment income node306. In turn, the value of wage income node304may be represented as the sum of the value of the W-2 income node308and 1099 income node310. The value of investment income node306may be represented as a sum of the interest income node312and the dividend income node314. Knowledge graph300may be organized into a vector including the names of each of income node302, wage income node304, investment income node306, W-2 income node308, 1099 income node310, interest income node312, and dividend income node314. The vector need not maintain information about the specific structure of the knowledge graph (e.g., relationships between nodes), but may maintain a list of nodes in the knowledge graph, as the vector is used to identify which nodes in the knowledge graph are associated with data used by an operation identified in a natural language utterance. In some embodiments, elements in the vector representing knowledge graph300may also include other information describing the data represented by a node in the knowledge graph that can be used to determine whether a token in the natural language utterance corresponds to an operand represented by a node in the knowledge graph. In this example, tokens202B ("income"),202C ("wage"),202E ("income"), and202F ("investment") may map to nodes in the knowledge graph. That is, for tokens202B and202E, an embedding value in a vector of embedding values including embedding values for each node in the knowledge graph300may have a value that exceeds a threshold value for income node302and embedding values that do not exceed the threshold value for wage income node304, investment income node306, W-2 income node308, 1099 income node310, interest income node312, and dividend income node314. For token202C, an embedding value in a vector generated over the nodes in the knowledge graph300may have a value that exceeds a threshold value for wage income node304and embedding values that do not exceed the threshold value for income node302, investment income node306, W-2 income node308, 1099 income node310, interest income node312, and dividend income node314.
Finally, for token202F, an embedding value in a vector generated over the nodes in the knowledge graph300may have a value that exceeds a threshold value for investment income node306and embedding values that do not exceed the threshold value for income node302, wage income node304, W-2 income node308, 1099 income node310, interest income node312, and dividend income node314. Based on these embedding values and identification of nodes associated with tokens in the natural language utterance202, a query processor can retrieve the values associated with nodes302,304, and306for use in performing the operations identified by tokens202A and202D. Semantic information about the ordering of words in the natural language utterance can be used to determine specific roles in an operation that each operand performs. A natural language understanding model can be trained with various sentences or phrases that indicate, for certain operations, relationships between words in a natural language utterance. In this example, the natural language understanding model may be trained to recognize that the phrase “percent of X is Y” specifies that X serves as the divisor in a division operation and Y serves as the dividend in the division operation. Thus, in natural language utterance202, the query processor can determine that token202B corresponds to the divisor and token202C corresponds to the dividend in a division operation identified by token202A, such that a percentage of income that is wage income is represented by the equation: $\frac{\text{wage income}}{\text{total income}}$. Likewise, the query processor can determine that token202E corresponds to the divisor and token202F corresponds to the dividend in a division operation identified by token202D, such that a percentage of income that is investment income is represented by the equation: $\frac{\text{investment income}}{\text{total income}}$. In some embodiments, the query processor can further use information in the knowledge graph300to provide additional information to the user of the software application in response to receiving the query in natural language utterance202. For example, the query processor can use relationships showing the child nodes of a given node to provide an explanation of how values of operands used in an operation were generated or to otherwise provide more granular detail about the value(s) generated by performing the operation(s) specified in the natural language utterance202. FIG.4illustrates an example mapping of a token (word) in a natural language utterance to a vector of embedding values for nodes in a knowledge graph. As illustrated, token202C represents the word “wage” for which a vector of embedding values400is generated. Each embedding value in the vector400may be mapped to the names of nodes in the knowledge graph, illustrated by node vector410. That is, the first embedding value in vector400may be associated with the “income” node, the second embedding value in vector400may be associated with the “wage income” node, the third embedding value in vector400may be associated with the “investment income” node, the fourth embedding value in vector400may be associated with the “W-2 income” node, the fifth embedding value in vector400may be associated with the “1099 income” node, the sixth embedding value in vector400may be associated with the “interest income” node, and the seventh embedding value in vector400may be associated with the “dividend income” node.
As illustrated, an embedding value, representing a similarity between token202C and the names of knowledge graph nodes in vector410, may be generated for each node in the knowledge graph. In this example, the token “wage” may have the highest embedding value for the “wage income” node and may have much smaller embedding values for each of the other nodes in the knowledge graph. Thus, the query processor can determine that the token202C maps to the wage income node in the knowledge graph and identify the word “wage” in token202C as an operand on which a specified operation is to be performed. Example Computer-Implemented Method for Generating a Response to a Query in a Natural Language Utterance by Extracting Operators and Operands from the Natural Language Utterance FIG.5illustrates example operations500for generating a response to a “long-tail” query based on operators and operands extracted from a natural language utterance. The operations described herein may be performed by a natural language query processor (e.g., query processor110illustrated inFIG.1) or on a computing device on which an application and a query processor used by the application are deployed. As illustrated, operations500begin at block510, where a system receives a long-tail query as a natural language utterance from a user of an application. The “long-tail” query may be received, in some cases, from a chatbot application. As discussed, a “long-tail” query generally represents a query for which an intent-based query resolver is unable to generate an answer (e.g., due to the rarity of such a question being asked and a lack of an exact mapping between an intent of a query and a node in a knowledge graph). In some cases, to determine that the query received in a natural language utterance is a long-tail query, the system can attempt to extract an intent of the query and match the extracted intent of the query to a node in a knowledge graph. If the system determines that the query cannot be matched to a node in the knowledge graph (e.g., that embedding values for the extracted intent over the names of the nodes in the knowledge graph are below a threshold value or otherwise that no match exists between the extracted intent and a node in the knowledge graph), the system can determine that the query in the natural language utterance is a “long-tail” query, and operations500may proceed to block520. At block520, the system extracts operands of an operation from the received natural language utterance. The operands may be extracted from the received natural language utterance based on mappings between words in the received natural language utterance and nodes in a knowledge graph. In some embodiments, in order to identify and extract operands from the natural language utterance, the system can tokenize the natural language utterance into a plurality of tokens and analyze each token using a natural language model trained using training data obtained from the knowledge graph (e.g., names of nodes in the knowledge graph, relationships between nodes in the knowledge graph, and the like). Generally, tokens (words or sequences of words) that correspond to a node in the knowledge graph, and thus to an operand on which an operation can be performed, may be associated with embedding values, similarity values, or the like that exceed a threshold value, and tokens that do not correspond to any node in the knowledge graph may be associated with embedding values or similarity values that do not exceed the threshold value.
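A small sketch of this operand-mapping step, in the spirit of FIG.4: it uses a crude lexical-overlap score as a stand-in for learned embedding values, and the node list, scoring function, and threshold are all assumptions for illustration:

```python
NODE_NAMES = ["income", "wage income", "investment income",
              "W-2 income", "1099 income", "interest income", "dividend income"]

THRESHOLD = 0.4  # illustrative; assumed chosen a priori as described above

def similarity(token: str, node_name: str) -> float:
    """Crude lexical stand-in for an embedding similarity score."""
    t, n = set(token.lower().split()), set(node_name.lower().split())
    return len(t & n) / len(t | n)

def node_similarity_vector(token: str) -> dict:
    """One score per knowledge-graph node, mirroring vector400 aligned
    against node vector410 in FIG.4."""
    return {name: similarity(token, name) for name in NODE_NAMES}

scores = node_similarity_vector("wage")
matches = {name: s for name, s in scores.items() if s > THRESHOLD}
print(matches)  # {'wage income': 0.5}: the token maps to the wage income node
```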
In some embodiments, a natural language model can generate embedding values that exceed the threshold value for multiple nodes in the knowledge graph, yielding a set of potential matches to the token. In such a case, the system can map a token (and thus, an operand) to the node having the largest embedding value. The threshold value may be defined a priori based on an analysis of candidate threshold values against results of performing inferences on a ground truth data set, such that the selected threshold value is the one that results in maximum accuracy with respect to determining correct mappings between words and operands. In some cases, the system can use other contextual information, such as word(s) immediately preceding or following the word represented by a token, to identify a node in the knowledge graph to which the token is to be mapped. For example, inFIG.3above, the word “income” might have embedding values that exceed the threshold value for many of the nodes in knowledge graph300. Thus, the natural language model can select an immediately previous word to generate a token for analysis. For example, the token generated for the word “income” in natural language utterance202may be “wage income” or “investment income,” which may have embedding values above a threshold value for a specific node in knowledge graph300(e.g., for nodes304and306, respectively). In some embodiments, a system can map a token representing a word in a natural language utterance to multiple nodes in a knowledge graph. To do so, the system can calculate an average word embedding value for an utterance and identify nodes in the knowledge graph associated with embedding values that, when combined, approach the average word embedding value. The average word embedding value may be the average of each of the individual embedding values generated for each word or sequence of words in the natural language utterance. The sum of the values associated with the identified nodes may be determined to be the value of an operand represented by the token. At block530, the system extracts operators of an operation from the received natural language utterance. The system can extract operators from the natural language utterance based on a mapping between words in the received natural language utterance and functions to be performed on data extracted from the knowledge graph. In some embodiments, in order to extract operators from the natural language utterance, the system can tokenize the natural language utterance to generate a plurality of tokens associated with individual words or sequences of words in the natural language utterance. Each token may be compared, using a natural language model, to predefined sets of words mapped to various operations (e.g., addition, subtraction, multiplication, division, comparison, etc.). In some embodiments, the tokens may be compared to these predefined sets of words by generating embedding values for each token over vectors of the words associated with each operation. Generally, when a token does not correspond to an operation, the embedding values for the token over the words associated with the operation may each be a value below a threshold value. Meanwhile, when a token corresponds to an operation, at least one embedding value over the words associated with the operation may be a value above the threshold value.
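The multi-node mapping described above leaves the combination rule open; one possible reading, sketched here with toy node embeddings and values (all names and figures are assumptions), is to search for the subset of nodes whose mean embedding lies closest to the utterance's average word embedding and sum those nodes' values:

```python
import numpy as np
from itertools import combinations

# Toy stand-ins for learned node embeddings and knowledge-graph values.
NODE_EMBED = {
    "wage income":       np.array([0.8, 0.2]),
    "investment income": np.array([0.2, 0.8]),
    "W-2 income":        np.array([0.9, 0.1]),
}
NODE_VALUE = {"wage income": 50_000, "investment income": 5_000, "W-2 income": 45_000}

def multi_node_operand(avg_embedding: np.ndarray, max_nodes: int = 2):
    """Pick the node combination whose mean embedding best approaches the
    utterance's average word embedding; the operand value is the sum of the
    selected nodes' values."""
    best, best_dist = None, float("inf")
    for r in range(1, max_nodes + 1):
        for combo in combinations(NODE_EMBED, r):
            mean = np.mean([NODE_EMBED[n] for n in combo], axis=0)
            dist = float(np.linalg.norm(mean - avg_embedding))
            if dist < best_dist:
                best, best_dist = combo, dist
    return best, sum(NODE_VALUE[n] for n in best)

nodes, value = multi_node_operand(np.array([0.5, 0.5]))
print(nodes, value)  # ('wage income', 'investment income') 55000
```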
In some embodiments, the system can identify word-operation pairings in a mapping of words to operations having a word that matches (or substantially matches) the word or words in a token. Based on the identification, the system can identify inputs to the operation specified in the natural language utterance. The operation may be performed a number of times specified in the natural language utterance, and the system can identify a number of iterations of the operation to be performed and inputs for each iteration of the operation based on a semantic analysis of the received natural language utterance. For example, suppose that the natural language utterance requests the sum of data from n nodes in the knowledge graph. The system can process the natural language utterance as a series of n−1 addition operations, where the results of one addition operation are used as an input into another addition operation. In some embodiments, the system can identify, for an operation, one or more intermediate operations to perform before performing the extracted operation. For example, a system can use information about the operation and contextual information in the natural language utterance to identify operands for the operation. If the system determines that an operand is missing, the system can determine that the result of an intermediate operation is an operand for the operation and orchestrate the execution of various operations to satisfy the request such that the intermediate operation is executed prior to executing the operation. A system can determine that an operand is missing based on an expected number of operands for a given operation. For example, assume that at least two operands are needed to perform an operation. If only one operand is identified in a natural language utterance, the system can determine that the result of another operation represents the other operand needed to perform an operation. At block540, the system executes the functions associated with the extracted operators based on data extracted from the nodes in the knowledge graph associated with the extracted operands. At block550, the system returns a result of executing the functions associated with the extracted operators as a response to the received long-tail query. In some embodiments, the result may be transmitted to a user through a chatbot application from which the long-tail query was received. Example System for Generating a Response to a Query in a Natural Language Utterance by Extracting Operators and Operands from the Natural Language Utterance FIG.6illustrates an example system600configured to perform the methods described herein, including, for example, operations500ofFIG.5. In some embodiments, system600may act as a query processor, such as query processor110illustrated inFIG.1. As shown, system600includes a central processing unit (CPU)602, one or more I/O device interfaces604that may allow for the connection of various I/O devices614(e.g., keyboards, displays, mouse devices, pen input, etc.) to the system600, network interface606through which system600is connected to network690(which may be a local network, an intranet, the internet, or any other group of computing devices communicatively connected to each other), a memory608, storage610, and an interconnect612.
The I/O devices614and/or network interface606may be used to receive a query in a natural language utterance through a chatbot application and output a response to the query generated based on extracting operators and operands from the natural language utterance. CPU602may retrieve and execute programming instructions stored in the memory608. Similarly, the CPU602may retrieve and store application data residing in the memory608. The interconnect612transmits programming instructions and application data among the CPU602, I/O device interface604, network interface606, memory608, and storage610. CPU602is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, and the like. Memory608is representative of a volatile memory, such as a random access memory, or a nonvolatile memory, such as nonvolatile random access memory, phase change random access memory, or the like. As shown, memory608includes an operator extractor620, operand extractor630, and query result generator640. Operator extractor620generally parses a received natural language utterance to identify and extract operators in a long-tail query. As discussed, to extract operators from a natural language utterance, operator extractor620can tokenize the received natural language utterance to generate a plurality of tokens corresponding to words or sequences of words for analysis. Operator extractor620can use mappings of words to operations stored in operation mapping repository660to identify words in the received natural language utterance that correspond to operators (and thus, operations to perform on specified data in the received natural language utterance). In some cases, operator extractor620can use embedding values generated for each token over the words mapped to each operation to determine whether a token corresponds to an operation and, if so, identify the corresponding operation. Operand extractor630generally parses the received natural language utterance to identify and extract operands in a long-tail query. To extract operands from a received natural language utterance, operand extractor630can tokenize the received natural language utterance to generate a plurality of tokens corresponding to words or sequences of words in the natural language utterance. Operand extractor630can compare the tokens to information about nodes in a knowledge graph stored in knowledge graph repository650to identify tokens corresponding to nodes in the knowledge graph and thus to data used by the operations extracted by operator extractor620to generate a result for a received query. Operand extractor630can use embedding values over the names of nodes in the knowledge graph to identify operands in the natural language utterance and retrieve values associated with these operands from the knowledge graph for use in generating a response to the long-tail query. Query result generator640generally uses the extracted operands and extracted operators to generate a result for a received long-tail query. The result may be included in a natural language response output by system600to a requesting device (e.g., via network interface606). In some embodiments, the natural language response may include additional information explaining how the result was calculated, information about the operands used to generate the result, and the like. Storage610is representative of a non-volatile memory, such as a disk drive, solid state drive, or a collection of storage devices distributed across multiple storage systems.
Although shown as a single unit, the storage610may be a combination of fixed and/or removable storage devices, such as fixed disc drives, removable memory cards or optical storage, network attached storage (NAS), or a storage area network (SAN). Storage610, as illustrated, may include a knowledge graph repository650and an operation mapping repository660. Knowledge graph repository650generally represents a data repository in which knowledge graphs defining functions executed within an application are defined. As discussed, these knowledge graphs may include names or other characteristics that can be used to identify operands in a long-tail query. Operation mapping repository660generally represents a data repository in which mappings of words and operations are maintained. These mappings generally identify sets of words that, when included in a natural language utterance, identify an operation to be performed on operands in the natural language utterance. In some embodiments, knowledge graph repository650and operation mapping repository660may be stored in one or more repositories remote from system600but connected via a network (e.g., network690). WhileFIG.6illustrates an implementation in which a user interacts directly with a system that resolves natural language queries, it should be recognized that one or more servers may interact with system600to provide a natural language utterance to system600for analysis. For example, these servers may host an application that allows a user to input a query as a natural language utterance, and these servers may provide the natural language utterance to system600for resolution. System600can generate a response (e.g., by extracting operations and operands from the natural language utterance) and transmit the response back to the application (e.g., via a chatbot application) for display or other provision to a user of the application. Additional Considerations The preceding description is provided to enable any person skilled in the art to practice the various embodiments described herein. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments. For example, changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c).
As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like. The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. Further, the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and input/output devices, among others. A user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. 
If implemented in software, the functions may be stored or transmitted over as one or more instructions or code on a computer-readable medium. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media, such as any medium that facilitates transfer of a computer program from one place to another. The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the computer-readable storage media. A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. By way of example, the computer-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the computer-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files. Examples of machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules. The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. The following claims are not intended to be limited to the embodiments shown herein, but are to be accorded the full scope consistent with the language of the claims. Within a claim, reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. No claim element is to be construed under the provisions of 35 U.S.C. 
§ 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. | 61,764 |
11861309 | DETAILED DESCRIPTION An image rendering device, such as a printer, plotter, photocopier or fax machine may comprise a fuser unit to form impressions on a medium. The fuser unit comprises a pair of rollers which are heated to melt a printing material, such as toner. The melted printing material is then deposited on the medium, as the medium is passed through the rollers, to form the impressions on the medium. Apart from melting the printing material, the rollers also apply pressure on the medium so that the melted printing material adheres to the medium. Since fuser units experience heat and pressure during operation, the fuser units are prone to malfunction and failure. Thus, the fuser units may undergo periodic maintenance and sometimes be replaced. There may be a plurality of error events which may lead to servicing or replacement of a fuser unit. For example, a fuser unit may undergo servicing after the fuser unit has printed 20,000 pages. In another example, the fuser unit may need replacement after three consecutive overheating events. Analysis of such error events and their corresponding service activities, for instance, for patterns of events that lead to failure of the fuser units, may enable a prediction of failure of a given fuser unit to be made. Accordingly, fuser event prediction engines may be implemented to predict failure of the fuser units. The fuser event prediction engines may be trained to predict malfunction of the fuser units based on the error events and corresponding service activities recorded for a plurality of image rendering devices in the past. Thus, service notes describing error events and corresponding service activities relating to the fuser units may serve as training data for the fuser event prediction engines. Generation of the training data from the service notes involves manual processing of the service notes by experts to categorize the service notes into various categories of failures of fuser units. Thus, each of the service notes is analyzed manually and is tagged with a label corresponding to a category of failure. Given that the volume of the service notes is large, manual labeling of the service notes is very time-consuming. According to various aspects of the present subject matter, methods and systems for automated processing of service notes are described. In an example, the methods and systems for processing of service notes enable labeling of the service notes, thereby eliminating manual labeling of the service notes. According to example implementations of the present subject matter, service notes comprising natural language text describing error events and corresponding service activities, associated with fuser units of a plurality of image rendering devices, are obtained. The service notes are assigned a label based on a category of failure of the fuser units to obtain labeled service notes. In an example, the labeling of the service notes may be based on user inputs. The labeled service notes are processed, for example, by a natural language processing engine to generate a vector corresponding to each of the labeled service notes. A vector corresponding to a labeled service note may be understood as a numerical representation of the service note generated based on natural language processing of the service note. A relationship between vectors and labels corresponding to the labeled service notes may be determined.
For example, a learning engine may generate a function mapping the relationship between the vectors and the labels. Based on the determined relationship, a label for an unlabeled service note may be determined. Thus, based on a set of service notes that is labeled by users, automatic labeling of further service notes may be performed without user intervention. As will be apparent, the set of service notes used by the learning engine to learn the relationship between vectors and labels may be small in volume initially and may increase in volume as further service notes are labeled. In an example, the service notes, labeled based on the above described techniques, may be used to train fuser event prediction engines to predict malfunction of the fuser units. Thus, the techniques to label service notes described herein provide for large volumes of training data to be generated for efficient training of fuser event prediction engines in a time-efficient manner. The above-described methods and systems for processing of service notes are further described with reference toFIGS.1to6. It should be noted that the description and figures merely illustrate the principles of the present subject matter along with examples described herein and should not be construed as a limitation to the present subject matter. It is thus noted that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present subject matter. Moreover, all statements herein reciting principles, aspects, and examples of the present subject matter, as well as specific examples thereof, are intended to encompass equivalents thereof. FIG.1illustrates a print environment100incorporating a machine learning device102, in accordance with an example of the present subject matter. In an example, the print environment100comprises a plurality of image rendering devices104-1,104-2, . . . ,104-n. Examples of the image rendering devices104-1,104-2, . . . ,104-ninclude printers, plotters, scanners, photocopiers and any other electronic devices that incorporate a fuser unit to fuse a print material for deposition on a medium. Further, examples of the image rendering devices104-1,104-2, . . . ,104-nalso include 3D printers that may print three-dimensional objects based on an additive manufacturing process. In case of the 3D printers, the fuser unit may be employed to fuse materials used in the additive manufacturing process. In the print environment100, each of the image rendering devices104-1,104-2, . . . ,104-nmay be accessed by multiple users through their respective user devices106-1,106-2, . . . ,106-n. Examples of the user devices106-1,106-2, . . . ,106-ninclude, but are not limited to, electronic devices, such as a desktop computer, a laptop, a smartphone, a personal digital assistant (PDA), and a tablet that may allow a user to communicate with the image rendering devices104-1,104-2, . . . ,104-n. In an example, the user devices106-1,106-2, . . . ,106-nmay communicate with the image rendering devices104-1,104-2, . . . ,104-nover a network108. The network108may be a single network or a combination of multiple networks and may use a variety of different communication protocols. The network108may be a wireless or a wired network, or a combination thereof.
Examples of such individual networks include, but are not limited to, Global System for Mobile Communication (GSM) network, Universal Mobile Telecommunications System (UMTS) network, Personal Communications Service (PCS) network, Time Division Multiple Access (TDMA) network, Code Division Multiple Access (CDMA) network, Next Generation Network (NGN), Public Switched Telephone Network (PSTN). Depending on the technology, the communication network108includes various network entities, such as gateways and routers; however, such details have been omitted for the sake of brevity of the present description. In the print environment100, the image rendering devices104-1,104-2, . . . ,104-nand the user devices106-1,106-2, . . . ,106-nmay also communicate with a server110. The server110may be implemented as any of a variety of conventional computing devices, including a desktop, a personal computer, a notebook or portable computer, a workstation, a mainframe computer, and a laptop. Further, in one example, the server110may be a distributed or centralized network system in which different computing devices may host one or more of the hardware or software components of the server110. The server110, for example, may be implemented by a service provider or administrator associated with the image rendering devices104-1,104-2, . . . ,104-nto perform a variety of functions relating to the image rendering devices104-1,104-2, . . . ,104-n. For example, the server110may manage subscription plans and maintain subscriber details relating to the users and their respective user devices106-1,106-2, . . . ,106-n. In an example, the server110may also define rights for the users, such that each user may use the image rendering devices104-1,104-2, . . . ,104-nin accordance with their respective rights. In another example, the server110may track scheduled maintenance activities for the image rendering devices104-1,104-2, . . . ,104-nin the print environment100. Similarly, in an example implementation, the server110may receive and store user inputs relating to working of the image rendering devices104-1,104-2, . . . ,104-n. In an example, in case a user faces an issue, such as occurrence of error events in any of the image rendering devices104-1,104-2, . . . ,104-n, such as the image rendering device104-1, the user may provide the user inputs relating to the issue to the server110. For the purpose, the user may communicate with the server110via the user device106-1. Various techniques may be used by the users to provide the inputs to the server110. In an example, a web-based interface of the server110may be used. In another example, an electronic mail or message may be sent to the server110. Further, indirect modes of communication, such as a mode where a user may telephonically provide the information to another user, such as a helpdesk representative of the service provider who in turn enters the inputs in the server110, are also possible. The user inputs may be in natural language. In an example implementation, when an error event relating to any of the image rendering devices104-1,104-2, . . . ,104-nis reported by a user or otherwise detected by the server110, for example, based on an error message generated by any of the image rendering devices104-1,104-2, . . . ,104-n, a service activity corresponding to the error event may be performed.
For instance, based on the user inputs relating to the error event pertaining to the image rendering device104-1, a technician may service the image rendering device104-1to address the error event. The technician's remark regarding the service activity may also be recorded. The technician's remark, like the user inputs, may also be in natural language. The user input pertaining to an error event and the remarks of a technician in relation to the service activity corresponding to the error event comprise a service note associated with the image rendering device104-1. Accordingly, as will be apparent, a service note comprises natural language text describing an error event and a corresponding service activity associated with a fuser unit of the image rendering device104-1. In an example implementation, a database112is coupled to the server110to store service notes associated with the plurality of image rendering devices104-1,104-2, . . . ,104-n. In an implementation, the database112may be implemented as a hierarchical database, a network database, an object-oriented database, or a relational database. In accordance with an example of the present subject matter, the machine learning device102may be coupled to the database112and/or the server110to obtain and process the service notes, for instance, to enable prediction of an error event associated with the image rendering devices104-1,104-2, . . . ,104-n. In an implementation, the machine learning device102may be implemented as any of a variety of computing devices, including a server, a mainframe computer, a workstation, a desktop, a personal computer, a notebook or portable computer, or a laptop. In accordance with an example of the present subject matter, the machine learning device102processes the service notes associated with fuser units of the plurality of image rendering devices104-1,104-2, . . . ,104-n. For the purpose, the machine learning device102may retrieve service notes from the database112. While the database112may include service notes associated with various parts of the image rendering devices104-1,104-2, . . . ,104-n, the machine learning device102may filter the service notes associated with fuser units of the image rendering devices104-1,104-2, . . . ,104-n. Also, service notes comprising error events, such as failure and malfunctioning of the fusers that lead to replacement of the fuser, may be filtered. In an example, the filtered service notes associated with fuser units of the image rendering devices104-1,104-2, . . . ,104-nare labeled. In the present context, labeling of a service note may be understood as assigning a label to the service note based on a category of failure of the fuser unit. To elaborate labeling of a service note, consider that the service note describes, for instance, an error event, such as a print quality related issue, a paper jam issue, or a noise related issue. Given that the error event has resulted in replacement of the associated fuser, the cause of failure of the fuser is the print quality related issue, paper jam issue, or noise related issue, as the case may be. The service note may accordingly be labeled as ‘print quality related issue’, ‘paper jam issue’, or ‘noise related issue’. Thus, a tag indicative of the category of failure of the fuser unit is associated with each of the service notes, thereby generating labeled service notes.
The labeling of the service notes may be performed by users, for example, experts who may determine a cause of failure of the fuser unit based on the natural text description included in the service notes. In some example implementations, the labeling of the service notes may be performed by the machine learning device102based on predefined rules defined by users. The machine learning device102thereafter processes the labeled service notes to generate a vector corresponding to each of the labeled service notes. As explained previously, a service note comprises a description of an error event and a service activity associated with a fuser unit in a natural language. The machine learning device102performs natural language processing of the service note to obtain a numerical representation of the corresponding natural language description so as to render the natural language description comprehendible by the machine learning device102. As explained previously, each of the vectors is tagged with a corresponding label. The vectors and labels corresponding to the labeled service notes may be used to determine a label for an unlabeled service note. To determine the label, the unlabeled service note may be converted into its corresponding vector. In an example, the vector of the unlabeled service note may be similar to a vector of a labeled service note if the labeled service note includes a natural language description that is similar to the description in the unlabeled service note. In an example, the similarity in the two descriptions, and in turn the similarity in the two vectors, may enable the machine learning device102to determine the label associated with the labeled service note to be the label for the unlabeled service note. Thus, the vectors and labels corresponding to the labeled service notes may comprise a relationship therebetween that may enable labeling of unlabeled service notes. In an example, the machine learning device102generates a function mapping a relationship between vectors and labels corresponding to the labeled service notes. Based on the relationship, the machine learning device102may determine a label for an unlabeled service note. Thus, using a set of service notes that is labeled based on user intervention, automatic labeling of further service notes may be performed. In an example, the service notes labeled by the machine learning device102may be used to train a fuser event prediction engine114to predict malfunction of fuser units. In an example, the fuser event prediction engine114may be a device implemented to predict an error event for a fuser, based on service notes associated with past error events relating to the fuser. In an example, the fuser event prediction engine114may be a server. Further, in an example, the server may be a computing device, such as a workstation, a mainframe computer, a desktop, a personal computer, a notebook, a laptop or a portable computer. Further, in one example, the fuser event prediction engine114may be a distributed or centralized network system in which different computing devices may host one or more of the hardware or software components of the fuser event prediction engine114.
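The similarity-based reasoning described above might be sketched as a nearest-neighbor lookup over the labeled vectors. The feature vectors, labels, and cosine scoring below are illustrative assumptions, not the disclosed learning engine itself:

```python
import numpy as np

# Illustrative feature vectors (e.g., TF-IDF weights) for two labeled notes.
labeled_vectors = np.array([
    [0.9, 0.1, 0.0],   # note dominated by terms like "hot", "heat"
    [0.0, 0.2, 0.9],   # note dominated by terms like "noise", "loud"
])
labels = ["over-heating", "roller issues"]

def predict_label(vector: np.ndarray) -> str:
    """Assign the label of the most similar labeled service note."""
    sims = labeled_vectors @ vector / (
        np.linalg.norm(labeled_vectors, axis=1) * np.linalg.norm(vector))
    return labels[int(np.argmax(sims))]

# An unlabeled note whose vector resembles the first labeled note.
print(predict_label(np.array([0.8, 0.2, 0.1])))  # -> "over-heating"
```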
In an example, the natural language processing engine202processes labeled service notes associated with fuser units of a plurality of image rendering devices104-1,104-2, . . . ,104-nto generate a vector corresponding to each of the labeled service notes. As explained in the foregoing description, a labeled service note comprising natural language text describing an error event and a corresponding service activity associated with a fuser unit and the label assigned to the labeled service note is based on a category of failure of the fuser unit. The learning engine204learns a relationship between the vectors and the labels corresponding to the labeled service notes. Unlabeled service notes may then be labeled based on the relationship between the vectors and the labels. FIG.3illustrates the machine learning device102, according to another example of the present subject matter. The machine learning device102, among other things, includes and a memory302, interface(s)304, and engine(s)306. The memory302may include any computer-readable medium including, for example, volatile memory (e.g., RAM), and/or non-volatile memory (e.g., EPROM, flash memory, etc.). The interface304may include a variety of software and hardware interfaces that allow the machine learning device102to interact with other devices, such as the user devices106-1,106-2, . . . ,106-nor other input/output (I/O) devices that may be used to provide inputs, such as user inputs and technician remarks to the machine learning device102. The engine(s)306may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement certain functionalities of the engine(s)306, such as processing the service notes. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the engine(s)306may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the engine(s)306may include a processing resource (for example, implemented as either a single processor or a combination of multiple processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement engine(s)306. In such examples, the machine learning device102may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to machine learning device102and the processing resource. In other examples, engine(s)306may be implemented by electronic circuitry. In an example, in addition to the above-mentioned natural language processing engine202and the learning engine204, the engine(s)306may also comprise other engine(s)310that supplement functions of the machine learning device102. The data308serves, amongst other things, as a repository for storing data that may be fetched, processed, received, or generated by the engine(s)306. The data308comprises other data312corresponding to the other engine(s)310. In the illustrated example, the data308of the machine learning device102also comprises natural language data314, learning data316and translation data318. The other data312may store the data pertaining to the other engine(s)310. In an implementation, the machine learning device102may include a communication engine320. 
In operation, the communication engine320retrieves inputs of users pertaining to the error events associated with the image rendering devices104-1,104-2, . . . ,104-nand remarks of technicians servicing the image rendering devices104-1,104-2, . . . ,104-nupon occurrence of the error events. For the purpose, the communication engine320may communicate with a database, such as the above-described database112that stores details of error events and service activities relating to the plurality of image rendering devices. In an example, the database112may store the user inputs pertaining to the error events in a first column of the database112. In an example, the user input may also include error codes relating to the error events. The database112may also store the corresponding remarks of technicians addressing the error events in a second column of the database112. The remarks of technicians may also include the error codes. For example, the database may have numerous rows each comprising a user input and corresponding remarks of a technician in two different columns. The communication engine320may retrieve the first column and the second column from the database112and may combine the inputs of users pertaining to the error events with corresponding remarks of technicians, for example, by concatenating the text retrieved from the two columns, to obtain the service notes. The service notes that the communication engine320thus obtains from the database112may be stored as natural language data314in data308of the machine learning device102. The database112may store information regarding the error events and service activities relating to the various parts or components of the image rendering devices104-1,104-2, . . . ,104-n. Thus, in an example, entries in the numerous rows in the database112may relate to the various parts or components. The database112may include a third column to record an identifier, such as a part number to identify a component of the image rendering device to which a row pertains. In an example, the part number may be recorded in the database112by a technician or user. In an example, the communication engine320filters the service notes associated with the fuser units of the image rendering devices104-1,104-2, . . . ,104-nfrom amongst the various service notes stored in the database112. The filtering may be based on the part number available in the database112. Various other filtering techniques, such as a filtering by a user or filtering based on error codes, may also be employed. The filtered service notes may be stored as natural language data314in data308of the machine learning device102. The filtered service notes may be labeled as explained above. A label input engine322of the machine learning device102may be implemented to assign labels to the service notes associated with the fuser units to obtain the labeled service notes. The labeling of a service note may be performed by a user who may determine a cause of failure of a corresponding fuser unit by analyzing the natural text description included in the service note. For example, to label a service note, a user may use a user device, such as user device106-1to access the filtered service notes stored in natural language data314via interface(s)304. The user may analyze a filtered service note and indicate a label, which the label input engine322then assigns to the service note.
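The column-combining and part-number filtering just described might look like the following sketch. The column names, part numbers, and sample rows are hypothetical stand-ins for the actual schema of database112, which the description does not specify:

```python
import pandas as pd

# Hypothetical rows mirroring the three columns described above.
rows = pd.DataFrame({
    "user_input": ["printer hardware branch scanning problem",
                   "paper jam near output tray"],
    "technician_remark": ["troubleshoot etb and fuser Bx.xx.01",
                          "cleared jam, replaced pickup roller"],
    "part_number": ["FUSER-110", "ROLLER-220"],  # illustrative identifiers
})

# Combine user inputs with technician remarks by concatenation to form
# service notes, then filter to notes associated with fuser units.
rows["service_note"] = rows["user_input"] + " " + rows["technician_remark"]
fuser_notes = rows.loc[rows["part_number"].str.startswith("FUSER"), "service_note"]
print(fuser_notes.tolist())
# ['printer hardware branch scanning problem troubleshoot etb and fuser Bx.xx.01']
```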
It will be appreciated that initial labeling of a few of the filtered service notes may generate a finite number of labels that may then be assigned to further filtered service notes. In some example implementations, the labeling of a service note may be performed by the machine learning device102based on predefined rules defined by users. In an example, the labeled service notes may be stored as the learning data316in data308of the machine learning device102. The labeled service notes are processed by the natural language processing engine202. In an example, the labeled service notes are cleaned to remove stray letters, symbols, syntax characters, etc. For example, symbols, such as ‘[’, ‘^’, ‘>’, ‘*’, ‘/’ are removed from the labeled service notes. In another example, standalone numbers not occurring with any word and standalone characters are also removed while cleaning. In an example, after cleaning the data, the natural language processing engine202tokenizes the data such that the data is represented as strings. Consider an example of “printer hardware branch scanning problem troubleshoot etb and fuser Bx.xx.01” as a service note obtained from combining a user input “printer hardware branch scanning problem” and a technician remark “troubleshoot etb and fuser Bx.xx.01”. The tokenized string may be ‘printer’, ‘hardware’, ‘branch’, ‘scanning’, ‘problem’, ‘troubleshoot’, ‘fuser’, ‘Bx.xx.01’. In an example, error codes are extracted from the cleaned labeled service notes. For extracting error codes from each of the cleaned labeled service notes, the cleaned service notes are filtered based on a predetermined length and format of the error codes. For example, if the predetermined length and format of error codes comprises six characters, in the format XX.XX.XX, the labeled service notes having six characters appearing in three consecutive groups of two characters each are filtered and error codes are extracted from them. In an example, the error code or a part thereof, such as the first two characters of the error code, is combined with the respective cleaned labeled service note to obtain combined labeled service notes. For the purpose, the error code may be concatenated with the tokenized string obtained from the respective cleaned labeled service notes. In an example, if there are no error codes in a labeled service note, the service note may be marked with a token to indicate the same. For example, the labeled service note without the error code may be marked as ‘no_error_code’. After the combined service notes are generated, features are created for the labeled service notes. In an example, techniques, such as term frequency-inverse document frequency (TF-IDF), may be used to create features. The features are thereafter converted into vectors. As the vectors are generated from the labeled service notes, the vectors also have their corresponding labels. The vectors along with their corresponding labels may be stored in the learning data316. The learning engine204learns a relationship between the vectors and labels corresponding to the labeled service notes. For instance, occurrence of a given set of words, such as ‘high temperature’, ‘hot’, ‘heat’, in a service note may indicate that the service note is related to a label indicative of a cause of failure to be ‘over-heating’.
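A compact sketch of the cleaning, error-code extraction, and feature-creation pipeline described above. The regular expressions, the two-character code prefix, and the choice of scikit-learn's TfidfVectorizer are assumptions for illustration:

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer

ERROR_CODE = re.compile(r"\b\w{2}\.\w{2}\.\w{2}\b")  # XX.XX.XX format

def preprocess(note: str) -> str:
    """Clean a service note: pull out error codes (keeping their first two
    characters), strip symbols and standalone numbers/characters, recombine."""
    codes = ERROR_CODE.findall(note)
    text = ERROR_CODE.sub(" ", note)
    text = re.sub(r"[\[\]^>*/{}]", " ", text)                # remove symbols
    tokens = [t for t in text.split() if len(t) > 1 and not t.isdigit()]
    tokens += [c[:2] for c in codes] or ["no_error_code"]    # mark missing codes
    return " ".join(tokens)

notes = ["printer hardware branch scanning problem troubleshoot etb and fuser Bx.xx.01"]
cleaned = [preprocess(n) for n in notes]
print(cleaned)
# ['printer hardware branch scanning problem troubleshoot etb and fuser Bx']

# TF-IDF features over the combined notes, ready to pair with their labels.
features = TfidfVectorizer().fit_transform(cleaned)
```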
As another example, a vector corresponding to a service note that contains terms, such as ‘noise’, ‘sound’, ‘loud’ may be related to a label named ‘roller issues’ indicative of a cause of failure to be related to malfunctioning of rollers of the fuser unit. In an example, the learning engine204generates a function indicative of the relationship between the vectors and labels. The function may also be stored in the learning data316. In an example, the learning engine204may use an algorithm, such as gradient boosting, random forest, or deep learning to generate the function indicative of the relationship between the vectors and labels corresponding to the labeled service notes. In several examples, it is also feasible for the learning engine204to generate more than one function, wherein each function is generated based on a different algorithm or a different combination of algorithms. As will be understood, implementations where the machine learning device102incorporates multiple learning engines each operating based on a different algorithm or a different combination of algorithms are also possible. The machine learning device102, in an implementation, includes a label prediction engine324to determine a label for an unlabeled service note based on at least one of the relationships learnt by the learning engine204. Since the relationship learnt by the learning engine204maps vectors to their corresponding labels, to label the unlabeled service note, a vector corresponding to the unlabeled service note is generated by the label prediction engine324. The unlabeled service note is converted into a vector in accordance with the techniques described in the foregoing description. In an example, the label prediction engine324may obtain the vector of the unlabeled service note in coordination with the natural language processing engine202. Thereupon, the label prediction engine324uses the vector of the unlabeled service note and at least one of the functions generated by the learning engine204to assign the label. In examples where the learning engine204may generate more than one function, the label prediction engine324may determine a label using each of the functions. To determine the label that may be finally assigned to the unlabeled service note, a vote among the functions may be considered. For instance, if the learning engine204uses three different algorithms to generate three different functions, the label determined by a majority of the three functions may be considered. In an example, if the unlabeled service note is not in the English language, the same may be translated into English prior to converting the unlabeled service note into its corresponding vector. For the purpose, the machine learning device102may include a translation engine326that may determine that the unlabeled service note is in a language other than English, identify the non-English language of the unlabeled service note, and accordingly provide for the translation. For example, in case the translation engine326identifies the non-English language to be French, the translation engine326may use a French-to-English translation engine to translate the unlabeled service note into English. Thus, the translation engine may identify the non-English language in which the service note is written and invoke a corresponding translation engine. In an example, the translated unlabeled service note may be stored as the translation data318in data308of the machine learning device102.
Once translated, the vector of the unlabeled service note is generated and assigned a label as explained above. In an example, the machine learning device102may also incorporate a fuser event prediction engine328. The labeled service notes may serve as training data for the fuser event prediction engine328such that the fuser event prediction engine328is enabled to predict an error event for a fuser, based on service notes associated with past error events relating to the fuser. As evident, the fuser event prediction engine328may also reside external to the machine learning device102, similar to the above described fuser event prediction engine114. FIG.4illustrates a method400for processing service notes, according to an example of the present subject matter. Although the method400may be implemented in a variety of devices, for the ease of explanation, the present description of the example method400to process the service notes is provided in reference to the above-described machine learning device102implemented in the print environment100. The order in which the method400is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the method400, or an alternative method. It may be understood that blocks of the method400may be performed by the machine learning device102. The blocks of the method400may be executed based on instructions stored in a non-transitory computer-readable medium, as will be readily understood. The non-transitory computer-readable medium may include, for example, digital memories, magnetic storage media, such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. Referring toFIG.4, at block402, inputs of users pertaining to the error events associated with the plurality of image rendering devices are retrieved. Similarly, at block404, remarks of technicians servicing the plurality of image rendering devices upon occurrence of the error events are also retrieved. The inputs of users and remarks of technicians may be retrieved from a database, such as database112that stores details of error events and service activities relating to the image rendering devices. In an example, the communication engine320of the machine learning device102may obtain the inputs of users and remarks of technicians from the database112. At block406, the inputs of users pertaining to the error events may be combined with corresponding remarks of technicians to obtain service notes. Combining the inputs of users and remarks of technicians to obtain the service notes may be performed by the communication engine320in an example. From amongst the service notes describing events and service activities relating to the various parts or components of the image rendering devices, the service notes associated with the fuser units of the image rendering devices are filtered at block408. For example, the service notes retrieved from the database112may be filtered based on part numbers as explained before; a sketch of blocks402-408appears below. At block410, the filtered service notes are labeled with a label corresponding to a category of failure of a fuser unit. The labeling of the filtered service notes, in an example, involves manual processing of the filtered service notes by users to categorize the service notes into various categories of failures of fuser units. In an example, the labeling may be performed by the machine learning device102based on predefined rules inbuilt in the machine learning device102.
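A sketch of blocks402-408follows, assuming pandas, a hypothetical column layout (user_input, technician_remark, part_number), and made-up part-number prefixes for fuser units.

```python
import pandas as pd

# Hypothetical part-number prefixes identifying fuser units
FUSER_PART_PREFIXES = ("FUS", "RM")

def build_fuser_notes(records: pd.DataFrame) -> pd.Series:
    # blocks 402-406: combine user inputs with technician remarks into service notes
    notes = records["user_input"].str.strip() + " " + records["technician_remark"].str.strip()
    # block 408: keep only notes whose part number identifies a fuser unit
    is_fuser = records["part_number"].astype(str).str.startswith(FUSER_PART_PREFIXES)
    return notes[is_fuser]
```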
At block412, the labeled service notes are natural language processed to learn a relationship between vectors and labels corresponding to the labeled service notes. Based on the relationship determined at block412, at block414, labels are generated for unlabeled service notes. The unlabeled service notes thus labeled at block414are used to train a fuser event prediction engine, at block416. Reference is now made toFIG.5that illustrates a method of processing service notes associated with fuser units of image rendering devices, in accordance with another example of the present subject matter. Although the method500may be implemented in a variety of devices, for the ease of explanation, the present description of the example method500to process the service notes is provided in reference to the above-described machine learning device102implemented in the print environment100. The order in which the method500is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the method500, or an alternative method. It may be understood that blocks of the method500may be performed by the machine learning device102. The blocks of the method500may be executed based on instructions stored in a non-transitory computer-readable medium, as will be readily understood. The non-transitory computer-readable medium may include, for example, digital memories, magnetic storage media, such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. Referring toFIG.5, at block502, labeled service notes associated with fuser units of a plurality of image rendering devices are obtained. In an example, the communication engine320of the machine learning device102may obtain the labeled service notes from a database, such as the above described database112that stores details of error events and service activities relating to a plurality of image rendering devices. At block504, the labeled service notes are processed. The labeled service notes are processed to convert the labeled service notes into their corresponding vectors. In an example, the natural language processing engine202of the machine learning device102may generate the vectors. Since examples of techniques for converting a labeled service note into its corresponding vector have been previously explained, the same are not elaborated here for the sake of brevity of the present description. Once the vectors and labels corresponding to the labeled service notes are obtained, at block506, the vectors and labels are analyzed to generate a function mapping a relationship between vectors and labels. In an example, the learning engine204may determine the function. Based on the function, at block508, a label for an unlabeled service note is determined. For instance, the label prediction engine324of the machine learning device102may determine a label for the unlabeled service note based on the function determined by the learning engine204. FIG.6illustrates a computing environment600implementing a non-transitory computer-readable medium602for processing service notes associated with the fuser units of image rendering devices, according to an example of the present subject matter. In an example implementation, the computing environment600may comprise a computing device, such as the machine learning device102.
The computing environment600includes a processing resource604communicatively coupled to the non-transitory computer-readable medium602through a communication link606. In an example, the processing resource604may be a processor of the computing device that fetches and executes computer-readable instructions from the non-transitory computer-readable medium602. The non-transitory computer-readable medium602can be, for example, an internal memory device or an external memory device. In an example implementation, the communication link606may be a direct communication link, such as any memory read/write interface. In another example implementation, the communication link606may be an indirect communication link, such as a network interface. In such a case, the processing resource604can access the non-transitory computer-readable medium602through a network608. The network608may be a single network or a combination of multiple networks and may use a variety of different communication protocols. The processing resource604and the non-transitory computer-readable medium602may also be communicatively coupled to data sources610. The data source(s)610may be used to store the learning data, translation data, and natural language data, in an example. In an example implementation, the non-transitory computer-readable medium602comprises executable instructions for processing service notes associated with the fuser units of image rendering devices. For example, the non-transitory computer-readable medium602may comprise instructions executable to implement the previously described engines, such as the communication engine, label input engine, label prediction engine, translation engine, etc. In an example, the instructions cause the processing resource604to obtain a service note associated with a fuser unit of an image rendering device. As explained earlier, the service note comprises natural language text describing an error event and a corresponding service activity relating to the fuser unit. The instructions may further cause the processing resource604to determine whether the service note is in the English language, and in case the service note is not in the English language, the instructions612may cause the service note to be translated into English. Thereafter, the instructions cause the processing resource604to determine a label for the service note, wherein the label is indicative of a category of the failure of the fuser unit. In an example, the category may be ‘print quality related issue’, ‘paper jam issue’, ‘noise related issue’, or ‘heating related issue’. The label is generated based on a function mapping a relationship between labels and corresponding labeled service notes associated with fuser units of a plurality of image rendering devices. In an example, the mapping relationship may be stored in the learning data, such as the learning data316of the machine learning device102as explained above. The instructions cause the processing resource604to access the learning data and thereafter determine a label for the service note based on the mapping relationship. The mapping relationship may be generated by natural language processing of the service notes as explained above. Thus, the methods and systems of the present subject matter provide techniques for processing service notes associated with fuser units of image rendering devices.
Although implementations of processing the service notes have been described in a language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as example implementations for processing the service notes. | 41,506 |
11861310 | DETAILED DESCRIPTION The current subject matter is directed to advanced, computer-implemented techniques for characterizing lexical concreteness in narrative essay responses (i.e. situations when human writers write short stories; for example, without limitation, as school writing assignments). With the current subject matter, a quantitative measure is provided that utilizes per-word concreteness ratings. The current subject matter is informed by an investigation to determine whether better stories are more concrete and whether the story type (e.g. hypothetical situation versus personal narratives) influences the concreteness trends. In addition, in connection with the current subject matter, a fine-grained analysis by parts-of-speech (nouns, verbs, adjectives and adverbs) was performed to explore how their concreteness varies with story quality. As part of the investigation efforts used herein, a corpus of 940 narrative essays written by school students from grade levels 7-12 was utilized. Each essay was written in response to one of 18 story-telling prompts. The total size of the corpus was 310K words with an average essay length of 330 words. The writing prompts were classified according to the type of story they are calling for, using the three-fold schema from Longobardi et al. (2013)—Fictional, Hypothetical and Personal. Table 1 presents the prompt titles, story types and essay counts.

TABLE 1. Essay counts for 18 prompts and their text-type classifications.

Prompt | Count essays | Text Type
A Fork in the Road | 47 | Fictional
At First Glance | 69 | Fictional
Finding Your Way Home | 2 | Fictional
Message in a Bottle | 31 | Fictional
Movie Sequel | 12 | Fictional
Pitch Session | 6 | Fictional
Special Object | 37 | Fictional
The Antique Trunk | 8 | Fictional
The Quest | 6 | Fictional
Different Country | 47 | Hypothetical
Electricity-Free | 32 | Hypothetical
Living Art | 3 | Hypothetical
Trading Places | 22 | Hypothetical
Weirdest Day Ever! | 78 | Hypothetical
You are the Teacher | 121 | Hypothetical
Travel | 75 | Personal
Memorable School Day | 153 | Personal
Proudest Moment | 191 | Personal
Totals: 171 Fictional, 303 Hypothetical, 466 Personal

Data

Example prompts for three types of text styles:

Personal Experience: “Proudest Moment”—There are moments in everyone's lives when they feel pride and accomplishment after completing a challenging task. Write a story about your proudest moment.

Hypothetical Situation: “You are the Teacher”—Pretend that one morning you wake up and find out that you've become your teacher for a day! What happened? What do you do? Do you learn anything? Write a story about what happens. Use your imagination!

Fictional Story: “Message in a Bottle”—Throughout the years, many have placed messages in sealed bottles and dropped the bottles into the ocean where they eventually washed up on foreign shores. Occasionally the finder has even contacted the sender. Write a story about finding your own message in a bottle.

Essay Scores

For training purposes, all essays in the corpus were manually scored by experienced research assistants using a rubric that was created by education experts and teachers, and presented by the Smarter Balanced assessment consortium, an assessment aligned to U.S. State Standards for grades K-12 (Smarter Balanced, 2014b,a). The essays were scored along three traits (dimensions): Organization, Development and Conventions. Organization is concerned with event coherence, whether the story has a coherent start and ending, and whether there is a plot to hold all the pieces of the story together. It is scored on a scale of 0-4 integer points.
Development evaluates whether the story provides vivid descriptions, and whether there is character development. It is also scored on a scale of 0-4 integer points, with 4 being the highest score. The Conventions dimension evaluates language proficiency, and is concerned with aspects of grammar, mechanics, and punctuation. Scores are on a scale of 0-3 integer points (3 is the highest score). In addition, Narrative and Total composite scores were computed for each essay. The Narrative score (range 0-8) is the sum of the Organization and Development scores. The Total score (range 0-11) is the sum of the Organization, Development and Conventions scores. Not surprisingly, the Organization, Development, Narrative and Total scores are highly intercorrelated. With the current subject matter, the Narrative scores were used, thereby focusing on essay narrative quality and de-emphasizing grammar and mechanics. With the current subject matter, the focus can be on calculating concreteness of only the content words in the essays while ignoring all function words. Each essay in the corpus was tagged for parts of speech (POS) using the Apache OpenNLP tagger, and further analysis filtered in only nouns, verbs, adjectives and adverbs. Those content words were checked against the database of concreteness scores. The database provides real-valued ratings in the 1-5 range, from very abstract (score 1.0) to very concrete (score 5.0). For words that were not matched in the database, it was checked whether the lemma or an inflectional variant of the word was present in the database (using a morphological toolkit). The database does not include names, but the essays often include names of persons and places. For the scoring of concreteness, any names (identified by POS-tags NNP or NNPS) that were not found in the database were assigned a uniform concreteness score of 4.0. Concreteness scores were accumulated for all relevant words for each essay as described above. Average and median concreteness scores were computed for each essay, separately for each of the categories (nouns, verbs, adjectives and adverbs), and also jointly for all content words. The total numbers of content words are given in Table 2. The concreteness-ratings coverage for our data is 97.8%.

TABLE 2. Content word counts by part-of-speech, with counts and proportion of tokens that did not have concreteness scores, for 940 essays.

POS | Count | Missing values
nouns | 64,374 | 2,113 (3.3%)
verbs | 66,718 | 753 (1.1%)
adjectives | 19,090 | 658 (3.45%)
adverbs | 19,399 | 212 (1.1%)
all content words | 169,581 | 3,736 (2.2%)

Pearson correlations of essay scores with per-essay levels of concreteness are presented in Table 3. Overall, the correlation of average-concreteness with essay score is r=0.222, which is considered a weak correlation. Breakdown by parts of speech shows that adjectives have the highest correlation of concreteness with score (0.297), followed by that for nouns (0.251), and adverbs (0.231). The correlation is weakest for verbs, only 0.122. Results for median-concreteness per essay show a similar pattern, though nouns now overtake adjectives.

TABLE 3. Pearson correlations of essay narrative scores with per-essay levels of concreteness, for 940 essays. All correlations are significant, p < .001. C. = concreteness score.

POS | Average C. | Median C.
nouns | 0.251 | 0.284
verbs | 0.122 | 0.113
adjectives | 0.297 | 0.242
adverbs | 0.231 | 0.132
all content words | 0.222 | 0.188

Table 4A below presents the correlations of concreteness levels with essay scores for each of the six prompts that have more than 50 essays.
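As an illustration, the per-essay computation described above might be sketched as follows, assuming Python with NLTK (whose tagger stands in for the Apache OpenNLP tagger, and which requires its tokenizer and tagger models to be installed); the ratings lexicon and the lemmatizer are assumed inputs rather than parts of the disclosure.

```python
import statistics
import nltk  # NLTK's tagger stands in for the Apache OpenNLP tagger used in the study

PROPER_TAGS = {"NNP", "NNPS"}
CONTENT_PREFIXES = ("NN", "VB", "JJ", "RB")   # nouns, verbs, adjectives, adverbs

def essay_concreteness(text, ratings, lemmatize):
    """ratings maps words to 1.0-5.0 concreteness scores; lemmatize is any lemmatizer."""
    scores = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(text)):
        if not tag.startswith(CONTENT_PREFIXES):
            continue                           # ignore function words
        score = ratings.get(word.lower())
        if score is None:
            score = ratings.get(lemmatize(word.lower()))  # try the lemma next
        if score is None and tag in PROPER_TAGS:
            score = 4.0                        # unlisted names get a uniform 4.0
        if score is not None:
            scores.append(score)
    return statistics.mean(scores), statistics.median(scores)
```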
For two of the prompts, Travel and At First Glance, average concreteness of nouns is moderately correlated with essay narrative score (r=0.4). For four prompts, adjectives show weak correlation with essay scores (from 0.21 to 0.35), while for the Travel prompt, average concreteness of adjectives is moderately correlated with essay narrative score (r=0.4). For four prompts, the average concreteness of adverbs is weakly correlated with essay score (0.24 to 0.33). For verbs, only one prompt, Weirdest Day Ever, shows some correlation of concreteness with essay score (0.33).

TABLES 4(A), 4(B). Pearson correlations of essay narrative scores with per-essay average levels of concreteness; (A) for prompts that have above 60 essays, (B) for all essays, grouped by story-type. Significance of correlation **: p < 0.01, *: p < .03, †: p < .05. CW = content words.

(A) Prompt | N | Nouns | Verbs | Adjectives | Adverbs | All CW
Travel | 75 | 0.400** | −0.017 | 0.401** | 0.268* | 0.371**
At First Glance | 69 | 0.404** | 0.006 | 0.326** | 0.286* | 0.240†
Memorable School Day | 153 | 0.080 | 0.040 | 0.212** | 0.239** | 0.089
Proudest Moment | 191 | 0.207** | 0.072 | 0.118 | 0.060 | 0.137
Weirdest Day Ever | 78 | 0.125 | 0.326** | 0.355** | 0.330** | 0.322**
You are the Teacher | 121 | 0.218* | 0.102 | 0.298** | 0.131 | 0.071

(B) Story type | N | Nouns | Verbs | Adjectives | Adverbs | All CW
Fictional | 171 | 0.465** | 0.164† | 0.417** | 0.384** | 0.413**
Hypothetical | 303 | 0.263** | 0.222** | 0.287** | 0.143* | 0.217**
Personal | 466 | 0.199** | 0.045 | 0.237** | 0.209** | 0.138**

Table 4B above shows the results of grouping essays by the three types of story that their prompts were classified into (which allows the data from all essays to be used). The Fictional story type has the highest correlation of concreteness and essay score (r=0.413), and it also has the highest correlation for nouns, for adjectives and for adverbs (as compared to other story types). Stories of the Hypothetical type show weak (yet significant) correlation of concreteness with scores, for nouns, verbs, adjectives and overall. Interestingly, the Personal story type shows the least relation of concreteness to scores, 0.138 overall; the adjectives there have a correlation of 0.237, adverbs 0.209, and the nouns barely reach 0.2. The results above suggest that the relation of concreteness to essay score varies for different story types; a sketch of this grouped correlation analysis appears below. The essays from the three story types were also checked to confirm whether they differ in concreteness or quality. An analysis of variance of narrative scores for the three groups, F(2,937)=1.427, p=0.241, reveals that they did not differ in the average quality of stories. The average per-essay concreteness was also compared for the three groups. Mean concreteness for Fiction essays is 2.91, for Hypothetical essays it is 2.99, and 2.90 for Personal. An analysis of variance, F(2,937)=19.774, p<0.0001, shows that average concreteness is not equal in those groups. Post hoc comparisons indicated that only the Hypothetical group differed significantly from the two other groups. Those results indicate that the different strength of correlation between lexical concreteness and essay score that we observe in the three groups is not due to between-group differences in either narrative scores or lexical concreteness.
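A sketch of the grouped correlation analysis, assuming SciPy and records of the form (story_type, average_concreteness, narrative_score); the record layout is an assumption for illustration.

```python
from collections import defaultdict
from scipy.stats import pearsonr

def correlations_by_group(records):
    # records: iterable of (story_type, avg_concreteness, narrative_score)
    groups = defaultdict(lambda: ([], []))
    for story_type, concreteness, score in records:
        xs, ys = groups[story_type]
        xs.append(concreteness)
        ys.append(score)
    # one (r, p-value) pair per story type, as in Table 4B
    return {g: pearsonr(xs, ys) for g, (xs, ys) in groups.items()}
```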
While those results support the old adage ‘prefer the concrete to the abstract’, it was also found that this relation varies for different story-types. It is prominent for Fictional stories, less pronounced for Hypothetical stories, and rather weak for Personal stories. Nouns and adjectives carry this relation most prominently, but it is also found for adverbs and verbs. FIG.1is a process flow diagram100for characterizing lexical concreteness in narrative text in which, at110, data is received that encapsulates narrative text having a plurality of words. Words of the text can be optionally tagged, at120, with a corresponding part-of-speech (POS). Thereafter, at130, function words are removed from the narrative text to result in only content words. A concreteness score is then assigned, at140, to each content word. Such assigning can include polling a database to identify matching words and using concreteness scores associated with such matching words as specified by the database. Data characterizing the assigned concreteness scores can, at150, be provided (e.g., displayed, transmitted, stored on disk, loaded into memory, etc.). FIG.2is a diagram200illustrating a sample computing device architecture for implementing various aspects described herein. A bus204can serve as the information highway interconnecting the other illustrated components of the hardware. A processing system208labeled CPU (central processing unit) (e.g., one or more computer processors/data processors at a given computer or at multiple computers) can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM)212and random access memory (RAM)216, can be in communication with the processing system208and can include one or more programming instructions for the operations specified here. Optionally, program instructions can be stored on a non-transitory computer-readable storage medium such as a magnetic disk, optical disk, recordable memory device, flash memory, or other physical storage medium. In one example, a disk controller248can interface one or more optional disk drives with the system bus204. These disk drives can be external or internal floppy disk drives such as260, external or internal CD-ROM, CD-R, CD-RW or DVD drives, or solid state drives such as252, or external or internal hard drives256. As indicated previously, these various disk drives252,256,260and disk controllers are optional devices. The system bus204can also include at least one communication port220to allow for communication with external devices either physically connected to the computing system or available externally through a wired or wireless network. In some cases, the at least one communication port220includes or otherwise comprises a network interface. To provide for interaction with a user, the subject matter described herein can be implemented on a computing device having a display device240(e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information obtained from the bus204via a display interface214to the user and an input device232such as a keyboard and/or a pointing device (e.g., a mouse or a trackball) and/or a touchscreen by which the user can provide input to the computer.
Other kinds of input devices232can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback by way of a microphone236, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input. The input device232and the microphone236can be coupled to and convey information via the bus204by way of an input device interface228. Other computing devices, such as dedicated servers, can omit one or more of the display240and display interface214, the input device232, the microphone236, and input device interface228. One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores. In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features.
Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” In addition, use of the term “based on,” above and in the claims is intended to mean, “based at least in part on,” such that an unrecited feature or element is also permissible. The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims. | 19,266 |
11861311 | DETAILED DESCRIPTION The present disclosure is more particularly described in the following examples that are intended as illustrative only since numerous modifications and variations therein will be apparent to those skilled in the art. Various embodiments of the disclosure are now described in detail. Referring to the drawings, like numbers, if any, indicate like components throughout the views. The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control. Certain terms that are used to describe the disclosure are discussed below, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description of the disclosure. For convenience, certain terms may be highlighted, for example using italics and/or quotation marks. The use of highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term is the same, in the same context, whether or not it is highlighted. It will be appreciated that the same thing can be said in more than one way. Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any terms discussed herein, is illustrative only, and in no way limits the scope and meaning of the disclosure or of any exemplified term. Likewise, the disclosure is not limited to various embodiments given in this specification. As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. As used herein, the terms “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A or B or C), using a non-exclusive logical OR. It should be understood that one or more steps within a method may be executed in a different order (or concurrently) without altering the principles of the present disclosure. As used herein, the term “module” or “unit” may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module or unit may include memory (shared, dedicated, or group) that stores code executed by the processor. The term “code”, as used herein, may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, and/or objects. The term shared, as used above, means that some or all code from multiple modules may be executed using a single (shared) processor. In addition, some or all code from multiple modules may be stored by a single (shared) memory.
The term group, as used above, means that some or all code from a single module may be executed using a group of processors. In addition, some or all code from a single module may be stored using a group of memories. The term “interface”, as used herein, generally refers to a communication tool or means at a point of interaction between components for performing data communication between the components. Generally, an interface may be applicable at the level of both hardware and software, and may be a uni-directional or a bi-directional interface. Examples of a physical hardware interface may include electrical connectors, buses, ports, cables, terminals, and other I/O devices or components. The components in communication with the interface may be, for example, multiple components or peripheral devices of a computer system. The present disclosure relates to computer systems. As depicted in the drawings, computer components may include physical hardware components, which are shown as solid line blocks, and virtual software components, which are shown as dashed line blocks. One of ordinary skill in the art would appreciate that, unless otherwise indicated, these computer components may be implemented in, but not limited to, the forms of software, firmware or hardware components, or a combination thereof. The apparatuses, systems and methods described herein may be implemented by one or more computer programs executed by one or more processors. The computer programs include processor-executable instructions that are stored on a non-transitory tangible computer readable medium. The computer programs may also include stored data. Non-limiting examples of the non-transitory tangible computer readable medium are nonvolatile memory, magnetic storage, and optical storage. The present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the present disclosure are shown. This disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art. In certain aspects, the present disclosure provides a unified framework integrating knowledge recognizing, mapping, triple extraction and missing link prediction and inference. The unified framework inventively combines a capsule network, a set transformer, and entity and relation spaces to construct a knowledge graph (KG) from unstructured text. The disclosure represents entities and relationships in the knowledge graph with capsule neurons, learns the embeddings with the set transformers, and predicts missing links in the unified framework. The set transformer is responsible for constructing high level entities and relations from low level entities and relations. The capsule network and set transformers enable the system to effectively handle the various semantic surface forms and the interaction between complex entities and relations. Based on this representation, the disclosure further provides a learning strategy, which provides the corresponding neural network the capability of automatically learning the representation of entities and relationships in the knowledge graph simultaneously. The use of the capsule network, the set transformer, and the separate entity and relation spaces is an advantage over the related art in KG construction.
FIG.1schematically depicts a system for knowledge graph construction and utilization according to certain embodiments of the present disclosure. As shown inFIG.1, the system100includes a computing device110. In certain embodiments, the computing device110may be a server computer, a cluster, a cloud computer, a general-purpose computer, a headless computer, or a specialized computer, which performs the knowledge graph construction and utilization. The computing device110may include, without being limited to, a processor112, a memory114, and a storage device116. In certain embodiments, the computing device110may include other hardware components and software components (not shown) to perform its corresponding tasks. Examples of these hardware and software components may include, but not limited to, other required memory, interfaces, buses, Input/Output (I/O) modules or devices, network interfaces, and peripheral devices. The processor112may be a central processing unit (CPU) which is configured to control operation of the computing device110. In certain embodiments, the processor112can execute an operating system (OS) or other applications of the computing device110. In certain embodiments, the computing device110may have more than one CPU as the processor, such as two CPUs, four CPUs, eight CPUs, or any suitable number of CPUs. The memory114may be a volatile memory, such as random-access memory (RAM), for storing the data and information during the operation of the computing device110. In certain embodiments, the memory114may be a volatile memory array. In certain embodiments, the computing device110may run on more than one processor112and/or more than one memory114. The storage device116is a non-volatile data storage media or device. Examples of the storage device116may include flash memory, memory cards, USB drives, solid state drives, or other types of non-volatile storage devices such as hard drives, floppy disks, optical drives, or any other types of data storage devices. In certain embodiments, the computing device110may have more than one storage device116. In certain embodiments, the computing device110may also include a remote storage device116. The storage device116stores computer executable code. The computer executable code includes a knowledge graph application118. The knowledge graph application118includes the code or instructions which, when executed at the processor112, generate knowledge for knowledge graph construction, construct the knowledge graph, and utilize the knowledge graph to perform related functions. In certain embodiments, the knowledge graph application118may not be executable code, but in the form of a circuit corresponding to the function of the executable code. By providing a circuit instead of executable code, the operation speed of the knowledge graph application118is greatly improved. In certain embodiments, as shown inFIG.1, the knowledge graph application118includes, among other things, a data preparation module120, a knowledge learning module130, a knowledge graph construction module150, a function module160, and a user interface170. The data preparation module120is configured to prepare training data for training the knowledge learning module130and provide inference data to the knowledge learning module130to infer knowledge, and send the training data or the inference data to the knowledge learning module130. For each knowledge graph, a large number of entities and a small number of relations are predefined.
The training data includes training documents; each training document may include one or more sentences, and each sentence may have labeled entities and relations between the labeled entities, when available. The training sentences, the entity labels, and the relation labels can be used as input to train the knowledge learning module130. After being well trained, the knowledge learning module130can be used for entity and relation prediction and inference. Accordingly, the data preparation module120is further configured to prepare inference data. The inference data includes a large number of documents, and each document may have one or more sentences. The sentences are not labeled with entities and relations. In certain embodiments, the training data and the inference data are of the same type, such as product description data, customer comments data, or customer service data. The knowledge learning module130is configured to, upon receiving the training data from the data preparation module120, perform training, and, after being well trained and upon receiving the inference data, infer entities and relations, and send the inferred entities and relations to the knowledge graph construction module150.FIG.2schematically depicts a detailed architecture of the knowledge learning module130according to certain embodiments of the present disclosure. As shown inFIG.2, a text input131is inputted. The text input131, for example, is a sentence of the training data or inference data. The sentence includes a number of sequential words, and the number of the words in each sentence in the training data or the inference data may be different. The knowledge learning module130is configured to convert the sentence into one-hot encoding representations {w1, w2, . . . , wi, . . . , wT}, where T is the number of tokens in the sentence, and the tokens include words and punctuations. In certain embodiments, the knowledge learning module130is configured to embed the one-hot representations into a sequence of word embeddings: E={e1, e2, . . . , ei, . . . , eT}. Each one-hot word representation has a corresponding word embedding. Here eiis a vector standing for a d-dimensional word embedding for the i-th word in the document. To get the word embedding representation E, the knowledge learning module130first looks up the embedding matrix $W^{wrd}\in\mathbb{R}^{d_{emb}\times|V|}$, where V is a fixed-sized vocabulary, and $d_{emb}$ is the size of the word embedding. The matrix $W^{wrd}$ contains the parameters to be learned with the model, and $d_{emb}$ is a hyper-parameter to be chosen by the user. The knowledge learning module130transforms a word wiinto its word embedding eiby using the matrix-vector product: $e_i=W^{wrd}v_i$, where viis a one-hot vector of the index of word wiin V. In certain embodiments, the word embedding layer may be word2vec. After word embedding, the knowledge learning module130is further configured to feed the word embeddings {e1, e2, . . . , ei, . . . , eT} to the long short-term memory (LSTM) encoder132, to capture dependency between words in the sequence, and the output from the LSTM encoder132is a set of feature vectors {u1, u2, . . . , ui, . . . , uT}, where uiencodes the semantics of the i-th word in the given sentence. The LSTM encoder132may be a unidirectional LSTM or a bidirectional LSTM. Alternatively, instead of using the LSTM layer, the knowledge learning module130is configured to use a transformer encoder to capture dependency between words in the sentence, which may use an attention mechanism; a hedged sketch of the embedding lookup and LSTM encoding follows.
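A hedged PyTorch sketch of the embedding lookup $e_i=W^{wrd}v_i$ followed by the (bidirectional) LSTM encoding; all dimensions are illustrative choices, not values from the disclosure.

```python
import torch
import torch.nn as nn

vocab_size, d_emb, d_hidden, T = 30000, 300, 256, 12   # illustrative sizes
embedding = nn.Embedding(vocab_size, d_emb)            # its weight acts as the matrix W_wrd
encoder = nn.LSTM(d_emb, d_hidden, batch_first=True, bidirectional=True)

token_ids = torch.randint(0, vocab_size, (1, T))       # one sentence of T token indices
E = embedding(token_ids)                               # word embeddings e_1 ... e_T
U, _ = encoder(E)                                      # feature vectors u_1 ... u_T, shape (1, T, 2*d_hidden)
```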
Specifically, the one-hot encoding representations {w1, w2, . . . , wi, . . . , wT} are inputted to the transformer encoder132, and the transformer encoder132outputs the set of feature vectors133{u1, u2, . . . , ui, . . . , uT}. In certain embodiments, the transformer encoder132is bidirectional encoder representations from transformers (BERT). The set of feature vectors133{u1, u2, . . . , ui, . . . , uT} are inputted to the self-structure attention134, and the self-structure attention134converts the feature vectors to the fixed length sentence embedding135, which is directly used as a fixed length of primary capsules136. In particular, the self-structure attention134takes the whole feature embedding U={u1, u2, . . . , ui, . . . , uT} as input, and outputs a matrix of weights A:

$A=\mathrm{softmax}(W_{s2}\tanh(W_{s1}U^{T}))$ (1).

Here $W_{s1}$ is a weight matrix with a shape of $d_a$-by-$u$ (unidirectional LSTM encoder132) or $d_a$-by-$2u$ (bidirectional LSTM encoder132), where $u$ is the dimension of the hidden state vectors of the LSTM encoder132and $d_a$ is a hyperparameter, $W_{s2}$ is an $r$-by-$d_a$ weight matrix, and $r$ is a hyperparameter that can be set arbitrarily. In this setting, the disclosure uses a softmax classifier to predict a label ŷ from a set of discrete classes Y for the sentence S. The classifier takes the word embeddings E as input, and U is the hidden state matrix of the LSTM encoder132:

$\hat{p}(y\mid S)=\mathrm{softmax}(A(S)U+b(S))$ (2), and

$\hat{y}=\arg\max_{y}\hat{p}(y\mid S)$ (3).

Here $A(S)U$ is the fixed length sentence embedding135, and $b(S)$ is a bias parameter. The fixed length sentence embedding135is in the form of a 2D matrix, and is regarded as the primary capsules136. Each row of the matrix attends to part of the sentence. After obtaining the primary capsules136, the knowledge learning module130is further configured to use a set transformer mechanism to learn the relationship between the primary capsules136and abstract entity and relation capsules, and obtain the entity/relation capsules142. As shown inFIG.2, the set transformer includes an encoder137and a decoder139. The encoder137includes multiple self-attention blocks (SABs)138, and the decoder139includes a pooling by multi-head attention (PMA) block140and multiple SABs141. The encoder137encodes the primary capsules136to obtain encoded primary capsules, and the decoder139uses the encoded primary capsules and the entity/relation seed embeddings to calculate the entity/relation capsules. Kindly note that the values of the predefined number of entity/relation seed embeddings may be set randomly to initialize the training, and the number of entity/relation seeds equals the number of obtained entity/relation capsules. There is no need to input the entity/relation seed embeddings in the following training process because information of the entity/relation seed embeddings is stored in the model. SAB is a special type of multi-head attention block (MAB). MAB is an adaptation of the encoder block of the Transformer [19] without positional encoding and dropout. The attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
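Before turning to the set-transformer blocks, the self-structure attention of equations (1)-(3) above can be sketched in PyTorch as follows; $d_a$ and $r$ are assumed hyperparameters, and the example sizes are illustrative.

```python
import torch
import torch.nn as nn

class SelfStructureAttention(nn.Module):
    def __init__(self, d_u, d_a=128, r=16):
        super().__init__()
        self.w_s1 = nn.Linear(d_u, d_a, bias=False)    # W_s1
        self.w_s2 = nn.Linear(d_a, r, bias=False)      # W_s2

    def forward(self, U):                              # U: (batch, T, d_u)
        # A = softmax(W_s2 tanh(W_s1 U^T)), normalized over the T tokens
        A = torch.softmax(self.w_s2(torch.tanh(self.w_s1(U))), dim=1)
        return A.transpose(1, 2) @ U                   # (batch, r, d_u): fixed-size primary capsules

M = SelfStructureAttention(d_u=512)(torch.randn(2, 20, 512))  # length-20 sentences -> (2, 16, 512)
```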
The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$:

$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$ (4), and

$\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}(\mathrm{head}_1,\ldots,\mathrm{head}_h)W^{O}$ (5),

where $\mathrm{head}_i=\mathrm{Attention}(QW_i^{Q},KW_i^{K},VW_i^{V})$, and the projections are parameter matrices $W_i^{Q}\in\mathbb{R}^{d_{model}\times d_k}$, $W_i^{K}\in\mathbb{R}^{d_{model}\times d_k}$, $W_i^{V}\in\mathbb{R}^{d_{model}\times d_v}$, and $W^{O}\in\mathbb{R}^{hd_v\times d_{model}}$.

$\mathrm{MAB}(X,Y)=\mathrm{LayerNorm}(H+\mathrm{rFF}(H))$ (6),

where $H=\mathrm{LayerNorm}(X+\mathrm{Multihead}(X,Y,Y))$. rFF is any row-wise feedforward layer (i.e., it processes each instance independently and identically), and LayerNorm is layer normalization [20].

$\mathrm{SAB}(X):=\mathrm{MAB}(X,X)$ (7).

PMA with seed vectors S is defined as:

$\mathrm{PMA}_k(Z)=\mathrm{MAB}(S,\mathrm{rFF}(Z))$ (8),

where k refers to the k assumption entity or relation vectors, and Z refers to the input set vectors. After obtaining the entity/relation capsules142, the knowledge learning module130is further configured to regulate the model by the relation inference regulation143. In particular, during the learning process, the knowledge learning module130tries to optimize the loss function combining the classification loss and the relation inference loss:

$L=L_c+L_r$ (9).

For the classification loss, the disclosure uses the cross-entropy loss. For multiclass classification, the disclosure uses the multiclass cross-entropy loss. For multilabel classification, the disclosure uses the binary cross-entropy loss. As a novel feature, the knowledge learning module130is configured to model entities and relations in distinct spaces, i.e., the entity space and the relation spaces, and performs translation in the relation space. For each triple (h, r, t), the entity embeddings are set as $h,t\in\mathbb{R}^{k}$ and the relation embedding is set as $r\in\mathbb{R}^{d}$. For each relation r, the disclosure sets a projection matrix $M_r\in\mathbb{R}^{k\times d}$, which may project entities from the entity space to the relation space, and which is learned during training. With the mapping matrix, the disclosure defines the projected vectors of entities as:

$h_r=hM_r$ (10), and

$t_r=tM_r$ (11).

The score function is correspondingly defined as:

$f_r(h,t)=\lVert h_r+r-t_r\rVert_2^{2}$ (12).

The disclosure defines the following margin-based score function as the objective for training:

$L_r=\sum_{(h,r,t)\in S}\sum_{(h',r,t')\in S'}\max(0,\,f_r(h,t)+\gamma-f_r(h',t'))$ (13),

where max(x, y) aims to get the maximum between x and y, γ is the margin, S is the set of correct triples and S′ is the set of incorrect triples. FIG.3schematically depicts the basic idea of the relation inference mechanism according to certain embodiments of the present disclosure. As shown inFIG.3, for each triple (h, r, t), entities in the entity space are first projected into the r-relation space as $h_r$ and $t_r$ with the operation $M_r$, and then $h_r+r=t_r$. The relation-specific projection can make the head/tail entities that actually hold the relation (denoted as solid circles) close to each other, and also get far away from those that do not hold the relation (denoted as solid triangles). In certain embodiments, the closeness is predefined, for example, within 5% of the value of the relation vector r. After obtaining the head-tail-relation triples, the knowledge learning module130is further configured to send the triples to the knowledge graph construction module150. In brief, the knowledge learning module130is configured to train the learning model by inputting text, encoding the text to the primary capsule layers, performing the set transformer to obtain the entity/relation capsule layers, performing relation inference to infer relations, and comparing the inferred relations with the labeled relations (calculating losses), so as to adjust the parameters for the encoding, the set transformer, and the projection matrix.
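A hedged PyTorch sketch of the MAB, SAB, and PMA blocks of equations (4)-(8); nn.MultiheadAttention stands in for equations (4)-(5), the seed parameter S plays the role of the k seed entity/relation vectors, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class MAB(nn.Module):                                  # equation (6)
    def __init__(self, d, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.rff = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, X, Y):
        H = self.ln1(X + self.attn(X, Y, Y)[0])        # H = LayerNorm(X + Multihead(X, Y, Y))
        return self.ln2(H + self.rff(H))               # MAB = LayerNorm(H + rFF(H))

class SAB(MAB):                                        # equation (7): SAB(X) := MAB(X, X)
    def forward(self, X):
        return super().forward(X, X)

class PMA(nn.Module):                                  # equation (8): PMA_k(Z) = MAB(S, rFF(Z))
    def __init__(self, d, k, heads=4):
        super().__init__()
        self.S = nn.Parameter(torch.randn(1, k, d))    # k seed entity/relation vectors
        self.mab = MAB(d, heads)
        self.rff = nn.Linear(d, d)

    def forward(self, Z):
        return self.mab(self.S.expand(Z.size(0), -1, -1), self.rff(Z))

capsules = PMA(d=64, k=10)(SAB(64)(torch.randn(2, 32, 64)))   # (2, 10, 64) entity/relation capsules
```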
After training, the knowledge learning module130can use the newly inputted text to infer relations, and the newly inferred relations are sent to the knowledge graph construction module150. The knowledge graph construction module150is configured to, upon receiving the triples, construct a new knowledge graph or complete an available knowledge graph using the obtained triples. The constructed or completed knowledge graph is accessible to the function module160. The function module160is configured to, when the knowledge graph is constructed or is substantially complete, use the knowledge graph to perform certain functions. The function module160may be stored in the computing device110or any other computing devices that are in communication with the computing device110. In certain embodiments, the function is garment recommendation, and the knowledge graph is constructed using garment fitting documents. The knowledge graph includes garment entities, the edges represent fitting, and the garments belonging to the same suit are linked by the edges. When a customer purchases a garment in an e-commerce platform, the function module160is configured to query the knowledge graph using the purchased garment, obtain garments that fit with the query garment, and recommend the garments thus obtained to the customer; a sketch of such graph construction and querying follows below. In certain embodiments, the function is to provide answers to customer questions, and the knowledge graph is constructed using customer question and answer documents. The knowledge graph includes question entities and answer entities, and the edges represent a suitable answer to a question. When a customer purchasing a product in an e-commerce platform asks a question, the function module160is configured to query the knowledge graph using the question, obtain answers tailored to the question, and provide the answer to the customer. In certain embodiments, the knowledge graph includes sub-graphs corresponding to different types of products, such that the answers provided to the customer can be more accurate. In certain embodiments, the function is to provide service to customer requests, and the knowledge graph is constructed using customer service documents. The knowledge graph includes service request entities and service entities, and optionally a link to a service provider for a specific service. The edges represent a suitable service to a service request. When a customer seeks a service for a product, the function module160is configured to query the knowledge graph using the service request, obtain a service tailored to the request, and link the service provider to the request. In certain embodiments, the function may also include making a service appointment for the customer with the service provider. In certain embodiments, the function may further include a trouble shooting process. The knowledge graph includes trouble shooting entities related to service requests, and the function module160is configured to provide the trouble shooting instructions corresponding to the service request. The customer may be able to solve his problem according to the provided instructions before seeking help from the service provider. The user interface170is configured to provide a user interface or graphic user interface on the computing device110or a remote terminal.
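An illustrative sketch of constructing and querying such a graph with networkx; the triples and the fits_with relation name are made up for illustration.

```python
import networkx as nx

# made-up triples standing in for the output of the knowledge learning module
triples = [("blue blazer", "fits_with", "grey slacks"),
           ("blue blazer", "fits_with", "white oxford shirt")]

kg = nx.MultiDiGraph()
for head, relation, tail in triples:
    kg.add_edge(head, tail, relation=relation)         # edge labeled with its relation

def recommend(garment):
    # garments linked to the given garment by a fitting relation
    return [tail for _, tail, data in kg.out_edges(garment, data=True)
            if data["relation"] == "fits_with"]

print(recommend("blue blazer"))                        # ['grey slacks', 'white oxford shirt']
```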
In certain embodiments, the user or the administrator of the system can configure parameters for the computing device110, especially the parameters used in the knowledge graph application118, which include, for example, the hyperparameters of the knowledge learning module130, the number of primary capsule layers, the number of SABs in the encoder137, the number of SABs in the decoder139, the entities and relations, etc. FIG.4schematically depicts training of knowledge graph learning according to certain embodiments of the present disclosure. In certain embodiments, the method400as shown inFIG.4may be implemented on a computing device110as shown inFIG.1. It should be particularly noted that, unless otherwise stated in the present disclosure, the steps of the method may be arranged in a different sequential order, and are thus not limited to the sequential order as shown inFIG.4. At procedure402, the data preparation module120prepares training data for constructing or completing a knowledge graph, and sends the training data to the knowledge learning module130. The entities and labels for the knowledge graph are predefined. The training data includes a number of documents, and each document has one or more sentences. The entities and the relations in each sentence are labeled. The data preparation module120may provide the training data for training by batches, and each batch includes, for example, 10 labeled sentences. At procedure404, upon receiving the batch of training data, the knowledge learning module130converts the T number of tokens (including words and punctuations) in each of the sentences into sequential one-hot encoding representations {w1, w2, . . . , wi, . . . , wT}, and sends the one-hot encoding representations to a word embedding module of the knowledge learning module130. Each sentence in the training is processed substantially independently; however, the information from the batch of sentences is used collaboratively to adjust the model parameters in each iteration of training.
The sentence embedding135has a fixed length independent from the lengths of the sentences. The sentence embeddings135are used as the primary capsules136, which have a fixed number of capsules. The importance of the capsules in the primary capsules136may be different for different sentences. At procedure412, the encoder137encodes the primary capsules136to obtain encoded primary capsules, and sends the encoded primary capsules to the decoder139. The encoder137includes multiple SABs138, and the number of SABs138may vary depending on the knowledge to be learned. At procedure414, the PMA140of the decoder139processes the encoded primary capsules and the seed entity/relation embeddings, and after further application of SABs141, obtains the entity/relation capsules. The seed entity/relation embeddings define all the entities and relations for the knowledge graph to be constructed, and the total number of entity/relation capsules equals the predefined total number of entities and relations. In certain embodiments, the seed entity/relation embeddings may be random valued embeddings or empty embeddings. The seed entity/relation embeddings are used as input during the initiation of the training, and there is no need to input the seed entity/relation embeddings after the initiation. At procedure416, after obtaining the entity/relation capsules, for each head entity-relation-tail entity triple, the knowledge learning module130projects the head entity and the tail entity from the entity space to the corresponding relation space using the operation Mr to obtain the projected head entity hr and the projected tail entity tr, and if hr+r=tr, determines that the relation between the head entity and the tail entity exists. In certain embodiments, the value of the operation Mr may be random or empty during initiation of the training, and will be learned during the following training process. Kindly note that there is one entity space for all the entities, and each of the relations has its own relation space. At procedure418, the knowledge learning module130calculates a loss based on the obtained head entity-tail entity relations, adjusts the parameters of the models based on the loss, and runs another iteration of training to minimize the loss. The procedures402to418may be performed iteratively on the same batch of training data for a predetermined number of iterations, or until the parameters converge. Then the knowledge learning module130uses another batch of training data for the training. After training using all the training data is completed, the models are well trained. In certain embodiments, the system100may use certain criteria to evaluate the training, and the models are regarded as being well-trained if the criteria are met. FIG.5schematically depicts constructing or completing a knowledge graph and using the knowledge graph according to certain embodiments of the present disclosure. In certain embodiments, the method500as shown inFIG.5may be implemented on a computing device110as shown inFIG.1. It should be particularly noted that, unless otherwise stated in the present disclosure, the steps of the method may be arranged in a different sequential order, and are thus not limited to the sequential order as shown inFIG.5. At procedure502, the data preparation module120prepares learning data for constructing or completing the knowledge graph, and sends the learning data to the knowledge learning module130.
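The relation inference of procedure416above can be sketched numerically as follows. The dimensions, the random initialization, and the exact-equality tolerance are assumptions; in training, the distance ||hr + r - tr|| would typically drive the loss rather than serve as a hard test.

# Numerical sketch of procedure 416: project the head and tail entities into
# the relation space with Mr, then test whether hr + r = tr (here within a
# small tolerance). Dimensions and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
entity_dim, relation_dim = 8, 4

h = rng.normal(size=entity_dim)                   # head entity (entity space)
t = rng.normal(size=entity_dim)                   # tail entity (entity space)
Mr = rng.normal(size=(relation_dim, entity_dim))  # per-relation projection
r = Mr @ t - Mr @ h                               # toy relation vector chosen so the triple holds

def relation_holds(h, r, t, Mr, tol=1e-6):
    hr, tr = Mr @ h, Mr @ t                       # projected head and tail
    return bool(np.linalg.norm(hr + r - tr) < tol)

print(relation_holds(h, r, t, Mr))                # True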
The learning data includes a large number of documents, and each document has one or more sentences. The data preparation module120may provide the learning data in batches, and each batch may include, for example, 10 sentences. At procedure504, the knowledge learning module130converts the words in each sentence to one-hot vectors, embeds the one-hot vectors into sequential word embeddings, encodes the word embeddings by the LSTM to obtain feature vectors, performs self-structure attention on the feature vectors to obtain fixed length sentence embeddings, regards the fixed length sentence embeddings as primary capsules, performs the set transformer on the primary capsules to obtain entity/relation capsules, and extracts head entity-relation-tail entity information from the obtained entity/relation capsules using relation inferences. The procedure504substantially corresponds to the procedures404-416, but there are no ground truth entity and relation labels for comparison, and there is no need to adjust model parameters. By the above process, the entities and relations from each sentence can be learned by one operation of the learning process, and there is no need to run one round of learning for each triple, or each entity, or each relation. Therefore, the learning process is efficient. At procedure506, after learning head entity-relation-tail entity triples from the learning data, the KG construction module150constructs or completes the knowledge graph using the learned triples. The constructed or completed knowledge graph is available to the function module160. At procedure508, the function module160uses the knowledge graph to perform a function. In certain embodiments, the knowledge graph is about garment fitting, each entity is a garment, and the relations or edges indicate whether the garments belong to the same suit of garments. When a customer reviews or purchases a garment from an e-commerce platform, the function module160uses the reviewed or purchased garment as a query against the knowledge graph, finds the garments that fit with the reviewed or purchased garment, and recommends the garments found from the knowledge graph to the customer, for example, by pushing a message of the garments to the customer, or displaying the recommended garments to the customer when he enters the e-commerce platform. In certain embodiments, the knowledge graph is about questions and answers on products, the entities include products and features of the products, and the relations or edges indicate whether the products have the corresponding features. The function module160provides a question and answer interface, such as a chat box, to customers. When a customer is interested in a product and asks a question about a feature of the product, the function module160uses the product and the question as a query against the knowledge graph to obtain the corresponding feature of the product, and includes the corresponding feature in the answer to the customer. In certain embodiments, the knowledge graph is about service requests on products, the entities include products, service requests on the products, and service solutions to the service requests. The relations or edges link the product entities and the corresponding service solutions. The service solution may include instructions for a service request or a contact of service providers for the service request. The function module160provides a service interface to customers.
When a customer has a service request for his purchased product, he may describe the service request via the interface; the function module160then finds solutions to the service request and includes the query result in the answer to the customer. The answer may instruct the customer to troubleshoot the service problem by himself, or provide the customer with the contact information of a customer service provider. In certain embodiments, the function module160may further schedule an appointment for the service between the customer and the service provider. In certain aspects, the present disclosure is related to a non-transitory computer readable medium storing computer executable code. The code, when executed at a processor112of the computing device110, may perform the methods as described above. In certain embodiments, the non-transitory computer readable medium may include, but is not limited to, any physical or virtual storage media. In certain embodiments, the non-transitory computer readable medium may be implemented as the storage device116of the computing device110as shown inFIG.1. Certain embodiments of the present disclosure, among other things, have the following advantages. (1) Capsule networks [12]-[14] were first proposed to achieve viewpoint equivariance in image classification, tackling the bag-of-features problem caused by the use of pooling operations in CNNs. They can generalize to recognize the same objects with different viewpoints in images through routing algorithms between low-level features and high-level features. The present disclosure uses such generalization capability of the capsule network to learn hierarchical relationships and complex patterns and abstract away from different surface realizations in NLP applications, and applies the capsule network in solving knowledge graph construction from unstructured text. (2) The set transformer aims to solve set input problems that satisfy two properties: first, the output of the model remains the same when the order of the input instances changes, and second, the model can take inputs of any size. Set transformers capture the pairwise and high-order interactions between elements in the set through a self-attention mechanism. Accordingly, the present disclosure uses these features of the set transformer to solve complex problems such as knowledge recognition and missing link prediction. (3) The disclosure takes advantage of the self-attention mechanism in the set transformer to aggregate features, which makes it possible to effectively capture the interaction between entities and relations and to learn more accurate representations. The incorporation of the set transformer to process text is novel, in contrast to using the set transformer to process images. (4) Given the fact that entities typically are sophisticated and contain multiple aspects with relations centered on corresponding aspects, projecting entities and relations into the same embedding space cannot differentiate the various aspects of entities and their relations effectively. To solve the problem, the disclosure models entity and relation embeddings into separate entity and relation spaces based on the capsule net and the set transformer. The disclosure then performs translation of entities to relation spaces during the learning process and uses this as a regularizer. The representation of the entities and relations by vectors is also novel. (5) The system provides a unified and integrated framework. The framework uses an end-to-end capsule neural network to learn all the representations at once.
The three components of the framework include: a) a text encoder, such as an LSTM or a transformer with self-structure attention, to encode the raw text into primary capsules; b) a set transformer mechanism to learn the relationship between the primary capsules and the abstract entity and relation capsules; and c) a relation inference used as the regularization during the learning process. The foregoing description of the exemplary embodiments of the disclosure has been presented only for the purposes of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to explain the principles of the disclosure and their practical application so as to enable others skilled in the art to utilize the disclosure and various embodiments and with various modifications as are suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the present disclosure pertains without departing from its spirit and scope. Accordingly, the scope of the present disclosure is defined by the appended claims rather than the foregoing description and the exemplary embodiments described therein. REFERENCES [1] Natthawut Kertkeidkachorn and Ryutaro Ichise, T2KG: an end-to-end system for creating knowledge graph from unstructured text, The AAAI-17 Workshop on Knowledge-Based Techniques for Problem Solving and Reasoning, 2017, 743-749.[2] Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu, Learning entity and relation embeddings for knowledge graph completion, Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015, 2181-2187.[3] Ryan Clancy, Ihab F.
Ilyas, and Jimmy Lin, Knowledge graph construction from unstructured text with applications to fact verification and beyond, Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), 2019, 39-46.[4] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko, Translating embeddings for modeling multi-relational data, Advances in Neural Information Processing Systems, 2013, 2787-2795.[5] Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen, Knowledge graph embedding by translating on hyperplanes, Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014, 1112-1119.[6] Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao, Knowledge graph embedding via dynamic mapping matrix, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 2015, 687-696.[7] Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson, STransE: a novel embedding model of entities and relationships in knowledge bases, Proceedings of NAACL HLT 2016, 2016, 460-466.[8] Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao, Knowledge graph completion with adaptive sparse transfer matrix, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016, 985-991.[9] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng, Embedding entities and relations for learning and inference in knowledge bases, 2014, arXiv:1412.6575.[10] Theo Trouillon, Johannes Welbl, Sebastian Riedel, Eric Gaussier, and Guillaume Bouchard, Complex embeddings for simple link prediction, Proceedings of Machine Learning Research, 2016.[11] Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung, A capsule network-based embedding model for knowledge graph completion and search personalization, Proceedings of NAACL-HLT 2019, 2019, 2180-2189.[12] Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton, Dynamic routing between capsules, NIPS 2017, 2017, 3857-3867.[13] Geoffrey Hinton, Sara Sabour, and Nicholas Frosst, Matrix capsules with EM routing, ICLR, 2018, 1-15.[14] Yao-Hung Hubert Tsai, Nitish Srivastava, Hanlin Goh, and Ruslan Salakhutdinov, Capsules with inverted dot-product attention routing, 2020, arXiv:2002.04764.[15] Wei Zhao, Haiyun Peng, Steffen Eger, Erik Cambria, and Min Yang, Towards scalable and reliable capsule networks for challenging NLP applications, 2019, 1549-1559.[16] Zhuang Chen and Tieyun Qian, Transfer capsule network for aspect level sentiment classification, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019, 547-556.[17] Wei Zhao, Jianbo Ye, Min Yang, Zeyang Lei, Suofei Zhang, and Zhou Zhao, Investigating capsule networks with dynamic routing for text classification, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018, 3110-3119.[18] Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, and Yee Whye Teh, Set Transformer: a framework for attention-based permutation-invariant neural networks, Proceedings of the 36th International Conference on Machine Learning, 2019.[19] Ashish Vaswani et al., Attention is all you need, NIPS 2017, 2017, 5999-6009.[20] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton, Layer normalization, 2016, arXiv:1607.06450. | 43,414 |
11861312 | DETAILED DESCRIPTION Embodiments of the present disclosure provide techniques for dynamically evaluating the lexical knowledge and understanding of users in order to tailor materials to an appropriate level of complexity. Additionally, embodiments disclosed herein can be used to monitor and evaluate user engagement in order to dynamically alter the complexity of the presented content, which ensures that users remain engaged and attentive, and improves retention and comprehension of the materials. In some embodiments, documents or other content are analyzed and evaluated in order to determine semantic similarity between documents, as well as lexical complexity of each document. In an embodiment, one or more machine learning (ML) models can thereafter be trained based in part on the complexity of each document or set of documents, in order to recommend or select an appropriate complexity for a given audience of users. Once the model is trained, in some embodiments, the engagement of the audience is actively monitored and provided to the model(s) in order to dynamically tailor the content of the presentation and maximize engagement. FIG.1Aillustrates a workflow100for training a model to correlate user engagement and lexical understanding, according to one embodiment disclosed herein. In the illustrated embodiment, a set of Documents105are provided as input to a Document Analyzer110. In embodiments, the Documents105can include any information content, such as presentation materials (e.g., sets of slides), handouts, papers, books, multimedia (e.g., videos and/or audio), and the like. The Documents105may relate to any number of domains or topics, and may be of any level of complexity. In the illustrated embodiment, the Document Analyzer110receives the Documents105and processes them to generate Processed Documents115which are each associated with one or more Tags120. In one embodiment, the Document Analyzer110uses one or more ML models and/or natural language processing (NLP) techniques to cluster the Documents105based on semantic similarity, and assigns Tags120reflecting these clusters (e.g., to group semantically-similar documents). In some embodiments, the Document Analyzer110further determines statistical data about each Document105in order to determine a lexical complexity of the Document105. In various embodiments, the statistical data can include the frequency with which various tokens (e.g., words, phrases, and/or concepts) appear in the Document105. In one such embodiment, one or more of these tokens are associated with predefined complexity scores, and a complexity of the Document105can be determined based on the frequencies of each token. For example, in such an embodiment, some words or phrases may be associated with high complexity while other words and phrases are associated with relatively low complexity. In such an embodiment, a Document105having a high frequency of the high complexity tokens (as compared to low complexity tokens) will be assigned a higher complexity score, relative to a document with a relatively lower frequency of high complexity tokens. In some embodiments, the Document Analyzer110first evaluates the Documents105to determine frequencies of various tokens with respect to the entire corpus of Documents105and/or with respect to each cluster. The Document Analyzer110can then determine the frequency of each such token with respect to a given Document105in order to determine a complexity of the Document105.
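A minimal sketch of the frequency-based scoring just described follows; the per-token complexity scores and the sample sentences are assumptions, since the disclosure leaves the exact scoring scheme open.

# Minimal sketch of frequency-weighted complexity scoring: each token carries
# a predefined complexity score, and a document's score is the frequency-
# weighted average of those scores. The scores and sentences are hypothetical.
from collections import Counter

TOKEN_COMPLEXITY = {"myocardial": 0.9, "perfusion": 0.8,
                    "heart": 0.2, "blood": 0.1}   # unknown tokens score 0.0

def complexity_score(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return sum(TOKEN_COMPLEXITY.get(tok, 0.0) * n
               for tok, n in counts.items()) / total

print(complexity_score("the heart pumps blood".split()))             # 0.075
print(complexity_score("myocardial perfusion was reduced".split()))  # 0.425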
In one such embodiment, the Document Analyzer110identifies common tokens that are found with a first threshold frequency in the Documents105(or within a given cluster), less common tokens that are found below the first threshold but above a second threshold, and rare tokens that are found below the second threshold. If a first Document105includes a relatively high number or frequency of rare tokens as compared to a second Document105, the Document Analyzer110can determine that the first Document105is complex, as compared to the second Document105. In one embodiment, the Document Analyzer110uses predefined threshold frequencies of common, less common, and rare tokens in order to determine lexical complexity of each Document105. For example, in an embodiment, a Document105with a frequency or count of rare words exceeding a first threshold may be classified as “advanced,” while documents with a frequency or count of rare tokens between the first threshold and a second threshold are classified as “intermediate” and documents with a frequency or count of rare tokens below the second threshold are classified as “basic.” Any thresholds may be utilized with respect to each class of token (e.g., rare, less common, and common) in order to classify the Documents105. In an embodiment, the Tags120generated by the Document Analyzer110include this lexical complexity classification. As illustrated, these Processed Documents115are provided to or otherwise presented to a test set of Users125in order to monitor their Engagement Level(s)130. In one embodiment, a presenter uses some or all of the Processed Documents115to create one or more presentations. In some embodiments, some or all of the Processed Documents115are provided directly to the Users125. In an embodiment, multiple separate presentations are prepared based on the Processed Documents115. In such an embodiment, each presentation of the material includes a subset of the Processed Documents115corresponding to a given cluster and complexity. For example, a first presentation may correspond to Processed Documents115with a first cluster tag (e.g., belonging to a “healthcare” domain) and a first lexical complexity tag (e.g., a “beginner” complexity). In some embodiments, these separate presentations/sets of documents are each provided to a separate set of Users125. In other embodiments, some or all of the sets of Users125may receive multiple presentations. In the illustrated workflow100, for each presentation, the Engagement Level(s)130of the Users125are monitored (on a per-user basis and/or aggregated across the users) and used to train one or more ML Models140. In one embodiment, for each set of Users125, the ML Models140are also trained based on the Lexical Knowledge135of the Users125(on a per-user basis and/or as an aggregated or average value). In one embodiment, the Lexical Knowledge135indicates the user's comfort and expertise, and can be based on a variety of factors including the native language of the user, other language(s) the user speaks or understands, the education level of the user, and the like. In some embodiments, the Lexical Knowledge135further indicates the level of the user's domain and subject knowledge, such as terms, notations, and formulas they are familiar with, and the like. In embodiments, the Engagement Levels130can be determined in any number of ways.
For example, in one embodiment, one or more of the Users125are fitted with electroencephalogram (EEG) and/or electrocardiogram (ECG) equipment in order to monitor the brain and/or heart activity of the user. In some embodiments, one or more audio capture devices (e.g., microphones) are used to monitor audio from the Users125in order to determine the Engagement Levels130. For example, the system may monitor the frequency of questions or interruptions from the Users125. In one embodiment, the system uses one or more NLP algorithms to analyze the content of the user remarks, in order to determine their complexity, the user's understanding, and the like. Further, in some embodiments, the system monitors for sounds indicating poor engagement, such as fidgeting, yawning, murmuring within the set of Users125, and the like. Additionally, in one embodiment, the system utilizes one or more video capture devices (e.g., cameras) to capture images and/or video of the users for analysis. In one such embodiment, the system can apply image recognition algorithms to determine the Engagement Level130of the users. For example, the system may recognize indications of high engagement (e.g., if the users appear to be attentive and facing the presenter) and low engagement (e.g., if the users are fidgeting, looking around the room, yawning, napping, and the like). In at least one embodiment, each User125can additionally indicate (e.g., via a mobile device) how engaged they feel at any given time. In embodiments, the system can utilize any combination of methodologies, as well as other techniques not discussed in detail here, to determine the Engagement Levels130. In the illustrated embodiment, the ML Models140are also provided with the Processed Documents115and corresponding Tags120. In one embodiment, each ML Model140represents a supervised machine learning model trained to receive an indication of Lexical Knowledge135and/or Engagement Levels130as input, and select one or more Processed Documents115as output. In an embodiment, during training, the system can determine, for each User125with a given Lexical Knowledge135, an Engagement Level130when presented with Processed Documents115associated with a given lexical complexity (as indicated in the corresponding Tags120). In some embodiments, the system further prompts or facilitates presentation of Processed Documents115with a different lexical complexity in order to determine how the Engagement Levels130change. Based on these factors, in one embodiment, the system trains one or more ML Models140to act as a classifier by receiving an indication of the Lexical Knowledge135of the target user(s) and outputting a selection of the appropriate lexical complexity (e.g., a set of documents). In embodiments, the training process continues until no Processed Documents115remain, no additional test Users125are available, the ML Models140are sufficiently trained, and the like. In one embodiment, a separate ML Model140is trained for each cluster of documents (e.g., for each domain or topic). In another embodiment, a single ML Model140can be trained across a number of clusters and/or domains. FIG.1Billustrates a workflow150for using a model to correlate user engagement and lexical understanding, according to one embodiment disclosed herein. In the illustrated embodiment, the ML Model(s)140have been trained and are deployed for use. As illustrated, the workflow150begins when an indication of the Lexical Knowledge155of a target set of Users170is provided to the ML Model(s)140.
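To make the classifier training described above concrete, the following sketch fits a small supervised model mapping (lexical knowledge, engagement) to a recommended complexity level. The numeric encoding, the toy training rows, and the choice of a decision tree are assumptions; the disclosure does not mandate a particular model family.

# Sketch of the supervised mapping: (lexical knowledge, observed engagement)
# -> recommended complexity level. Encoding, rows, and model are assumptions.
from sklearn.tree import DecisionTreeClassifier

# Features: [lexical_knowledge (0=low .. 2=high), engagement (0.0 .. 1.0)]
X = [
    [0, 0.9], [0, 0.3],   # low-knowledge users
    [1, 0.8], [1, 0.2],   # intermediate users
    [2, 0.9], [2, 0.4],   # high-knowledge users
]
# Labels: the complexity that kept (or would restore) engagement.
y = ["basic", "basic", "intermediate", "basic", "advanced", "intermediate"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# A knowledgeable but currently disengaged audience gets simpler material.
print(model.predict([[2, 0.3]])[0])   # e.g. "intermediate"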
In one embodiment, the Lexical Knowledge155of the Users170is retrieved or determined based on one or more profiles associated with the Users170. For example, in one such embodiment, the system determines a level of Lexical Knowledge155of each User170in the target group based on factors such as the user's native language, other languages the user speaks, education level, and the like. In some embodiments, the target Users170are indicated by a user (e.g., a presenter), and the system identifies the corresponding Lexical Knowledge155. In some embodiments, the ML Model140processes each Lexical Knowledge155(e.g., the knowledge of each user). In another embodiment, the Lexical Knowledge155is aggregated to determine an overall knowledge of the group. For example, the system can average the knowledge of each user, determine the median (e.g., without outliers), and the like. Based on the Lexical Knowledge155, the ML Models140identify/select Content160using the Tags165. In one embodiment, each item of the Content160corresponds to a Processed Document115. In an embodiment, the ML Model140outputs a recommended level of lexical complexity, and Content160with a corresponding complexity (indicated by the Tags165) is retrieved. In embodiments, the Content160is further selected based on the topic, domain, and/or cluster to which it belongs (e.g., based on the desired topic of the presentation). As illustrated, the system facilitates presentation of the selected Content160to the Users170(e.g., by providing it to a presenting user, preparing handouts, and the like). While the presentation is ongoing, in some embodiments, the Engagement Levels175of one or more of the Users170are determined. In embodiments, the Engagement Levels175of the Users170can be monitored in any number of ways. For example, in some embodiments, the system collects biological data such as brain activity, heart rate, sweat detection, pupil dilation, and the like in order to determine the Engagement Levels175. In at least one embodiment, the system utilizes one or more image and/or audio capture devices to capture image data and/or audio data, and processes this data to determine engagement. For example, in one such embodiment, the system can apply image and/or motion recognition to determine whether the Users170appear engaged (e.g., looking at the presenter, sitting still, etc.) or disengaged (e.g., looking around the room aimlessly, shuffling or moving in their seats, sitting with their eyes closed and/or head down, etc.). Similarly, in one such embodiment, the audio can be parsed to determine whether the users are asking questions or commenting on the presentation. Additionally, in some embodiments, the Users170can explicitly provide their engagement levels (e.g., via a score or rating). In the illustrated embodiment, the Engagement Levels175are provided to the ML Model(s)140to select additional or alternative Content160. In one embodiment, the system does so upon determining that the Engagement Levels175are below a predefined threshold. In one embodiment, the system aggregates the Engagement Levels175(e.g., by determining a median or mean engagement) prior to determining whether it satisfies the threshold. If the Engagement Levels175satisfy the defined threshold(s), in one embodiment, the system takes no action and allows the presentation to proceed. In one embodiment, if the Engagement Levels175are below the required threshold, the system provides updated Content160to the presenter and/or Users170.
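The aggregation-and-threshold test described above might look like the following sketch; the per-user scores and the 0.6 threshold are purely illustrative.

# Sketch of aggregating per-user engagement and testing it against the
# predefined threshold before reselecting content. Values are illustrative.
from statistics import median

engagement_levels = [0.82, 0.75, 0.15, 0.78]   # one score per User 170
THRESHOLD = 0.6

aggregate = median(engagement_levels)   # the median resists outliers like 0.15

if aggregate < THRESHOLD:
    print("reselect content via the ML model")
else:
    print("keep presenting current content")   # printed here: median is 0.765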
In some embodiments, the Content160includes documents regarding a given topic, with varying levels of complexity. For example, in one embodiment, a set of documents (e.g., corresponding to a presentation) can be defined at various levels of complexity (e.g., at a school-aged level, at a graduate level, and at an expert level). In one such embodiment, the system can select between these levels using the ML Models140. FIG.2is a block diagram illustrating a Recommender System205configured to evaluate and recommend content using machine learning, according to one embodiment disclosed herein. Although depicted as a physical device, in embodiments, the Recommender System205may be implemented using virtual device(s), and/or across a number of devices (e.g., in a cloud environment). As illustrated, the Recommender System205includes a Processor210, Memory215, Storage220, a Network Interface225, and one or more I/O Interfaces230. In the illustrated embodiment, the Processor210retrieves and executes programming instructions stored in Memory215, as well as stores and retrieves application data residing in Storage220. The Processor210is generally representative of a single CPU and/or GPU, multiple CPUs and/or GPUs, a single CPU and/or GPU having multiple processing cores, and the like. The Memory215is generally included to be representative of a random access memory. Storage220may be any combination of disk drives, flash-based storage devices, and the like, and may include fixed and/or removable storage devices, such as fixed disk drives, removable memory cards, caches, optical storage, network attached storage (NAS), or storage area networks (SAN). In some embodiments, input and output devices (such as keyboards, monitors, etc.) are connected via the I/O Interface(s)230. Similarly, via the Network Interface225, the Recommender System205can be communicatively coupled with one or more other devices and components (e.g., via the Network270, which may include the Internet, local network(s), and the like). As illustrated, the Processor210, Memory215, Storage220, Network Interface(s)225, and I/O Interface(s)230are communicatively coupled by one or more Buses265. In the illustrated embodiment, the Storage220includes a set of Documents255, each with one or more Tags260. In some embodiments, the Documents255are used to train one or more ML models used to select content for users, based on their lexical understanding and/or current engagement. Similarly, in one embodiment, the trained ML models select from among the Documents255based on the lexical understanding and/or engagement of the target user(s). In an embodiment, the Tags260can indicate a cluster to which the corresponding Document255belongs (e.g., based on semantic similarity with other Documents255), a domain or topic of the Document255, a type of the Document255, a lexical complexity of the Document255, and the like. As illustrated, the Memory215includes an Engagement Application235. Although depicted as software residing in Memory215, in embodiments, the functionality of the Engagement Application235can be implemented using hardware, software, or a combination of hardware and software. In the illustrated embodiment, the Engagement Application235includes a Document Analyzer110, an Engagement Component245, and an ML Component250. Although depicted as discrete components for conceptual clarity, in embodiments, the operations of the Document Analyzer110, Engagement Component245, and ML Component250can be combined or distributed across any number of components.
In an embodiment, the Document Analyzer110generally receives and analyzes Documents255, as discussed above. For example, the Document Analyzer110may cluster the Documents255based on semantic similarity, and/or evaluate each Document255to determine its lexical complexity. In one embodiment, as discussed above, the Document Analyzer110classifies the lexical complexity of a given Document255based on the frequency of tokens (e.g., words, phrases, concepts, and the like) as compared to the overall corpus of Documents255(or as compared to Documents255within the same semantic cluster). For example, in one such embodiment, the Document Analyzer110can identify and classify as “rare” any words that appear in less than a predefined percentage of the Documents255. In another embodiment, the Document Analyzer110classifies a word as “rare” if it appears below a predefined frequency in the collection of Documents255, such as by determining the frequency of the word for each document (or for the overall corpus), where the frequency of an index word is the ratio of the number of times the index word appears to the number of other words in the document(s). In some embodiments, the Document Analyzer110similarly classifies “less common” words and/or “common” words using other thresholds (e.g., using progressively higher thresholds of frequency). In one embodiment, the Documents255are then classified into levels or categories of lexical complexity (e.g., basic, intermediate, and advanced) based on the relative frequency of the defined “rare,” “common,” and/or “less common” words in each. For example, a Document255with a large proportion of “rare” words may be classified as “advanced,” while a Document255with a lower proportion of “rare” words is classified as “intermediate,” and a Document255with an even lower proportion of “rare” words is classified as “basic.” The lexical complexity of each, as well as the semantic cluster, can then be incorporated in the Tags260associated with each Document255. In one embodiment, the Engagement Component245monitors the engagement of users while receiving, reviewing, or otherwise interacting with the Documents255(e.g., while a first user is giving a presentation involving the Documents255). In one embodiment, during a training phase, this engagement data is provided to the ML Component250for training and/or refining the models. During deployment, in an embodiment, the engagement data is fed into the ML models to determine whether to select a different set of one or more Documents255for a given set of user(s). In embodiments, the Engagement Component245can use a number of means to determine the engagement of the user. For example, in various embodiments, the Engagement Component245may utilize EEG, EKG, heart-rate monitoring, and other biological data monitoring. Similarly, in some embodiments, the Engagement Component245uses image and/or audio analysis (e.g., to monitor whether the users are paying attention). In an embodiment, during training, the ML Component250receives the determined engagement level(s) and trains or refines one or more ML models based on the engagement of the users. In one embodiment, the training is further based on the lexical knowledge of the users. For example, the ML Component250may train an ML model to receive an indication of the lexical knowledge of the user(s) and/or the current engagement of the users, and to output an indication of the recommended lexical complexity of the presentation (e.g., of the Documents255that should be used).
Once the models are so trained, they can be used in real-time to analyze the lexical knowledge and/or engagement of an audience of users, in order to select appropriate Documents255for the users. FIG.3is a flow diagram illustrating a method300for training machine learning models based on lexical complexity and knowledge, according to one embodiment disclosed herein. The method300begins at block305, where an Engagement Application235receives a corpus of documents for ingestion. In embodiments, these documents can pertain to any number of topics or domains, and be of any complexity. In some embodiments, the Engagement Application235receives a set of documents and generates, for each document, one or more corresponding “simplified” document(s) (e.g., by substituting more common synonyms for rare or uncommon words, by using machine learning to simplify sentences and/or shorten them, and the like). The method300continues to block310, where the Engagement Application235clusters the received documents based on their semantic similarity. In one embodiment, to do so, the Engagement Application235generates semantic similarity scores between each pair of documents in the received corpus, and generates one or more clusters of documents based on these similarity scores. In this way, the Engagement Application235can suggest other documents that are semantically similar to a given document (but which may have a different lexical complexity). At block315, the Engagement Application235selects one of the received documents. The method300then proceeds to block320, where the Engagement Application235determines the lexical complexity of the selected document. In one embodiment, as discussed above, determining the lexical complexity of the selected document includes determining the number, frequency, and/or ratio of “rare” words or tokens, “uncommon” words or tokens, and/or “common” words or tokens, as compared to the overall corpus (or as compared to the cluster to which the selected document belongs). In an embodiment, these values can then be compared to a set of thresholds to classify the documents into categories of lexical complexity. For example, an “expert” category may include documents where a defined minimum number of “rare” words are found in the document, each with a minimum frequency. At block325, the Engagement Application235determines whether there is at least one additional document to be analyzed. If so, the method300returns to block315. Otherwise, the method300proceeds to block330, where the Engagement Application235determines the lexical knowledge of the test audience. In one embodiment, the Engagement Application235determines the audience's lexical knowledge based on profile(s) of the users (e.g., specifying native language, education level, and the like). In another embodiment, the Engagement Application235determines the audience's lexical knowledge based on responses to surveys, or any other data. At block335, the Engagement Application235selects one or more documents to be provided and/or presented to the audience. In one embodiment, the Engagement Application235receives an indication of the desired topic/domain, and selects documents randomly from within the domain (e.g., without regard to the lexical complexity of the document). In another embodiment, the Engagement Application235is provided with an indication of an index or seed document, and selects documents within the same semantic cluster.
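As a sketch of the pairwise-similarity clustering in block310, the following uses TF-IDF cosine similarity as a stand-in for whatever semantic measure is deployed; the sample documents, the greedy grouping rule, and the 0.2 threshold are assumptions made only for illustration.

# Sketch of block 310: pairwise semantic similarity scores over the corpus,
# then a simple threshold-based grouping. The similarity measure, documents,
# and threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "heart disease risk and blood pressure",
    "blood pressure medication for heart patients",
    "training schedules for marathon runners",
]

sim = cosine_similarity(TfidfVectorizer().fit_transform(docs))

# Greedy clustering: a document joins the first cluster whose seed document
# it resembles closely enough; otherwise it starts a new cluster.
clusters, seeds = [], []
for i in range(len(docs)):
    for c, s in enumerate(seeds):
        if sim[i, s] >= 0.2:
            clusters[c].append(i)
            break
    else:
        seeds.append(i)
        clusters.append([i])

print(clusters)   # [[0, 1], [2]]: the two medical documents group together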
In another embodiment, if one or more ML models have been (partially) trained, the Engagement Application235uses these models to select documents. At block340, while the selected document(s) are being provided and/or presented to the test audience, the Engagement Application235monitors the engagement of the users. Based on this engagement, at block345, the Engagement Application235refines one or more ML models to better select documents that are able to enhance or maintain the engagement of the audience. In one embodiment, this includes refining one or more weights associated with the ML model(s) to output a suggested lexical complexity, given the engagement and/or lexical knowledge of the audience. In some embodiments, the Engagement Application235refines the models to receive an indication of the user's lexical knowledge, and output a suggested complexity. To do so, the Engagement Application235can determine the complexity of the document(s) currently being presented, as well as the engagement of the audience. The Engagement Application235can then select or provide documents of a different complexity, and monitor the engagement change(s). By iteratively doing so, the Engagement Application235can determine how to modify the weights of the ML models to correlate lexical knowledge and complexity with engagement. At block350, the Engagement Application235determines whether the training is complete. In various embodiments, training may end because, for example, there are no additional test audiences, every document has been used, the models have reached a desired accuracy, and the like. If training has not completed, the method300returns to block330, where the Engagement Application235determines the knowledge of the test audience anew (e.g., for a new set of users aiding in the training of the models). Otherwise, the method300continues to block355, where the Engagement Application235returns/deploys the trained models for use in practice. FIG.4is a flow diagram illustrating a method400for using a trained machine learning model to recommend content based on lexical understanding and engagement, according to one embodiment disclosed herein. The method400begins at block405, where the Engagement Application235identifies the target user(s) for whom documents are to be selected. In one embodiment, one or more of the users can identify themselves (e.g., by signing in), or the presenting user can indicate, to the Engagement Application235, the identities of the audience users. In some embodiments, the Engagement Application235uses facial recognition, voice recognition, and the like to identify the target users in the audience. In some embodiments, the Engagement Application235also determines and/or receives an indication of the desired domain/topic. The method400then proceeds to block410, where the Engagement Application235determines the lexical knowledge of the identified user(s). In one embodiment, the Engagement Application235does so by retrieving user profiles associated with the users. At block415, the Engagement Application235selects one or more documents for the users using the trained ML models. In one embodiment, the Engagement Application235does so by determining an aggregate knowledge (e.g., median or mean) of the users, and providing it as input to the model(s). 
In some embodiments, the ML model(s) output an indication of the recommended lexical complexity, and the Engagement Application235selects one or more documents tagged with the indicated complexity, from within the indicated domain or semantic cluster. In various embodiments, the selected documents can then be provided to the users and/or to the presenter, in order to aid the presentation. At block420, during this interaction, the Engagement Application235monitors the engagement levels of the audience user(s) and/or the presenting user(s). As discussed above, the Engagement Application235may do so by monitoring biological data of the users, image and/or audio of the users, and the like. The method400then continues to block425, where the Engagement Application235determines whether the current engagement of the user(s) satisfies predefined criteria. In one embodiment, the Engagement Application235first aggregates the engagement levels of each user to determine an overall engagement of the audience. In an embodiment, the predefined criteria include a minimum threshold level of engagement. If the Engagement Application235determines that the current audience engagement satisfies this threshold, the method400returns to block420to continue monitoring the audience's engagement. That is, in one embodiment, the Engagement Application235refrains from taking any action until the user engagement falls below a defined threshold. If, however, the Engagement Application235determines that the user engagement is below the required threshold, the method400returns to block415, where the Engagement Application235selects a new set of document(s) using the ML models. In this way, the Engagement Application235continuously monitors the user engagement in order to provide recommended content during the presentation. FIG.5is a flow diagram illustrating a method500for using machine learning to evaluate lexical understanding, according to one embodiment disclosed herein. The method500begins at block505, where an Engagement Application235trains a machine learning (ML) model to identify appropriate documents based on lexical knowledge of target groups. At block510, the Engagement Application235determines a lexical knowledge of a set of users. The method500then continues to block515, where the Engagement Application235selects a first document of a plurality of documents by processing the determined level of lexical knowledge using the ML model. Further, at block520, the Engagement Application235facilitates presentation of the first document to the set of users. The method500proceeds to block525, where the Engagement Application235determines a level of engagement of the set of users. At block530, upon determining that the level of engagement is below a predefined threshold, the Engagement Application235selects a second document of the plurality of documents using the ML model. The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
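The monitoring loop of blocks 415-425 (and blocks 520-530 of method500) might be sketched as follows; the placeholder functions and the threshold value are assumptions standing in for the trained ML model and the engagement sensing described above.

# Sketch of the loop in blocks 415-425: present content, monitor aggregate
# engagement, and reselect when engagement drops below the threshold.
# `recommend_complexity` and `measure_engagement` are hypothetical
# placeholders for the trained model and the sensing pipeline.
import random

THRESHOLD = 0.6
random.seed(1)

def recommend_complexity(knowledge, engagement):
    # Placeholder heuristic: drop one level when the audience disengages.
    levels = ["basic", "intermediate", "advanced"]
    level = levels.index(knowledge)
    return levels[max(0, level - 1)] if engagement < THRESHOLD else knowledge

def measure_engagement():
    return random.random()   # placeholder for EEG/audio/video aggregation

current = "advanced"                       # block 415: initial selection
for interval in range(5):                  # block 420: ongoing monitoring
    engagement = measure_engagement()
    if engagement < THRESHOLD:             # block 425: criteria not met
        current = recommend_complexity("advanced", engagement)
    print(f"engagement={engagement:.2f}, presenting {current} documents")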
In the preceding and following, reference is made to embodiments presented in this disclosure. However, the scope of the present disclosure is not limited to specific described embodiments. Instead, any combination of the preceding and following features and elements, whether related to different embodiments or not, is contemplated to implement and practice contemplated embodiments. Furthermore, although embodiments disclosed herein may achieve advantages over other possible solutions or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the scope of the present disclosure. Thus, the preceding and following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the invention” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s). Aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. 
The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Embodiments of the invention may be provided to end users through a cloud computing infrastructure. Cloud computing generally refers to the provision of scalable computing resources as a service over a network. More formally, cloud computing may be defined as a computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Thus, cloud computing allows a user to access virtual computing resources (e.g., storage, data, applications, and even complete virtualized computing systems) in “the cloud,” without regard for the underlying physical systems (or locations of those systems) used to provide the computing resources. Typically, cloud computing resources are provided to a user on a pay-per-use basis, where users are charged only for the computing resources actually used (e.g. an amount of storage space consumed by a user or a number of virtualized systems instantiated by the user). A user can access any of the resources that reside in the cloud at any time, and from anywhere across the Internet. In context of the present invention, a user may access applications (e.g., an Engagement Application235) or related data available in the cloud. For example, the Engagement Application235could execute on a computing system in the cloud and evaluate documents and user engagement. In such a case, the Engagement Application235could train and apply machine learning models, and store the models and tagged documents at a storage location in the cloud. 
Doing so allows a user to access this information from any computing system attached to a network connected to the cloud (e.g., the Internet). While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. | 42,052 |
11861313 | DETAILED DESCRIPTION Embodiments of the present invention calculate, using cognitive models, personality insights and situational insights for a specific user through analyzing data concerning interactions of the individual user across a plurality of platforms. Cognitive models are used to process and analyze the specific user data, where the cognitive models include computerized models that simulate or predict human behavior or performance based on similar tasks and/or human interactions. Using the personality and situational insights as parameters of the cognitive models, a linguistics preference set for the specific user is calculated. A message customized to the specific user is generated according to the linguistics preference set, which provides a multi-level linguistic alignment between the individual user and the linguistics used in the message. FIG.1illustrates a computing environment for multi-level linguistic alignment in specific user targeted messaging, according to embodiments of the present invention. The computing environment is implemented by one or more computer systems, as described below with reference toFIG.5. The computing environment includes a segment of one engine101for managing messaging campaigns. The segment of one engine101includes a campaign generator102for generating messaging campaigns targeted to or customized for a specific user, using user data from a plurality of data sources120. The data sources120can be any digital source with information concerning an individual user's interactions, including, but not limited to, social platforms, shopping history, web browsing activities, interactions with digital devices, interactions with other users, etc. A push component103of the segment of one engine101sends messaging campaign information, based on content from the data sources120, to a cognitive system110for data processing and message generation. The cognitive system110includes a message poller111, which sends periodic requests to the plurality of data sources120for data on the specific user. The collected data are stored in a metadata store114. Upon receipt of the specific user data, a multi-level message alignment component112processes the data and calculates a linguistics preference set for the specific user using cognitive models, as described further below. The linguistics preference set of the specific user is passed to a channel selector and message generator113. The channel selector and message generator113selects a messaging channel for the specific user and generates the customized message for the specific user according to the linguistics preference set, where the customized message incorporates linguistic traits based on the linguistics preference set. The customized message is then sent only to the specific user over the selected messaging channel, as described further below. FIG.2illustrates a method for multi-level linguistic alignment in specific user targeted messaging, according to some embodiments. The multi-level message alignment component112collects user data of a specific user from the plurality of sources (201) and inputs the user data into one or more cognitive models (202). Using the cognitive models, the multi-level message alignment component112calculates a linguistics preference set for the specific user (203).FIG.6illustrates the generation of a linguistics preference set for a specific user, according to embodiments of the present invention.
Referring to bothFIGS.2and6, in calculating the linguistics preference set, the multi-level message alignment component112uses cognitive models with the user data601as input. The cognitive models analyze602the user data601and output or calculate personality insights603for the specific user (210). The personality insights represent expectations, motives, goals, beliefs, and other personality traits of the specific user that are inferred from the specific user data. The multi-level message alignment component112further calculates situational insights604for the specific user using cognitive models (211). The situational insights represent the events, actions, and other behavior of the specific user that are inferred from the specific user data. A linguistics preference set for the specific user is then calculated605using the cognitive models with the personality insights603and the situational insights604as parameters (212). The linguistics preference set includes one or more sets of linguistic traits607-608preferred by the specific user. The linguistics preference set represents the relationships, inferred using the cognitive models, between different sets of linguistic traits preferred by the specific user and the personality and situational insights. Linguistic traits include the language, dialect, slang terms, syntax and spelling variations, and other such traits associated with particular geographic areas or particular cultures. The geographic area in which the specific user is currently or historically located and/or the culture associated with the specific user are inferred from the user data and reflected in the linguistics preference set for the specific user. The geographic areas can include, but are not limited to, areas that the user frequents, such as the user's workplace, the user's home, a sports arena, or a shopping establishment. The geographic areas may also include areas with which the user has a level of familiarity or a history, such as a home town. For example, the specific user may prefer one language when at work and a second language when not at work. Within the same language, different slang terms may apply in different geographical areas that the user frequents. Further, the personality traits of the specific user may indicate patterns of linguistic preferences depending on the topic of the message. The linguistics preference set is thus a multi-leveled set of linguistics preferences that is unique to the specific user. The linguistics preference set for the specific user can then be used to generate a highly customized message that is aligned with the specific user, as described further below. Referring again toFIG.2, the channel selector and message generator113selects a messaging channel according to the linguistics preference set for the specific user (204). A preference for certain messaging channels may be inferred from the user data for the specific user and captured by the linguistics preference set. For example, the specific user may prefer certain messaging channels depending on, but not limited to, the user's location, the send time for the message, or the context of the message. The channel selector and message generator113determines a location of the specific user (205). The location can be a current location of the specific user, obtained through a location service on the user device130, such as coordinates from a global positioning system (GPS) on the user device130.
The location can also be a predicted location of where the specific user is expected to be, based on patterns and trends identified through the analysis of the specific user data. The predicted location can be determined based on the context of the message. For example, the context of the message can be related to personal entertainment, and the personality and situational insights of the specific user may indicate that the specific user prefers messages concerning personal entertainment to be received outside of working hours. A location where the specific user is predicted to be when outside working hours can be selected as the user location. The channel selector and message generator113determines a set of linguistic traits applicable to the message based on the linguistics preference set for the specific user and the specific user location (206). Depending on the specific user's current location or predicted location, different languages, slang terms, or other linguistic traits may be preferred by the specific user, as captured by the linguistics preference set for the specific user. The channel selector and message generator113generates a customized message to incorporate the linguistic traits (207). The channel selector and message generator113sends the customized message only to the specific user via a user device130over the selected messaging channel (208). FIG.3illustrates in more detail the multi-level message alignment component112, according to some embodiments. The multi-level message alignment component112includes a dynamic language processing engine300, which processes specific user data using cognitive models to calculate the personality insights, situational insights, and linguistics preference set for the specific user. The multi-level message alignment component112includes an application programming interface (API)301for connecting to the plurality of data sources120and collecting user data for the specific user. The Internet data collector302includes interfaces to connect to the Internet to obtain static and dynamic information on the specific user. The API and conversation logic303maps the specific user data to the ground truth using the ground truth reader and mapper306. The ground truth reader and mapper306performs "ground truthing", which refers to the use of statistical models, machine learning models, or other cognitive models to calculate the truthfulness of the collected data. The insights manager310analyzes the specific user data and infers situational insights for the specific user, using at least the situation to phrases mapper314, which maps phrases that may be found in the user data to situations. The specific user data is analyzed to identify patterns and trends, and the patterns and trends are used to infer the situational insights. The personality insights component304analyzes the specific user data to calculate the personality insights for the specific user. The patterns and trends identified in the specific user data are used to infer the personality traits for the specific user. The dynamic lead library307includes a collection of information about changes in the specific user's situations, such as differences between a meeting at work and a get-together with friends. In some embodiments, the dynamic lead library307includes a linguistic and location preference history associated with the different specific user situations.
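No particular data layout is mandated by this description; purely as an illustrative sketch, the linguistics preference set and the location-dependent selection of linguistic traits described above might be represented as follows, where all names, fields, and values are hypothetical:

from dataclasses import dataclass
from typing import List

@dataclass
class LinguisticPreference:
    location: str           # situation, e.g., "work" or "home" (hypothetical granularity)
    language: str           # language preferred in this situation
    slang_terms: List[str]  # slang corpus entries for the area/culture
    channel: str            # preferred messaging channel in this situation

def select_preference(preference_set: List[LinguisticPreference],
                      user_location: str) -> LinguisticPreference:
    # Pick the traits matching the user's current or predicted location;
    # fall back to the first entry when no situation matches.
    for pref in preference_set:
        if pref.location == user_location:
            return pref
    return preference_set[0]

prefs = [
    LinguisticPreference("work", "en", ["ASAP"], "phone"),
    LinguisticPreference("home", "es", ["órale"], "email"),
]
print(select_preference(prefs, "home").language)  # -> es

In a working system, the entries would be derived by the cognitive models rather than hard-coded as in this sketch.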
The effect to service map311manages service subscriptions. In some embodiments, the multi-level message alignment is implemented as a cloud service. The effect to service map311identifies which client services are subscribed to the multi-level message alignment service and performs collection/response handling with these client services using map-based approaches. The data dumper315generates objects that can be stored and retrieved from the metadata store114. The location to slang identifier305identifies the language variations or slang between geographic areas and the corresponding language corpus. The multi-level message alignment component112identifies correlations between the geographic area-based slang information, the specific user situational insights, the specific user personality insights, and other information calculated by various components of the multi-level message alignment component112. The timeliness effectiveness classifier312classifies levels of effectiveness of messages to the specific user based on the time the message was sent. The boundary classifier316identifies the geographic areas according to shared linguistic traits. The local culture classifier309classifies cultural traits for particular geographic areas. The boundary classifier316and the local culture classifier309are used by the location to slang identifier305to identify the slang corpus consistent with the linguistic and cultural traits of a geographic area. The corpus identifier313identifies the natural language corpus/corpora associated with the linguistic and/or cultural traits. The history-feedback component317stores the history of calculations by the multi-level message alignment component112and obtains feedback concerning the effectiveness and/or accuracy of the results. The history and feedback are then input into the cognitive models of the multi-level message alignment component112to improve their accuracy. The user-based dynamic language processing engine300collects the information output by the cognitive models and passes the information to the channel selector and message generator113. FIG.4illustrates components of the channel selector and message generator113, according to some embodiments. The channel selector and message generator113includes a message collector401, which receives the linguistics preference set for the specific user from the user-based dynamic language processing engine300. The user area selector402determines the specific user's current or predicted location. The preference selection component404selects the preference from the linguistics preference set for the specific user's current or predicted location. In some embodiments, the preference is selected based on other factors as well, such as the message context and the selected messaging channel. The message is then generated according to the selected preference. Via the messaging platform interface403, the customized message is sent only to the specific user via the user device130over the selected messaging channel. For example, assume that the multi-level message alignment component112determines, based on analysis of data for a first specific user, that the first specific user prefers messages with a context related to medical services to be received during working hours, in a first language, and through a telephone call.
The multi-level message alignment component112further determines the first specific user's location during working hours, identifies the area or boundary in which this location resides, and identifies the language and slang corpora associated with the area and/or culture of the area. The channel selector and message generator113selects the appropriate telephone network, generates an audio message in the first language, and incorporates slang terms associated with the area. A telephone call is then initiated to the first specific user during the first specific user's working hours. For another example, assume that the multi-level message alignment component112determines, based on analysis of data for a second specific user, that the second specific user prefers messages with a context related to food recipes to be received in a second language, as an email, and while the second specific user is at home. The multi-level message alignment component112further determines the location of the second specific user's home, identifies the area in which the home resides, and identifies the language and slang corpora associated with the area and/or culture of the area. The channel selector and message generator113selects the appropriate email messaging channel, generates an email in the second language, and incorporates slang terms associated with the area. The email is sent to the second specific user during times in which the second specific user is likely to be at home. For a third example, assume that the multi-level message alignment component112determines, based on analysis of data for a third specific user, that the third specific user prefers messages with a context related to sports to be in a third language, as a text message, and while outside of working hours. The multi-level message alignment component112further determines the locations frequented by the third specific user outside of the third specific user's working hours, identifies the areas in which the frequented locations reside, and identifies the language and slang corpora associated with the areas and/or cultures of the areas. The channel selector and message generator113selects the appropriate text messaging channel, generates a text message in the third language, and incorporates slang terms associated with the areas. The text message is sent to the third specific user outside of working hours. FIG.5illustrates a computer system, one or more of which implements the multi-level linguistic alignment in targeted individual user messaging, according to embodiments of the present invention. The computer system500is operationally coupled to a processor or processing units506, a memory501, and a bus509that couples various system components, including the memory501to the processor506. The bus509represents one or more of any of several types of bus structure, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. The memory501may include computer readable media in the form of volatile memory, such as random access memory (RAM)502or cache memory503, or non-volatile storage media504. The memory501may include at least one program product having a set of at least one program code module505that is configured to carry out the functions of embodiments of the present invention when executed by the processor506. The computer system500may also communicate with one or more external devices511, such as a display510, via I/O interfaces507.
The computer system500may communicate with one or more networks via network adapter508. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. 
In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. | 25,212 |
11861314 | DETAILED DESCRIPTION Continuity of care is crucial to ensuring good outcomes for patients discharged from an inpatient hospital setting. Hospital discharge summaries are written records of the care provided to patients during hospitalization, and these records are an important source of pending tasks for primary care providers. Discharge summaries describing a hospital stay contain crucial information and action items to share with patients and their future caregivers. However, discharge summaries are often lengthy documents written as free text with no universal structure. Caregivers often have many patients and little time to review new clinical documents and may fail to identify important pending tasks. Systems and methods are described herein that identify important follow-up items from medical records. Medical records, such as discharge summaries, electronic health records (EHR), doctor notes, and the like, may be processed to identify important items. Important items may include follow-up items such as medications, prescriptions, appointments, lab tests, and the like. Important items may be identified and emphasized in the medical record and/or extracted from the medical record. The identified important items may be presented to the physician or other relevant party. Extracting follow-up items could have several direct benefits. First, it could improve patient safety by increasing primary care providers' (PCPs') overall recall of important follow-up tasks. Second, it might decrease the amount of time required to achieve that recall, which is critical as physicians are forced to spend an ever-increasing amount of time interacting with electronic health record (EHR) systems. Third, a working system may integrate with EHRs to automatically address certain follow-ups, improving EHR usability and further reducing medical error. In some examples, it has been observed that medical records such as discharge summary text mostly include information not directly actionable for follow-up. In some cases, extracting the actionable information for review by PCPs could reduce the amount of text they need to read by 88% or more. FIG.1depicts aspects of one example of identification of important items from a discharge summary. In one example, a discharge summary102may include free text. In some cases, a discharge summary may include one or multiple pages of text. A physician reading the discharge summary may be required to quickly identify actionable or important items such as recommended appointments, medications, and required laboratory tests. Identifying the important items in the text may require significant time and concentration from the physician, and important items may be easily missed. In embodiments, the methods and systems described herein may automatically identify important items in the discharge summary. The identified items may be emphasized within the text to allow the physician to quickly see and find the important items in the medical record. In one example, the identified important items may be highlighted within the discharge summary text. In one example, important items may be categorized into different categories104of important items, and different colors, textures, outlines, and the like may be used to identify and differentiate between the different categories of important items.FIG.1shows a discharge summary where the important items are categorized into three categories: medical-based follow-up106, appointment-based follow-up108, and lab-based follow-up110.
Important items within the discharge summary text102may be highlighted using colors that correspond to each of the categories106,108,110. The highlights within the discharge summary text102may allow a physician to quickly identify the important and actionable items in the context of the medical record. The success of the identification of important data in a medical record requires the identification and consideration of numerous subtleties associated with records. For example, for important data related to appointments, it may be desirable to leave out sentences that refer to "as needed" appointments, e.g., "See your endocrinologist as needed." As another example, for important data related to medications, it may be desirable to exclude sentences describing simple additions to the medication list, e.g., "Discharged on glargine10uat bedtime," as these typically do not require further action. As another example, for important data related to medications, it may be desirable to include sentences that relate to instructions to hold and restart medications, new medications with an end date (e.g., antibiotics), and medications requiring dosage adjustment (e.g., " . . . the plan is to keep patient off diuretics with monitoring of his labs and reinstitution once the kidney function improves"). In embodiments, the identified important items in a medical record (such as a discharge summary) may be extracted in addition to or instead of being emphasized within the medical record. Important items may be extracted and shown/displayed to a physician outside of the medical record, wherein only the identified important items are shown. In some cases, the extracted important items may be shown categorized according to the categorization of the important items. In some embodiments, the identified items may be tagged within the medical record and/or extracted and used by other systems to automatically address certain important items such as scheduling appointments, lab tests, ordering medications, and the like. In embodiments, identification of important items (such as actionable information) may include multi-label sentence classification. The labels generated by the multi-label classification may represent a type of action to be taken. In embodiments, important items such as follow-up items may fall into more than one category. For example, a sentence relating to scheduling imaging accompanied by a procedure or medication instructions may be related to multiple categories of important items. It is important to note that the methods and systems described herein differ from techniques related to mere document summarization. A summary of a document is generally constrained by size, coverage, scope, and the like and is not concerned with identifying all actionable content in the document. Known document summarization techniques can miss or ignore actionable content and are not suitable for the identification of important items. FIG.2shows aspects of one example embodiment of a system for multi-label sentence classification. The system may receive words or sentences from the medical record. In one example, the system may receive a sentence204from the medical record, and the system may output a multi-label score vector212that identifies if the sentence204(also referred to herein as the focus sentence) is related to an important item and may also identify one or more categories of important items.
In embodiments, the sentence204may be a full sentence from the medical record, a partial sentence (such as a phrase), a group of consecutive words, or a plurality of sentences. In embodiments, the system may further receive contextual data202related to the focus sentence204. In embodiments, the contextual data202may be sentences, sequential words, and the like that appear around the focus sentence204in the medical record. In one embodiment, the contextual data may be one or more sentences that appear directly before the focus sentence and/or one or more sentences that appear directly after the focus sentence in the medical record. In some cases, context data may include metadata associated with the text of the medical record. In some cases, structured data associated with the text of the medical records (lists of medications, patient data, dates, etc.) may be used as context data. The focus sentence204and the contextual data202may be provided to the system as parallel inputs. The system may receive the contextual data202and the focus sentence204and process the contextual data202and the focus sentence204using a word embedding model206. The word embedding model206may be a trained machine learning model. In some cases, the word embedding model may be a transformer-based machine learning model. The word embedding model206may be pretrained to take into account the context for each occurrence of a word in a focus sentence. In one embodiment, the word embedding model206may be based on pre-trained language models such as a Bidirectional Encoder Representations from Transformers (BERT) model, GPT-2, GPT-3, XLNet, RoBERTa, and the like. The word embedding model206may include a plurality of layers and/or hidden states. The output of the word embedding model206may provide an embedding of the words of the focus sentence204. In some embodiments, where the input to the word embedding model includes contextual data202, the output of the word embedding model may provide contextual embedding of the words of the focus sentence. In one example, the word embedding model206may generate one vector embedding output for each word or pair of words in the focus sentence. In embodiments, the vector embeddings may be generated based on one or more intermediate outputs (vectors from intermediate layers and/or hidden layers) of the word embedding model206. An embedding is a representation of a token (such as a word, sentence, or group of words) in a vector space such that the token embedding includes relevant information about the token. For example, in some implementations, a token embedding may embody information about the meaning of the token. Two tokens that have similar meanings may have token embeddings that are close to each other in the vector space. By contrast, two tokens that do not have similar meanings may have token embeddings that are not close to each other in the vector space. The embeddings may be contextual. Where embeddings are contextual, the embedding of a token may depend on previous or subsequent tokens (such as previous or subsequent sentences/words in the contextual data202). For example, the token "cold" in the phrases "apply a cold compress" and "patient has a cold" may have different values according to the two very different meanings of "cold."
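As a concrete illustration only (the description above does not prescribe any particular library or model checkpoint), contextual word embeddings of this kind can be obtained from a publicly available BERT model. The following sketch assumes the open-source Hugging Face transformers package and PyTorch are installed:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

left_context = "The patient was stable overnight."
focus = "Follow up with cardiology in two weeks."
right_context = "Discharge medications are listed below."

# Encoding the focus sentence together with its neighbors lets each token
# embedding reflect the surrounding context, as described above.
inputs = tokenizer(" ".join([left_context, focus, right_context]),
                   return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape (1, num_tokens, 768)

Here, two occurrences of a word such as "cold" in different sentences would receive different rows of hidden, which is what distinguishes contextual embeddings from static ones.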
The system may include a sentence embedding model208. The sentence embedding model208may receive the word embeddings output by the word embedding model206(such as a contextual word embedding of the focus sentence204) and determine sentence embeddings. In embodiments, the sentence embedding model may be a trained machine learning model such as a convolutional neural network (CNN), a recurrent neural network, or the like. In one example, the sentence embedding model208may generate one sentence embedding for the whole focus sentence204. In one example, one sentence embedding may be determined by averaging the word embeddings generated by the word embedding model206. In some embodiments, the sentence embedding model208may generate sentence embeddings based on special token embeddings generated by the word embedding model206. For example, the word embedding model may be a BERT-type model that may receive special tokens as inputs and may generate embeddings of the special tokens at the output. The sentence embedding model208may process the embeddings of the special tokens generated by the word embedding model206and generate sentence embeddings. In one example, the system includes a multi-label classifier210. The multi-label classifier may be a linear classifier that may be configured to determine a multi-label score vector212wherein each value of the score vector212identifies a score that provides a measure of whether the focus sentence204belongs to a category of important items that should be emphasized or extracted from a medical record. In embodiments, the multi-label classifier may be a logistic regression classifier and may include a linear layer followed by a sigmoid function. The multi-label score vector212may be a confidence score relating to how likely the focus sentence relates to an important item or actionable item. In embodiments, each value of the score vector212may correspond to a different category of important items. In some embodiments, a threshold value for each element of the vector may be used to determine if the focus sentence should be classified as an important item. For example, the score vector212may include four elements. Each element of the vector may be in the range of [0,1]. Each element of the vector may be associated with a threshold value and a category. The threshold value may indicate a value for each element above which the focus sentence may be classified as an important item for the respective category. In another embodiment, a function of two or more elements of the score vector may be used to determine if the focus sentence relates to an important item and/or what category of important items it relates to. The system ofFIG.2may be used to sequentially process a plurality of sentences in a medical record. The system may start with the first sentence as the focus sentence, followed by selecting the second sentence as the focus sentence, and so on. In some cases, multiple instances of the system may be parallelized to allow the processing of multiple sentences from the medical record in parallel. In some cases, the system may be scaled such that the input may be the complete medical record comprising multiple sentences, and the system may label all the sentences in the medical record in parallel.
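The description above leaves the exact dimensions and thresholds open; the following minimal PyTorch sketch (with illustrative sizes and a uniform 0.5 threshold) shows the path from word embeddings to a thresholded multi-label score vector, using the averaging option mentioned above for the sentence embedding:

import torch
import torch.nn as nn

EMB_DIM = 768        # illustrative embedding width (e.g., BERT base)
NUM_CATEGORIES = 4   # illustrative number of important-item categories

class MultiLabelHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(EMB_DIM, NUM_CATEGORIES)

    def forward(self, word_embeddings):              # (num_tokens, EMB_DIM)
        sentence_emb = word_embeddings.mean(dim=0)   # average word embeddings
        return torch.sigmoid(self.linear(sentence_emb))  # scores in [0, 1]

head = MultiLabelHead()
scores = head(torch.randn(12, EMB_DIM))          # stand-in word embeddings
thresholds = torch.full((NUM_CATEGORIES,), 0.5)  # one threshold per category
labels = scores > thresholds                     # boolean label per category

In a trained system, a True entry for a category would mark the focus sentence for emphasis or extraction under that category.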
FIG.3depicts aspects of a method for identifying important items from a medical record. The method may include receiving written medical records of care provided to a patient302. The medical records may be received from one or more health record systems. In some cases, the health records may be combined from different providers based on the dates associated with the records. In some embodiments, the medical records may be preprocessed to filter extraneous data (such as irrelevant dates, codes, and data providers) and to tokenize the elements (dividing text into sentences, words, phrases, sub-words, etc.). For example, tokenization may include dividing a word into sub-words to reduce the number of vocabulary items (for example, "talking" can be broken down into sub-words "talk" and "ing"). The method may further include determining contextualized embeddings of words in the sentences of the written medical records by processing the sentences with a trained machine model304. The trained machine model may be a word embedding model. The model may identify embeddings of sentences and/or words by processing the sentences and the contextual data associated with the sentences and/or words. In some cases, the contextual data may include sentences and/or words that occur before (to the left of) or after (to the right of) the sentence. The embeddings may be further processed with a sentence embedding model306. The sentence embedding model may identify sentence embeddings, which may be processed by a multi-label classifier that may be configured to identify a label for each sentence in the medical record308. In some embodiments, the label may be a score vector. The method may further include identifying clinically actionable items in the written medical records based on the labeled sentences310. The identified actionable items may be emphasized within the medical record when viewed by a physician on an electronic device. The emphasis may include highlighting, changing the color of text or background, blinking, visual effects, and the like. In some embodiments, users viewing medical records may be provided with selection options for highlighting identified important items, for choosing to see only the identified important items, or for selecting categories of important items to show and/or highlight. In some cases, users may be provided with selection options for selecting and/or dismissing individual sentences or one or more groups of sentences that were identified as important items or not identified as important. The selection and/or dismissal of selections may be used to refine the models and may be used as additional training data for training the models used to identify the important items. Various interfaces such as pen-based selections, checkboxes, list boxes, and the like may be used to make selections and/or dismiss selections.
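Returning to the sub-word tokenization mentioned in the preprocessing step above, a short illustration (again assuming the Hugging Face transformers package; the exact splits depend on the vocabulary of the chosen tokenizer):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# WordPiece tokenization: words absent from the fixed vocabulary are broken
# into smaller '##'-prefixed pieces, reducing the number of vocabulary items.
tokens = tokenizer.tokenize("Restart anticoagulation after nephrology follow-up")
print(tokens)  # rare clinical terms are typically split into sub-word pieces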
FIG.4shows aspects of an apparatus for identifying important items in a medical record. The apparatus may receive medical records402. The medical records may include text404that may be organized into sentences or groups of words. A focus sentence406(a sentence to be evaluated) may be identified in the medical record. Based on the location of the focus sentence406, contextual data may be identified in the text404of the medical record. The contextual data408may include sentences410from the text404of the medical record that are before and/or after the focus sentence. The apparatus may include a tokenizer412for parsing the focus sentence406and the contextual data408. The tokenizer412may divide the focus sentence and/or contextual data into tokens such as words or phrases. In some cases, the tokenizer412may further include tags or special tokens to identify or mark the focus sentence406and the contextual data408. In some cases, the tokenizer412may add separators (special tags, reserved words, reserved embedding vectors, and the like) between the focus sentence406and the contextual data408such that the words/embeddings of the focus sentence may be identifiable from the contextual data408. In some cases, the tokenizer may further be configured to remove punctuation, remove capitalization, identify headers denoting sections, and the like. The apparatus may include a word embedding model such as a transformer-based model414that processes the output from the tokenizer412and determines embeddings related to the focus sentence406, contextual data408, and/or special tokens. In embodiments, the embeddings may be contextual. The apparatus may further include a sentence embedding model such as a convolutional neural network418for further processing the contextual embeddings416to determine sentence embeddings. In embodiments, the sentence embedding model may process word embeddings. The apparatus may further include a multi-label classifier such as a linear classifier422. The multi-label classifier422may receive the output of the sentence embedding model418and generate a sentence label424. The label424may be a number or a tag that provides an identification of the determined importance of the focus sentence and/or a category of the focus sentence. In some embodiments, the multi-label classifier422may receive additional inputs. In one example, inputs to the multi-label classifier422may include a focus sentence position420. The focus sentence position420may identify the position of the focus sentence in the medical record text404. In one example, the focus sentence position420may be the sentence number (such as an indication that the focus sentence is the fourth sentence in the text404) or a relative position of the focus sentence in the text404(such as a normalized number between 0 and 1). The linear classifier422may determine the focus sentence label424.
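One simple way (not mandated by this description) to expose the focus sentence position420to the linear classifier422is to append the normalized position to the sentence embedding before the linear layer; the dimensions below are illustrative:

import torch
import torch.nn as nn

EMB_DIM, NUM_CATEGORIES = 768, 4
classifier = nn.Linear(EMB_DIM + 1, NUM_CATEGORIES)  # one extra position input

sentence_emb = torch.randn(EMB_DIM)      # stand-in sentence embedding
rel_position = torch.tensor([3 / 40])    # e.g., 4th sentence of a 40-sentence note
features = torch.cat([sentence_emb, rel_position])
scores = torch.sigmoid(classifier(features))  # multi-label score vector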
In embodiments, the systems and apparatus described herein may require training. In embodiments, the components of the system, such as the word embedding model, the sentence embedding model, the multi-label classifier, and the like, may require training. Training may be required to improve the accuracy of the focus sentence labels. In some cases, models may be pretrained (such as on generic language data or for generic medical records) but may be further trained on medical records from a specific institution, for a specific medical field, and the like. In some embodiments, all three components may be trained using labeled medical records. In some embodiments, only the multi-label classifier or the sentence embedding model may be trained using labeled medical records, and the word embedding model may be a pre-trained model that was trained on a general language corpus. FIG.5is a schematic diagram depicting aspects of training. In embodiments, training of models510may be based on training data508that includes medical records labeled with actionable content. Training may include training of one or more of the multi-label classifier502, the sentence embedding model504, and/or the word embedding model506. In embodiments, the word embedding model506may be initialized with a pretrained model. The model may be pretrained on general language sources such as books, Wikipedia articles, and the like. In some cases, the model may be pretrained on general medical text and/or medical records. In embodiments, the multi-label classifier502and the sentence embedding model504may be initialized with random parameters. After training510using the training data508, the models become trained models512. The training may result in a trained multi-label classifier514, a trained sentence embedding model516, and a fine-tuned word embedding model518. In embodiments, training techniques may include supervised training. The training may comprise multiple rounds, where each round updates the parameters of the models by minimizing a loss function. Training may include training using stochastic gradient descent. At each round, a forward pass may be performed using the training data. An error may be computed based on the predicted labels and expected labels. A backward pass may be performed to update the parameters of the models. This training process may proceed until a suitable stopping or convergence criterion is reached. In embodiments, training may include training the word embedding model, the sentence embedding model, and the multi-label classifier together such that the parameters of the models are updated together. In one example, models may be trained together using stochastic gradient descent.
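A compressed sketch of one such training round (binary cross-entropy over the multi-label scores with gradient updates; the optimizer, batch size, and dimensions are illustrative, and a single linear layer stands in for the full model stack):

import torch
import torch.nn as nn

model = nn.Linear(768, 4)         # stand-in for the full model stack
loss_fn = nn.BCEWithLogitsLoss()  # multi-label binary cross-entropy; applies
                                  # the sigmoid internally to the raw scores
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

features = torch.randn(8, 768)                   # batch of sentence embeddings
expected = torch.randint(0, 2, (8, 4)).float()   # expected (gold) labels

optimizer.zero_grad()
predicted = model(features)            # forward pass
loss = loss_fn(predicted, expected)    # error against expected labels
loss.backward()                        # backward pass
optimizer.step()                       # parameter update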
The training data may be manually labeled by people. In one example, training data may include data from user interactions with highlighted data as described herein. User interactions with medical records that include identified important items may be tracked and used as training data. Interactions such as selections and/or dismissals of selections as described herein may be used to update the parameters of the models. In one example, training data may be manually annotated discharge summaries from the set of patients that were discharged from the ICU (i.e., survived) and thus brought back to the care of their primary care physician or relevant specialists. The training data may be further split by document id into training, validation, and test sets. Training data may be annotated with categories of important items. In one example, categories may include:
Appointments: Appointments to be either made by the PCP or monitored to ensure the patient attends them after the patient has been discharged from the hospital.
Lab tests: Laboratory tests that either have results pending at the time of hospital discharge or need to be ordered by the PCP.
Procedures: Procedures that the PCP needs to either order, ensure another caregiver orders, or ensure the patient undergoes.
Medications: Medications that the PCP either needs to prescribe or ensure that the patient is taking correctly, e.g., time-limited medications or new medications that may need a dose adjustment.
Imaging: Imaging studies that either have results pending at the time of hospital discharge or need to be ordered by the PCP.
Patient Instructions: Follow-up instructions that are directed to the patient, so the PCP can ensure the patient understands and performs them.
Other: Other follow-up information that is important to relay to the PCP but does not fall under the other categories (e.g., the need to closely observe the patient's diet or fax results to another provider).
FIG.6is a schematic diagram depicting one example embodiment of a trained system for the identification of important items in a medical record. The system provides for passing a focus sentence and its left and right context through a pre-trained word embedding model, followed by the incorporation of a sentence embedding model602and a multi-label classifier614to make the final prediction. The system may receive a focus sentence S0618. A tag EA620that identifies the sentence as a focus sentence may be associated with the sentence618. The system may further receive context information, which may include left context606and right context608. Each of the left and right contexts may include two sentences that are before (S−2and S−1) and two sentences that are after (S1and S2) the focus sentence618in the medical text. The context data606,608may include tags or embeddings (EB) that identify the sentences as relating to context. The input618,620,606,608may be processed by a trained word embedding model604, which may be fine-tuned on clinical data. The word embedding model604may output contextual embeddings of the input sentences. The contextual embedding X0of the focus sentence S0may be further passed through a sentence embedding model602and a multi-label classifier614to generate labels616that categorize the focus sentence. In some embodiments of the system, the sentence embedding model602may also process the contextualized embeddings (X−2, X−1, X1, X2) of the context sentences (S−2, S−1, S1, S2). In some cases, position information612may be an input to the multi-label classifier614. The position information may identify the position (absolute or relative) of the focus sentence618in the medical text. FIG.7is a flow diagram showing aspects of a method700for identifying important items from a medical record. The method may include receiving focus sentences from a medical record702. The method may further include receiving contextual data for the focus sentence704. The focus sentence may be tokenized706. Tokenization706may be into words or phrases and may include additional processing such as filtering, capitalization handling, and the like. The tokenized data may be processed with a trained machine model to determine word embeddings710. The word embeddings may be determined for the focus sentence and, in some cases, the contextual data. The word embeddings may be processed to identify if the focus sentence has actionable content based on a score vector712. The score vector may be determined by a trained model such as a multi-label classifier. The systems and methods described herein provide for improved identification of important items such as actionable items compared to other methods. Table 1 shows F1 scores on the test set for different categories. The table compares identification of important items using a Bag-of-words model, a CNN, a BERT model (pretrained only, without fine-tuning on medical data), a clinical BERT (CBERT, a fine-tuned BERT model), CBERT with context, CBERT-Context-CNN, and a Full model (CBERT-Context-CNN and sentence position). The table shows that the best model exploits three methods to improve predictions: fine-tuning on unlabeled discharge summaries, incorporating context from neighboring sentences, and exploiting local textual clues via convolution. Table 1 shows that the CBERT model with the addition of Context, CNN, and position information improves a system's ability to identify actionable content and improves the technology of multi-label item recognition.
TABLE 1

Model               Imaging   Appt   Medication   Procedure   Lab    Patient   Other
Bag-of-words        0.27      0.74   0.32         0.19        0.34   0.72      0.07
CNN                 0.28      0.76   0.35         0.22        0.42   0.73      0.03
BERT                0.36      0.83   0.33         0.33        0.42   0.76      0.10
CBERT               0.46      0.81   0.42         0.54        0.54   0.76      0.24
CBERT-Context       0.49      0.82   0.43         0.42        0.53   0.80      0.23
CBERT-Context-CNN   0.44      0.84   0.41         0.44        0.52   0.79      0.15
Full Model          0.51      0.84   0.42         0.51        0.60   0.79      0.24

FIG.8shows a plot of micro-F1 performance for two CBERT-based models, superimposed over a histogram of input sentence lengths. To reduce visual noise, the F1 scores for the smallest bin sizes (which correspond to the longest sentences) are suppressed. The graph shows the improved performance of the full model over the CBERT model across the range of sentence lengths. FIG.9shows aspects of another example embodiment of a system for multi-label sentence classification. The system may receive words or sentences from the medical record. In one example, the system may receive a focus sentence904from the medical record, and the system may output a multi-label score vector910that identifies if the focus sentence904is related to an important item and may also identify one or more categories of important items. In embodiments, the system may further receive contextual data902related to the focus sentence904. The focus sentence904and the contextual data902may be separated with special tokens such as separation tokens to distinguish the contextual data from the focus sentence. The system may receive the contextual data902and the focus sentence904and process the contextual data902and the focus sentence904using a word embedding model906, wherein the word embedding model may be any word embedding model described herein. The output of the word embedding model906may provide an embedding of the words of the focus sentence904and may be contextual. The word embedding model906may provide an embedding of the separation tokens. The system may include a multi-label classifier908. The multi-label classifier908may be a linear classifier that may be configured to determine a multi-label score vector910wherein each value of the score vector910identifies a score that provides a measure of how close the focus sentence904is to a category of important items that should be emphasized or extracted from a medical record. In embodiments, the multi-label classifier may be a logistic regression classifier and may include a linear layer followed by a sigmoid function. The multi-label classifier may receive as input the embedding of the special token that is the output of the word embedding model906.
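As an illustration of classifying from a special token embedding (the model name and dimensions are assumptions, not part of this description, and the linear classifier here is untrained):

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
classifier = nn.Linear(768, 4)   # stand-in multi-label classifier

# Context and focus sentence separated by a [SEP] special token.
text = "The patient was stable overnight. [SEP] Follow up with cardiology."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state
cls_embedding = hidden[0, 0]     # embedding of the leading [CLS] token
scores = torch.sigmoid(classifier(cls_embedding))  # multi-label score vector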
FIG.10is a schematic diagram depicting another example embodiment of a trained system for the identification of important items in a medical record. The system provides for passing a focus sentence and its left and right context through a pre-trained word embedding model such as a BERT model1008, followed by a multi-label classifier1014. The system may receive words for a focus sentence (W7, W8)1006and words for a left context (W1, W2, W3, W4)1002and a right context (W11, W12, W13, W14)1004. The sentences may be separated with separation tokens ([SEP] tokens) that identify where each sentence begins and/or ends. Additional tags may further identify if each word and token is associated with the focus data (SA) or with the context data (SB). In embodiments, the input may further include position data associated with each word and token (P0-P14). The input data1002,1004,1006may be processed by a word embedding model such as BERT1008, which may be a BERT model fine-tuned on clinical data. The word embedding model1008may output contextual embeddings of words (X1-X14) and tokens (XCLS, XSEP). The contextual embedding of the separation token XSEPof the focus sentence1016may be further passed through a multi-label classifier1014to generate labels that categorize the focus sentence. In some embodiments, other tokens, such as the embedding of the [CLS] token (XCLS), may be used as the sentence-level representation and used as input to the trained multi-label classifier1014. FIG.11is a flow diagram depicting aspects of training the systems and models fromFIGS.2,4,6, and9-10using semi-supervised approaches. In embodiments, training may include task-targeted pretraining (TTP). TTP may require less data and computation compared to traditional training techniques while attaining comparable performance. TTP surfaces unlabeled sentences that may be positive examples. The positive examples may be used to pre-train the models using auxiliary tasks. In embodiments, method1100may include training a word embedding model or the systems described with respect toFIGS.2,4,6, and9-10with labeled data1102. The labeled data may be sentences from medical records that are labeled as important and may be associated with categories of important data. The method may further include applying the trained model to unlabeled data1104to generate a targeted dataset. The trained model may be applied to unlabeled data, such as medical records, to select sentences that may include important items such as follow-up or action items. The sentences may be selected using a fixed threshold on the multi-label score vector generated by the model. In one example, if any value of the multi-label score vector is above a threshold value, the sentence may be selected. The threshold value may be selected to generate a dataset of a desired size. Lowering the threshold may increase the dataset size, while increasing the threshold may decrease the dataset size. Method1100may further include pretraining the model (such as the systems and models described with respect toFIGS.2,4,6, and9-10) on the targeted dataset1106. The pretraining of the model may include training with an auxiliary task. In one embodiment, the auxiliary tasks may include a masked language modeling task and a sentence switching task. In embodiments, sentences from the unlabeled data that are in proximity (neighboring or context sentences) to the sentences in the targeted dataset may be extracted from the medical records and used for the pretraining. During pretraining using masked language modeling, sentences from the targeted dataset may be presented to the model with some tokens in the focus sentence masked. The model may be tasked with predicting the masked words. In one example, each token in the focus sentence may be masked with a probability of 0.15. During pretraining using the sentence switching task, a focus sentence may be swapped with another randomly chosen sentence from the same document, and the model may be tasked with predicting if the presented sentences correspond to sequential sentences. In one example, the focus sentence may be swapped with another random sentence from the medical record with a probability of 0.25. In one embodiment, cross-entropy losses for both tasks may be summed to compute the total loss for an instance, and the total loss may be used to update the weights of the model using backpropagation and gradient updates. In some embodiments, only one auxiliary task may be used for pretraining.
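A sketch of how a single pretraining instance for the two auxiliary tasks described above might be constructed (the 0.15 and 0.25 probabilities follow the description; the function and data are otherwise illustrative, and the document is assumed to have at least two sentences):

import random

MASK_PROB = 0.15     # per-token masking probability (masked language modeling)
SWITCH_PROB = 0.25   # probability of swapping the focus sentence

def make_pretraining_instance(sentences, focus_idx, mask_token="[MASK]"):
    switched = random.random() < SWITCH_PROB
    if switched:
        # Swap in a random other sentence from the same document; the model
        # must predict that a switch occurred (sentence switching task).
        focus_idx = random.choice(
            [i for i in range(len(sentences)) if i != focus_idx])
    # Mask tokens for the masked language modeling task; the cross-entropy
    # losses of the two tasks are summed to give the total loss.
    tokens = sentences[focus_idx].split()
    masked = [mask_token if random.random() < MASK_PROB else t for t in tokens]
    return masked, switched

doc = ["Patient admitted with chest pain.",
       "Discharged on aspirin.",
       "Follow up with cardiology in two weeks."]
print(make_pretraining_instance(doc, focus_idx=2))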
After pretraining, the model may be further fine-tuned using labeled data1108. The labeled data may be the same data used in the initial training1102. In some embodiments, the fine-tuned model may be used to further select a new set of sentences for pretraining, and the method may be repeated1110. The fine-tuned model may be used to detect important items in a medical record. It should be appreciated that the task-targeted pretraining as described herein improves model training by reducing the required computational resources, data storage resources, and time required for pretraining. As described herein, task-targeted pretraining involves only a small subset of available data, while traditional methods may require the use of all the available data and may not be feasible in many applications due to computational, storage, and time constraints. FIG.12is another flow diagram of a method1200for task-targeted pretraining. The method may begin with a step of fine-tuning a general model1202. A general model1210that may have been pre-trained on general or non-field-specific data may be fine-tuned on labeled data1222. The labeled data1222may indicate important items in medical records. The model1210may be fine-tuned on the labeled data1222to generate a fine-tuned model1212. In the next step1204, the fine-tuned model1212may be used to generate a targeted dataset1214. The fine-tuned model1212may be used to process unlabeled data1224to identify important items and optionally categories of important items from the unlabeled data1224to generate a targeted dataset1214. The targeted dataset may be a subset of sentences of the unlabeled data1224. The targeted dataset may further include information about context data from the unlabeled data1224that may include sentences around important items. In the next step1206, the targeted dataset1214may be used to pre-train an untrained model1216to generate a task-targeted pre-trained model1218. The untrained model1216may be pre-trained using auxiliary tasks such as masked language modeling and sentence switching tasks. In step1208, the task-targeted pre-trained model1218may be further fine-tuned using labeled data1222to generate a trained model1220that may be used to process medical records and identify important items in the records. In embodiments, various training configurations as described herein may be used to train components of the system. In one embodiment, training may include supervised training only on the multi-label classifier and optionally (if part of the system) on the sentence embedding model. In another embodiment, training may include supervised training of the multi-label classifier and optionally (if part of the system) the sentence embedding model, together with supervised fine-tuning of the pre-trained word embedding model. In another embodiment, training may include semi-supervised task-targeted pretraining of the word embedding model, followed by supervised training of the multi-label classifier and optionally (if part of the system) the sentence embedding model, followed by supervised fine-tuning of the pre-trained word embedding model. The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor. “Processor” as used herein is meant to include at least one processor, and unless context clearly indicates otherwise, the plural and the singular should be understood to be interchangeable.
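The task-targeted pretraining data flow described above can be sketched as follows; this is a minimal Python sketch under assumed interfaces, where score_fn(sentence) is a hypothetical stand-in for the fine-tuned model's multi-label score vector, and the 0.15 masking and 0.25 switching probabilities follow the examples given in the text.

# Minimal sketch of TTP data selection and the two auxiliary pretraining tasks.
import random

def select_targeted_dataset(score_fn, unlabeled_sentences, threshold=0.5):
    # Keep a sentence if any category score clears the threshold; raising the
    # threshold shrinks the targeted dataset, lowering it grows the dataset.
    return [s for s in unlabeled_sentences if max(score_fn(s)) > threshold]

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]"):
    # Masked language modeling: each focus-sentence token is masked with
    # probability 0.15, and the model is trained to predict the originals.
    return [mask_token if random.random() < mask_prob else t for t in tokens]

def maybe_switch_sentence(focus, document_sentences, switch_prob=0.25):
    # Sentence switching: with probability 0.25 the focus sentence is swapped
    # for a random sentence from the same document; the model predicts whether
    # a switch occurred (label 1 = switched).
    if random.random() < switch_prob:
        return random.choice(document_sentences), 1
    return focus, 0

In a full training loop, the cross-entropy losses of the masked-word and switch-prediction tasks would be summed to form the per-instance loss, as described above.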
Any aspects of the present disclosure may be implemented as a computer-implemented method on the machine, as a system or apparatus as part of or in relation to the machine, or as a computer program product embodied in a computer-readable medium executing on one or more of the machines. The processor may be part of a server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platform. A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like. The processor may be or include a signal processor, digital processor, embedded processor, microprocessor or any variant such as a co-processor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more threads. A thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. The processor may include memory that stores methods, codes, instructions and programs as described herein and elsewhere. The processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the like. A processor may include one or more cores that may enhance speed and performance of a multiprocessor. In embodiments, the processor may be a dual core processor, a quad core processor, or another chip-level multiprocessor that combines two or more independent cores (sometimes called a die).
The server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of programs across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope of the disclosure. In addition, any of the devices attached to the server through an interface may include at least one storage medium capable of storing methods, programs, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The software program may be associated with a client that may include a file client, print client, domain client, internet client, intranet client and other variants such as secondary client, host client, distributed client and the like. The client may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like. The methods, programs, or codes as described herein and elsewhere may be executed by the client. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client. The client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of programs across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope of the disclosure. In addition, any of the devices attached to the client through an interface may include at least one storage medium capable of storing methods, programs, applications, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like. The processes, methods, program codes, instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements. The methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells.
The cellular network may be either a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network. The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. The cell network may be a GSM, GPRS, 3G, EVDO, mesh, or other network type. The methods, program codes, and instructions described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic book readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers and configured to execute program codes. The mobile devices may communicate on a peer-to-peer network, mesh network, or other communications network. The program code may be stored on the storage medium associated with the server and executed by a computing device embedded within the server. The base station may include a computing device and a storage medium. The storage medium may store program codes and instructions executed by the computing devices associated with the base station. The computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g., USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like. The methods and systems described herein may transform physical and/or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another. The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements.
However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it will be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. The methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a general-purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable devices, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as computer executable code capable of being executed on a machine-readable medium.
The computer executable code may be created using a structured programming language such as C, an object-oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions. Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure. While the invention has been disclosed in connection with the preferred embodiments shown and described in detail, various modifications and improvements thereon will become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the present invention is not to be limited by the foregoing examples, but is to be understood in the broadest sense allowable by law. All documents referenced herein are hereby incorporated by reference in their entirety.
11861315 | DESCRIPTION OF EXAMPLE EMBODIMENTS System Overview FIG.1illustrates an example network environment100associated with an assistant system. Network environment100includes a client system130, an assistant system140, a social-networking system160, and a third-party system170connected to each other by a network110. AlthoughFIG.1illustrates a particular arrangement of a client system130, an assistant system140, a social-networking system160, a third-party system170, and a network110, this disclosure contemplates any suitable arrangement of a client system130, an assistant system140, a social-networking system160, a third-party system170, and a network110. As an example and not by way of limitation, two or more of a client system130, a social-networking system160, an assistant system140, and a third-party system170may be connected to each other directly, bypassing a network110. As another example, two or more of a client system130, an assistant system140, a social-networking system160, and a third-party system170may be physically or logically co-located with each other in whole or in part. Moreover, althoughFIG.1illustrates a particular number of client systems130, assistant systems140, social-networking systems160, third-party systems170, and networks110, this disclosure contemplates any suitable number of client systems130, assistant systems140, social-networking systems160, third-party systems170, and networks110. As an example and not by way of limitation, network environment100may include multiple client systems130, assistant systems140, social-networking systems160, third-party systems170, and networks110. This disclosure contemplates any suitable network110. As an example and not by way of limitation, one or more portions of a network110may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular technology-based network, a satellite communications technology-based network, another network110, or a combination of two or more such networks110. Links150may connect a client system130, an assistant system140, a social-networking system160, and a third-party system170to a communication network110or to each other. This disclosure contemplates any suitable links150. In particular embodiments, one or more links150include one or more wireline (such as for example Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as for example Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as for example Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links150each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link150, or a combination of two or more such links150. Links150need not necessarily be the same throughout a network environment100. One or more first links150may differ in one or more respects from one or more second links150. 
In particular embodiments, a client system130may be any suitable electronic device including hardware, software, or embedded logic components, or a combination of two or more such components, and may be capable of carrying out the functionalities implemented or supported by a client system130. As an example and not by way of limitation, the client system130may include a computer system such as a desktop computer, notebook or laptop computer, netbook, a tablet computer, e-book reader, GPS device, camera, personal digital assistant (PDA), handheld electronic device, cellular telephone, smartphone, smart speaker, smart watch, smart glasses, augmented-reality (AR) smart glasses, virtual reality (VR) headset, other suitable electronic device, or any suitable combination thereof. In particular embodiments, the client system130may be a smart assistant device. More information on smart assistant devices may be found in U.S. patent application Ser. No. 15/949,011, filed 9 Apr. 2018, U.S. patent application Ser. No. 16/153,574, filed 5 Oct. 2018, U.S. patent application Ser. No. 29/631,910, filed 3 Jan. 2018, U.S. patent application Ser. No. 29/631,747, filed 2 Jan. 2018, U.S. patent application Ser. No. 29/631,913, filed 3 Jan. 2018, and U.S. patent application Ser. No. 29/631,914, filed 3 Jan. 2018, each of which is incorporated by reference. This disclosure contemplates any suitable client systems130. In particular embodiments, a client system130may enable a network user at a client system130to access a network110. The client system130may also enable the user to communicate with other users at other client systems130. In particular embodiments, a client system130may include a web browser132, and may have one or more add-ons, plug-ins, or other extensions. A user at a client system130may enter a Uniform Resource Locator (URL) or other address directing a web browser132to a particular server (such as server162, or a server associated with a third-party system170), and the web browser132may generate a Hyper Text Transfer Protocol (HTTP) request and communicate the HTTP request to the server. The server may accept the HTTP request and communicate to a client system130one or more Hyper Text Markup Language (HTML) files responsive to the HTTP request. The client system130may render a web interface (e.g., a webpage) based on the HTML files from the server for presentation to the user. This disclosure contemplates any suitable source files. As an example and not by way of limitation, a web interface may be rendered from HTML files, Extensible Hyper Text Markup Language (XHTML) files, or Extensible Markup Language (XML) files, according to particular needs. Such interfaces may also execute scripts, combinations of markup language and scripts, and the like. Herein, reference to a web interface encompasses one or more corresponding source files (which a browser may use to render the web interface) and vice versa, where appropriate. In particular embodiments, a client system130may include a social-networking application134installed on the client system130. A user at a client system130may use the social-networking application134to access an online social network. The user at the client system130may use the social-networking application134to communicate with the user's social connections (e.g., friends, followers, followed accounts, contacts, etc.).
The user at the client system130may also use the social-networking application134to interact with a plurality of content objects (e.g., posts, news articles, ephemeral content, etc.) on the online social network. As an example and not by way of limitation, the user may browse trending topics and breaking news using the social-networking application134. In particular embodiments, a client system130may include an assistant application136. A user at a client system130may use the assistant application136to interact with the assistant system140. In particular embodiments, the assistant application136may include an assistant xbot functionality as a front-end interface for interacting with the user of the client system130, including receiving user inputs and presenting outputs. In particular embodiments, the assistant application136may comprise a stand-alone application. In particular embodiments, the assistant application136may be integrated into the social-networking application134or another suitable application (e.g., a messaging application). In particular embodiments, the assistant application136may also be integrated into the client system130, an assistant hardware device, or any other suitable hardware devices. In particular embodiments, the assistant application136may be accessed via the web browser132. In particular embodiments, the user may interact with the assistant system140by providing user input to the assistant application136via various modalities (e.g., audio, voice, text, vision, image, video, gesture, motion, activity, location, orientation). The assistant application136may communicate the user input to the assistant system140(e.g., via the assistant xbot). Based on the user input, the assistant system140may generate responses. The assistant system140may send the generated responses to the assistant application136. The assistant application136may then present the responses to the user at the client system130via various modalities (e.g., audio, text, image, and video). As an example and not by way of limitation, the user may interact with the assistant system140by providing a user input (e.g., a verbal request for information regarding a current status of nearby vehicle traffic) to the assistant xbot via a microphone of the client system130. The assistant application136may then communicate the user input to the assistant system140over network110. The assistant system140may accordingly analyze the user input, generate a response based on the analysis of the user input (e.g., vehicle traffic information obtained from a third-party source), and communicate the generated response back to the assistant application136. The assistant application136may then present the generated response to the user in any suitable manner (e.g., displaying a text-based push notification and/or image(s) illustrating a local map of nearby vehicle traffic on a display of the client system130). In particular embodiments, a client system130may implement wake-word detection techniques to allow users to conveniently activate the assistant system140using one or more wake-words associated with assistant system140. As an example and not by way of limitation, the system audio API on client system130may continuously monitor user input comprising audio data (e.g., frames of voice data) received at the client system130.
In this example, a wake-word associated with the assistant system140may be the voice phrase “hey assistant.” In this example, when the system audio API on client system130detects the voice phrase “hey assistant” in the monitored audio data, the assistant system140may be activated for subsequent interaction with the user. In alternative embodiments, similar detection techniques may be implemented to activate the assistant system140using particular non-audio user inputs associated with the assistant system140. For example, the non-audio user inputs may be specific visual signals detected by a low-power sensor (e.g., camera) of client system130. As an example and not by way of limitation, the visual signals may be a static image (e.g., barcode, QR code, universal product code (UPC)), a position of the user (e.g., the user's gaze towards client system130), a user motion (e.g., the user pointing at an object), or any other suitable visual signal. In particular embodiments, a client system130may include a rendering device137and, optionally, a companion device138. The rendering device137may be configured to render outputs generated by the assistant system140to the user. The companion device138may be configured to perform computations associated with particular tasks (e.g., communications with the assistant system140) locally (i.e., on-device) on the companion device138in particular circumstances (e.g., when the rendering device137is unable to perform said computations). In particular embodiments, the client system130, the rendering device137, and/or the companion device138may each be a suitable electronic device including hardware, software, or embedded logic components, or a combination of two or more such components, and may be capable of carrying out, individually or cooperatively, the functionalities implemented or supported by the client system130described herein. As an example and not by way of limitation, the client system130, the rendering device137, and/or the companion device138may each include a computer system such as a desktop computer, notebook or laptop computer, netbook, a tablet computer, e-book reader, GPS device, camera, personal digital assistant (PDA), handheld electronic device, cellular telephone, smartphone, smart speaker, virtual reality (VR) headset, augmented-reality (AR) smart glasses, other suitable electronic device, or any suitable combination thereof. In particular embodiments, one or more of the client system130, the rendering device137, and the companion device138may operate as a smart assistant device. As an example and not by way of limitation, the rendering device137may comprise smart glasses and the companion device138may comprise a smart phone. As another example and not by way of limitation, the rendering device137may comprise a smart watch and the companion device138may comprise a smart phone. As yet another example and not by way of limitation, the rendering device137may comprise smart glasses and the companion device138may comprise a smart remote for the smart glasses. As yet another example and not by way of limitation, the rendering device137may comprise a VR/AR headset and the companion device138may comprise a smart phone. In particular embodiments, a user may interact with the assistant system140using the rendering device137or the companion device138, individually or in combination. 
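As a rough illustration only, wake-word gating of the kind described above can be sketched in Python as follows; transcribe and activate are hypothetical callables, and production systems typically use dedicated low-power keyword-spotting models rather than full transcription.

# Minimal sketch: watch a stream of voice-data frames and activate the
# assistant when the wake phrase appears in a short rolling text window.
WAKE_WORD = "hey assistant"

def monitor_audio(frames, transcribe, activate):
    window = ""
    for frame in frames:
        window = (window + transcribe(frame))[-64:]  # keep recent audio only
        if WAKE_WORD in window.lower():
            activate()   # assistant system handles the subsequent interaction
            window = ""  # reset so one utterance triggers a single activation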
In particular embodiments, one or more of the client system130, the rendering device137, and the companion device138may implement a multi-stage wake-word detection model to enable users to conveniently activate the assistant system140by continuously monitoring for one or more wake-words associated with assistant system140. At a first stage of the wake-word detection model, the rendering device137may receive audio user input (e.g., frames of voice data). If a wireless connection between the rendering device137and the companion device138is available, the application on the rendering device137may communicate the received audio user input to the companion application on the companion device138via the wireless connection. At a second stage of the wake-word detection model, the companion application on the companion device138may process the received audio user input to detect a wake-word associated with the assistant system140. The companion application on the companion device138may then communicate the detected wake-word to a server associated with the assistant system140via wireless network110. At a third stage of the wake-word detection model, the server associated with the assistant system140may perform a keyword verification on the detected wake-word to verify whether the user intended to activate and receive assistance from the assistant system140. In alternative embodiments, any of the processing, detection, or keyword verification may be performed by the rendering device137and/or the companion device138. In particular embodiments, when the assistant system140has been activated by the user, an application on the rendering device137may be configured to receive user input from the user, and a companion application on the companion device138may be configured to handle user inputs (e.g., user requests) received by the application on the rendering device137. In particular embodiments, the rendering device137and the companion device138may be associated with each other (i.e., paired) via one or more wireless communication protocols (e.g., Bluetooth). The following example workflow illustrates how a rendering device137and a companion device138may handle a user input provided by a user. In this example, an application on the rendering device137may receive a user input comprising a user request directed to the rendering device137. The application on the rendering device137may then determine a status of a wireless connection (i.e., tethering status) between the rendering device137and the companion device138. If a wireless connection between the rendering device137and the companion device138is not available, the application on the rendering device137may communicate the user request (optionally including additional data and/or contextual information available to the rendering device137) to the assistant system140via the network110. The assistant system140may then generate a response to the user request and communicate the generated response back to the rendering device137. The rendering device137may then present the response to the user in any suitable manner. Alternatively, if a wireless connection between the rendering device137and the companion device138is available, the application on the rendering device137may communicate the user request (optionally including additional data and/or contextual information available to the rendering device137) to the companion application on the companion device138via the wireless connection. 
The companion application on the companion device138may then communicate the user request (optionally including additional data and/or contextual information available to the companion device138) to the assistant system140via the network110. The assistant system140may then generate a response to the user request and communicate the generated response back to the companion device138. The companion application on the companion device138may then communicate the generated response to the application on the rendering device137. The rendering device137may then present the response to the user in any suitable manner. In the preceding example workflow, the rendering device137and the companion device138may each perform one or more computations and/or processes at each respective step of the workflow. In particular embodiments, performance of the computations and/or processes disclosed herein may be adaptively switched between the rendering device137and the companion device138based at least in part on a device state of the rendering device137and/or the companion device138, a task associated with the user input, and/or one or more additional factors. As an example and not by way of limitation, one factor may be signal strength of the wireless connection between the rendering device137and the companion device138. For example, if the signal strength of the wireless connection between the rendering device137and the companion device138is strong, the computations and processes may be adaptively switched to be substantially performed by the companion device138in order to, for example, benefit from the greater processing power of the CPU of the companion device138. Alternatively, if the signal strength of the wireless connection between the rendering device137and the companion device138is weak, the computations and processes may be adaptively switched to be substantially performed by the rendering device137in a standalone manner. In particular embodiments, if the client system130does not comprise a companion device138, the aforementioned computations and processes may be performed solely by the rendering device137in a standalone manner. In particular embodiments, an assistant system140may assist users with various assistant-related tasks. The assistant system140may interact with the social-networking system160and/or the third-party system170when executing these assistant-related tasks. In particular embodiments, the social-networking system160may be a network-addressable computing system that can host an online social network. The social-networking system160may generate, store, receive, and send social-networking data, such as, for example, user profile data, concept-profile data, social-graph information, or other suitable data related to the online social network. The social-networking system160may be accessed by the other components of network environment100either directly or via a network110. As an example and not by way of limitation, a client system130may access the social-networking system160using a web browser132or a native application associated with the social-networking system160(e.g., a mobile social-networking application, a messaging application, another suitable application, or any combination thereof) either directly or via a network110. In particular embodiments, the social-networking system160may include one or more servers162. Each server162may be a unitary server or a distributed server spanning multiple computers or multiple datacenters. 
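The example workflow above reduces to a simple routing decision, sketched here in Python with hypothetical callables standing in for the actual device and network interfaces.

# Minimal sketch of request routing between a rendering device and a
# companion device based on tethering status and wireless signal strength.
def handle_user_request(request, tethered, signal_strong,
                        send_via_companion, send_to_server):
    # If paired and the wireless link is usable, relay through the companion
    # application, which can add its own contextual data and benefits from
    # the companion device's greater processing power.
    if tethered and signal_strong:
        return send_via_companion(request)
    # Otherwise the rendering device contacts the assistant system directly
    # over the network and presents the response in a standalone manner.
    return send_to_server(request)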
As an example and not by way of limitation, each server162may be a web server, a news server, a mail server, a message server, an advertising server, a file server, an application server, an exchange server, a database server, a proxy server, another server suitable for performing functions or processes described herein, or any combination thereof. In particular embodiments, each server162may include hardware, software, or embedded logic components or a combination of two or more such components for carrying out the appropriate functionalities implemented or supported by server162. In particular embodiments, the social-networking system160may include one or more data stores164. Data stores164may be used to store various types of information. In particular embodiments, the information stored in data stores164may be organized according to specific data structures. In particular embodiments, each data store164may be a relational, columnar, correlation, or other suitable database. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases. Particular embodiments may provide interfaces that enable a client system130, a social-networking system160, an assistant system140, or a third-party system170to manage, retrieve, modify, add, or delete the information stored in data store164. In particular embodiments, the social-networking system160may store one or more social graphs in one or more data stores164. In particular embodiments, a social graph may include multiple nodes—which may include multiple user nodes (each corresponding to a particular user) or multiple concept nodes (each corresponding to a particular concept)—and multiple edges connecting the nodes. The social-networking system160may provide users of the online social network the ability to communicate and interact with other users. In particular embodiments, users may join the online social network via the social-networking system160and then add connections (e.g., relationships) to a number of other users of the social-networking system160whom they want to be connected to. Herein, the term “friend” may refer to any other user of the social-networking system160with whom a user has formed a connection, association, or relationship via the social-networking system160. In particular embodiments, the social-networking system160may provide users with the ability to take actions on various types of items or objects, supported by the social-networking system160. As an example and not by way of limitation, the items and objects may include groups or social networks to which users of the social-networking system160may belong, events or calendar entries in which a user might be interested, computer-based applications that a user may use, transactions that allow users to buy or sell items via the service, interactions with advertisements that a user may perform, or other suitable items or objects. A user may interact with anything that is capable of being represented in the social-networking system160or by an external system of a third-party system170, which is separate from the social-networking system160and coupled to the social-networking system160via a network110. In particular embodiments, the social-networking system160may be capable of linking a variety of entities.
As an example and not by way of limitation, the social-networking system160may enable users to interact with each other as well as receive content from third-party systems170or other entities, or to allow users to interact with these entities through an application programming interface (API) or other communication channels. In particular embodiments, a third-party system170may include one or more types of servers, one or more data stores, one or more interfaces, including but not limited to APIs, one or more web services, one or more content sources, one or more networks, or any other suitable components, e.g., that servers may communicate with. A third-party system170may be operated by a different entity from an entity operating the social-networking system160. In particular embodiments, however, the social-networking system160and third-party systems170may operate in conjunction with each other to provide social-networking services to users of the social-networking system160or third-party systems170. In this sense, the social-networking system160may provide a platform, or backbone, which other systems, such as third-party systems170, may use to provide social-networking services and functionality to users across the Internet. In particular embodiments, a third-party system170may include a third-party content object provider. A third-party content object provider may include one or more sources of content objects, which may be communicated to a client system130. As an example and not by way of limitation, content objects may include information regarding things or activities of interest to the user, such as, for example, movie show times, movie reviews, restaurant reviews, restaurant menus, product information and reviews, or other suitable information. As another example and not by way of limitation, content objects may include incentive content objects, such as coupons, discount tickets, gift certificates, or other suitable incentive objects. In particular embodiments, a third-party content provider may use one or more third-party agents to provide content objects and/or services. A third-party agent may be an implementation that is hosted and executing on the third-party system170. In particular embodiments, the social-networking system160also includes user-generated content objects, which may enhance a user's interactions with the social-networking system160. User-generated content may include anything a user can add, upload, send, or “post” to the social-networking system160. As an example and not by way of limitation, a user communicates posts to the social-networking system160from a client system130. Posts may include data such as status updates or other textual data, location information, photos, videos, links, music or other similar data or media. Content may also be added to the social-networking system160by a third-party through a “communication channel,” such as a newsfeed or stream. In particular embodiments, the social-networking system160may include a variety of servers, sub-systems, programs, modules, logs, and data stores.
In particular embodiments, the social-networking system160may include one or more of the following: a web server, action logger, API-request server, relevance-and-ranking engine, content-object classifier, notification controller, action log, third-party-content-object-exposure log, inference module, authorization/privacy server, search module, advertisement-targeting module, user-interface module, user-profile store, connection store, third-party content store, or location store. The social-networking system160may also include suitable components such as network interfaces, security mechanisms, load balancers, failover servers, management-and-network-operations consoles, other suitable components, or any suitable combination thereof. In particular embodiments, the social-networking system160may include one or more user-profile stores for storing user profiles. A user profile may include, for example, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, or location. Interest information may include interests related to one or more categories. Categories may be general or specific. As an example and not by way of limitation, if a user “likes” an article about a brand of shoes, the category may be the brand, or the general category of “shoes” or “clothing.” A connection store may be used for storing connection information about users. The connection information may indicate users who have similar or common work experience, group memberships, hobbies, educational history, or are in any way related or share common attributes. The connection information may also include user-defined connections between different users and content (both internal and external). A web server may be used for linking the social-networking system160to one or more client systems130or one or more third-party systems170via a network110. The web server may include a mail server or other messaging functionality for receiving and routing messages between the social-networking system160and one or more client systems130. An API-request server may allow, for example, an assistant system140or a third-party system170to access information from the social-networking system160by calling one or more APIs. An action logger may be used to receive communications from a web server about a user's actions on or off the social-networking system160. In conjunction with the action log, a third-party-content-object log may be maintained of user exposures to third-party-content objects. A notification controller may provide information regarding content objects to a client system130. Information may be pushed to a client system130as notifications, or information may be pulled from a client system130responsive to a user input comprising a user request received from a client system130. Authorization servers may be used to enforce one or more privacy settings of the users of the social-networking system160. A privacy setting of a user may determine how particular information associated with a user can be shared. The authorization server may allow users to opt in to or opt out of having their actions logged by the social-networking system160or shared with other systems (e.g., a third-party system170), such as, for example, by setting appropriate privacy settings.
Third-party-content-object stores may be used to store content objects received from third parties, such as a third-party system170. Location stores may be used for storing location information received from client systems130associated with users. Advertisement-pricing modules may combine social information, the current time, location information, or other suitable information to provide relevant advertisements, in the form of notifications, to a user. Assistant Systems FIG.2illustrates an example architecture200of the assistant system140. In particular embodiments, the assistant system140may assist a user to obtain information or services. The assistant system140may enable the user to interact with the assistant system140via user inputs of various modalities (e.g., audio, voice, text, vision, image, video, gesture, motion, activity, location, orientation) in stateful and multi-turn conversations to receive assistance from the assistant system140. As an example and not by way of limitation, a user input may comprise an audio input based on the user's voice (e.g., a verbal command), which may be processed by a system audio API (application programming interface) on client system130. The system audio API may perform techniques including echo cancellation, noise removal, beam forming, self-user voice activation, speaker identification, voice activity detection (VAD), and/or any other suitable acoustic technique in order to generate audio data that is readily processable by the assistant system140. In particular embodiments, the assistant system140may support mono-modal inputs (e.g., only voice inputs), multi-modal inputs (e.g., voice inputs and text inputs), hybrid/multi-modal inputs, or any combination thereof. In particular embodiments, a user input may be a user-generated input that is sent to the assistant system140in a single turn. User inputs provided by a user may be associated with particular assistant-related tasks, and may include, for example, user requests (e.g., verbal requests for information or performance of an action), user interactions with the assistant application136associated with the assistant system140(e.g., selection of UI elements via touch or gesture), or any other type of suitable user input that may be detected and understood by the assistant system140(e.g., user movements detected by the client device130of the user). In particular embodiments, the assistant system140may create and store a user profile comprising both personal and contextual information associated with the user. In particular embodiments, the assistant system140may analyze the user input using natural-language understanding (NLU) techniques. The analysis may be based at least in part on the user profile of the user for more personalized and context-aware understanding. The assistant system140may resolve entities associated with the user input based on the analysis. In particular embodiments, the assistant system140may interact with different agents to obtain information or services that are associated with the resolved entities. The assistant system140may generate a response for the user regarding the information or services by using natural-language generation (NLG). Through the interaction with the user, the assistant system140may use dialog management techniques to manage and forward the conversation flow with the user. In particular embodiments, the assistant system140may further assist the user to effectively and efficiently digest the obtained information by summarizing the information. 
The assistant system140may also assist the user to engage more with an online social network by providing tools that help the user interact with the online social network (e.g., creating posts, comments, messages). The assistant system140may additionally assist the user to manage different tasks such as keeping track of events. In particular embodiments, the assistant system140may proactively execute, without a user input, pre-authorized tasks that are relevant to user interests and preferences based on the user profile, at a time relevant for the user. In particular embodiments, the assistant system140may check privacy settings to ensure that accessing a user's profile or other user information and executing different tasks are permitted subject to the user's privacy settings. More information on assisting users subject to privacy settings may be found in U.S. patent application Ser. No. 16/182,542, filed 6 Nov. 2018, which is incorporated by reference. In particular embodiments, the assistant system140may assist a user via an architecture built upon client-side processes and server-side processes which may operate in various operational modes. InFIG.2, the client-side process is illustrated above the dashed line202whereas the server-side process is illustrated below the dashed line202. A first operational mode (i.e., on-device mode) may be a workflow in which the assistant system140processes a user input and provides assistance to the user by primarily or exclusively performing client-side processes locally on the client system130. For example, if the client system130is not connected to a network110(i.e., when client system130is offline), the assistant system140may handle a user input in the first operational mode utilizing only client-side processes. A second operational mode (i.e., cloud mode) may be a workflow in which the assistant system140processes a user input and provides assistance to the user by primarily or exclusively performing server-side processes on one or more remote servers (e.g., a server associated with assistant system140). As illustrated inFIG.2, a third operational mode (i.e., blended mode) may be a parallel workflow in which the assistant system140processes a user input and provides assistance to the user by performing client-side processes locally on the client system130in conjunction with server-side processes on one or more remote servers (e.g., a server associated with assistant system140). For example, the client system130and the server associated with assistant system140may both perform automatic speech recognition (ASR) and natural-language understanding (NLU) processes, but the client system130may delegate dialog, agent, and natural-language generation (NLG) processes to be performed by the server associated with assistant system140. In particular embodiments, selection of an operational mode may be based at least in part on a device state, a task associated with a user input, and/or one or more additional factors. As an example and not by way of limitation, as described above, one factor may be a network connectivity status for client system130. For example, if the client system130is not connected to a network110(i.e., when client system130is offline), the assistant system140may handle a user input in the first operational mode (i.e., on-device mode). As another example and not by way of limitation, another factor may be based on a measure of available battery power (i.e., battery status) for the client system130.
For example, if there is a need for client system130to conserve battery power (e.g., when client system130has minimal available battery power or the user has indicated a desire to conserve the battery power of the client system130), the assistant system140may handle a user input in the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode) in order to perform fewer power-intensive operations on the client system130. As yet another example and not by way of limitation, another factor may be one or more privacy constraints (e.g., specified privacy settings, applicable privacy policies). For example, if one or more privacy constraints limit or preclude particular data from being transmitted to a remote server (e.g., a server associated with the assistant system140), the assistant system140may handle a user input in the first operational mode (i.e., on-device mode) in order to protect user privacy. As yet another example and not by way of limitation, another factor may be desynchronized context data between the client system130and a remote server (e.g., the server associated with assistant system140). For example, if the client system130and the server associated with assistant system140are determined to have inconsistent, missing, and/or unreconciled context data, the assistant system140may handle a user input in the third operational mode (i.e., blended mode) to reduce the likelihood of an inadequate analysis associated with the user input. As yet another example and not by way of limitation, another factor may be a measure of latency for the connection between client system130and a remote server (e.g., the server associated with assistant system140). For example, if a task associated with a user input may significantly benefit from and/or require prompt or immediate execution (e.g., photo capturing tasks), the assistant system140may handle the user input in the first operational mode (i.e., on-device mode) to ensure the task is performed in a timely manner. As yet another example and not by way of limitation, another factor may be, for a feature relevant to a task associated with a user input, whether the feature is only supported by a remote server (e.g., the server associated with assistant system140). For example, if the relevant feature requires advanced technical functionality (e.g., high-powered processing capabilities, rapid update cycles) that is only supported by the server associated with assistant system140and is not supported by client system130at the time of the user input, the assistant system140may handle the user input in the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode) in order to benefit from the relevant feature.
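As a sketch only, the factors above might be combined as follows in Python; the predicate names and their priority ordering are illustrative assumptions rather than the orchestrator's actual decision logic.

# Minimal sketch of factor-based operational mode selection.
def select_operational_mode(online, privacy_requires_on_device, low_battery,
                            latency_sensitive, needs_server_only_feature):
    # Privacy constraints and lack of connectivity force on-device processing.
    if not online or privacy_requires_on_device:
        return "on-device"   # first operational mode
    # Tasks that must execute promptly (e.g., photo capture) stay on-device.
    if latency_sensitive:
        return "on-device"
    # Server-only features or a need to conserve battery favor the cloud.
    if needs_server_only_feature or low_battery:
        return "cloud"       # second operational mode
    # Otherwise split the work between client and server.
    return "blended"         # third operational mode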
As an example and not by way of limitation, with reference to the workflow architecture illustrated inFIG.2, after a user input is received from a user, the on-device orchestrator206may determine, at decision point (D0)205, whether to begin processing the user input in the first operational mode (i.e., on-device mode), the second operational mode (i.e., cloud mode), or the third operational mode (i.e., blended mode). For example, at decision point (D0)205, the on-device orchestrator206may select the first operational mode (i.e., on-device mode) if the client system130is not connected to network110(i.e., when client system130is offline), if one or more privacy constraints expressly require on-device processing (e.g., adding another person to, or removing another person from, a private call between users), or if the user input is associated with a task which does not require or benefit from server-side processing (e.g., setting an alarm or calling another user). As another example, at decision point (D0)205, the on-device orchestrator206may select the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode) if the client system130has a need to conserve battery power (e.g., when client system130has minimal available battery power or the user has indicated a desire to conserve the battery power of the client system130) or has a need to limit additional utilization of computing resources (e.g., when other processes operating on client device130require high CPU utilization (e.g., SMS messaging applications)). In particular embodiments, if the on-device orchestrator206determines at decision point (D0)205that the user input should be processed using the first operational mode (i.e., on-device mode) or the third operational mode (i.e., blended mode), the client-side process may continue as illustrated inFIG.2. As an example and not by way of limitation, if the user input comprises speech data, the speech data may be received at a local automatic speech recognition (ASR) module208aon the client system130. The ASR module208amay allow a user to dictate and have speech transcribed as written text, have a document synthesized as an audio stream, or issue commands that are recognized as such by the system. In particular embodiments, the output of the ASR module208amay be sent to a local natural-language understanding (NLU) module210a. The NLU module210amay perform named entity resolution (NER), or named entity resolution may be performed by the entity resolution module212a, as described below. In particular embodiments, one or more of an intent, a slot, or a domain may be an output of the NLU module210a. In particular embodiments, the user input may comprise non-speech data, which may be received at a local context engine220a. As an example and not by way of limitation, the non-speech data may comprise locations, visuals, touch, gestures, world updates, social updates, contextual information, information related to people, activity data, and/or any other suitable type of non-speech data. The non-speech data may further comprise sensory data received by client system130sensors (e.g., microphone, camera), which may be accessed subject to privacy constraints and further analyzed by computer vision technologies. In particular embodiments, the computer vision technologies may comprise human reconstruction, face detection, facial recognition, hand tracking, eye tracking, and/or any other suitable computer vision technologies.
In particular embodiments, the non-speech data may be subject to geometric constructions, which may comprise constructing objects surrounding a user using any suitable type of data collected by a client system130. As an example and not by way of limitation, a user may be wearing AR glasses, and geometric constructions may be utilized to determine spatial locations of surfaces and items (e.g., a floor, a wall, a user's hands). In particular embodiments, the non-speech data may be inertial data captured by AR glasses or a VR headset, which may be data associated with linear and angular motions (e.g., measurements associated with a user's body movements). In particular embodiments, the context engine220amay determine various types of events and context based on the non-speech data. In particular embodiments, the outputs of the NLU module210aand/or the context engine220amay be sent to an entity resolution module212a. The entity resolution module212amay resolve entities associated with one or more slots output by NLU module210a. In particular embodiments, each resolved entity may be associated with one or more entity identifiers. As an example and not by way of limitation, an identifier may comprise a unique user identifier (ID) corresponding to a particular user (e.g., a unique username or user ID number for the social-networking system160). In particular embodiments, each resolved entity may also be associated with a confidence score. More information on resolving entities may be found in U.S. Pat. No. 10,803,050, filed 27 Jul. 2018, and U.S. patent application Ser. No. 16/048,072, filed 27 Jul. 2018, each of which is incorporated by reference. In particular embodiments, at decision point (D0)205, the on-device orchestrator206may determine that a user input should be handled in the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode). In these operational modes, the user input may be handled by certain server-side modules in a similar manner as the client-side process described above. In particular embodiments, if the user input comprises speech data, the speech data of the user input may be received at a remote automatic speech recognition (ASR) module208bon a remote server (e.g., the server associated with assistant system140). The ASR module208bmay allow a user to dictate and have speech transcribed as written text, have a document synthesized as an audio stream, or issue commands that are recognized as such by the system. In particular embodiments, the output of the ASR module208bmay be sent to a remote natural-language understanding (NLU) module210b. In particular embodiments, the NLU module210bmay perform named entity resolution (NER), or named entity resolution may be performed by entity resolution module212bof dialog manager module216b, as described below. In particular embodiments, one or more of an intent, a slot, or a domain may be an output of the NLU module210b. In particular embodiments, the user input may comprise non-speech data, which may be received at a remote context engine220b. In particular embodiments, the remote context engine220bmay determine various types of events and context based on the non-speech data. In particular embodiments, the output of the NLU module210band/or the context engine220bmay be sent to a remote dialog manager216b.
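The following Python fragment is a non-authoritative sketch of the data shapes implied by the pipeline above: an NLU output carrying an intent, a domain, and slots, and resolved entities that each carry an entity identifier and a confidence score. The class and field names, and the toy resolver, are assumptions for illustration:

from dataclasses import dataclass, field


@dataclass
class NLUOutput:
    intent: str                                          # e.g., "[IN:play_music]"
    domain: str                                          # e.g., "music"
    slots: dict[str, str] = field(default_factory=dict)  # slot name -> text span


@dataclass
class ResolvedEntity:
    slot: str           # the slot this entity fills
    entity_id: str      # unique identifier, e.g., a user ID
    confidence: float   # confidence score for this resolution


def resolve_entities(nlu_out: NLUOutput) -> list[ResolvedEntity]:
    # Toy resolver: map every slot span to a placeholder identifier.
    return [
        ResolvedEntity(slot=name, entity_id=f"id:{span}", confidence=0.9)
        for name, span in nlu_out.slots.items()
    ]


out = NLUOutput(intent="[IN:play_music]", domain="music",
                slots={"[SL:song_name]": "Beethoven's 5th"})
print(resolve_entities(out))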
In particular embodiments, as discussed above, an on-device orchestrator206on the client system130may coordinate receiving a user input and may determine, at one or more decision points in an example workflow, which of the operational modes described above should be used to process or continue processing the user input. As further discussed above, selection of an operational mode may be based at least in part on a device state, a task associated with a user input, and/or one or more additional factors. As an example and not by way of limitation, with continued reference to the workflow architecture illustrated inFIG.2, after the entity resolution module212agenerates an output or a null output, the on-device orchestrator206may determine, at decision point (D1)215, whether to continue processing the user input in the first operational mode (i.e., on-device mode), the second operational mode (i.e., cloud mode), or the third operational mode (i.e., blended mode). For example, at decision point (D1)215, the on-device orchestrator206may select the first operational mode (i.e., on-device mode) if an identified intent is associated with a latency sensitive processing task (e.g., taking a photo, pausing a stopwatch). As another example and not by way of limitation, if a messaging task is not supported by on-device processing on the client system130, the on-device orchestrator206may select the third operational mode (i.e., blended mode) to process the user input associated with a messaging request. As yet another example, at decision point (D1)215, the on-device orchestrator206may select the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode) if the task being processed requires access to a social graph, a knowledge graph, or a concept graph not stored on the client system130. Alternatively, the on-device orchestrator206may instead select the first operational mode (i.e., on-device mode) if a sufficient version of an informational graph including requisite information for the task exists on the client system130(e.g., a smaller and/or bootstrapped version of a knowledge graph). In particular embodiments, if the on-device orchestrator206determines at decision point (D1)215that processing should continue using the first operational mode (i.e., on-device mode) or the third operational mode (i.e., blended mode), the client-side process may continue as illustrated inFIG.2. As an example and not by way of limitation, the output from the entity resolution module212amay be sent to an on-device dialog manager216a. In particular embodiments, the on-device dialog manager216amay comprise a dialog state tracker218aand an action selector222a. The on-device dialog manager216amay have complex dialog logic and product-related business logic to manage the dialog state and flow of the conversation between the user and the assistant system140. The on-device dialog manager216amay include full functionality for end-to-end integration and multi-turn support (e.g., confirmation, disambiguation). The on-device dialog manager216amay also be lightweight with respect to computing limitations and resources including memory, computation (CPU), and binary size constraints. The on-device dialog manager216amay also be scalable to improve developer experience. 
In particular embodiments, the on-device dialog manager216amay benefit the assistant system140, for example, by providing offline support to alleviate network connectivity issues (e.g., unstable or unavailable network connections), by using client-side processes to prevent privacy-sensitive information from being transmitted off of client system130, and by providing a stable user experience in high-latency sensitive scenarios. In particular embodiments, the on-device dialog manager216amay further conduct false trigger mitigation. Implementation of false trigger mitigation may detect and prevent false triggers from user inputs which would otherwise invoke the assistant system140(e.g., an unintended wake-word) and may further prevent the assistant system140from generating data records based on the false trigger that may be inaccurate and/or subject to privacy constraints. As an example and not by way of limitation, if a user is in a voice call, the user's conversation during the voice call may be considered private, and the false trigger mitigation may limit detection of wake-words to audio user inputs received locally by the user's client system130. In particular embodiments, the on-device dialog manager216amay implement false trigger mitigation based on a nonsense detector. If the nonsense detector determines with a high confidence that a received wake-word is not logically and/or contextually sensible at the point in time at which it was received from the user, the on-device dialog manager216amay determine that the user did not intend to invoke the assistant system140. In particular embodiments, due to a limited computing power of the client system130, the on-device dialog manager216amay conduct on-device learning based on learning algorithms particularly tailored for client system130. As an example and not by way of limitation, federated learning techniques may be implemented by the on-device dialog manager216a. Federated learning is a specific category of distributed machine learning techniques which may train machine-learning models using decentralized data stored on end devices (e.g., mobile phones). In particular embodiments, the on-device dialog manager216amay use a federated user representation learning model to extend existing neural-network personalization techniques to implementation of federated learning by the on-device dialog manager216a. Federated user representation learning may personalize federated learning models by learning task-specific user representations (i.e., embeddings) and/or by personalizing model weights. Federated user representation learning is simple, scalable, privacy-preserving, and resource-efficient. Federated user representation learning may divide model parameters into federated and private parameters. Private parameters, such as private user embeddings, may be trained locally on a client system130instead of being transferred to or averaged by a remote server (e.g., the server associated with assistant system140). Federated parameters, by contrast, may be trained remotely on the server. In particular embodiments, the on-device dialog manager216amay use an active federated learning model, which may transmit a global model trained on the remote server to client systems130and calculate gradients locally on the client systems130. Active federated learning may enable the on-device dialog manager216ato minimize the transmission costs associated with downloading models and uploading gradients.
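A minimal Python sketch of the federated/private parameter split described above, under the assumption that parameters are plain keyed vectors; only the keys marked as federated are averaged across clients, while private parameters such as user embeddings never leave the device. All names and values are illustrative:

def federated_average(client_params: list[dict[str, list[float]]],
                      federated_keys: set[str]) -> dict[str, list[float]]:
    # Average only the federated parameters; private parameters
    # (e.g., per-user embeddings) stay on each client system.
    averaged: dict[str, list[float]] = {}
    for key in federated_keys:
        vectors = [params[key] for params in client_params]
        averaged[key] = [sum(vals) / len(vals) for vals in zip(*vectors)]
    return averaged


clients = [
    {"encoder.w": [1.0, 2.0], "user_embedding": [0.1]},  # client A
    {"encoder.w": [3.0, 4.0], "user_embedding": [0.9]},  # client B
]
print(federated_average(clients, {"encoder.w"}))  # {'encoder.w': [2.0, 3.0]}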
For active federated learning, in each round, client systems130may be selected in a semi-random manner based at least in part on a probability conditioned on the current model and the data on the client systems130in order to optimize efficiency for training the federated learning model. In particular embodiments, the dialog state tracker218amay track state changes over time as a user interacts with the world and the assistant system140interacts with the user. As an example and not by way of limitation, the dialog state tracker218amay track, for example, what the user is talking about, whom the user is with, where the user is, what tasks are currently in progress, and where the user's gaze is, subject to applicable privacy policies. In particular embodiments, at decision point (D1)215, the on-device orchestrator206may determine to forward the user input to the server for either the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode). As an example and not by way of limitation, if particular functionalities or processes (e.g., messaging) are not supported on the client system130, the on-device orchestrator206may determine at decision point (D1)215to use the third operational mode (i.e., blended mode). In particular embodiments, the on-device orchestrator206may cause the outputs from the NLU module210a, the context engine220a, and the entity resolution module212a, via a dialog manager proxy224, to be forwarded to an entity resolution module212bof the remote dialog manager216bto continue the processing. The dialog manager proxy224may be a communication channel for information/events exchange between the client system130and the server. In particular embodiments, the dialog manager216bmay additionally comprise a remote arbitrator226b, a remote dialog state tracker218b, and a remote action selector222b. In particular embodiments, the assistant system140may have started processing a user input with the second operational mode (i.e., cloud mode) at decision point (D0)205and the on-device orchestrator206may determine to continue processing the user input based on the second operational mode (i.e., cloud mode) at decision point (D1)215. Accordingly, the output from the NLU module210band the context engine220bmay be received at the remote entity resolution module212b. The remote entity resolution module212bmay have similar functionality as the local entity resolution module212a, which may comprise resolving entities associated with the slots. In particular embodiments, the entity resolution module212bmay access one or more of the social graph, the knowledge graph, or the concept graph when resolving the entities. The output from the entity resolution module212bmay be received at the arbitrator226b. In particular embodiments, the remote arbitrator226bmay be responsible for choosing between client-side and server-side upstream results (e.g., results from the NLU module210a/b, results from the entity resolution module212a/b, and results from the context engine220a/b). The arbitrator226bmay send the selected upstream results to the remote dialog state tracker218b. In particular embodiments, similarly to the local dialog state tracker218a, the remote dialog state tracker218bmay convert the upstream results into candidate tasks using task specifications and resolve arguments with entity resolution.
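As a hedged sketch of the arbitration step just described, the fragment below picks between client-side and server-side upstream results; the confidence-based policy is an invented stand-in for whatever logic an actual arbitrator might apply:

from dataclasses import dataclass
from typing import Optional


@dataclass
class UpstreamResult:
    source: str         # "client" or "server"
    intent: str         # e.g., from the NLU module
    confidence: float   # e.g., NLU or entity-resolution confidence


def arbitrate(client: Optional[UpstreamResult],
              server: Optional[UpstreamResult]) -> UpstreamResult:
    candidates = [r for r in (client, server) if r is not None]
    if not candidates:
        raise ValueError("no upstream results to arbitrate")
    # Simplest possible policy: forward the higher-confidence result
    # downstream (e.g., to the remote dialog state tracker).
    return max(candidates, key=lambda r: r.confidence)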
In particular embodiments, at decision point (D2)225, the on-device orchestrator206may determine whether to continue processing the user input based on the first operational mode (i.e., on-device mode) or forward the user input to the server for the third operational mode (i.e., blended mode). The decision may depend on, for example, whether the client-side process is able to resolve the task and slots successfully, whether there is a valid task policy with a specific feature support, and/or the context differences between the client-side process and the server-side process. In particular embodiments, decisions made at decision point (D2)225may be for multi-turn scenarios. In particular embodiments, there may be at least two possible scenarios. In a first scenario, the assistant system140may have started processing a user input in the first operational mode (i.e., on-device mode) using client-side dialog state. If at some point the assistant system140decides to switch to having the remote server process the user input, the assistant system140may create a programmatic/predefined task with the current task state and forward it to the remote server. For subsequent turns, the assistant system140may continue processing in the third operational mode (i.e., blended mode) using the server-side dialog state. In another scenario, the assistant system140may have started processing the user input in either the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode) and may substantially rely on server-side dialog state for all subsequent turns. If the on-device orchestrator206determines to continue processing the user input based on the first operational mode (i.e., on-device mode), the output from the dialog state tracker218amay be received at the action selector222a. In particular embodiments, at decision point (D2)225, the on-device orchestrator206may determine to forward the user input to the remote server and continue processing the user input in either the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode). The assistant system140may create a programmatic/predefined task with the current task state and forward it to the server, which may be received at the action selector222b. In particular embodiments, the assistant system140may have started processing the user input in the second operational mode (i.e., cloud mode), and the on-device orchestrator206may determine to continue processing the user input in the second operational mode (i.e., cloud mode) at decision point (D2)225. Accordingly, the output from the dialog state tracker218bmay be received at the action selector222b. In particular embodiments, the action selector222a/bmay perform interaction management. The action selector222a/bmay determine and trigger a set of general executable actions. The actions may be executed either on the client system130or at the remote server. As an example and not by way of limitation, these actions may include providing information or suggestions to the user. In particular embodiments, the actions may interact with agents228a/b, users, and/or the assistant system140itself. These actions may include one or more of a slot request, a confirmation, a disambiguation, or an agent execution. The actions may be independent of the underlying implementation of the action selector222a/b.
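The general executable actions named above (slot request, confirmation, disambiguation, agent execution) suggest a small taxonomy, sketched below in Python. The enum, the function, and the toy ordering policy are illustrative assumptions, not the action selector's actual logic:

from enum import Enum, auto


class ActionType(Enum):
    SLOT_REQUEST = auto()     # ask the user for a missing slot
    CONFIRMATION = auto()     # confirm a task before executing it
    DISAMBIGUATION = auto()   # ask the user to pick among candidates
    AGENT_EXECUTION = auto()  # hand the task to an agent for execution


def select_action(slots_missing: bool, ambiguous: bool,
                  needs_confirmation: bool) -> ActionType:
    # A toy policy: request slots first, then disambiguate, then confirm,
    # and only then execute via an agent.
    if slots_missing:
        return ActionType.SLOT_REQUEST
    if ambiguous:
        return ActionType.DISAMBIGUATION
    if needs_confirmation:
        return ActionType.CONFIRMATION
    return ActionType.AGENT_EXECUTION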
For more complicated scenarios such as, for example, multi-turn tasks or tasks with complex business logic, the local action selector222amay call one or more local agents228a, and the remote action selector222bmay call one or more remote agents228bto execute the actions. Agents228a/bmay be invoked via task ID, and any actions may be routed to the correct agent228a/busing that task ID. In particular embodiments, an agent228a/bmay be configured to serve as a broker across a plurality of content providers for one domain. A content provider may be an entity responsible for carrying out an action associated with an intent or completing a task associated with the intent. In particular embodiments, agents228a/bmay provide several functionalities for the assistant system140including, for example, native template generation, task specific business logic, and querying external APIs. When executing actions for a task, agents228a/bmay use context from the dialog state tracker218a/b, and may also update the dialog state tracker218a/b. In particular embodiments, agents228a/bmay also generate partial payloads from a dialog act. In particular embodiments, the local agents228amay have different implementations to be compiled/registered for different platforms (e.g., smart glasses versus a VR headset). In particular embodiments, multiple device-specific implementations (e.g., real-time calls for a client system130or a messaging application on the client system130) may be handled internally by a single agent228a. Alternatively, device-specific implementations may be handled by multiple agents228aassociated with multiple domains. As an example and not by way of limitation, calling an agent228aon smart glasses may be implemented in a different manner than calling an agent228aon a smart phone. Different platforms may also utilize varying numbers of agents228a. The agents228amay also be cross-platform (i.e., different operating systems on the client system130). In addition, the agents228amay have minimal startup time and binary size impact. Local agents228amay be suitable for particular use cases. As an example and not by way of limitation, one use case may be emergency calling on the client system130. As another example and not by way of limitation, another use case may be responding to a user input without network connectivity. As yet another example and not by way of limitation, another use case may be that particular domains/tasks may be privacy sensitive and may prohibit user inputs from being sent to the remote server. In particular embodiments, the local action selector222amay call a local delivery system230afor executing the actions, and the remote action selector222bmay call a remote delivery system230bfor executing the actions. The delivery system230a/bmay deliver a predefined event upon receiving triggering signals from the dialog state tracker218a/bby executing corresponding actions. The delivery system230a/bmay ensure that events get delivered to a host with a living connection. As an example and not by way of limitation, the delivery system230a/bmay broadcast to all online devices that belong to one user. As another example and not by way of limitation, the delivery system230a/bmay deliver events to target-specific devices. The delivery system230a/bmay further render a payload using up-to-date device context.
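A minimal sketch of invoking agents via task ID, as described above; the registry, the task IDs, and the agent callables are all assumptions for illustration:

from typing import Callable

AGENT_REGISTRY: dict[str, Callable[[dict], str]] = {}


def register_agent(task_id: str, agent: Callable[[dict], str]) -> None:
    AGENT_REGISTRY[task_id] = agent


def route_action(task_id: str, payload: dict) -> str:
    # Actions are routed to the correct agent using the task ID.
    agent = AGENT_REGISTRY.get(task_id)
    if agent is None:
        raise KeyError(f"no agent registered for task {task_id!r}")
    return agent(payload)


register_agent("send_message", lambda p: f"sent {p['text']!r} to {p['to']}")
print(route_action("send_message", {"to": "Alice", "text": "on my way"}))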
In particular embodiments, the on-device dialog manager216amay additionally comprise a separate local action execution module, and the remote dialog manager216bmay additionally comprise a separate remote action execution module. The local action execution module and the remote action execution module may have similar functionality. In particular embodiments, the action execution module may call the agents228a/bto execute tasks. The action execution module may additionally perform a set of general executable actions determined by the action selector222a/b. The set of executable actions may interact with agents228a/b, users, and the assistant system140itself via the delivery system230a/b. In particular embodiments, if the user input is handled using the first operational mode (i.e., on-device mode), results from the agents228aand/or the delivery system230amay be returned to the on-device dialog manager216a. The on-device dialog manager216amay then instruct a local arbitrator226ato generate a final response based on these results. The arbitrator226amay aggregate the results and evaluate them. As an example and not by way of limitation, the arbitrator226amay rank and select a best result for responding to the user input. If the user request is handled in the second operational mode (i.e., cloud mode), the results from the agents228band/or the delivery system230bmay be returned to the remote dialog manager216b. The remote dialog manager216bmay instruct, via the dialog manager proxy224, the arbitrator226ato generate the final response based on these results. Similarly, the arbitrator226amay analyze the results and select the best result to provide to the user. If the user input is handled based on the third operational mode (i.e., blended mode), the client-side results and server-side results (e.g., from agents228a/band/or delivery system230a/b) may both be provided to the arbitrator226aby the on-device dialog manager216aand remote dialog manager216b, respectively. The arbitrator226amay then choose between the client-side and server-side results to determine the final result to be presented to the user. In particular embodiments, the logic to decide between these results may depend on the specific use-case. In particular embodiments, the local arbitrator226amay generate a response based on the final result and send it to a render output module232. The render output module232may determine how to render the output in a way that is suitable for the client system130. As an example and not by way of limitation, for a VR headset or AR smart glasses, the render output module232may determine to render the output using a visual-based modality (e.g., an image or a video clip) that may be displayed via the VR headset or AR smart glasses. As another example, the response may be rendered as audio signals that may be played by the user via a VR headset or AR smart glasses. As yet another example, the response may be rendered as augmented-reality data for enhancing user experience. In particular embodiments, in addition to determining an operational mode to process the user input, the on-device orchestrator206may also determine whether to process the user input on the rendering device137, process the user input on the companion device138, or process the user request on the remote server. The rendering device137and/or the companion device138may each use the assistant stack in a similar manner as disclosed above to process the user input.
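As an illustrative aside, the render output module's modality choice described above might be sketched as follows; the device names and the two-way visual/audio split are assumptions, since the text notes a device may support both modalities:

def choose_modality(device: str) -> str:
    # Display-capable devices can render visual output (e.g., an image or
    # a video clip); others fall back to audio signals.
    visual_devices = {"vr_headset", "ar_smart_glasses", "smart_phone"}
    return "visual" if device in visual_devices else "audio"


assert choose_modality("ar_smart_glasses") == "visual"
assert choose_modality("smart_speaker") == "audio"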
As an example and not by way of limitation, the on-device orchestrator206may determine that part of the processing should be done on the rendering device137, part of the processing should be done on the companion device138, and the remaining processing should be done on the remote server. In particular embodiments, the assistant system140may have a variety of capabilities including audio cognition, visual cognition, signals intelligence, reasoning, and memories. In particular embodiments, the capability of audio cognition may enable the assistant system140to, for example, understand a user's input associated with various domains in different languages, understand and summarize a conversation, perform on-device audio cognition for complex commands, identify a user by voice, extract topics from a conversation and auto-tag sections of the conversation, enable audio interaction without a wake-word, filter and amplify user voice from ambient noise and conversations, and/or understand which client system130a user is talking to if multiple client systems130are in the vicinity. In particular embodiments, the capability of visual cognition may enable the assistant system140to, for example, perform face detection and tracking, recognize a user, recognize people of interest in major metropolitan areas at varying angles, recognize interesting objects in the world through a combination of existing machine-learning models and one-shot learning, recognize an interesting moment and auto-capture it, achieve semantic understanding over multiple visual frames across different episodes of time, provide platform support for additional capabilities in people, places, or objects recognition, recognize a full set of settings and micro-locations including personalized locations, recognize complex activities, recognize complex gestures to control a client system130, handle images/videos from egocentric cameras (e.g., with motion, capture angles, resolution), accomplish similar levels of accuracy and speed regarding images with lower resolution, conduct one-shot registration and recognition of people, places, and objects, and/or perform visual recognition on a client system130. In particular embodiments, the assistant system140may leverage computer vision techniques to achieve visual cognition. Besides computer vision techniques, the assistant system140may explore options that may supplement these techniques to scale up the recognition of objects. In particular embodiments, the assistant system140may use supplemental signals such as, for example, optical character recognition (OCR) of an object's labels, GPS signals for places recognition, and/or signals from a user's client system130to identify the user. In particular embodiments, the assistant system140may perform general scene recognition (e.g., home, work, public spaces) to set a context for the user and reduce the computer-vision search space to identify likely objects or people. In particular embodiments, the assistant system140may guide users to train the assistant system140. For example, crowdsourcing may be used to get users to tag objects and help the assistant system140recognize more objects over time. As another example, users may register their personal objects as part of an initial setup when using the assistant system140. The assistant system140may further allow users to provide positive/negative signals for objects they interact with to train and improve personalized models for them.
In particular embodiments, the capability of signals intelligence may enable the assistant system140to, for example, determine user location, understand date/time, determine family locations, understand users' calendars and future desired locations, integrate richer sound understanding to identify setting/context through sound alone, and/or build signals intelligence models at runtime which may be personalized to a user's individual routines. In particular embodiments, the capability of reasoning may enable the assistant system140to, for example, pick up previous conversation threads at any point in the future, synthesize all signals to understand micro and personalized context, learn interaction patterns and preferences from users' historical behavior and accurately suggest interactions that they may value, generate highly predictive proactive suggestions based on micro-context understanding, understand what content a user may want to see at what time of a day, and/or understand the changes in a scene and how that may impact the user's desired content. In particular embodiments, the capability of memories may enable the assistant system140to, for example, remember which social connections a user previously called or interacted with, write into memory and query memory at will (i.e., open dictation and auto tags), extract richer preferences based on prior interactions and long-term learning, remember a user's life history, extract rich information from egocentric streams of data and auto catalog, and/or write to memory in structured form to form rich short-term, episodic, and long-term memories. FIG.3illustrates an example flow diagram300of the assistant system140. In particular embodiments, an assistant service module305may access a request manager310upon receiving a user input. In particular embodiments, the request manager310may comprise a context extractor312and a conversational understanding object generator (CU object generator)314. The context extractor312may extract contextual information associated with the user input. The context extractor312may also update contextual information based on the assistant application136executing on the client system130. As an example and not by way of limitation, the update of contextual information may comprise which content items are displayed on the client system130. As another example and not by way of limitation, the update of contextual information may comprise whether an alarm is set on the client system130. As another example and not by way of limitation, the update of contextual information may comprise whether a song is playing on the client system130. The CU object generator314may generate particular content objects relevant to the user input. The content objects may comprise dialog-session data and features associated with the user input, which may be shared with all the modules of the assistant system140. In particular embodiments, the request manager310may store the contextual information and the generated content objects in a data store320which is a particular data store implemented in the assistant system140. In particular embodiments, the request manager310may send the generated content objects to the NLU module210. The NLU module210may perform a plurality of steps to process the content objects. The NLU module210may first run the content objects through an allowlist/blocklist330. In particular embodiments, the allowlist/blocklist330may comprise interpretation data matching the user input.
The NLU module210may then perform a featurization332of the content objects. The NLU module210may then perform domain classification/selection334on the user input based on the features resulting from the featurization332to classify the user input into predefined domains. In particular embodiments, a domain may denote a social context of interaction (e.g., education), or a namespace for a set of intents (e.g., music). The domain classification/selection results may be further processed based on two related procedures. In one procedure, the NLU module210may process the domain classification/selection results using a meta-intent classifier336a. The meta-intent classifier336amay determine categories that describe the user's intent. An intent may be an element in a pre-defined taxonomy of semantic intentions, which may indicate a purpose of a user interaction with the assistant system140. The NLU module210amay classify a user input into a member of the pre-defined taxonomy. For example, the user input may be "Play Beethoven's 5th," and the NLU module210amay classify the input as having the intent [IN:play_music]. In particular embodiments, intents that are common to multiple domains may be processed by the meta-intent classifier336a. As an example and not by way of limitation, the meta-intent classifier336amay be based on a machine-learning model that may take the domain classification/selection results as input and calculate a probability of the input being associated with a particular predefined meta-intent. The NLU module210may then use a meta slot tagger338ato annotate one or more meta slots for the classification result from the meta-intent classifier336a. A slot may be a named sub-string corresponding to a character string within the user input representing a basic semantic entity. For example, a slot for "pizza" may be [SL:dish]. In particular embodiments, a set of valid or expected named slots may be conditioned on the classified intent. As an example and not by way of limitation, for the intent [IN:play_music], a valid slot may be [SL:song_name]. In particular embodiments, the meta slot tagger338amay tag generic slots such as references to items (e.g., the first), the type of slot, the value of the slot, etc. In particular embodiments, the NLU module210may process the domain classification/selection results using an intent classifier336b. The intent classifier336bmay determine the user's intent associated with the user input. In particular embodiments, there may be one intent classifier336bfor each domain to determine the most possible intents in a given domain. As an example and not by way of limitation, the intent classifier336bmay be based on a machine-learning model that may take the domain classification/selection results as input and calculate a probability of the input being associated with a particular predefined intent. The NLU module210may then use a slot tagger338bto annotate one or more slots associated with the user input. In particular embodiments, the slot tagger338bmay annotate the one or more slots for the n-grams of the user input. As an example and not by way of limitation, a user input may comprise "change 500 dollars in my account to Japanese yen." The intent classifier336bmay take the user input as input and formulate it into a vector.
The intent classifier336bmay then calculate probabilities of the user input being associated with different predefined intents based on a vector comparison between the vector representing the user input and the vectors representing different predefined intents. In a similar manner, the slot tagger338bmay take the user input as input and formulate each word into a vector. The slot tagger338bmay then calculate probabilities of each word being associated with different predefined slots based on a vector comparison between the vector representing the word and the vectors representing different predefined slots. The intent of the user may be classified as "changing money". The slots of the user input may comprise "500", "dollars", "account", and "Japanese yen". The meta-intent of the user may be classified as "financial service". The meta slot may comprise "finance". In particular embodiments, the NLU module210may additionally extract information from one or more of a social graph, a knowledge graph, or a concept graph, and may retrieve a user's profile stored locally on the client system130. The NLU module210may additionally consider contextual information when analyzing the user input. The NLU module210may further process information from these different sources by identifying and aggregating information, annotating n-grams of the user input, ranking the n-grams with confidence scores based on the aggregated information, and formulating the ranked n-grams into features that may be used by the NLU module210for understanding the user input. In particular embodiments, the NLU module210may identify one or more of a domain, an intent, or a slot from the user input in a personalized and context-aware manner. As an example and not by way of limitation, a user input may comprise "show me how to get to the coffee shop." The NLU module210may identify a particular coffee shop that the user wants to go to based on the user's personal information and the associated contextual information. In particular embodiments, the NLU module210may comprise a lexicon of a particular language, a parser, and grammar rules to partition sentences into an internal representation. The NLU module210may also comprise one or more programs that perform naive semantics or stochastic semantic analysis, and may further use pragmatics to understand a user input. In particular embodiments, the parser may be based on a deep learning architecture comprising multiple long-short term memory (LSTM) networks. As an example and not by way of limitation, the parser may be based on a recurrent neural network grammar (RNNG) model, which is a type of recurrent and recursive LSTM algorithm. More information on natural-language understanding (NLU) may be found in U.S. patent application Ser. No. 16/011,062, filed 18 Jun. 2018, U.S. patent application Ser. No. 16/025,317, filed 2 Jul. 2018, and U.S. patent application Ser. No. 16/038,120, filed 17 Jul. 2018, each of which is incorporated by reference. In particular embodiments, the output of the NLU module210may be sent to the entity resolution module212to resolve relevant entities. Entities may include, for example, unique users or concepts, each of which may have a unique identifier (ID). The entities may include one or more of a real-world entity (from general knowledge base), a user entity (from user memory), a contextual entity (device context/dialog context), or a value resolution (numbers, datetime, etc.).
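A minimal sketch, assuming toy embeddings, of the vector-comparison scoring described above: the input vector is compared against predefined intent vectors and the similarity scores are normalized into probabilities. The cosine/softmax combination is one plausible realization, not the classifier's disclosed implementation:

import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def intent_probabilities(user_vec: list[float],
                         intent_vecs: dict[str, list[float]]) -> dict[str, float]:
    sims = {intent: cosine(user_vec, v) for intent, v in intent_vecs.items()}
    # Softmax turns similarity scores into a probability distribution.
    z = sum(math.exp(s) for s in sims.values())
    return {intent: math.exp(s) / z for intent, s in sims.items()}


intents = {"[IN:exchange_money]": [0.9, 0.1], "[IN:play_music]": [0.0, 1.0]}
print(intent_probabilities([0.8, 0.2], intents))  # exchange_money scores higher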
In particular embodiments, the entity resolution module212may comprise domain entity resolution340and generic entity resolution342. The entity resolution module212may execute generic and domain-specific entity resolution. The generic entity resolution342may resolve the entities by categorizing the slots and meta slots into different generic topics. The domain entity resolution340may resolve the entities by categorizing the slots and meta slots into different domains. As an example and not by way of limitation, in response to the input of an inquiry of the advantages of a particular brand of electric car, the generic entity resolution342may resolve the referenced brand of electric car as vehicle and the domain entity resolution340may resolve the referenced brand of electric car as electric car. In particular embodiments, entities may be resolved based on knowledge350about the world and the user. The assistant system140may extract ontology data from the graphs352. As an example and not by way of limitation, the graphs352may comprise one or more of a knowledge graph, a social graph, or a concept graph. The ontology data may comprise the structural relationship between different slots/meta-slots and domains. The ontology data may also comprise information of how the slots/meta-slots may be grouped, related within a hierarchy where the higher level comprises the domain, and subdivided according to similarities and differences. For example, the knowledge graph may comprise a plurality of entities. Each entity may comprise a single record associated with one or more attribute values. The particular record may be associated with a unique entity identifier. Each record may have diverse values for an attribute of the entity. Each attribute value may be associated with a confidence probability and/or a semantic weight. A confidence probability for an attribute value represents a probability that the value is accurate for the given attribute. A semantic weight for an attribute value may represent how semantically appropriate the value is for the given attribute considering all the available information. For example, the knowledge graph may comprise an entity of a book titled "BookName", which may include information extracted from multiple content sources (e.g., an online social network, online encyclopedias, book review sources, media databases, and entertainment content sources), which may be deduped, resolved, and fused to generate the single unique record for the knowledge graph. In this example, the entity titled "BookName" may be associated with a "fantasy" attribute value for a "genre" entity attribute. More information on the knowledge graph may be found in U.S. patent application Ser. No. 16/048,049, filed 27 Jul. 2018, and U.S. patent application Ser. No. 16/048,101, filed 27 Jul. 2018, each of which is incorporated by reference. In particular embodiments, the assistant user memory (AUM)354may comprise user episodic memories which help determine how to assist a user more effectively. The AUM354may be the central place for storing, retrieving, indexing, and searching over user data. As an example and not by way of limitation, the AUM354may store information such as contacts, photos, reminders, etc. Additionally, the AUM354may automatically synchronize data to the server and other devices (only for non-sensitive data). As an example and not by way of limitation, if the user sets a nickname for a contact on one device, all devices may synchronize and get that nickname based on the AUM354.
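The knowledge graph record described above (one unique record per entity, with per-attribute values carrying a confidence probability and a semantic weight) can be sketched as the following Python shapes; the class and field names are assumptions for illustration only:

from dataclasses import dataclass, field


@dataclass
class AttributeValue:
    value: str
    confidence: float       # probability that the value is accurate
    semantic_weight: float  # how semantically appropriate the value is


@dataclass
class KnowledgeGraphEntity:
    entity_id: str          # unique entity identifier for the single record
    attributes: dict[str, list[AttributeValue]] = field(default_factory=dict)


# The "BookName" example from the text: a "fantasy" value for "genre".
book = KnowledgeGraphEntity(
    entity_id="book:BookName",
    attributes={"genre": [AttributeValue("fantasy", confidence=0.97,
                                         semantic_weight=0.90)]},
)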
In particular embodiments, the AUM354may first prepare events, user state, reminder, and trigger state for storing in a data store. Memory node identifiers (ID) may be created to store entry objects in the AUM354, where an entry may be some piece of information about the user (e.g., photo, reminder, etc.). As an example and not by way of limitation, the first few bits of the memory node ID may indicate that this is a memory node ID type, the next bits may be the user ID, and the next bits may be the time of creation. The AUM354may then index these data for retrieval as needed. An index ID may be created for this purpose. In particular embodiments, given an "index key" (e.g., PHOTO_LOCATION) and "index value" (e.g., "San Francisco"), the AUM354may get a list of memory IDs that have that attribute (e.g., photos in San Francisco). As an example and not by way of limitation, the first few bits may indicate this is an index ID type, the next bits may be the user ID, and the next bits may encode an "index key" and "index value". The AUM354may further conduct information retrieval with a flexible query language. A relation index ID may be created for this purpose. In particular embodiments, given a source memory node and an edge type, the AUM354may get memory IDs of all target nodes with that type of outgoing edge from the source. As an example and not by way of limitation, the first few bits may indicate this is a relation index ID type, the next bits may be the user ID, and the next bits may be a source node ID and edge type. In particular embodiments, the AUM354may help detect concurrent updates of different events. More information on episodic memories may be found in U.S. patent application Ser. No. 16/552,559, filed 27 Aug. 2019, which is incorporated by reference. In particular embodiments, the entity resolution module212may use different techniques to resolve different types of entities. For real-world entities, the entity resolution module212may use a knowledge graph to resolve the span to the entities, such as "music track", "movie", etc. For user entities, the entity resolution module212may use user memory or some agents to resolve the span to user-specific entities, such as "contact", "reminders", or "relationship". For contextual entities, the entity resolution module212may perform coreference based on information from the context engine220to resolve the references to entities in the context, such as "him", "her", "the first one", or "the last one". In particular embodiments, for coreference, the entity resolution module212may create references for entities determined by the NLU module210. The entity resolution module212may then resolve these references accurately. As an example and not by way of limitation, a user input may comprise "find me the nearest grocery store and direct me there". Based on coreference, the entity resolution module212may interpret "there" as "the nearest grocery store". In particular embodiments, coreference may depend on the information from the context engine220and the dialog manager216so as to interpret references with improved accuracy. In particular embodiments, the entity resolution module212may additionally resolve an entity under the context (device context or dialog context), such as, for example, the entity shown on the screen or an entity from the last conversation history. For value resolutions, the entity resolution module212may resolve the mention to exact value in standardized form, such as numerical value, date time, address, etc.
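The memory node ID layout described above (type bits, then user ID bits, then creation-time bits) lends itself to simple bit packing, sketched below. The exact field widths are not specified in the text; the 8/32/24-bit split is an assumption for illustration:

MEMORY_NODE_TYPE = 0x01  # hypothetical type tag for memory node IDs


def make_memory_node_id(user_id: int, created_at: int) -> int:
    # Layout (assumed): [8 bits: ID type][32 bits: user ID][24 bits: time]
    return ((MEMORY_NODE_TYPE << 56)
            | ((user_id & 0xFFFFFFFF) << 24)
            | (created_at & 0xFFFFFF))


def parse_memory_node_id(node_id: int) -> tuple[int, int, int]:
    # Recover (ID type, user ID, creation time) from the packed integer.
    return node_id >> 56, (node_id >> 24) & 0xFFFFFFFF, node_id & 0xFFFFFF


nid = make_memory_node_id(user_id=42, created_at=123456)
assert parse_memory_node_id(nid) == (MEMORY_NODE_TYPE, 42, 123456)

An index ID or relation index ID could be packed analogously, swapping the trailing field for an encoded index key/value or a source node ID and edge type.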
In particular embodiments, the entity resolution module212may first perform a check on applicable privacy constraints in order to guarantee that performing entity resolution does not violate any applicable privacy policies. As an example and not by way of limitation, an entity to be resolved may be another user who specifies in their privacy settings that their identity should not be searchable on the online social network. In this case, the entity resolution module212may refrain from returning that user's entity identifier in response to a user input. By utilizing the described information obtained from the social graph, the knowledge graph, the concept graph, and the user profile, and by complying with any applicable privacy policies, the entity resolution module212may resolve entities associated with a user input in a personalized, context-aware, and privacy-protected manner. In particular embodiments, the entity resolution module212may work with the ASR module208to perform entity resolution. The following example illustrates how the entity resolution module212may resolve an entity name. The entity resolution module212may first expand names associated with a user into their respective normalized text forms as phonetic consonant representations which may be phonetically transcribed using a double metaphone algorithm. The entity resolution module212may then determine an n-best set of candidate transcriptions and perform a parallel comprehension process on all of the phonetic transcriptions in the n-best set of candidate transcriptions. In particular embodiments, each transcription that resolves to the same intent may then be collapsed into a single intent. Each intent may then be assigned a score corresponding to the highest scoring candidate transcription for that intent. During the collapse, the entity resolution module212may identify various possible text transcriptions associated with each slot, correlated by boundary timing offsets associated with the slot's transcription. The entity resolution module212may then extract a subset of possible candidate transcriptions for each slot from a plurality (e.g., 1000) of candidate transcriptions, regardless of whether they are classified to the same intent. In this manner, the slots and intents may be scored lists of phrases. In particular embodiments, a new or running task capable of handling the intent may be identified and provided with the intent (e.g., a message composition task for an intent to send a message to another user). The identified task may then trigger the entity resolution module212by providing it with the scored lists of phrases associated with one of its slots and the categories against which it should be resolved. As an example and not by way of limitation, if an entity attribute is specified as “friend,” the entity resolution module212may run every candidate list of terms through the same expansion that may be run at matcher compilation time. Each candidate expansion of the terms may be matched in the precompiled trie matching structure. Matches may be scored using a function based at least in part on the transcribed input, matched form, and friend name. As another example and not by way of limitation, if an entity attribute is specified as “celebrity/notable person,” the entity resolution module212may perform parallel searches against the knowledge graph for each candidate set of terms for the slot output from the ASR module208. 
The entity resolution module212may score matches based on matched person popularity and ASR-provided score signal. In particular embodiments, when the memory category is specified, the entity resolution module212may perform the same search against user memory. The entity resolution module212may crawl backward through user memory and attempt to match each memory (e.g., person recently mentioned in conversation, or seen and recognized via visual signals, etc.). For each entity, the entity resolution module212may employ matching similarly to how friends are matched (i.e., phonetic). In particular embodiments, scoring may comprise a temporal decay factor associated with the recency with which the name was previously mentioned. The entity resolution module212may further combine, sort, and dedupe all matches. In particular embodiments, the task may receive the set of candidates. When multiple high scoring candidates are present, the entity resolution module212may perform user-facilitated disambiguation (e.g., getting real-time user feedback from users on these candidates). In particular embodiments, the context engine220may help the entity resolution module212improve entity resolution. The context engine220may comprise offline aggregators and an online inference service. The offline aggregators may process a plurality of data associated with the user that are collected from a prior time window. As an example and not by way of limitation, the data may include news feed posts/comments, interactions with news feed posts/comments, search history, etc., that are collected during a predetermined timeframe (e.g., from a prior 90-day window). The processing result may be stored in the context engine220as part of the user profile. The user profile of the user may comprise user profile data including demographic information, social information, and contextual information associated with the user. The user profile data may also include user interests and preferences on a plurality of topics, aggregated through conversations on news feed, search logs, messaging platforms, etc. The usage of a user profile may be subject to privacy constraints to ensure that a user's information can be used only for his/her benefit, and not shared with anyone else. More information on user profiles may be found in U.S. patent application Ser. No. 15/967,239, filed 30 Apr. 2018, which is incorporated by reference. In particular embodiments, the online inference service may analyze the conversational data associated with the user that are received by the assistant system140at a current time. The analysis result may be stored in the context engine220also as part of the user profile. In particular embodiments, both the offline aggregators and online inference service may extract personalization features from the plurality of data. The extracted personalization features may be used by other modules of the assistant system140to better understand user input. In particular embodiments, the entity resolution module212may process the information from the context engine220(e.g., a user profile) in the following steps based on natural-language processing (NLP). In particular embodiments, the entity resolution module212may tokenize text by text normalization, extract syntax features from text, and extract semantic features from text based on NLP. The entity resolution module212may additionally extract features from contextual information, which is accessed from dialog history between a user and the assistant system140.
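A hedged sketch of the recency-weighted scoring, combining, sorting, and deduplication described above; the exponential decay, the half-life constant, and the data shapes are invented stand-ins for whatever scoring function an actual implementation might use:

def decayed_score(base: float, hours_since_mention: float,
                  half_life_hours: float = 24.0) -> float:
    # Temporal decay: a mention 24 hours ago is worth half a fresh one.
    return base * 0.5 ** (hours_since_mention / half_life_hours)


def rank_matches(matches: list[tuple[str, float, float]]) -> list[tuple[str, float]]:
    """matches: (entity_id, base_score, hours_since_mention) triples."""
    best: dict[str, float] = {}
    for entity_id, base, hours in matches:
        score = decayed_score(base, hours)
        # Dedupe: keep only the best score per entity.
        best[entity_id] = max(score, best.get(entity_id, 0.0))
    # Combine and sort all surviving matches, highest score first.
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)


print(rank_matches([("alice", 0.9, 48.0), ("alice", 0.7, 1.0), ("bob", 0.8, 2.0)]))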
The entity resolution module212may further conduct global word embedding, domain-specific embedding, and/or dynamic embedding based on the contextual information. The processing result may be annotated with entities by an entity tagger. Based on the annotations, the entity resolution module212may generate dictionaries. In particular embodiments, the dictionaries may comprise global dictionary features which can be updated dynamically offline. The entity resolution module212may rank the entities tagged by the entity tagger. In particular embodiments, the entity resolution module212may communicate with different graphs352including one or more of the social graph, the knowledge graph, or the concept graph to extract ontology data that is relevant to the retrieved information from the context engine220. In particular embodiments, the entity resolution module212may further resolve entities based on the user profile, the ranked entities, and the information from the graphs352. In particular embodiments, the entity resolution module212may be driven by the task (corresponding to an agent228). This inversion of processing order may make it possible for domain knowledge present in a task to be applied to pre-filter or bias the set of resolution targets when it is obvious and appropriate to do so. As an example and not by way of limitation, for the utterance "who is John?" no clear category is implied in the utterance. Therefore, the entity resolution module212may resolve "John" against everything. As another example and not by way of limitation, for the utterance "send a message to John", the entity resolution module212may easily determine "John" refers to a person that one can message. As a result, the entity resolution module212may bias the resolution to a friend. As another example and not by way of limitation, consider the utterance "what is John's most famous album?" To resolve "John", the entity resolution module212may first determine the task corresponding to the utterance, which is finding a music album. The entity resolution module212may determine that entities related to music albums include singers, producers, and recording studios. Therefore, the entity resolution module212may search among these types of entities in a music domain to resolve "John." In particular embodiments, the output of the entity resolution module212may be sent to the dialog manager216to advance the flow of the conversation with the user. The dialog manager216may be an asynchronous state machine that repeatedly updates the state and selects actions based on the new state. The dialog manager216may additionally store previous conversations between the user and the assistant system140. In particular embodiments, the dialog manager216may conduct dialog optimization. Dialog optimization relates to the challenge of understanding and identifying the most likely branching options in a dialog with a user. As an example and not by way of limitation, the assistant system140may implement dialog optimization techniques to obviate the need to confirm who a user wants to call because the assistant system140may determine a high confidence that a person inferred based on context and available data is the intended recipient. In particular embodiments, the dialog manager216may implement reinforcement learning frameworks to improve the dialog optimization. The dialog manager216may comprise dialog intent resolution356, the dialog state tracker218, and the action selector222.
In particular embodiments, the dialog manager216may execute the selected actions and then call the dialog state tracker218again until the action selected requires a user response, or there are no more actions to execute. Each action selected may depend on the execution result from previous actions. In particular embodiments, the dialog intent resolution356may resolve the user intent associated with the current dialog session based on dialog history between the user and the assistant system140. The dialog intent resolution356may map intents determined by the NLU module210to different dialog intents. The dialog intent resolution356may further rank dialog intents based on signals from the NLU module210, the entity resolution module212, and dialog history between the user and the assistant system140. In particular embodiments, the dialog state tracker218may use a set of operators to track the dialog state. The operators may comprise necessary data and logic to update the dialog state. Each operator may act as a delta of the dialog state after processing an incoming user input. In particular embodiments, the dialog state tracker218may comprise a task tracker, which may be based on task specifications and different rules. The dialog state tracker218may also comprise a slot tracker and coreference component, which may be rule based and/or recency based. The coreference component may help the entity resolution module212to resolve entities. In alternative embodiments, with the coreference component, the dialog state tracker218may replace the entity resolution module212and may resolve any references/mentions and keep track of the state. In particular embodiments, the dialog state tracker218may convert the upstream results into candidate tasks using task specifications and resolve arguments with entity resolution. Both user state (e.g., user's current activity) and task state (e.g., triggering conditions) may be tracked. Given the current state, the dialog state tracker218may generate candidate tasks the assistant system140may process and perform for the user. As an example and not by way of limitation, candidate tasks may include “show suggestion,” “get weather information,” or “take photo.” In particular embodiments, the dialog state tracker218may generate candidate tasks based on available data from, for example, a knowledge graph, a user memory, and a user task history. In particular embodiments, the dialog state tracker218may then resolve the triggers object using the resolved arguments. As an example and not by way of limitation, for a user input “remind me to call mom when she's online and I'm home tonight,” the dialog state tracker218may perform the conversion from the NLU output to the triggers representation as illustrated in Table 1 below:

TABLE 1
Example Conversion from NLU Output to Triggers Representation

NLU Ontology Representation:
[IN:CREATE_SMART_REMINDER Remind me to
  [SL:TODO call mom] when
  [SL:TRIGGER_CONJUNCTION
    [IN:GET_TRIGGER
      [SL:TRIGGER_SOCIAL_UPDATE she's online] and I'm
      [SL:TRIGGER_LOCATION home]
      [SL:DATE_TIME tonight]]]]

Triggers Representation:
Triggers: {
  andTriggers: [
    condition: {ContextualEvent(mom is online)},
    condition: {ContextualEvent(location is home)},
    condition: {ContextualEvent(time is tonight)}]}

In the above example, “mom,” “home,” and “tonight” are represented by their respective entities: personEntity, locationEntity, datetimeEntity. In particular embodiments, the dialog manager216may map events determined by the context engine220to actions.
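As an example and not by way of limitation, the triggers representation of Table 1 above may be expressed as a simple data structure, as in the following Python sketch; the class and field names (e.g., ContextualEvent, and_triggers) are illustrative assumptions only:

    from dataclasses import dataclass, field

    @dataclass
    class ContextualEvent:
        description: str  # e.g., "mom is online"

    @dataclass
    class Triggers:
        and_triggers: list = field(default_factory=list)  # conjunction of conditions

    # Conversion result for "remind me to call mom when she's online and
    # I'm home tonight", with resolved entities substituted for slot text.
    triggers = Triggers(and_triggers=[
        ContextualEvent("mom is online"),     # personEntity
        ContextualEvent("location is home"),  # locationEntity
        ContextualEvent("time is tonight"),   # datetimeEntity
    ])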
As an example and not by way of limitation, an action may be a natural-language generation (NLG) action, a display or overlay, a device action, or a retrieval action. The dialog manager216may also perform context tracking and interaction management. Context tracking may comprise aggregating a real-time stream of events into a unified user state. Interaction management may comprise selecting an optimal action in each state. In particular embodiments, the dialog state tracker218may perform context tracking (i.e., tracking events related to the user). To support processing of event streams, the dialog state tracker218may use an event handler (e.g., for disambiguation, confirmation, request) that may consume various types of events and update an internal assistant state. Each event type may have one or more handlers. Each event handler may modify a certain slice of the assistant state. In particular embodiments, the event handlers may operate on disjoint subsets of the state (i.e., only one handler may have write-access to a particular field in the state). In particular embodiments, all event handlers may have an opportunity to process a given event. As an example and not by way of limitation, the dialog state tracker218may run all event handlers in parallel on every event, and then may merge the state updates proposed by each event handler (e.g., for each event, most handlers may return a NULL update). In particular embodiments, the dialog state tracker218may work as any programmatic handler (logic) that requires versioning. In particular embodiments, instead of directly altering the dialog state, the dialog state tracker218may be a side-effect free component and generate n-best candidates of dialog state update operators that propose updates to the dialog state. The dialog state tracker218may comprise intent resolvers containing logic to handle different types of NLU intent based on the dialog state and generate the operators. In particular embodiments, the logic may be organized by intent handler, such as a disambiguation intent handler to handle the intents when the assistant system140asks for disambiguation, a confirmation intent handler that comprises the logic to handle confirmations, etc. Intent resolvers may combine the turn intent together with the dialog state to generate the contextual updates for a conversation with the user. A slot resolution component may then recursively resolve the slots in the update operators with resolution providers including the knowledge graph and domain agents. In particular embodiments, the dialog state tracker218may update/rank the dialog state of the current dialog session. As an example and not by way of limitation, the dialog state tracker218may update the dialog state as “completed” if the dialog session is over. As another example and not by way of limitation, the dialog state tracker218may rank the dialog state based on a priority associated with it. In particular embodiments, the dialog state tracker218may communicate with the action selector222about the dialog intents and associated content objects. In particular embodiments, the action selector222may rank different dialog hypotheses for different dialog intents. The action selector222may take candidate operators of dialog state and consult the dialog policies360to decide what actions should be executed. In particular embodiments, a dialog policy360may be a tree-based policy, which is a pre-constructed dialog plan.
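As an example and not by way of limitation, running all event handlers and merging their proposed updates may be sketched in Python as follows; the sequential loop stands in for the conceptually parallel execution described above, and all identifiers are illustrative assumptions:

    def track_event(event, state, handlers):
        """Run every event handler on the event and merge proposed updates.

        Handlers operate on disjoint slices of the assistant state, so the
        merged updates cannot conflict; most handlers return None (a NULL update).
        """
        updates = [h(event, state) for h in handlers]  # conceptually in parallel
        merged = dict(state)
        for update in updates:
            if update is not None:
                merged.update(update)
        return merged

    # Example handlers, each owning a different field of the state.
    def location_handler(event, state):
        return {"location": event["place"]} if event.get("type") == "location" else None

    def gaze_handler(event, state):
        return {"gaze_target": event["target"]} if event.get("type") == "gaze" else None

    state = track_event({"type": "location", "place": "home"}, {},
                        [location_handler, gaze_handler])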
Based on the current dialog state, a dialog policy360may choose a node to execute and generate the corresponding actions. As an example and not by way of limitation, the tree-based policy may comprise topic grouping nodes and dialog action (leaf) nodes. In particular embodiments, a dialog policy360may also comprise a data structure that describes an execution plan of an action by an agent228. A dialog policy360may further comprise multiple goals related to each other through logical operators. In particular embodiments, a goal may be an outcome of a portion of the dialog policy and it may be constructed by the dialog manager216. A goal may be represented by an identifier (e.g., string) with one or more named arguments, which parameterize the goal. As an example and not by way of limitation, a goal with its associated goal argument may be represented as {confirm artist, args:{artist: “Madonna”}}. In particular embodiments, goals may be mapped to leaves of the tree-structured representation of the dialog policy360. In particular embodiments, the assistant system140may use hierarchical dialog policies360with general policy362handling the cross-domain business logic and task policies364handling the task/domain specific logic. The general policy362may be used for actions that are not specific to individual tasks. The general policy362may be used to determine task stacking and switching, proactive tasks, notifications, etc. The general policy362may comprise handling low-confidence intents, internal errors, unacceptable user responses with retries, and/or skipping or inserting confirmation based on ASR or NLU confidence scores. The general policy362may also comprise the logic of ranking dialog state update candidates from the dialog state tracker218output and picking the one to update (such as picking the top-ranked task intent). In particular embodiments, the assistant system140may have a particular interface for the general policy362, which allows for consolidating scattered cross-domain policy/business rules, especially those found in the dialog state tracker218, into a function of the action selector222. The interface for the general policy362may also allow for authoring of self-contained sub-policy units that may be tied to specific situations or clients (e.g., policy functions that may be easily switched on or off based on clients or situations). The interface for the general policy362may also allow for providing a layering of policies with back-off, i.e., multiple policy units, with highly specialized policy units that deal with specific situations being backed up by more general policies362that apply in wider circumstances. In this context, the general policy362may alternatively comprise intent or task specific policies. In particular embodiments, a task policy364may comprise the logic for the action selector222based on the task and current state. The task policy364may be dynamic and ad-hoc. In particular embodiments, the types of task policies364may include one or more of the following types: (1) manually crafted tree-based dialog plans; (2) coded policy that directly implements the interface for generating actions; (3) configurator-specified slot-filling tasks; or (4) machine-learning model based policy learned from data. In particular embodiments, the assistant system140may bootstrap new domains with rule-based logic and later refine the task policies364with machine-learning models.
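As an example and not by way of limitation, the goal representation described above ({confirm artist, args:{artist: “Madonna”}}) may be sketched as a simple data structure; the class name Goal and its fields are illustrative assumptions only:

    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        identifier: str                            # e.g., "confirm_artist"
        args: dict = field(default_factory=dict)   # named arguments parameterizing the goal

    # The goal from the example above:
    goal = Goal(identifier="confirm_artist", args={"artist": "Madonna"})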
In particular embodiments, the general policy362may pick one operator from the candidate operators to update the dialog state, followed by the selection of a user-facing action by a task policy364. Once a task is active in the dialog state, the corresponding task policy364may be consulted to select the right actions. In particular embodiments, the action selector222may select an action based on one or more of the event determined by the context engine220, the dialog intent and state, the associated content objects, and the guidance from dialog policies360. Each dialog policy360may be subscribed to specific conditions over the fields of the state. After an event is processed and the state is updated, the action selector222may run a fast search algorithm (e.g., similar to Boolean satisfiability) to identify which policies should be triggered based on the current state. In particular embodiments, if multiple policies are triggered, the action selector222may use a tie-breaking mechanism to pick a particular policy. Alternatively, the action selector222may use a more sophisticated approach which may dry-run each policy and then pick a particular policy which may be determined to have a high likelihood of success. In particular embodiments, mapping events to actions may result in several technical advantages for the assistant system140. One technical advantage may include that each event may be a state update from the user or the user's physical/digital environment, which may or may not trigger an action from the assistant system140. Another technical advantage may include possibilities to handle rapid bursts of events (e.g., the user enters a new building and sees many people) by first consuming all events to update state, and then triggering action(s) from the final state. Another technical advantage may include consuming all events into a single global assistant state. In particular embodiments, the action selector222may take the dialog state update operators as part of the input to select the dialog action. The execution of the dialog action may generate a set of expectations to instruct the dialog state tracker218to handle future turns. In particular embodiments, an expectation may be used to provide context to the dialog state tracker218when handling the user input from the next turn. As an example and not by way of limitation, a slot request dialog action may have the expectation of providing a value for the requested slot. In particular embodiments, both the dialog state tracker218and the action selector222may not change the dialog state until the selected action is executed. This may allow the assistant system140to execute the dialog state tracker218and the action selector222for processing speculative ASR results and to do n-best ranking with dry runs. In particular embodiments, the action selector222may call different agents228for task execution. Meanwhile, the dialog manager216may receive an instruction to update the dialog state. As an example and not by way of limitation, the update may comprise awaiting agents'228response. An agent228may select among registered content providers to complete the action. The data structure may be constructed by the dialog manager216based on an intent and one or more slots associated with the intent. In particular embodiments, the agents228may comprise first-party agents and third-party agents. In particular embodiments, first-party agents may comprise internal agents that are accessible and controllable by the assistant system140(e.g.
agents associated with services provided by the online social network, such as messaging services or photo-share services). In particular embodiments, third-party agents may comprise external agents that the assistant system140has no control over (e.g., third-party online music application agents, ticket sales agents). The first-party agents may be associated with first-party providers that provide content objects and/or services hosted by the social-networking system160. The third-party agents may be associated with third-party providers that provide content objects and/or services hosted by the third-party system170. In particular embodiments, each of the first-party agents or third-party agents may be designated for a particular domain. As an example and not by way of limitation, the domain may comprise weather, transportation, music, shopping, social, videos, photos, events, locations, and/or work. In particular embodiments, the assistant system140may use a plurality of agents228collaboratively to respond to a user input. As an example and not by way of limitation, the user input may comprise “direct me to my next meeting.” The assistant system140may use a calendar agent to retrieve the location of the next meeting. The assistant system140may then use a navigation agent to direct the user to the next meeting. In particular embodiments, the dialog manager216may support multi-turn compositional resolution of slot mentions. For a compositional parse from the NLU module210, the resolver may recursively resolve the nested slots. The dialog manager216may additionally support disambiguation for the nested slots. As an example and not by way of limitation, the user input may be “remind me to call Alex”. The resolver may need to know which Alex to call before creating an actionable reminder to-do entity. The resolver may halt the resolution and set the resolution state when further user clarification is necessary for a particular slot. The general policy362may examine the resolution state and create corresponding dialog action for user clarification. In dialog state tracker218, based on the user input and the last dialog action, the dialog manager216may update the nested slot. This capability may allow the assistant system140to interact with the user not only to collect missing slot values but also to reduce ambiguity of more complex/ambiguous utterances to complete the task. In particular embodiments, the dialog manager216may further support requesting missing slots in a nested intent and multi-intent user inputs (e.g., “take this photo and send it to Dad”). In particular embodiments, the dialog manager216may support machine-learning models for more robust dialog experience. As an example and not by way of limitation, the dialog state tracker218may use neural network based models (or any other suitable machine-learning models) to model belief over task hypotheses. As another example and not by way of limitation, for action selector222, highest priority policy units may comprise white-list/black-list overrides, which may have to occur by design; middle priority units may comprise machine-learning models designed for action selection; and lower priority units may comprise rule-based fallbacks when the machine-learning models elect not to handle a situation. In particular embodiments, machine-learning model based general policy unit may help the assistant system140reduce redundant disambiguation or confirmation steps, thereby reducing the number of turns to execute the user input. 
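As an example and not by way of limitation, the multi-turn compositional resolution of nested slots described above (e.g., determining which “Alex” to call) may be sketched in Python as follows; the function names and the halt-on-ambiguity convention are illustrative assumptions, not part of this disclosure:

    def resolve_slots(slot_tree, resolve_leaf):
        """Recursively resolve nested slots from a compositional parse.

        Returns (resolved, needs_clarification): resolution halts for a slot
        when the leaf resolver reports ambiguity (e.g., several contacts
        named "Alex"), so the general policy can ask the user to clarify.
        """
        if isinstance(slot_tree, dict):  # nested slot: resolve children first
            resolved, pending = {}, []
            for name, child in slot_tree.items():
                value, needs = resolve_slots(child, resolve_leaf)
                resolved[name] = value
                pending += needs
            return resolved, pending
        candidates = resolve_leaf(slot_tree)
        if len(candidates) == 1:
            return candidates[0], []
        return None, [slot_tree]  # ambiguous: surface for user clarification

    # Example: "remind me to call Alex" with two matching contacts.
    parse = {"todo": {"contact": "Alex"}}
    contacts = {"Alex": ["Alex Kim", "Alex Park"]}
    result, to_clarify = resolve_slots(parse, lambda s: contacts.get(s, [s]))
    # to_clarify == ["Alex"], prompting a clarification dialog action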
In particular embodiments, the determined actions by the action selector222may be sent to the delivery system230. The delivery system230may comprise a CU composer370, a response generation component380, a dialog state writing component382, and a text-to-speech (TTS) component390. Specifically, the output of the action selector222may be received at the CU composer370. In particular embodiments, the output from the action selector222may be formulated as a <k,c,u,d> tuple, in which k indicates a knowledge source, c indicates a communicative goal, u indicates a user model, and d indicates a discourse model. In particular embodiments, the CU composer370may generate a communication content for the user using a natural-language generation (NLG) component372. In particular embodiments, the NLG component372may use different language models and/or language templates to generate natural-language outputs. The generation of natural-language outputs may be application specific. The generation of natural-language outputs may be also personalized for each user. In particular embodiments, the NLG component372may comprise a content determination component, a sentence planner, and a surface realization component. The content determination component may determine the communication content based on the knowledge source, communicative goal, and the user's expectations. As an example and not by way of limitation, the determining may be based on a description logic. The description logic may comprise, for example, three fundamental notions which are individuals (representing objects in the domain), concepts (describing sets of individuals), and roles (representing binary relations between individuals or concepts). The description logic may be characterized by a set of constructors that allow the natural-language generator to build complex concepts/roles from atomic ones. In particular embodiments, the content determination component may perform the following tasks to determine the communication content. The first task may comprise a translation task, in which the input to the NLG component372may be translated to concepts. The second task may comprise a selection task, in which relevant concepts may be selected among those resulted from the translation task based on the user model. The third task may comprise a verification task, in which the coherence of the selected concepts may be verified. The fourth task may comprise an instantiation task, in which the verified concepts may be instantiated as an executable file that can be processed by the NLG component372. The sentence planner may determine the organization of the communication content to make it human understandable. The surface realization component may determine specific words to use, the sequence of the sentences, and the style of the communication content. In particular embodiments, the CU composer370may also determine a modality of the generated communication content using the UI payload generator374. Since the generated communication content may be considered as a response to the user input, the CU composer370may additionally rank the generated communication content using a response ranker376. As an example and not by way of limitation, the ranking may indicate the priority of the response. In particular embodiments, the CU composer370may comprise a natural-language synthesis (NLS) component that may be separate from the NLG component372. 
The NLS component may specify attributes of the synthesized speech generated by the CU composer370, including gender, volume, pace, style, or register, in order to customize the response for a particular user, task, or agent. The NLS component may tune language synthesis without engaging the implementation of associated tasks. In particular embodiments, the CU composer370may check privacy constraints associated with the user to make sure the generation of the communication content follows the privacy policies. More information on customizing natural-language generation (NLG) may be found in U.S. patent application Ser. No. 15/967,279, filed 30 Apr. 2018, and U.S. patent application Ser. No. 15/966,455, filed 30 Apr. 2018, which are incorporated by reference. In particular embodiments, the delivery system230may perform different tasks based on the output of the CU composer370. These tasks may include writing (i.e., storing/updating) the dialog state into the data store330using the dialog state writing component382and generating responses using the response generation component380. In particular embodiments, the output of the CU composer370may be additionally sent to the TTS component390if the determined modality of the communication content is audio. In particular embodiments, the output from the delivery system230comprising one or more of the generated responses, the communication content, or the speech generated by the TTS component390may be then sent back to the dialog manager216. In particular embodiments, the orchestrator206may determine, based on the output of the entity resolution module212, whether to process a user input on the client system130or on the server, or in the third operational mode (i.e., blended mode) using both. Besides determining how to process the user input, the orchestrator206may receive the results from the agents228and/or the results from the delivery system230provided by the dialog manager216. The orchestrator206may then forward these results to the arbitrator226. The arbitrator226may aggregate these results, analyze them, select the best result, and provide the selected result to the render output module232. In particular embodiments, the arbitrator226may consult with dialog policies360to obtain guidance when analyzing these results. In particular embodiments, the render output module232may generate a response that is suitable for the client system130. FIG.4illustrates an example task-centric flow diagram400of processing a user input. In particular embodiments, the assistant system140may assist users not only with voice-initiated experiences but also more proactive, multi-modal experiences that are initiated on understanding user context. In particular embodiments, the assistant system140may rely on assistant tasks for such purpose. An assistant task may be a central concept that is shared across the whole assistant stack to understand user intention, interact with the user and the world to complete the right task for the user. In particular embodiments, an assistant task may be the primitive unit of assistant capability. It may comprise data fetching, updating some state, executing some command, or complex tasks composed of a smaller set of tasks. Completing a task correctly and successfully to deliver the value to the user may be the goal that the assistant system140is optimized for. In particular embodiments, an assistant task may be defined as a capability or a feature.
The assistant task may be shared across multiple product surfaces if they have exactly the same requirements, so it may be easily tracked. It may also be passed from device to device, and easily picked up mid-task by another device since the primitive unit is consistent. In addition, the consistent format of the assistant task may allow developers working on different modules in the assistant stack to more easily design around it. Furthermore, it may allow for task sharing. As an example and not by way of limitation, if a user is listening to music on smart glasses, the user may say “play this music on my phone.” In the event that the phone hasn't been woken or has a task to execute, the smart glasses may formulate a task that is provided to the phone, which may then be executed by the phone to start playing music. In particular embodiments, the assistant task may be retained by each surface separately if they have different expected behaviors. In particular embodiments, the assistant system140may identify the right task based on user inputs in different modalities or other signals, conduct conversation to collect all necessary information, and complete that task with the action selector222implemented internally or externally, on the server or locally on product surfaces. In particular embodiments, the assistant stack may comprise a set of processing components, from wake-up, recognizing user inputs, understanding user intention, and reasoning about the tasks, to fulfilling a task to generate a natural-language response with voice. In particular embodiments, the user input may comprise speech input. The speech input may be received at the ASR module208for extracting the text transcription from the speech input. The ASR module208may use statistical models to determine the most likely sequences of words that correspond to a given portion of speech received by the assistant system140as audio input. The models may include one or more of hidden Markov models, neural networks, deep learning models, or any combination thereof. The received audio input may be encoded into digital data at a particular sampling rate (e.g., 16, 44.1, or 96 kHz) and with a particular number of bits representing each sample (e.g., 8, 16, or 24 bits). In particular embodiments, the ASR module208may comprise one or more of a grapheme-to-phoneme (G2P) model, a pronunciation learning model, a personalized acoustic model, a personalized language model (PLM), or an end-pointing model. In particular embodiments, the grapheme-to-phoneme (G2P) model may be used to determine a user's grapheme-to-phoneme style (i.e., what it may sound like when a particular user speaks a particular word). In particular embodiments, the personalized acoustic model may be a model of the relationship between audio signals and the sounds of phonetic units in the language. Therefore, such a personalized acoustic model may identify how a user's voice sounds. The personalized acoustic model may be generated using training data such as training speech received as audio input and the corresponding phonetic units that correspond to the speech. The personalized acoustic model may be trained or refined using the voice of a particular user to recognize that user's speech. In particular embodiments, the personalized language model may then determine the most likely phrase that corresponds to the identified phonetic units for a particular audio input. The personalized language model may be a model of the probabilities that various word sequences may occur in the language.
The sounds of the phonetic units in the audio input may be matched with word sequences using the personalized language model, and greater weights may be assigned to the word sequences that are more likely to be phrases in the language. The word sequence having the highest weight may be then selected as the text that corresponds to the audio input. In particular embodiments, the personalized language model may also be used to predict what words a user is most likely to say given a context. In particular embodiments, the end-pointing model may detect when the end of an utterance is reached. In particular embodiments, based at least in part on a limited computing power of the client system130, the assistant system140may optimize the personalized language model at runtime during the client-side process. As an example and not by way of limitation, the assistant system140may pre-compute a plurality of personalized language models for a plurality of possible subjects a user may talk about. When a user input is associated with a request for assistance, the assistant system140may promptly switch between and locally optimize the pre-computed language models at runtime based on user activities. As a result, the assistant system140may preserve computational resources while efficiently identifying a subject matter associated with the user input. In particular embodiments, the assistant system140may also dynamically re-learn user pronunciations at runtime. In particular embodiments, the user input may comprise non-speech input. The non-speech input may be received at the context engine220for determining events and context from the non-speech input. The context engine220may determine multi-modal events comprising voice/text intents, location updates, visual events, touch, gaze, gestures, activities, device/application events, and/or any other suitable type of events. The voice/text intents may depend on the ASR module208and the NLU module210. The location updates may be consumed by the dialog manager216to support various proactive/reactive scenarios. The visual events may be based on person or object appearing in the user's field of view. These events may be consumed by the dialog manager216and recorded in transient user state to support visual co-reference (e.g., resolving “that” in “how much is that shirt?” and resolving “him” in “send him my contact”). The gaze, gesture, and activity may result in flags being set in the transient user state (e.g., user is running) which may condition the action selector222. For the device/application events, if an application makes an update to the device state, this may be published to the assistant system140so that the dialog manager216may use this context (what is currently displayed to the user) to handle reactive and proactive scenarios. As an example and not by way of limitation, the context engine220may cause a push notification message to be displayed on a display screen of the user's client system130. The user may interact with the push notification message, which may initiate a multi-modal event (e.g., an event workflow for replying to a message received from another user). Other example multi-modal events may include seeing a friend, seeing a landmark, being at home, running, faces being recognized in a photo, starting a call with touch, taking a photo with touch, opening an application, etc. In particular embodiments, the context engine220may also determine world/social events based on world/social updates (e.g., weather changes, a friend getting online). 
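Referring to the runtime switching of pre-computed personalized language models described above, the following Python sketch illustrates one possible form of that mechanism; the per-subject unigram tables and the function select_plm are illustrative assumptions only:

    # Hypothetical pre-computed per-subject personalized language models,
    # represented here as simple unigram probability tables.
    PRECOMPUTED_PLMS = {
        "music": {"play": 0.4, "album": 0.3, "artist": 0.3},
        "weather": {"weather": 0.5, "forecast": 0.3, "rain": 0.2},
    }
    GENERIC_PLM = {"the": 0.5, "a": 0.5}

    def select_plm(subject: str) -> dict:
        """Promptly switch to the pre-computed model for the subject inferred
        from user activities, falling back to a generic model otherwise."""
        return PRECOMPUTED_PLMS.get(subject, GENERIC_PLM)

    plm = select_plm("music")  # switched at runtime without retraining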
The social updates may comprise events that a user is subscribed to (e.g., a friend's birthday, posts, comments, or other notifications). These updates may be consumed by the dialog manager216to trigger proactive actions based on context (e.g., suggesting a user call a friend on their birthday, but only if the user is not focused on something else). As an example and not by way of limitation, receiving a message may be a social event, which may trigger the task of reading the message to the user. In particular embodiments, the text transcription from the ASR module208may be sent to the NLU module210. The NLU module210may process the text transcription and extract the user intention (i.e., intents) and slots (i.e., the parsing result) based on the linguistic ontology. In particular embodiments, the intents and slots from the NLU module210and/or the events and contexts from the context engine220may be sent to the entity resolution module212. In particular embodiments, the entity resolution module212may resolve entities associated with the user input based on the output from the NLU module210and/or the context engine220. The entity resolution module212may use different techniques to resolve the entities, including accessing user memory from the assistant user memory (AUM)354. In particular embodiments, the AUM354may comprise user episodic memories helpful for resolving the entities by the entity resolution module212. The AUM354may be the central place for storing, retrieving, indexing, and searching over user data. In particular embodiments, the entity resolution module212may provide one or more of the intents, slots, entities, events, context, or user memory to the dialog state tracker218. The dialog state tracker218may identify a set of state candidates for a task accordingly, conduct interaction with the user to collect necessary information to fill the state, and call the action selector222to fulfill the task. In particular embodiments, the dialog state tracker218may comprise a task tracker410. The task tracker410may track the task state associated with an assistant task. In particular embodiments, a task state may be a data structure that persists across interaction turns and is updated in real time to capture the state of the task during the whole interaction. The task state may comprise all the current information about a task execution status, such as arguments, confirmation status, confidence score, etc. Any incorrect or outdated information in the task state may lead to failure or incorrect task execution. The task state may also serve as a set of contextual information for many other components such as the ASR module208, the NLU module210, etc. In particular embodiments, the task tracker410may comprise intent handlers411, a task candidate ranking module414, a task candidate generation module416, and a merging layer419. In particular embodiments, a task may be identified by its ID name. The task ID may be used to associate corresponding component assets if it is not explicitly set in the task specification, such as the dialog policy360, agent execution, NLG dialog act, etc. Therefore, the output from the entity resolution module212may be received by a task ID resolution component417of the task candidate generation module416to resolve the task ID of the corresponding task. In particular embodiments, the task ID resolution component417may call a task specification manager API430to access the triggering specifications and deployment specifications for resolving the task ID.
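As an example and not by way of limitation, the task state described above (persistent across interaction turns and updated in real time) may be sketched as a simple data structure; the class name TaskState and its field names are illustrative assumptions only:

    from dataclasses import dataclass, field

    @dataclass
    class TaskState:
        """Persists across interaction turns and is updated in real time to
        capture task execution status for the whole interaction."""
        task_id: str
        arguments: dict = field(default_factory=dict)
        confirmation_status: str = "unconfirmed"
        confidence_score: float = 0.0

    state = TaskState(task_id="get_weather",
                      arguments={"location": "Seattle"},
                      confidence_score=0.87)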
Given these specifications, the task ID resolution component417may resolve the task ID using intents, slots, dialog state, context, and user memory. In particular embodiments, the technical specification of a task may be defined by a task specification. The task specification may be used by the assistant system140to trigger a task, conduct dialog conversation, and find a right execution module (e.g., agents228) to execute the task. The task specification may be an implementation of the product requirement document. It may serve as the general contract and requirements that all the components agreed on. It may be considered as an assembly specification for a product, while all development partners deliver the modules based on the specification. In particular embodiments, an assistant task may be defined in the implementation by a specification. As an example and not by way of limitation, the task specification may be defined as the following categories. One category may be a basic task schema which comprises the basic identification information such as ID, name, and the schema of the input arguments. Another category may be a triggering specification, which is about how a task can be triggered, such as intents, event message ID, etc. Another category may be a conversational specification, which is for dialog manager216to conduct the conversation with users and systems. Another category may be an execution specification, which is about how the task will be executed and fulfilled. Another category may be a deployment specification, which is about how a feature will be deployed to certain surfaces, local, and group of users. In particular embodiments, the task specification manager API430may be an API for accessing a task specification manager. The task specification manager may be a module in the runtime stack for loading the specifications from all the tasks and providing interfaces to access all the tasks specifications for detailed information or generating task candidates. In particular embodiments, the task specification manager may be accessible for all components in the runtime stack via the task specification manager API430. The task specification manager may comprise a set of static utility functions to manage tasks with the task specification manager, such as filtering task candidates by platform. Before landing the task specification, the assistant system140may also dynamically load the task specifications to support end-to-end development on the development stage. In particular embodiments, the task specifications may be grouped by domains and stored in runtime configurations435. The runtime stack may load all the task specifications from the runtime configurations435during the building time. In particular embodiments, in the runtime configurations435, for a domain, there may be a cconf file and a cinc file (e.g., sidechef_task.cconf and sidechef_task.inc). As an example and not by way of limitation, <domain>_tasks.cconf may comprise all the details of the task specifications. As another example and not by way of limitation, <domain>_tasks.cinc may provide a way to override the generated specification if there is no support for that feature yet. In particular embodiments, a task execution may require a set of arguments to execute. Therefore, an argument resolution component418may resolve the argument names using the argument specifications for the resolved task ID. 
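As an example and not by way of limitation, the five task specification categories described above may be expressed as a schema such as the following; every field name in this sketch is an illustrative assumption and does not reflect the actual specification format:

    # Hypothetical task specification grouped into the five categories:
    # basic schema, triggering, conversational, execution, and deployment.
    task_specification = {
        "basic_task_schema": {"id": "timer.create", "name": "Create Timer",
                              "input_arguments": {"duration": "datetime"}},
        "triggering": {"intents": ["IN:CREATE_TIMER"], "event_message_ids": []},
        "conversational": {"required_slots": ["duration"]},
        "execution": {"agent": "timer_agent"},
        "deployment": {"surfaces": ["phone", "smart_glasses"], "user_groups": ["all"]},
    }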
These arguments may be resolved based on NLU outputs (e.g., slot [SL:contact]), dialog state (e.g., short-term calling history), user memory (such as user preferences, location, long-term calling history, etc.), or device context (such as timer states, screen content, etc.). In particular embodiments, the argument modality may be text, audio, images, or other structured data. The slot to argument mapping may be defined by a filling strategy and/or language ontology. In particular embodiments, given the task triggering specifications, the task candidate generation module416may look for the list of tasks to be triggered as task candidates based on the resolved task ID and arguments. In particular embodiments, the generated task candidates may be sent to the task candidate ranking module414to be further ranked. The task candidate ranking module414may use a rule-based ranker415to rank them. In particular embodiments, the rule-based ranker415may comprise a set of heuristics to bias certain domain tasks. The ranking logic may be described as follows, based on principles of context priority. In particular embodiments, the priority of a user-specified task may be higher than an on-foreground task. The priority of the on-foreground task may be higher than a device-domain task when the intent is a meta intent. The priority of the device-domain task may be higher than a task of a triggering intent domain. As an example and not by way of limitation, the ranking may pick the task if the task domain is mentioned or specified in the utterance, such as “create a timer in TIMER app”. As another example and not by way of limitation, the ranking may pick the task if the task domain is in the foreground or in an active state, such as “stop the timer” to stop the timer while the TIMER app is on foreground and there is an active timer. As yet another example and not by way of limitation, the ranking may pick the task if the intent is a general meta intent, and the task is device control while there is no other active application or active state. As yet another example and not by way of limitation, the ranking may pick the task if the task is the same as the intent domain. In particular embodiments, the task candidate ranking module414may customize some more logic to check the match of intent/slot/entity types. The ranked task candidates may be sent to the merging layer419. In particular embodiments, the output from the entity resolution module212may also be sent to a task ID resolution component412of the intent handlers411. The task ID resolution component412may resolve the task ID of the corresponding task similarly to the task ID resolution component417. In particular embodiments, the intent handlers411may additionally comprise an argument resolution component413. The argument resolution component413may resolve the argument names using the argument specifications for the resolved task ID, similarly to the argument resolution component418. In particular embodiments, intent handlers411may deal with task-agnostic features and may not be expressed within the task specifications, which are task-specific. Intent handlers411may output state candidates other than task candidates, such as argument update, confirmation update, disambiguation update, etc. In particular embodiments, some tasks may require very complex triggering conditions or very complex argument filling logic that may not be reusable by other tasks even if they were supported in the task specifications (e.g., in-call voice commands, media tasks via [IN:PLAY MEDIA], etc.).
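As an example and not by way of limitation, the context-priority ranking of the rule-based ranker415described above may be sketched as follows; the flag names (user_specified, on_foreground, device_domain) are illustrative assumptions only:

    def rank_task_candidates(candidates):
        """Rule-based ranking reflecting the context-priority principles above:
        user-specified > on-foreground > device-domain (meta intent) > intent domain."""
        def priority(task):
            if task.get("user_specified"):   # domain mentioned in the utterance
                return 4
            if task.get("on_foreground"):    # app in foreground or in an active state
                return 3
            if task.get("device_domain"):    # device control under a general meta intent
                return 2
            return 1                         # same domain as the triggering intent
        return sorted(candidates, key=priority, reverse=True)

    ranked = rank_task_candidates([
        {"task_id": "timer.stop", "on_foreground": True},
        {"task_id": "music.play"},
    ])  # the on-foreground timer task ranks first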
Intent handlers411may be also suitable for such types of tasks. In particular embodiments, the results from the intent handlers411may take precedence over the results from the task candidate ranking module414. The results from the intent handlers411may be also sent to the merging layer419. In particular embodiments, the merging layer419may combine the results from the intent handlers411and the results from the task candidate ranking module414. The dialog state tracker218may suggest each task as a new state for the dialog policies360to select from, thereby generating a list of state candidates. The merged results may be further sent to a conversational understanding reinforcement engine (CURE) tracker420. In particular embodiments, the CURE tracker420may be a personalized learning process to improve the determination of the state candidates by the dialog state tracker218under different contexts using real-time user feedback. More information on the conversational understanding reinforcement engine may be found in U.S. patent application Ser. No. 17/186,459, filed 26 Feb. 2021, which is incorporated by reference. In particular embodiments, the state candidates generated by the CURE tracker420may be sent to the action selector222. The action selector222may consult with the task policies364, which may be generated from execution specifications accessed via the task specification manager API430. In particular embodiments, the execution specifications may describe how a task should be executed and what actions the action selector222may need to take to complete the task. In particular embodiments, the action selector222may determine actions associated with the system. Such actions may involve the agents228to execute. As a result, the action selector222may send the system actions to the agents228and the agents228may return the execution results of these actions. In particular embodiments, the action selector222may determine actions associated with the user or device. Such actions may need to be executed by the delivery system230. As a result, the action selector222may send the user/device actions to the delivery system230and the delivery system230may return the execution results of these actions. The embodiments disclosed herein may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may be associated with applications, products, accessories, services, or some combination thereof, that are, e.g., used to create content in an artificial reality and/or used in (e.g., perform activities in) an artificial reality.
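As an example and not by way of limitation, the precedence of intent handler results over ranked task candidates in the merging layer419may be sketched as follows; expressing precedence by list ordering with deduplication is an illustrative assumption:

    def merge(intent_handler_results, ranked_task_candidates):
        """Combine both result streams; intent handler results take precedence
        by being considered first, and duplicate task IDs are dropped."""
        merged, seen = [], set()
        for candidate in intent_handler_results + ranked_task_candidates:
            if candidate["task_id"] not in seen:
                seen.add(candidate["task_id"])
                merged.append(candidate)
        return merged

    state_candidates = merge([{"task_id": "call.confirm"}],
                             [{"task_id": "call.confirm"}, {"task_id": "call.start"}])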
The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.

Continuous Learning for Natural-Language Understanding Models

In particular embodiments, the assistant system140may efficiently identify errors from the natural-language understanding (NLU) models used by the assistant system140. The assistant system140may then correct these errors and retrain the NLU models to improve their quality. The NLU models in the assistant system140may generate structured semantic representations (e.g., slots and intents) for user queries and then match user queries to queries in a plurality of training data as a way to generate responses for the user queries. However, sometimes the user queries may not match well with the training data (e.g., because users phrased their queries in unusual or non-grammatical ways or used uncommon terms). In this situation, the NLU models may not handle these user queries accurately, i.e., they may generate inaccurate semantic representations, which may further lead to inaccurate responses to the user queries. Conventional approaches to address this issue may involve manually identifying these errors, creating new training data incorporating these tail cases (i.e., uncommon cases), and then retraining the NLU models to improve their quality, which may be cumbersome and inefficient. By contrast, the embodiments disclosed herein may use active learning to select live traffic data (e.g., conversations between users and the assistant system140) that the NLU models are predicted to fail on (i.e., to incorrectly identify intents/slots). Active learning refers to a set of algorithms for deciding which unlabeled data to label in order to improve the quality of the NLU models. The selected traffic data may be manually annotated and used to evaluate the NLU models to identify the most important failure cases. The failure cases may be further used to automatically generate new (e.g., synthetic) training data, which may be used to retrain the NLU models to optimize them. Although this disclosure describes optimizing particular NLU models by particular systems in a particular manner, this disclosure contemplates optimizing any suitable NLU model in any suitable manner. As an example and not by way of limitation, such a model may comprise any model within the assistant system140, e.g., a natural-language generation model, entity resolution model, automatic speech recognition model, text-to-speech model, etc. In particular embodiments, the assistant system140may receive a user request to automatically debug a natural-language understanding (NLU) model. Debugging the NLU model may comprise one or more of discovering quality issues or proposing fixes to the model. In particular embodiments, the NLU model may comprise a machine-learning model or rule-based functionalities. The assistant system140may access a plurality of predicted semantic representations generated by the NLU model. In particular embodiments, the plurality of predicted semantic representations may be associated with a plurality of dialog sessions, respectively. Each dialog session may be between a user from a plurality of users and an assistant xbot associated with the NLU model.
In particular embodiments, the assistant system140may generate, based on an auto-correction model, a plurality of expected semantic representations associated with the plurality of dialog sessions. The auto-correction model may be learned from a plurality of dialog training samples generated based on active learning. In particular embodiments, the assistant system140may then identify, based on a comparison between the predicted semantic representations and the expected semantic representations, one or more incorrect semantic representations of the predicted semantic representations. The assistant system140may further automatically correct the one or more incorrect semantic representations by replacing them with one or more respective expected semantic representations generated by the auto-correction model. In particular embodiments, the NLU models may not be effective in handling tail-case requests from users. Tail cases may be those cases where the users phrase their queries in an unusual way or a non-grammatical way or use uncommon terms. As an example and not by way of limitation, “what's the broadcast in Seattle?” may be a tail case for asking weather information, in contrast with “what's the weather in Seattle?” The NLU models may be trained from training data based on the latter user input. As a result, the utterances of the former may cause the NLU models to generate inaccurate semantic representations because the former doesn't match up well with the training data that was initially used for training the NLU models. To address this issue, the assistant system140may use the auto-correction model to identify the incorrect semantic representations (predicted semantic representations) for user inputs and generate correct semantic representations (expected semantic representations). In particular embodiments, each of the predicted semantic representations or each of the expected semantic representations may comprise one or more of an intent represented by a structural format or a slot represented by a structural format. Each of the one or more incorrect semantic representations and its respective expected semantic representation may be associated with a same dialog session. The assistant system140may then correct the incorrect semantic representations. The assistant system140may further update the NLU model based on the plurality of expected semantic representations. FIG.5illustrates an example workflow diagram500for improving the NLU models. In particular embodiments, the assistant system140may learn the auto-correction model to improve the NLU models. To learn the auto-correction model, the assistant system140may generate the plurality of dialog training samples for such purpose. The generation of these dialog training samples may start with accessing a plurality of live user-assistant dialog samples. In particular embodiments, the live user-assistant dialog samples may be accessed from multiple data sources502. The live user-assistant dialog samples may comprise interactions between users and the assistant system140. As an example and not by way of limitation, one data source may comprise the live traffic (e.g., conversations) between users and the assistant system140. The accessed data may comprise samples of user inputs (e.g., requests) and corresponding outputs (e.g., responses) from the assistant system140. In particular embodiments, the assistant system140may use comments processing logic to process all the accessed data.
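As an example and not by way of limitation, the comparison-and-replacement step described above may be sketched in Python as follows; the function auto_correct and the per-session alignment via zip are illustrative assumptions only:

    def auto_correct(predicted, expected):
        """Identify incorrect predicted semantic representations by comparing
        them with the expected ones for the same dialog session, and replace
        each mismatch with its respective expected representation."""
        corrected = []
        for pred, exp in zip(predicted, expected):  # aligned per dialog session
            corrected.append(exp if pred != exp else pred)
        return corrected

    # Tail case "what's the broadcast in Seattle?" mis-parsed by the NLU model:
    predicted = [{"intent": "IN:GET_NEWS", "slots": {}}]
    expected = [{"intent": "IN:GET_WEATHER", "slots": {"location": "Seattle"}}]
    fixed = auto_correct(predicted, expected)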
The assistant system140may grade the plurality of live user-assistant dialog samples. As an example and not by way of limitation, annotators may grade the responses from the assistant system140. In particular embodiments, grading the plurality of live user-assistant dialog samples may be based on task completion associated with each of the plurality of live user-assistant dialog samples, i.e., evaluating whether a response from the assistant system140for a task associated with a corresponding user request is successful or not. In alternative embodiments, the assistant system140may skip the grading of the live user-assistant dialog samples for higher efficiency. In particular embodiments, the graded samples may be stored in an unlabeled candidates504data store. The unlabeled candidates504data store may comprise the pool of data that has received first-pass, light-weight annotations (e.g., to remove privacy-sensitive rows and to grade the overall assistant task outcome), but otherwise has not received complete NLU annotations. The stored graded samples may be then fed into an NLU annotation pipeline506. In particular embodiments, in the NLU annotation pipeline506, there may be a data routing module508. Within the data routing module508, the assistant system140may divide the plurality of live user-assistant dialog samples based on one or more of a language associated with each live user-assistant dialog sample, a client system associated with each live user-assistant dialog sample, or a clustering algorithm. As an example and not by way of limitation, dividing the graded samples based on language may be based on whether they are English or Portuguese. Optionally, the graded samples may be also divided based on the types of client systems130(i.e., surfaces) for surface-specific tasks, i.e., specific tasks may be only associated with specific surfaces. As an example and not by way of limitation, tasks associated with virtual reality (VR) may be only associated with VR-enabled client systems130. In particular embodiments, the clustering algorithm may be an unsupervised clustering algorithm. This algorithm may help group similar users together. During active-learning based sampling, the assistant system140may take the grouping signal from the algorithm as a feature to select more samples covering different groups, or the groups having a higher error rate. In particular embodiments, the graded samples with the same language or with the same surface may be sent to an active learning sampling module510. In particular embodiments, the assistant system140may select, based on the active learning, a set of live user-assistant dialog samples for annotation. The active learning sampling module510may take the graded samples and select those that the NLU models are most likely to fail on as the set of live user-assistant dialog samples for annotation. Active learning may be an effective solution for addressing the technical challenge of improving annotator efficiency by increasing the speed and quality at which incoming traffic is annotated, as active learning may determine which utterances are high-signal for NLU quality improvement and request annotation only for those, thereby reducing the number of utterances requiring annotation. In particular embodiments, selecting the set of live user-assistant dialog samples for annotation may be further based on grades of the plurality of live user-assistant dialog samples.
The active learning used to generate the plurality of dialog training samples may be based on one or more of random selection, model uncertainty, annotation ambiguity, optimization metric, diversity score, or user satisfaction. As an example and not by way of limitation, the active learning may be based on a fuzzy pattern matching (FPM) classifier to predict how likely the NLU models may get the semantic representation of an utterance incorrect. The FPM classifier may use intrinsic properties of the model (e.g., confidence scores) or extrinsic properties from the data (e.g., sentence length) as features. The model may be calibrated to provide more reliable confidence scores. In particular embodiments, the grades of these samples may be used. As an example and not by way of limitation, active learning may determine those samples based on their grades annotated by the annotators. By using active learning, the assistant system140may prioritize which utterances are the most important to annotate. In particular embodiments, the samples selected by active learning may be stored in an annotation lookup module512. The assistant system140may annotate semantic representations for the selected set of live user-assistant dialog samples within the annotation lookup module512. These selected dialog samples may have their original semantic representations (e.g., intents and slots) that were generated by the NLU models. During the annotation of the dialog samples, the assistant system140may provide corrected or updated semantic representations. In other words, these annotations may filter out previous annotations generated by the NLU models. In particular embodiments, the annotation lookup module512may comprise a data disk, which may comprise a temporary table that holds the live user-assistant dialog data that was sampled. On the temporary table, the assistant system140may collect the annotations and then have the quality of the annotations reviewed. Once the quality is satisfactory, the annotated data may be sent to the production data, which may be used for training the auto-correction model. As can be seen, using a temporary table may avoid updating the production data directly, which may give the assistant system140a buffer to identify potential problems with the incoming annotated data before it is added to the production data. In particular embodiments, the annotation lookup module512may access existing annotations514to check if there are existing annotations for these selected dialog samples. For dialog samples that already have existing annotations, the assistant system140may auto-label them based on the existing annotations and then land them directly in the NLU production table522. In particular embodiments, the NLU production table522may comprise labeled live user data. For dialog samples that don't have existing annotations, the assistant system140may treat them as unlabeled data518and apply prediction models on them to generate self-labeled data520. These self-labeled data520may be reviewed by annotators for update/revision, if necessary. In particular embodiments, the dialog samples in the data disk may be annotated on a regular (e.g., weekly) basis, in which interesting samples (e.g., the failure cases) may be annotated with corrected/updated semantic representations. After the selected dialog samples are annotated, they (including both auto-labeled data516and self-labeled data520) may be added to the NLU production table522.
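As an example and not by way of limitation, selecting samples that the NLU models are most likely to fail on may be sketched as follows. The scoring function below is a toy stand-in for the FPM classifier described above, combining an intrinsic model property (confidence score) with an extrinsic data property (sentence length); all identifiers are illustrative assumptions:

    def failure_likelihood(confidence_score: float, sentence_length: int) -> float:
        """Estimate how likely the NLU output for an utterance is incorrect:
        lower confidence and longer sentences raise the estimate."""
        return (1.0 - confidence_score) * min(1.0, sentence_length / 20.0)

    def select_for_annotation(samples, budget: int):
        """Prioritize the highest-signal utterances within the annotation budget."""
        scored = sorted(samples,
                        key=lambda s: failure_likelihood(s["conf"], s["len"]),
                        reverse=True)
        return scored[:budget]

    samples = [{"utterance": "what's the broadcast in Seattle?", "conf": 0.42, "len": 6},
               {"utterance": "what's the weather in Seattle?", "conf": 0.97, "len": 6}]
    picked = select_for_annotation(samples, budget=1)  # picks the low-confidence tail case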
The annotated data can then be used to automatically generate new (synthetic) training data, either using a model-based generator or a rules-based generator. These new data sets can be used for improving NLU quality. This could be done by adding all examples sampled with active learning to training, for example. Data generation using the seed utterances could lead to further potential improvements. Finally, as the volume of data increases, new training strategies may also be considered in production (e.g., incremental retraining). Although there is manual annotation involved, the aforementioned process for generating new training data is still less intensive than manually generating new training data from scratch. The NLU models may be further improved based on these new training data. With the improved NLU models, the system can automatically identify errors and correct them. The assistant system140may then imitate these utterances. If the NLU models get some of these wrong, the linguists may be involved to annotate them with correct semantic representations. In particular embodiments, the example workflow diagram500may be a continuous learning workstream, which may remove the bottlenecks for quality improvement of NLU models. In particular embodiments, the assistant system140may update the NLU models close to real-time based on online learning. Online learning may refer to a machine-learning technique where models are updated sequentially instead of batch-trained. In the embodiments disclosed herein, online learning may also refer to dynamic, continuous evolution of the NLU models with fresh data. Online learning may enable the assistant system140to adapt the NLU models much more quickly to tail cases and seasonal changes in user behavior, whether these stem from world events or from new product scenarios (e.g., music question and answering, messaging for smart glasses, etc.). In particular embodiments, the assistant system140may generate, based on the NLU production table522, a train set524, an evaluation set526, and a test set528. The assistant system140may thereby have a technical advantage of automatic self-driving quality measurement, wherein the assistant system may create fresh head and long-tail focused test sets from user data to be used for quality measurement. The train set524may be used as part of the dialog training samples for learning the auto-correction model based on an auto-correction model training module530. The evaluation set526may be used to generate a quality report532comprising the accuracies of the head use cases and tail use cases. In other words, the assistant system140may generate a quality report based on the selected set of live user-assistant dialog samples and their annotations. The quality report532may comprise one or more of an accuracy of a semantic representation associated with a head use case or an accuracy of a semantic representation associated with a tail use case. As a result, the assistant system140may have another technical advantage of automatically generating insights identifying bugs that have the highest impact and detecting opportunities for new product scenarios. In particular embodiments, the quality report532may be provided to a template extractor534. The assistant system140may identify one or more failure cases from the selected set of live user-assistant dialog samples. For some of the failure cases identified in the quality report532, the assistant system140may generate augmented data to address these mistakes.
As an example and not by way of limitation, for each failure case, the assistant system140may add the exact utterance to training. For example, if “please call mom” is a failure case, the assistant system140may add “please call mom” to training. The assistant system140may assign a high weight to such an utterance to bias the models to be more effective at annotating the semantic representations for this utterance. Extracting templates from failure cases, generating augmented data based on these failure cases, and improving the NLU models based on the augmented data may be effective solutions for addressing the technical challenge of effectively generating augmented data for improving the NLU models, as such data may focus on these failure cases to enable the NLU models to handle these cases more effectively in the future. In particular embodiments, the template extractor534may extract one or more templates from the one or more failure cases. Suppose there are no dialog samples similar to a particular incorrectly annotated utterance in the train set524. The template extractor534may extract templates from this utterance. For example, if “please call mom” is a failure case, the template extractor534may extract a template as “please call {sl:contact}”. This template may then be expanded to other similar utterances such as “please call dad”, “please call Jill”, etc. and added to training. The extracted templates may be then provided to a data generator536, which may further generate a plurality of synthetic dialog samples based on the one or more templates. For example, these synthetic dialog samples may include “what's the broadcast in New York,” “what's the broadcast in San Francisco,” etc. for the failure case of “what's the broadcast in Seattle?” In particular embodiments, the generation of the synthetic dialog samples may be based on the template(s) identified from the live traffic via active learning. The generation process may then replace the slot values with known entity names to improve the diversity and coverage of the training and testing samples. The entity names may be selected via active learning as well, where more difficult ones are selected for the NLU module210and entity resolution module212. The testing results from those newly produced data may further influence the sampling logic as feedback looping back to the active learning sampling module510. In addition, the extracted templates may be also provided to a rule-based classifier (RBC) generator538. The RBC generator538may generate one or more rule-based classifiers based on one or more rules for annotating semantic representations. The rule-based classifier may have different behaviors from the models used for annotating the semantic representations. In particular embodiments, the semantic representations generated based on the rule-based classifier may overwrite those generated based on models. As a result, the assistant system140may have a technical advantage of automatic improvement of the NLU models by creating data patches and RBC rules (for critical head cases) automatically and ingesting useful user data for NLU annotation and training. In particular embodiments, the augmented data generated by the data generator536may be provided to the NLU synthetic data table540. The NLU synthetic data table540may comprise labeled generated data and labeled crowdsourced data. The NLU synthetic data table540may be another part of the dialog training samples used for learning the auto-correction model based on the auto-correction model training module530.
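The following short sketch illustrates the template extraction and expansion just described; the slot-placeholder syntax follows the “please call {sl:contact}” example above, while the helper names and the entity list are hypothetical:

```python
# Hypothetical sketch of the template extractor and data generator:
# extract a slot template from a failure case, then expand it with
# known entity names to produce synthetic dialog samples.
import re

def extract_template(utterance, slot_value, slot_name):
    """E.g., 'please call mom' -> 'please call {sl:contact}'."""
    return utterance.replace(slot_value, "{sl:%s}" % slot_name)

def expand_template(template, entity_names):
    """Fill the slot placeholder with each known entity name."""
    return [re.sub(r"\{sl:\w+\}", name, template) for name in entity_names]

template = extract_template("please call mom", "mom", "contact")
synthetic_samples = expand_template(template, ["dad", "Jill"])
# synthetic_samples == ['please call dad', 'please call Jill']
```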
In other words, the plurality of dialog training samples may comprise the selected set of live user-assistant dialog samples (e.g., from the train set524) and the plurality of synthetic dialog samples (e.g., from the NLU synthetic data table540). As can be seen, based on the aforementioned process, each dialog training sample of the plurality of dialog training samples may comprise a user input and a correct semantic representation associated with the user input. In particular embodiments, the quality report532, together with the augmented data and new live-traffic dialog samples, may be returned to the NLU annotation pipeline506to reiterate the process and continuously improve the NLU models. In particular embodiments, the assistant system140may send, to a client system130, instructions for presenting a user interface comprising one or more of the plurality of predicted semantic representations, one or more of the plurality of expected semantic representations, or one or more of the incorrect semantic representations. Table 2 illustrates example content of the user interface showing user inputs and their expected and predicted semantic representations.

TABLE 2
Example content of the user interface showing user inputs and their expected and predicted semantic representations.
user input | expected | predicted | failure reason | templates | confidence mean
Good morning | [[IN:GET_ROUTINE Good [SL:DATE_TIME morning ] ]] | [[IN:GET_ROUTINE Good morning ]] | RBC_MISMATCH | None | 2.000000
what's up | [[IN:GET_ROUTINE what's up ]] | [[IN:WHATS_UP what's up ]] | MISLABELED | None | 0.999222
play next song | [[IN:SKIP_TRACK_MUSIC play [SL:ORDINAL next ] [SL:MUSIC_TYPE song ] ]] | [[IN:SKIP_TRACK_MUSIC play next song ]] | RBC_MISMATCH | None | 2.000000
Good evening | [[IN:GET_ROUTINE Good [SL:DATE_TIME evening ] ]] | [[IN:GOOD_EVENING Good evening ]] | MISLABELED | None | 0.997763
play the news | [[IN:GET_STORIES_NEWS play the news ]] | [[IN:OPEN_RESOURCE play the [SL:RESOURCE news ] ]] | LOW_TRAINING_DATA | {'play the {sl:resource}': ['play the corrupted dot com', 'play the chat', 'play the search_engine_name', 'play the search', 'play the smart camera', 'play the j double you dot o r g', 'play the w w w dot shopping_site_name dot com', 'play the financial_company_name dot com', 'play the rhapsody', 'play the video']} | 0.982413
how much time is left | [[IN:GET_TIMER how much time is left ]] | [[IN:GET_TIMER how much [SL:METHOD_TIMER time ] is left ]] | MISLABELED | None | 0.999103
what's going on | [[IN:GET_ROUTINE what's going on ]] | [[IN:WHATS_UP what's going on ]] | MISLABELED | None | 0.992673
Good afternoon | [[IN:GET_ROUTINE Good [SL:DATE_TIME afternoon ] ]] | [[IN:GOOD_AFTERNOON Good [SL:DATE_TIME afternoon ] ]] | LOW_TRAINING_DATA | {'Good {sl:date_time}': ['Good for one hour', 'Good 58 minutes', 'Good now', 'Good for 63 minutes', 'Good for 9pm tonight', 'Good for three hours', 'Good for 45 minutes', 'Good for on Wednesday', 'Good fifteen minutes', 'Good for 6 tomorrow morning']} | 0.998642
call nana | [[IN:CREATE_CALL call [SL:CONTACT [IN:GET_CONTACT [SL:TYPE_RELATION nana ] ] ] ]] | [[IN:CREATE_CALL call [SL:CONTACT nana ] ]] | RBC_MISMATCH | None | 1.800000
hang | [[IN:END_CALL hang ]] | [[IN:CREATE_CALL hang ]] | LOW_TRAINING_DATA | {None: ['hang']} | 0.939998
how much time left | [[IN:GET_TIMER how much time left ]] | [[IN:GET_TIMER how much [SL:METHOD_TIMER time ] left ]] | MISLABELED | None | 0.996968
play news_site_name | [[IN:GET_STORIES_NEWS play [SL:NEWS_SOURCE news_site_name ] ]] | [[IN:OPEN_RESOURCE play [SL:RESOURCE news_site_name ] ]] | UNKNOWN | None | 0.275605
what's new | [[IN:GET_ROUTINE what's new ]] | [[IN:WHATS_UP what's new ]] | MISLABELED | None | 0.998155
set a 30 second timer | [[IN:CREATE_TIMER set a [SL:DATE_TIME 30 second ] [SL:METHOD_TIMER timer ] ]] | [[IN:CREATE_TIMER set a [SL:DATE_TIME 30 second ] [IN:GET_TIME [SL:METHOD_TIMER timer ] ] ]] | UNKNOWN | None | 0.998127
Play again | [[IN:REPLAY_MUSIC Play again ]] | [[IN:START_OVER_MEDIA Play again ]] | LOW_TRAINING_DATA | {None: ['Play again']} | 0.861328
[ILLEGIBLE]

As illustrated in Table 2, the user interface may show a plurality of user inputs (e.g., utterances) associated with dialog sessions. For each user input, there may be a corresponding expected semantic representation, predicted semantic representation, failure reason, template, and confidence mean. As an example and not by way of limitation, for the user input “play next song”, the expected semantic representation may be [[IN:SKIP_TRACK_MUSIC play [SL:ORDINAL next ] [SL:MUSIC_TYPE song ] ]] whereas the predicted semantic representation may be [[IN:SKIP_TRACK_MUSIC play next song ]]. The failure reason is identified as an RBC mismatch (RBC_MISMATCH), which indicates the predicted semantic representation was actually generated by the rule-based classifier. This may also indicate the rules overwrote the NLU models, as the prediction has a confidence mean of 2, which is considerably high. This confidence mean may not be a real model confidence, but a fake confidence score assigned to the rules. In particular embodiments, if the prediction is generated by rules, it may have a confidence over one, as the assistant system140may purposely have rules that override the models sometimes. As can be seen, the predicted semantic representation generated by rules mismatches the expected semantic representation, which is why the failure reason is RBC_MISMATCH. As another example and not by way of limitation, for the user input “what's up”, the expected semantic representation may be [[IN:GET_ROUTINE what's up ]] whereas the predicted semantic representation may be [[IN:WHATS_UP what's up ]]. The failure reason is identified as mislabeled. This may indicate that the expected parse is likely incorrect, i.e., the annotator made a mistake. To be more specific, the intent should be “whats_up” instead of “get_routine”. The expected semantic representation may be generated by the auto-correction model, which may have a confidence mean of 0.999222. The high confidence score may indicate the model has a high probability of being correct. As yet another example and not by way of limitation, for the user input “play the news”, the expected semantic representation may be [[IN:GET_STORIES_NEWS play the news ]] whereas the predicted semantic representation may be [[IN:OPEN_RESOURCE play the [SL:RESOURCE news ] ]]. The failure reason is identified as low training data, which means the user input is phrased in an uncommon way and the training data does not have a sufficient amount of similar utterances.
The expected semantic representation may be generated by the auto-correction model, which may have a confidence mean of 0.982413. Furthermore, since the failure reason is low training data, the assistant system140may further extract templates from this user input and add them to the training data to improve the NLU models. These templates may include {'play the {sl:resource}': ['play the corrupted dot com', 'play the chat', 'play the search_engine_name', 'play the search', 'play the smart camera', 'play the j double you dot o r g', 'play the w w w dot shopping_site_name dot com', 'play the financial_company_name dot com', 'play the rhapsody', 'play the video']}. Because the templates are generated and expanded automatically, some of the sentences produced may not be grammatical or semantically meaningful. In addition to the information listed in Table 2, for each user input, which domain it belongs to, how frequently this bug appears, whether it is cached in the train set, whether there is a correction, and the dictionary features may also be identified. As an example and not by way of limitation, the domain may be routine, music, news, timer, calling, etc. The aforementioned tool may result in a technical advantage for the assistant system140, which may include benchmark establishment, in which the assistant system may establish tools for regular maintenance of incoming data (e.g., correcting out-of-date annotations, managing user requests for deletion, etc.) and create dashboards to track quality on these sets over time. In particular embodiments, the user interface may be associated with an application that can be used by developers of the assistant system140as an internal tool to retrieve dialog samples and view expected and predicted annotations for different user queries. The expected annotations may be generated by the auto-correction model whereas the predicted annotations may be generated by the current NLU models in production. For a given query, the internal tool may show the intent/slot annotations of the expected semantic representations and the intent/slot annotations of the predicted semantic representations. In particular embodiments, the internal tool may determine, for each of the one or more incorrect semantic representations, a failure reason used for identifying the predicted semantic representation corresponding to the incorrect semantic representation as being incorrect. As an example and not by way of limitation, the failure reason for the incorrect semantic representation may comprise one or more of a mismatch between the predicted semantic representation and a semantic representation determined based on one or more rule-based classifiers, a mislabel of an intent or a slot of the predicted semantic representation, or an insufficient amount of training data associated with the predicted semantic representation. In particular embodiments, the internal tool may further automatically provide a fix (i.e., an auto bug fix) for the identified error. In particular embodiments, the internal tool may be developed into an external tool provided to third-party developers.
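To illustrate how such failure reasons might be assigned, here is a simplified, hypothetical sketch consistent with Table 2 (where rule-based predictions carry confidence means above one); the thresholds and the training-frequency lookup are illustrative assumptions, not the internal tool's actual logic:

```python
# Hypothetical assignment of a failure reason to an incorrect prediction,
# mirroring the RBC_MISMATCH / MISLABELED / LOW_TRAINING_DATA cases above.
def failure_reason(expected, predicted, confidence_mean, train_frequency):
    if expected == predicted:
        return None  # not a failure case
    if confidence_mean > 1.0:
        # Confidence means above one signal a rule-based classifier that
        # overrode the NLU models (e.g., the 2.000000 rows in Table 2).
        return "RBC_MISMATCH"
    if confidence_mean > 0.99:
        # A highly confident auto-correction suggests the expected (human)
        # annotation itself is likely the mistake.
        return "MISLABELED"
    if train_frequency < 5:  # assumed threshold for "enough" similar samples
        return "LOW_TRAINING_DATA"
    return "UNKNOWN"

print(failure_reason("[[IN:GET_ROUTINE what's up ]]",
                     "[[IN:WHATS_UP what's up ]]",
                     confidence_mean=0.999222, train_frequency=50))
# -> MISLABELED
```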
In particular embodiments, the performance of the auto-correction model has been tested on a live traffic evaluation set. The live traffic evaluation set may be from user data associated with one or more client systems. The live traffic evaluation set comprises 1207 incorrect semantic representations (bugs). With the auto-correction model, 789 bugs out of 1207 were automatically corrected. There was a 65% absolute frame accuracy improvement on the set of 1207 bugs. Some examples of the bugs fixed are listed below in Table 3.

TABLE 3
Example bugs fixed by the auto-correction model.
utterance | before fixing | after fixing
what's today | [IN:WHATS_UP what's today ] | [IN:GET_DATE what's [SL:DATE_TIME today ] ]
play 92.5 | [IN:PLAY_MEDIA play [SL:TITLE_MEDIA 92.5 ] ] | [IN:PLAY_MUSIC play [SL:MUSIC_RADIO_ID 92.5 ] ]
sleepy time | [IN:UNSUPPORTED_TIMER sleepy time ] | [IN:SLEEP sleepy time ]
oh shoot | [IN:TAKE_PHOTO oh shoot ] | [outOfDomain oh shoot ]

The columns in Table 3 represent the semantic representations before and after the auto-correction model is applied.
FIG.6illustrates an example method600for improving the NLU models. The method may begin at step610, where the assistant system140may receive a user request to automatically debug a natural-language understanding (NLU) model. For example, a developer may access a tool provided by the assistant system140that allows the developer to automatically debug the NLU model. At step620, the assistant system140may access a plurality of predicted semantic representations generated by the NLU model, wherein the plurality of predicted semantic representations are associated with a plurality of dialog sessions, respectively, wherein each dialog session is between a user from a plurality of users and an assistant xbot associated with the NLU model. At step630, the assistant system140may generate a plurality of dialog training samples, wherein the generation comprises accessing a plurality of live user-assistant dialog samples, dividing the plurality of live user-assistant dialog samples based on one or more of a language associated with each live user-assistant dialog sample or a client system associated with each live user-assistant dialog sample, grading the plurality of live user-assistant dialog samples, wherein grading the plurality of live user-assistant dialog samples is based on task completion associated with each of the plurality of live user-assistant dialog samples, selecting, based on active learning, a set of live user-assistant dialog samples for annotation, wherein the active learning is based on one or more of random selection, model uncertainty, diversity score, or user satisfaction, and wherein selecting the set of live user-assistant dialog samples for annotation is further based on grades of the plurality of live user-assistant dialog samples, annotating semantic representations for the selected set of live user-assistant dialog samples, and generating a quality report based on the selected set of live user-assistant dialog samples and their annotations, wherein the quality report comprises one or more of an accuracy of a semantic representation associated with a head use case or an accuracy of a semantic representation associated with a tail use case. At step640, the assistant system140may generate, based on an auto-correction model, a plurality of expected semantic representations associated with the plurality of dialog sessions, wherein the auto-correction model is learned from the plurality of dialog training samples, wherein each dialog training sample of the plurality of dialog training samples comprises a user input and a correct semantic representation associated with the user input, wherein each of the predicted semantic representations or each of the expected semantic representations comprises one or more of an intent represented by a structural format or a slot represented by a structural format.
At step650, the assistant system140may identify, based on a comparison between the predicted semantic representations and the expected semantic representations, one or more incorrect semantic representations of the predicted semantic representations, wherein each of the one or more incorrect semantic representations and its respective expected semantic representation are associated with a same dialog session. At step660, the assistant system140may automatically correct the one or more incorrect semantic representations by replacing them with one or more respective expected semantic representations generated by the auto-correction model. At step670, the assistant system140may update the NLU model based on the plurality of expected semantic representations. Particular embodiments may repeat one or more steps of the method ofFIG.6, where appropriate. Although this disclosure describes and illustrates particular steps of the method ofFIG.6as occurring in a particular order, this disclosure contemplates any suitable steps of the method ofFIG.6occurring in any suitable order. Moreover, although this disclosure describes and illustrates an example method for improving the NLU models including the particular steps of the method ofFIG.6, this disclosure contemplates any suitable method for improving the NLU models including any suitable steps, which may include all, some, or none of the steps of the method ofFIG.6, where appropriate. Furthermore, although this disclosure describes and illustrates particular components, devices, or systems carrying out particular steps of the method ofFIG.6, this disclosure contemplates any suitable combination of any suitable components, devices, or systems carrying out any suitable steps of the method ofFIG.6.
Vector Spaces and Embeddings
FIG.7illustrates an example view of a vector space700. In particular embodiments, an object or an n-gram may be represented in a d-dimensional vector space, where d denotes any suitable number of dimensions. Although the vector space700is illustrated as a three-dimensional space, this is for illustrative purposes only, as the vector space700may be of any suitable dimension. In particular embodiments, an n-gram may be represented in the vector space700as a vector referred to as a term embedding. Each vector may comprise coordinates corresponding to a particular point in the vector space700(i.e., the terminal point of the vector). As an example and not by way of limitation, vectors710,720, and730may be represented as points in the vector space700, as illustrated inFIG.7. An n-gram may be mapped to a respective vector representation. As an example and not by way of limitation, n-grams t1 and t2 may be mapped to vectors v1 and v2 in the vector space700, respectively, by applying a function π defined by a dictionary, such that v1=π(t1) and v2=π(t2). As another example and not by way of limitation, a dictionary trained to map text to a vector representation may be utilized, or such a dictionary may be itself generated via training. As another example and not by way of limitation, a word-embeddings model may be used to map an n-gram to a vector representation in the vector space700. In particular embodiments, an n-gram may be mapped to a vector representation in the vector space700by using a machine learning model (e.g., a neural network). The machine learning model may have been trained using a sequence of training data (e.g., a corpus of objects each comprising n-grams).
In particular embodiments, an object may be represented in the vector space700as a vector referred to as a feature vector or an object embedding. As an example and not by way of limitation, objects e1 and e2 may be mapped to vectors v1 and v2 in the vector space700, respectively, by applying a function π, such that v1=π(e1) and v2=π(e2). In particular embodiments, an object may be mapped to a vector based on one or more properties, attributes, or features of the object, relationships of the object with other objects, or any other suitable information associated with the object. As an example and not by way of limitation, a function π may map objects to vectors by feature extraction, which may start from an initial set of measured data and build derived values (e.g., features). As an example and not by way of limitation, an object comprising a video or an image may be mapped to a vector by using an algorithm to detect or isolate various desired portions or shapes of the object. Features used to calculate the vector may be based on information obtained from edge detection, corner detection, blob detection, ridge detection, scale-invariant feature transformation, edge direction, changing intensity, autocorrelation, motion detection, optical flow, thresholding, blob extraction, template matching, Hough transformation (e.g., lines, circles, ellipses, arbitrary shapes), or any other suitable information. As another example and not by way of limitation, an object comprising audio data may be mapped to a vector based on features such as a spectral slope, a tonality coefficient, an audio spectrum centroid, an audio spectrum envelope, a Mel-frequency cepstrum, or any other suitable information. In particular embodiments, when an object has data that is either too large to be efficiently processed or comprises redundant data, a function π may map the object to a vector using a transformed reduced set of features (e.g., feature selection). In particular embodiments, a function π may map an object e to a vector π(e) based on one or more n-grams associated with object e. Although this disclosure describes representing an n-gram or an object in a vector space in a particular manner, this disclosure contemplates representing an n-gram or an object in a vector space in any suitable manner. In particular embodiments, the social-networking system160may calculate a similarity metric of vectors in vector space700. A similarity metric may be a cosine similarity, a Minkowski distance, a Mahalanobis distance, a Jaccard similarity coefficient, or any suitable similarity metric. As an example and not by way of limitation, a similarity metric of v1 and v2 may be a cosine similarity (v1·v2)/(‖v1‖ ‖v2‖). As another example and not by way of limitation, a similarity metric of v1 and v2 may be a Euclidean distance ‖v1−v2‖. A similarity metric of two vectors may represent how similar the two objects or n-grams corresponding to the two vectors, respectively, are to one another, as measured by the distance between the two vectors in the vector space700. As an example and not by way of limitation, vector710and vector720may correspond to objects that are more similar to one another than the objects corresponding to vector710and vector730, based on the distance between the respective vectors. Although this disclosure describes calculating a similarity metric between vectors in a particular manner, this disclosure contemplates calculating a similarity metric between vectors in any suitable manner.
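As a brief illustration of the two example similarity metrics above, the following sketch computes the cosine similarity (v1·v2)/(‖v1‖ ‖v2‖) and the Euclidean distance ‖v1−v2‖ for two toy embeddings; the vectors themselves are arbitrary placeholders:

```python
# Minimal sketch of the similarity metrics described above for two
# embeddings in a d-dimensional vector space (here d = 3).
import numpy as np

def cosine_similarity(v1, v2):
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def euclidean_distance(v1, v2):
    return float(np.linalg.norm(v1 - v2))

v1 = np.array([0.2, 0.7, 0.1])   # e.g., embedding of one n-gram
v2 = np.array([0.25, 0.6, 0.2])  # e.g., embedding of a similar n-gram
print(cosine_similarity(v1, v2), euclidean_distance(v1, v2))
```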
More information on vector spaces, embeddings, feature vectors, and similarity metrics may be found in U.S. patent application Ser. No. 14/949,436, filed 23 Nov. 2015, U.S. patent application Ser. No. 15/286,315, filed 5 Oct. 2016, and U.S. patent application Ser. No. 15/365,789, filed 30 Nov. 2016, each of which is incorporated by reference.
Artificial Neural Networks
FIG.8illustrates an example artificial neural network (“ANN”)800. In particular embodiments, an ANN may refer to a computational model comprising one or more nodes. Example ANN800may comprise an input layer810, hidden layers820,830,840, and an output layer850. Each layer of the ANN800may comprise one or more nodes, such as a node805or a node815. In particular embodiments, each node of an ANN may be connected to another node of the ANN. As an example and not by way of limitation, each node of the input layer810may be connected to one or more nodes of the hidden layer820. In particular embodiments, one or more nodes may be a bias node (e.g., a node in a layer that is not connected to and does not receive input from any node in a previous layer). In particular embodiments, each node in each layer may be connected to one or more nodes of a previous or subsequent layer. AlthoughFIG.8depicts a particular ANN with a particular number of layers, a particular number of nodes, and particular connections between nodes, this disclosure contemplates any suitable ANN with any suitable number of layers, any suitable number of nodes, and any suitable connections between nodes. As an example and not by way of limitation, althoughFIG.8depicts a connection between each node of the input layer810and each node of the hidden layer820, one or more nodes of the input layer810may not be connected to one or more nodes of the hidden layer820. In particular embodiments, an ANN may be a feedforward ANN (e.g., an ANN with no cycles or loops where communication between nodes flows in one direction beginning with the input layer and proceeding to successive layers). As an example and not by way of limitation, the input to each node of the hidden layer820may comprise the output of one or more nodes of the input layer810. As another example and not by way of limitation, the input to each node of the output layer850may comprise the output of one or more nodes of the hidden layer840. In particular embodiments, an ANN may be a deep neural network (e.g., a neural network comprising at least two hidden layers). In particular embodiments, an ANN may be a deep residual network. A deep residual network may be a feedforward ANN comprising hidden layers organized into residual blocks. The input into each residual block after the first residual block may be a function of the output of the previous residual block and the input of the previous residual block. As an example and not by way of limitation, the input into residual block N may be F(x)+x, where F(x) may be the output of residual block N−1 and x may be the input into residual block N−1. Although this disclosure describes a particular ANN, this disclosure contemplates any suitable ANN. In particular embodiments, an activation function may correspond to each node of an ANN. An activation function of a node may define the output of a node for a given input. In particular embodiments, an input to a node may comprise a set of inputs. As an example and not by way of limitation, an activation function may be an identity function, a binary step function, a logistic function, or any other suitable function.
As another example and not by way of limitation, an activation function for a node k may be the sigmoid function Fk(sk)=1/(1+e^(−sk)), the hyperbolic tangent function Fk(sk)=(e^(sk)−e^(−sk))/(e^(sk)+e^(−sk)), the rectifier Fk(sk)=max(0, sk), or any other suitable function Fk(sk), where sk may be the effective input to node k. In particular embodiments, the input of an activation function corresponding to a node may be weighted. Each node may generate output using a corresponding activation function based on weighted inputs. In particular embodiments, each connection between nodes may be associated with a weight. As an example and not by way of limitation, a connection825between the node805and the node815may have a weighting coefficient of 0.4, which may indicate that 0.4 multiplied by the output of the node805is used as an input to the node815. As another example and not by way of limitation, the output yk of node k may be yk=Fk(sk), where Fk may be the activation function corresponding to node k, sk=Σj(wjk xj) may be the effective input to node k, xj may be the output of a node j connected to node k, and wjk may be the weighting coefficient between node j and node k. In particular embodiments, the input to nodes of the input layer may be based on a vector representing an object. Although this disclosure describes particular inputs to and outputs of nodes, this disclosure contemplates any suitable inputs to and outputs of nodes. Moreover, although this disclosure may describe particular connections and weights between nodes, this disclosure contemplates any suitable connections and weights between nodes. In particular embodiments, an ANN may be trained using training data. As an example and not by way of limitation, training data may comprise inputs to the ANN800and an expected output. As another example and not by way of limitation, training data may comprise vectors each representing a training object and an expected label for each training object. In particular embodiments, training an ANN may comprise modifying the weights associated with the connections between nodes of the ANN by optimizing an objective function. As an example and not by way of limitation, a training method may be used (e.g., the conjugate gradient method, the gradient descent method, stochastic gradient descent) to backpropagate the sum-of-squares error measured as a distance between each vector representing a training object (e.g., using a cost function that minimizes the sum-of-squares error). In particular embodiments, an ANN may be trained using a dropout technique. As an example and not by way of limitation, one or more nodes may be temporarily omitted (e.g., receive no input and generate no output) while training. For each training object, one or more nodes of the ANN may have some probability of being omitted. The nodes that are omitted for a particular training object may be different than the nodes omitted for other training objects (e.g., the nodes may be temporarily omitted on an object-by-object basis). Although this disclosure describes training an ANN in a particular manner, this disclosure contemplates training an ANN in any suitable manner.
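To make the per-node computation yk=Fk(Σj(wjk xj)) above concrete, here is a minimal sketch of a feedforward pass through fully connected layers with a sigmoid activation; the layer sizes and random weights are illustrative only:

```python
# Minimal sketch of feedforward propagation: each layer computes
# y = F(x @ W), where F is the sigmoid activation described above.
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def forward(x, weights):
    """Propagate an input vector through successive fully connected layers."""
    for W in weights:  # W has shape (n_inputs, n_outputs)
        x = sigmoid(x @ W)
    return x

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(3, 2))]  # two layers
output = forward(np.array([0.5, -1.0, 0.25, 2.0]), weights)
print(output)  # activations of the two output nodes
```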
Privacy
In particular embodiments, one or more objects (e.g., content or other types of objects) of a computing system may be associated with one or more privacy settings. The one or more objects may be stored on or otherwise associated with any suitable computing system or application, such as, for example, a social-networking system160, a client system130, an assistant system140, a third-party system170, a social-networking application, an assistant application, a messaging application, a photo-sharing application, or any other suitable computing system or application. Although the examples discussed herein are in the context of an online social network, these privacy settings may be applied to any other suitable computing system. Privacy settings (or “access settings”) for an object may be stored in any suitable manner, such as, for example, in association with the object, in an index on an authorization server, in another suitable manner, or any suitable combination thereof. A privacy setting for an object may specify how the object (or particular information associated with the object) can be accessed, stored, or otherwise used (e.g., viewed, shared, modified, copied, executed, surfaced, or identified) within the online social network. When privacy settings for an object allow a particular user or other entity to access that object, the object may be described as being “visible” with respect to that user or other entity. As an example and not by way of limitation, a user of the online social network may specify privacy settings for a user-profile page that identify a set of users that may access work-experience information on the user-profile page, thus excluding other users from accessing that information. In particular embodiments, privacy settings for an object may specify a “blocked list” of users or other entities that should not be allowed to access certain information associated with the object. In particular embodiments, the blocked list may include third-party entities. The blocked list may specify one or more users or entities for which an object is not visible. As an example and not by way of limitation, a user may specify a set of users who may not access photo albums associated with the user, thus excluding those users from accessing the photo albums (while also possibly allowing certain users not within the specified set of users to access the photo albums). In particular embodiments, privacy settings may be associated with particular social-graph elements. Privacy settings of a social-graph element, such as a node or an edge, may specify how the social-graph element, information associated with the social-graph element, or objects associated with the social-graph element can be accessed using the online social network. As an example and not by way of limitation, a particular photo may have a privacy setting specifying that the photo may be accessed only by users tagged in the photo and friends of the users tagged in the photo. In particular embodiments, privacy settings may allow users to opt in to or opt out of having their content, information, or actions stored/logged by the social-networking system160or assistant system140or shared with other systems (e.g., a third-party system170). Although this disclosure describes using particular privacy settings in a particular manner, this disclosure contemplates using any suitable privacy settings in any suitable manner. In particular embodiments, the social-networking system160may present a “privacy wizard” (e.g., within a webpage, a module, one or more dialog boxes, or any other suitable interface) to the first user to assist the first user in specifying one or more privacy settings.
The privacy wizard may display instructions, suitable privacy-related information, current privacy settings, one or more input fields for accepting one or more inputs from the first user specifying a change or confirmation of privacy settings, or any suitable combination thereof. In particular embodiments, the social-networking system160may offer a “dashboard” functionality to the first user that may display, to the first user, current privacy settings of the first user. The dashboard functionality may be displayed to the first user at any appropriate time (e.g., following an input from the first user summoning the dashboard functionality, following the occurrence of a particular event or trigger action). The dashboard functionality may allow the first user to modify one or more of the first user's current privacy settings at any time, in any suitable manner (e.g., redirecting the first user to the privacy wizard). Privacy settings associated with an object may specify any suitable granularity of permitted access or denial of access. As an example and not by way of limitation, access or denial of access may be specified for particular users (e.g., only me, my roommates, my boss), users within a particular degree-of-separation (e.g., friends, friends-of-friends), user groups (e.g., the gaming club, my family), user networks (e.g., employees of particular employers, students or alumni of a particular university), all users (“public”), no users (“private”), users of third-party systems170, particular applications (e.g., third-party applications, external websites), other suitable entities, or any suitable combination thereof. Although this disclosure describes particular granularities of permitted access or denial of access, this disclosure contemplates any suitable granularities of permitted access or denial of access. In particular embodiments, one or more servers162may be authorization/privacy servers for enforcing privacy settings. In response to a request from a user (or other entity) for a particular object stored in a data store164, the social-networking system160may send a request to the data store164for the object. The request may identify the user associated with the request and the object may be sent only to the user (or a client system130of the user) if the authorization server determines that the user is authorized to access the object based on the privacy settings associated with the object. If the requesting user is not authorized to access the object, the authorization server may prevent the requested object from being retrieved from the data store164or may prevent the requested object from being sent to the user. In the search-query context, an object may be provided as a search result only if the querying user is authorized to access the object, e.g., if the privacy settings for the object allow it to be surfaced to, discovered by, or otherwise visible to the querying user. In particular embodiments, an object may represent content that is visible to a user through a newsfeed of the user. As an example and not by way of limitation, one or more objects may be visible through a user's “Trending” page. In particular embodiments, an object may correspond to a particular user. The object may be content associated with the particular user, or may be the particular user's account or information stored on the social-networking system160, or other computing system.
As an example and not by way of limitation, a first user may view one or more second users of an online social network through a “People You May Know” function of the online social network, or by viewing a list of friends of the first user. As an example and not by way of limitation, a first user may specify that they do not wish to see objects associated with a particular second user in their newsfeed or friends list. If the privacy settings for the object do not allow it to be surfaced to, discovered by, or visible to the user, the object may be excluded from the search results. Although this disclosure describes enforcing privacy settings in a particular manner, this disclosure contemplates enforcing privacy settings in any suitable manner. In particular embodiments, different objects of the same type associated with a user may have different privacy settings. Different types of objects associated with a user may have different types of privacy settings. As an example and not by way of limitation, a first user may specify that the first user's status updates are public, but any images shared by the first user are visible only to the first user's friends on the online social network. As another example and not by way of limitation, a user may specify different privacy settings for different types of entities, such as individual users, friends-of-friends, followers, user groups, or corporate entities. As another example and not by way of limitation, a first user may specify a group of users that may view videos posted by the first user, while keeping the videos from being visible to the first user's employer. In particular embodiments, different privacy settings may be provided for different user groups or user demographics. As an example and not by way of limitation, a first user may specify that other users who attend the same university as the first user may view the first user's pictures, but that other users who are family members of the first user may not view those same pictures. In particular embodiments, the social-networking system160may provide one or more default privacy settings for each object of a particular object-type. A privacy setting for an object that is set to a default may be changed by a user associated with that object. As an example and not by way of limitation, all images posted by a first user may have a default privacy setting of being visible only to friends of the first user and, for a particular image, the first user may change the privacy setting for the image to be visible to friends and friends-of-friends. In particular embodiments, privacy settings may allow a first user to specify (e.g., by opting out, by not opting in) whether the social-networking system160or assistant system140may receive, collect, log, or store particular objects or information associated with the user for any purpose. In particular embodiments, privacy settings may allow the first user to specify whether particular applications or processes may access, store, or use particular objects or information associated with the user. The privacy settings may allow the first user to opt in or opt out of having objects or information accessed, stored, or used by specific applications or processes. The social-networking system160or assistant system140may access such information in order to provide a particular function or service to the first user, without the social-networking system160or assistant system140having access to that information for any other purposes. 
Before accessing, storing, or using such objects or information, the social-networking system160or assistant system140may prompt the user to provide privacy settings specifying which applications or processes, if any, may access, store, or use the object or information prior to allowing any such action. As an example and not by way of limitation, a first user may transmit a message to a second user via an application related to the online social network (e.g., a messaging app), and may specify privacy settings that such messages should not be stored by the social-networking system160or assistant system140. In particular embodiments, a user may specify whether particular types of objects or information associated with the first user may be accessed, stored, or used by the social-networking system160or assistant system140. As an example and not by way of limitation, the first user may specify that images sent by the first user through the social-networking system160or assistant system140may not be stored by the social-networking system160or assistant system140. As another example and not by way of limitation, a first user may specify that messages sent from the first user to a particular second user may not be stored by the social-networking system160or assistant system140. As yet another example and not by way of limitation, a first user may specify that all objects sent via a particular application may be saved by the social-networking system160or assistant system140. In particular embodiments, privacy settings may allow a first user to specify whether particular objects or information associated with the first user may be accessed from particular client systems130or third-party systems170. The privacy settings may allow the first user to opt in or opt out of having objects or information accessed from a particular device (e.g., the phone book on a user's smart phone), from a particular application (e.g., a messaging app), or from a particular system (e.g., an email server). The social-networking system160or assistant system140may provide default privacy settings with respect to each device, system, or application, and/or the first user may be prompted to specify a particular privacy setting for each context. As an example and not by way of limitation, the first user may utilize a location-services feature of the social-networking system160or assistant system140to provide recommendations for restaurants or other places in proximity to the user. The first user's default privacy settings may specify that the social-networking system160or assistant system140may use location information provided from a client system130of the first user to provide the location-based services, but that the social-networking system160or assistant system140may not store the location information of the first user or provide it to any third-party system170. The first user may then update the privacy settings to allow location information to be used by a third-party image-sharing application in order to geo-tag photos. In particular embodiments, privacy settings may allow a user to specify one or more geographic locations from which objects can be accessed. Access or denial of access to the objects may depend on the geographic location of a user who is attempting to access the objects. As an example and not by way of limitation, a user may share an object and specify that only users in the same city may access or view the object. 
As another example and not by way of limitation, a first user may share an object and specify that the object is visible to second users only while the first user is in a particular location. If the first user leaves the particular location, the object may no longer be visible to the second users. As another example and not by way of limitation, a first user may specify that an object is visible only to second users within a threshold distance from the first user. If the first user subsequently changes location, the original second users with access to the object may lose access, while a new group of second users may gain access as they come within the threshold distance of the first user. In particular embodiments, the social-networking system160or assistant system140may have functionalities that may use, as inputs, personal or biometric information of a user for user-authentication or experience-personalization purposes. A user may opt to make use of these functionalities to enhance their experience on the online social network. As an example and not by way of limitation, a user may provide personal or biometric information to the social-networking system160or assistant system140. The user's privacy settings may specify that such information may be used only for particular processes, such as authentication, and further specify that such information may not be shared with any third-party system170or used for other processes or applications associated with the social-networking system160or assistant system140. As another example and not by way of limitation, the social-networking system160may provide a functionality for a user to provide voice-print recordings to the online social network. As an example and not by way of limitation, if a user wishes to utilize this function of the online social network, the user may provide a voice recording of his or her own voice to provide a status update on the online social network. The recording of the voice-input may be compared to a voice print of the user to determine what words were spoken by the user. The user's privacy setting may specify that such voice recording may be used only for voice-input purposes (e.g., to authenticate the user, to send voice messages, to improve voice recognition in order to use voice-operated features of the online social network), and further specify that such voice recording may not be shared with any third-party system170or used by other processes or applications associated with the social-networking system160. As another example and not by way of limitation, the social-networking system160may provide a functionality for a user to provide a reference image (e.g., a facial profile, a retinal scan) to the online social network. The online social network may compare the reference image against a later-received image input (e.g., to authenticate the user, to tag the user in photos). The user's privacy setting may specify that such image may be used only for a limited purpose (e.g., authentication, tagging the user in photos), and further specify that such image may not be shared with any third-party system170or used by other processes or applications associated with the social-networking system160.
Systems and Methods
FIG.9illustrates an example computer system900. In particular embodiments, one or more computer systems900perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems900provide functionality described or illustrated herein.
In particular embodiments, software running on one or more computer systems900performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems900. Herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, reference to a computer system may encompass one or more computer systems, where appropriate. This disclosure contemplates any suitable number of computer systems900. This disclosure contemplates computer system900taking any suitable physical form. As an example and not by way of limitation, computer system900may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, or a combination of two or more of these. Where appropriate, computer system900may include one or more computer systems900; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems900may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems900may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems900may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. In particular embodiments, computer system900includes a processor902, memory904, storage906, an input/output (I/O) interface908, a communication interface910, and a bus912. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. In particular embodiments, processor902includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor902may retrieve (or fetch) the instructions from an internal register, an internal cache, memory904, or storage906; decode and execute them; and then write one or more results to an internal register, an internal cache, memory904, or storage906. In particular embodiments, processor902may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor902including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor902may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory904or storage906, and the instruction caches may speed up retrieval of those instructions by processor902.
Data in the data caches may be copies of data in memory904or storage906for instructions executing at processor902to operate on; the results of previous instructions executed at processor902for access by subsequent instructions executing at processor902or for writing to memory904or storage906; or other suitable data. The data caches may speed up read or write operations by processor902. The TLBs may speed up virtual-address translation for processor902. In particular embodiments, processor902may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor902including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor902may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors902. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. In particular embodiments, memory904includes main memory for storing instructions for processor902to execute or data for processor902to operate on. As an example and not by way of limitation, computer system900may load instructions from storage906or another source (such as, for example, another computer system900) to memory904. Processor902may then load the instructions from memory904to an internal register or internal cache. To execute the instructions, processor902may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor902may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor902may then write one or more of those results to memory904. In particular embodiments, processor902executes only instructions in one or more internal registers or internal caches or in memory904(as opposed to storage906or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory904(as opposed to storage906or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor902to memory904. Bus912may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor902and memory904and facilitate accesses to memory904requested by processor902. In particular embodiments, memory904includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory904may include one or more memories904, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. In particular embodiments, storage906includes mass storage for data or instructions. As an example and not by way of limitation, storage906may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage906may include removable or non-removable (or fixed) media, where appropriate. Storage906may be internal or external to computer system900, where appropriate. In particular embodiments, storage906is non-volatile, solid-state memory. 
In particular embodiments, storage906includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage906taking any suitable physical form. Storage906may include one or more storage control units facilitating communication between processor902and storage906, where appropriate. Where appropriate, storage906may include one or more storages906. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. In particular embodiments, I/O interface908includes hardware, software, or both, providing one or more interfaces for communication between computer system900and one or more I/O devices. Computer system900may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system900. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces908for them. Where appropriate, I/O interface908may include one or more device or software drivers enabling processor902to drive one or more of these I/O devices. I/O interface908may include one or more I/O interfaces908, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface. In particular embodiments, communication interface910includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system900and one or more other computer systems900or one or more networks. As an example and not by way of limitation, communication interface910may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface910for it. As an example and not by way of limitation, computer system900may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system900may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system900may include any suitable communication interface910for any of these networks, where appropriate. Communication interface910may include one or more communication interfaces910, where appropriate. 
Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface. In particular embodiments, bus912includes hardware, software, or both coupling components of computer system900to each other. As an example and not by way of limitation, bus912may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus912may include one or more buses912, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. Miscellaneous Herein, "or" is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, "A or B" means "A, B, or both," unless expressly indicated otherwise or indicated otherwise by context. Moreover, "and" is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, "A and B" means "A and B, jointly or severally," unless expressly indicated otherwise or indicated otherwise by context. The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend.
Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages. | 224,953 |
11861316 | DETAILED DESCRIPTION The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items or features. Natural language input data described herein may take any form sufficient to be converted into a computer- or software-based machine language for processing. As such, the inputs to an intelligent virtual assistant may include written, typed, oral, audio, or gestural input, or any other communication form. In order to identify a variable response (or "reply") to a particular user query, the techniques may take into account a context associated with a query in two different locations. First, the techniques may take into account the context associated with a query when determining the intent or meaning of the user's query. In addition, after identifying the user's intent with use of the context, the techniques may again take this context into account when determining a response or reply to provide back to the user. In some instances, the techniques take the same pieces of context into account when identifying the intent and the response, while in other instances the techniques may take into account different pieces of context. By taking context into account in both locations, the techniques are able to provide responses that more closely emulate human-to-human conversation than traditional techniques for identifying virtual-assistant responses do. To illustrate, a user may navigate to a site of a service provider that includes a virtual assistant, either on the site or adjacent to the site. The virtual assistant may include an avatar that resembles a human representative of the service provider (e.g., that represents a human face). In addition, the virtual assistant may include an input mechanism, such as a text box, in which a user may input a query. In some instances, the user may type the query, while in other instances the user may issue the query audibly or in any other manner. In either case, the query may comprise a question (e.g., "Can I upgrade my seat assignment on my next flight?") or may simply comprise one or more keywords or a phrase (e.g., "seat assignment upgrades"). In response to receiving the query, the techniques parse the query and utilize natural language processing techniques to identify one or more concepts expressed therein. In one example, the concepts may be based at least in part on keywords within the query, although the concepts may additionally be determined using a richer process as discussed below. In one basic example, these concepts may comprise keywords, such as "upgrade", "seat assignment", "flight", and the like. After identifying the concept(s) expressed in the query, the techniques may identify a context associated with the query. The context associated with the query may include a context associated with the user, a context associated with the user's session on the site of the service provider, or the like. In some instances, a context is expressed as a value of one or more variables, such as whether or not a user has signed in with a site (e.g., "is_signed_in=true" or "is_signed_in=false"). A context associated with the query may comprise a value associated with any type of variable that aids in understanding the meaning of a particular query provided by the user.
Example, non-limiting pieces of context may include:
- whether or not the user has signed in with the site of the service provider (e.g., with a user name and password);
- a status of the user with the service provider (e.g., based on miles flown, a type of membership of the user, a type of subscription purchased by the user);
- a page of the site from which the user provides the query to the virtual assistant;
- how long the user has remained on the page of the site from which the user provides the query to the virtual assistant;
- a navigation history of the user during the session prior to the user providing the query to the virtual assistant;
- a location of a cursor on the site when the user provides the query to the virtual assistant;
- a prior query provided by the user to the virtual assistant during the session or a prior session;
- a time of day at which the user provides the query to the virtual assistant;
- a date on which the user provides the query to the virtual assistant;
- an age of the user;
- a location of the user (e.g., a geolocation of the user indicated by the device on which the user provides the query);
- a device type from which the user accesses the site (e.g., a mobile device, a desktop computer, etc.);
- a language associated with the user (e.g., a language of the query submitted by the user);
- how the user interacts with the virtual assistant (e.g., whether the user submits a query textually, using voice input, etc.);
- how the interaction with the virtual assistant is initiated (e.g., via user selection of a link or graphic, via the virtual assistant proactively engaging the user, etc.);
- past interaction information between the user and the virtual assistant, either during the current session or during previous sessions (e.g., previous queries and responses, etc.);
- how the user has been communicating recently (e.g., via text messaging, via email, etc.);
- information derived from the user's location (e.g., current, forecasted, or past weather at the location, major sports teams at the location, nearby restaurants, etc.);
- current topics of interest, either to the user or generally (e.g., trending microblog or blog topics, current news, recent microblog or blog posts made by the user, etc.).
After identifying one or more pieces of context, such as one or more of those pieces of context listed above, the techniques may map the combination of: (1) the identified concept(s), and (2) the identified piece(s) of context to one of multiple different intents, each of which represents the techniques' best guess as to what exactly the user is asking about. For instance, if a user provides a query stating "what are your store hours?" and the user is determined to be within one block of a brick-and-mortar location of the service provider, then the techniques may determine that the user's intent is to determine whether or not the store is open for the user to enter at this moment. If, however, the user provides a query of "general store hours" and the user is determined to be in a different city from a brick-and-mortar location of the service provider, then the techniques may determine that the user's intent is to learn about the general store hours throughout the week rather than whether or not the store is open at the instant that the user provides the query. In this example, the techniques may map the received queries to two different intents even though the identified concept (store hours) is the same or very similar.
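The store-hours example can be made concrete in code. The following Python fragment is a minimal sketch of mapping an identified concept set plus context to one of multiple intents; the intent names, the context variable, and the one-block threshold are illustrative assumptions, not details taken from the disclosure.

```python
# Minimal sketch: mapping (concept set, context) to one of multiple intents.
# Intent names, the context variable, and the one-block threshold are hypothetical.

INTENT_TABLE = [
    # (required concepts, context predicate, intent)
    ({"store hours"}, lambda ctx: ctx.get("blocks_from_store", 999) <= 1,
     "store_open_right_now"),
    ({"store hours"}, lambda ctx: ctx.get("blocks_from_store", 999) > 1,
     "general_store_hours"),
]

def map_to_intent(concepts, context):
    """Return the first intent whose concept set and context predicate both match."""
    for required, predicate, intent in INTENT_TABLE:
        if required <= concepts and predicate(context):
            return intent
    return None

# The same concept maps to two different intents depending on context:
print(map_to_intent({"store hours"}, {"blocks_from_store": 0}))   # store_open_right_now
print(map_to_intent({"store hours"}, {"blocks_from_store": 40}))  # general_store_hours
```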
After mapping the user's query to one of multiple different intents based on both the identified concepts and the context associated with the query, the techniques may then map the intent to one of multiple different responses associated with the intent. Returning to the example of the user within one block of a merchant providing the query "what are your store hours?", recall that the techniques have already mapped this query and surrounding context (e.g., location) to an intent indicating that the user is trying to determine whether or not she is able to enter the store at the instant time. Thereafter, the techniques may take into account the same or a different context of the query when identifying a response to provide to the user. For instance, envision that the user issues this query at 8:50 pm and the store closes at 9:00 pm. Based on this context and the previously determined intent, the techniques may provide a response to the user stating "We close in ten minutes! Hurry and come see us!" If, however, the user issues the query at 9:05 pm, then the techniques may provide a response stating "We just missed you! However, we are open tomorrow from 8 am to 9 pm." In another example, a user may provide an initial query asking "may I upgrade my seat assignment on my next flight?" In response, the techniques may first map the query to an intent (based on context) and then again reference one or more pieces of context prior to determining a response to the query. For instance, envision that the techniques determine that the value of the variable "is_signed_in" is true and that the value of the variable "Gold_Customer" is also true, meaning that the user is in fact signed in with the service provider and is a "gold customer" at the service provider. In this example, the intent coupled with this context may map to a particular response, which may indicate that all gold members are entitled to available upgrades. If, however, the value of the variable "Gold_Customer" is false but the value of the variable "Silver_Customer" is true, then the intent coupled with this different context may map to a response indicating that silver customers are entitled to upgrades in certain circumstances. Furthermore, the techniques could take into account additional context when sending the response, such as a time that the query is received and a time of the user's next flight. If these times indicate that the user's flight is about to take off, then the techniques could use this context to switch the communication channel between the user and virtual assistant. For instance, if the user submits the query via a web interface, but the techniques determine that the user's flight is about to take off, then the techniques may send the response via a text message in addition or in the alternative to providing the response via the web interface. The techniques may also take into account past interactions between the user and the virtual assistant, communication channels the user regularly uses, communication channels the user has recently been using, and the like. As described in detail below, a response provided back to a user may include content and/or action(s). For instance, a response may include content such as a textual answer or information, an audible answer or information, one or more hyperlinks to pages that have been determined to be related to the query, or the like. In some instances, the techniques may provide some or all of this response via the virtual assistant.
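To illustrate the second mapping stage, here is a hedged Python sketch of choosing among an intent's candidate responses using context. The variable names follow the examples above (is_signed_in, Gold_Customer, Silver_Customer), while the closing time, the message strings, and the channel-switch logic are assumptions for illustration only.

```python
# Sketch: mapping an already-identified intent plus context to one of several
# responses. Store hours, message text, and the channel switch are hypothetical.
from datetime import time

CLOSING = time(21, 0)  # assumed 9:00 pm closing time

def choose_response(intent, ctx):
    if intent == "store_open_right_now":
        now = ctx["query_time"]
        minutes_left = (CLOSING.hour - now.hour) * 60 + (CLOSING.minute - now.minute)
        if 0 < minutes_left <= 10:
            text = "We close in ten minutes! Hurry and come see us!"
        elif minutes_left <= 0:
            text = "We just missed you! However, we are open tomorrow from 8 am to 9 pm."
        else:
            text = "Yes, we are open right now."
    elif intent == "seat_upgrade":
        if ctx.get("is_signed_in") and ctx.get("Gold_Customer"):
            text = "All gold members are entitled to available upgrades."
        elif ctx.get("Silver_Customer"):
            text = "Silver customers are entitled to upgrades in certain circumstances."
        else:
            text = "Please sign in so I can check your upgrade eligibility."
    else:
        text = "How else may I help you?"
    # Context can also switch the delivery channel, e.g., just before takeoff.
    channel = "sms" if ctx.get("flight_departing_soon") else "web"
    return {"text": text, "channel": channel}

print(choose_response("store_open_right_now", {"query_time": time(20, 50)}))
```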
For instance, the returned content may include text and one or more links that are written as a narrative from the perspective of the virtual assistant. This content may also be addressed to or otherwise tailored to the particular user, if recognized (e.g., "Yes, John, as a Gold Customer you are entitled to a seat upgrade, and I have provided some links below that may be of interest to you . . . "). In addition or in the alternative, the techniques may provide information audibly that appears to originate from the virtual assistant. Additionally or alternatively, the techniques may perform an action on behalf of the user in response to receiving the query, such as causing a user's electronic device to navigate to a page deemed related to the query (e.g., to a page associated with Gold Customer upgrade policies), may alter a reservation or order on behalf of the user (e.g., upgrade the user's seat assignment), may initiate a request on behalf of the user (e.g., request the upgrade), may initiate a communication on behalf of the user, may purchase an item on behalf of the user, or may perform any other similar or different type of action in response to receiving the query. By taking into account the context of a query both: (1) for the purposes of identifying an intent, and (2) after identifying the intent, for the purposes of identifying a response, the techniques described herein allow for interactions between virtual assistants and end users that more closely mirror human-to-human interactions. These techniques are described below with reference to an example architecture. It is to be appreciated, however, that other similar and/or different architectures may also implement these techniques. Example Architecture FIG.1illustrates an example architecture100that includes a user102operating an electronic device104to render content from a site of a service provider106. The site may comprise a website, an intranet site, a downloaded application, or any other platform on which the user102may access information from the service provider106. In this example, the user102accesses the site over a network108, which may represent any type of communication network, including a local-area network, a wide-area network, the Internet, a wireless network, a wireless wide-area network (WWAN), a cable television network, a telephone network, a cellular communications network, combinations of the foregoing, and/or the like. As illustrated, the device104renders a user interface (UI)110that includes content112from the service provider106and content114from a virtual-assistant service116. In some instances, the content114may be served from servers of the service provider106as part of the site, while in other instances the content114may be served from servers of the virtual-assistant service116atop or adjacent to the site. In either instance, the content112of the site may include any sort of details or information associated with the service provider106, while the content114may include a virtual assistant (e.g., an avatar that resembles a human representative of the service provider106) along with an interface that allows the user102to enter a query to the virtual assistant. As described in further detail below, the user102may enter a query into the interface provided by the virtual assistant.
In response to receiving this query either from the computing device104, from the service provider106, or in some other manner, a variable-response module118of the virtual-assistant service116may identify a response to provide to the user102at least partly via the virtual assistant. For instance, the variable-response module118may map the query to an intent based on a context of the query and may then map the intent to a response, again with reference to the context of the query. After identifying the response, the virtual-assistant service116and/or the service provider106may provide the response to the user102. As illustrated, the service provider106may comprise one or more computing devices (e.g., one or more servers) that include or otherwise have access to one or more processors120, one or more network interfaces122, and memory124, which stores content126of the site of the service provider106. The virtual-assistant service116, meanwhile, may also comprise one or more computing devices (e.g., one or more servers) that include or otherwise have access to one or more processors128, one or more network interfaces130, and memory132, which stores the variable-response module118. Finally, the electronic device104of the user102may include or otherwise have access to one or more processors134, one or more network interfaces136, and memory138, which stores a client application140for rendering the UI110. The client application may comprise a browser for rendering the site content126, a downloaded application provided by the service provider106, or any other client application configured to output content from the service provider106. WhileFIG.1illustrates the service provider106storing the site content126, in some instances the client application140may store some or all of this content locally on the device104. Furthermore, whileFIG.1illustrates the electronic device104as a desktop computer, the electronic device104may comprise any sort of device, such as a mobile phone, a multifunctional device, a laptop computer, a personal digital assistant (PDA), or the like. In each instance, the electronic device104may include various additional components, such as one or more output devices (e.g., displays, speakers, etc.), one or more input devices (e.g., a keyboard, a touchscreen, etc.), an operating system, system busses, and the like. The memory138(and other memories described herein) stores a number of modules and data, and may include volatile and/or nonvolatile memory, removable and/or non-removable media, and the like, which may be implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Such memory includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device. WhileFIG.1illustrates one example architecture for providing variable responses, it is to be appreciated that multiple other architectures may implement the described techniques. 
For instance, whileFIG.1illustrates the service provider106as separate from the virtual-assistant service116, in some instances some or all of these components may reside in a common location, spread out amongst multiple additional entities, located on the electronic device104, and/or the like. Example Variable Responses FIGS.2A-Bcollectively illustrate a high-level communication flow200between the example electronic device104of the user102and the service provider106and/or the virtual-assistant service116. As illustrated, the electronic device104renders a user interface (UI)202that includes content204from the service provider106and content206from the virtual-assistant service116. In some instances, the virtual-assistant service116serves the content206to the device104, while in other instances the service provider106serves the content206, either as part of the site content204or after receiving the content from a separate virtual-assistant service116. In either instance, the example content204here represents a home page of an example service provider ("Vista Airlines"). The content includes a title of the page, a link to current promotions, a link to book a flight, and the like. The content206, meanwhile, collectively comprises a virtual assistant that is configured to emulate human-to-human interaction between the example user102and the service provider106. In this example, the content206includes an avatar208that depicts a human representative of the service provider, as well as text210introducing the avatar208as a virtual assistant ("Hello, I'm Steve, your virtual assistant. Please enter any question you have below:"). The content206also includes an input mechanism, here in the form of a text box212, in which the user102is able to enter a query to the virtual assistant. In this example, the user102has entered the query in the form of a string of text214("Can I upgrade my seat assignment on my next flight?"). The user102may enter this query via a keyboard, audibly, or in any other manner. Finally, the example content206includes an icon216("Submit") that, when selected, allows the user102to submit the query to the service provider106and/or the virtual-assistant service116. As illustrated, the user102has in fact selected the icon216to submit the entered query to the provider106and/or the service116. In some instances, the device104provides the query214directly to the service provider106, which identifies an appropriate response and may provide this response back to the device104or to another device associated with the user. In other instances, meanwhile, the provider106may receive the query214, provide the query214to the service116, receive a response from the service116, and provide the response to the device104or to another device associated with the user. In still other instances, the device104provides the query214to the service116directly, which may identify a response or provide the query214to the provider106for identifying a response. The service116or the provider106may then provide the response to the device104or to another device associated with the user. Of course, while a few example communication flows have been described, it is to be appreciated that other communication flows are possible. In each instance, the query214sent to the provider106and/or the service116may comprise one or more concepts218and one or more pieces of context220.
The concepts218may be based, in part, on the words and phrases within the string of text entered by the user, while the context220may be based on any additional factors associated with the user, the device104, or the like. As described above, for instance, the context220may include whether or not the user is signed in with the service provider106, a status of the user102with the service provider, an age of the user102, a type of device from which the user102provides the query214, or the like. FIG.2Bcontinues the illustration and represents the service provider106and/or the virtual-assistant service116providing a response222for output on the electronic device104or on another electronic device associated with the user102. As described above and in further detail below, the provider106and/or the service116may have identified the response by first mapping the concepts218and the context220to an intent, and thereafter mapping the intent and the context220to the response222. As illustrated, the response222may comprise content224, one or more actions226to perform, or a combination thereof. FIG.2B, for instance, illustrates that the response222includes text228, a hyperlink230, and audio content232. The text228may comprise an answer or information otherwise pertaining to the user's query214. Here, for example, the text228states the following: "Thank you for your query, Mary. Our gold member upgrade policy is shown on the left. You may also find this link helpful:". As such, the provider106and/or the service116may have determined, via the context220, that the user102was in fact signed in with the service provider106when the user102submitted the query214and that the user102("Mary") has a status of "gold member" with the service provider106. In addition, the response222included the link (e.g., a hyperlink)230associated with the query and entitled "Request Upgrade". When the user102selects the link230, the electronic device104may navigate to a page at which the user102may request to upgrade her seat on her next flight. The audio content232, meanwhile, may comprise the same content as the text228, or may comprise different content in other examples. In some instances, the avatar (i.e., the visual representation of the virtual assistant) may appear to utter the audible content232, either based on the tone of the content232and/or based on the avatar appearing to speak the words within the content232. In addition, the response222may include one or more actions226for performance on behalf of the user102. Here, for instance, the response222has instructed the device104to navigate to a new page234of the site of the content provider, with this page being associated with the query214. In this example, the page234indicates the service provider's policy for upgrading gold members, like the user102. In other instances, the action226may comprise automatically upgrading the user's seat assignment, initiating a request to upgrade, or the like.
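Before turning to the mobile example of FIGS.3A-B, the shapes of the query and response payloads just described can be sketched as simple data structures. The field names below are illustrative assumptions keyed to the reference numerals (query214with concepts218and context220; response222with content224and actions226), not definitions from the disclosure.

```python
# Hedged sketch of the payloads exchanged above; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Query:  # cf. query 214 carrying concepts 218 and context 220
    text: str
    concepts: list = field(default_factory=list)
    context: dict = field(default_factory=dict)

@dataclass
class Response:  # cf. response 222 carrying content 224 and actions 226
    text: str = ""          # e.g., the narrative shown as text 228
    links: list = field(default_factory=list)    # e.g., the "Request Upgrade" link 230
    audio: str = ""         # content the avatar appears to utter (audio content 232)
    actions: list = field(default_factory=list)  # e.g., navigate to page 234

q = Query(text="Can I upgrade my seat assignment on my next flight?",
          concepts=["upgrade", "seat assignment", "flight"],
          context={"is_signed_in": True, "Gold_Customer": True})
r = Response(text="Thank you for your query, Mary. Our gold member upgrade "
                  "policy is shown on the left.",
             links=["Request Upgrade"],
             actions=[{"type": "navigate", "target": "gold-upgrade-policy"}])
```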
FIGS.3A-Bcollectively illustrate another high-level communication flow300between a mobile electronic device302of the user102and the service provider106and/or the virtual-assistant service116. Here, the user102again provides a query304via the virtual assistant, with the query including one or more concepts306and one or more pieces of context308. In this example, the query comprises the string of text "Where can I find my luggage?". For instance, the user102may have recently deplaned from a flight on Vista Airlines and, hence, may be providing the query304to the provider106and/or the service116while physically located near a particular airport. In another example, the user may be making this request from her home and prior to actually making the flight. In either instance, the query304may include this context in the form of the geolocation of the mobile electronic device302when the user issued the query. This geolocation may be provided explicitly by the device302(e.g., via GPS coordinates, etc.), may be determined via signal triangulation, or may be determined in any other manner. FIG.3Billustrates that, upon receiving the query304, the service provider106and/or the virtual-assistant service116may identify a response310to provide to the user102. Again, this response may be determined by identifying an intent of the query304with reference to the concepts306and one or more pieces of the context308, and then by mapping the determined intent along with one or more same or different pieces of the context308to the response310. As with the example ofFIGS.2A-Babove, the response310may comprise content312and/or action314. In this example, the action314includes navigating the user's electronic device302to a page316of the service provider's site that indicates Vista Airlines' luggage policies. The content312, meanwhile, includes text318indicating that the luggage of the user102can be found at carousel four at the airport at which the user102landed (SEA). To make this determination, the provider106and/or the service116may have identified the user102, her now-completed travel plans, her geolocation, and/or one or more other pieces of context prior to serving the text318for output on the device302. If the user were to have issued the query from her home and prior to her flight, the provider106and/or the service116may have taken this different context (e.g., a different geolocation, a different time of the query, etc.) into account and may have served different content. In this example, the content312of the response310also includes a hyperlink320("Report a Missing Bag") that is related to the query304of the user102. Finally, in this example, the content312also includes audible content322for output by the virtual assistant. Again, while this audible content322is the same as the text318in this example, in other examples these pieces of content differ from one another. Example Virtual-Assistant Service FIG.4illustrates example components that the virtual-assistant service116may utilize when identifying a variable response to provide to a user's query. As illustrated, the service116may be hosted on one or more servers that include one or more processors128, one or more network interfaces130, and memory132. The memory132may store or otherwise have access to the variable-response module118, which may include a natural language processing module402, a context-determination module404, an intent-mapping module406, and a response-mapping module408. In addition, the memory132may also store or otherwise have access to a datastore of one or more concepts410, a datastore of one or more contexts412, a datastore of one or more intents414, and a datastore of one or more responses416. The natural language processing module402may implement known or new natural language processing techniques to parse a received query for the purpose of identifying one or more concepts expressed therein.
For instance, the module402may identify a set of concepts410based on the string of text of the query. The context-determination module404, meanwhile, may function to identify one or more pieces of context associated with the received query, such as whether the user is signed in, a geolocation of the user when issuing the query, or the like. The intent-mapping module406may then map the identified set of concepts and the identified pieces of context to one of the multiple different intents414. That is, given the union of a particular concept set and respective values of one or more variables associated with the context of the query, the module406may map the query to a particular intent of the intents414. Finally, the response-mapping module408may map the intent to a particular response based at least in part on respective values of one or more variables, which may be the same or different variables used when mapping the query to an intent. Stated otherwise, and as illustrated below with reference toFIG.5, each intent of the intents414may be associated with multiple different responses. Therefore, after a particular query has been mapped to a particular intent, the response-mapping module408may identify which of the multiple responses associated with the intent to provide to the user who provided the query, with reference to the context of the query. WhileFIG.4illustrates the described components as residing on the virtual-assistant service116, in other instances some or all of these components may reside in another location. For instance, these components may reside across the service116, the service provider106, the electronic device104or302, or at any other location. FIG.5illustrates how the virtual-assistant service116may identify a response to provide to the example user102in response to receiving a query from the user102via a virtual assistant. In this example, the query is provided from the user on a client side502of the illustration, while the identifying of a response to provide to the query is illustrated as being performed on a server side504of the illustration. Of course, in other implementations different portions of the operations may be performed at other locations. AsFIG.5depicts, the example query again includes one or more concepts218and one or more pieces of context220. Upon receiving the query, the variable-response module118may identify, potentially with reference to the datastores410and412, the concepts and context of the query. Based on the identified set of concepts of the query (or "concept set") and the identified pieces of context of the query (or "context"), the module118may map the query to one of multiple different intents414(1), . . . ,414(N). For instance,FIG.5illustrates that a query having a concept set "CS1,1" and a context "C1,1" maps to the intent414(1), while a query having a concept set "CSN,1" and a context "CN,1" maps to the intent414(N). In some instances, a concept set may map to more than one intent and, therefore, the context of the query may be used to determine which intent to map the query to. That is, in instances where a concept set of a query maps to multiple different intents, the intents may compete for the query based on the context of the query. As used herein, a letter (e.g., "N", "E", etc.) represents any integer that is greater than zero. After mapping the query to an intent, the variable-response module118may then map the intent to an appropriate response416(1)(1), . . . ,416(N)(E) with reference to the context of the query.
For instance, for a query that the module118has mapped to the intent414(1) and that has a context "C1,1", the module118maps this query to a response416(1)(1). In some instances, of course, a response may be common (or utilized) across multiple different intents. After identifying the response based on the context, the virtual-assistant service116may then provide this response to the user102, such as directly to the device104or to the service provider106for providing to the device104(and/or to another device associated with the user). FIGS.6A-Bcollectively illustrate an example of mapping a particular query ("Can I upgrade my seat assignment on my next flight?") to a particular response by referencing a context of the query both when mapping the query to an intent and when mapping the intent to a response. In this example, the user inputs the query, which comprises a particular concept set ("CS45") and a particular context ("C87"). In response to receiving the query and identifying the concept set and context, the variable-response module118has mapped the query to the example intent414(1). Thereafter, the module118has mapped this intent to the example response416(1)(1) based on the identified context of the query. FIG.6Bcontinues the illustration, and represents the virtual-assistant service116providing the example response416(1)(1) to the electronic device104. As illustrated, the response may include both content (e.g., text, links, audio, etc.) and an action (e.g., navigating the user's electronic device to a new page of the site), as described above with reference toFIG.2B. Example Processes FIGS.7A-Bcollectively illustrate an example process700that includes the example user102providing a query via a virtual assistant and the service provider106and/or the virtual-assistant service116identifying a response to provide to the user102. Consistent with the discussion above, this response may take a context of the query into account both when identifying an intent of the query and when identifying an appropriate response. In this example, operations illustrated beneath the electronic device104may be performed by this device in some examples, while operations illustrated beneath the provider106and the service116may be performed by the provider and/or the service in some examples. However, it is to be appreciated that in other implementations the operations may be performed at any other location(s). The process700(as well as each process described herein) is illustrated as a logical flow graph, each operation of which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the process. At702, the service provider106and/or the virtual-assistant service116causes display of a virtual assistant on or adjacent to a site of the service provider rendered on a display of the electronic device.
At704, and in response, the device104renders the virtual assistant on the display. At706, the device104receives a query from the user, which may comprise a string of text. At708, the device104provides this query to the provider106and/or the service116, which receives the query at710. At712, the provider106and/or the service116parses the query to identify one or more concepts expressed therein. That is, the provider106and/or the service116may use natural language processing techniques to identify concepts specified by the user in the query. These concepts may be determined with reference to contents of the user's query in any suitable manner. In some examples, the concept(s) of a query are determined at least partly with reference to one or more keywords expressed within the query. For instance, the concepts may be determined using relatively basic keyword matching in some instances. In other instances, meanwhile, the concepts may be determined using a much richer process as described below. In these instances, when the provider106and/or the service116receives the query in the form of a string of text, the provider106and/or the service116preprocesses the string by, for example, identifying one or more tokens within the string. The tokens may comprise words, phrases, symbols, or the like that signify some sort of meaning within the query. After tokenizing the string of text, the provider106and/or the service116may then map each of these tokens and/or ordered patterns of the tokens to a more general set, known as a "vocab item". A vocab item may comprise a general set of multiple different tokens having a meaning that is common amongst these tokens. For instance, the tokens "happy", "elated" and a smiley face (e.g., ":-)") may each map to a vocab item representing "happiness". After mapping tokens and/or patterns of tokens from the original string of text to one or more vocab items, the provider106and/or the service116may then pattern match the vocab items to one or more concepts. That is, each concept may be associated with multiple different vocab-item patterns (e.g., "(vocab item A, vocab item D, vocab item F)", "(vocab item B, vocab item E)", "(vocab item X)", etc.). In addition, some of these patterns may be associated with a context. For instance, the pattern "(vocab item B, vocab item E)" may map to a particular concept given a particular context (e.g., the user is a Gold Member), but not otherwise. By pattern matching the vocab items to the concepts, the provider106and/or the service116may identify one or more concepts that are associated with the submitted query. In addition or in the alternative to the techniques described above, the provider106and/or the service116may identify concept(s) of a query with reference to a graph data structure that maintains correlations between words. The graph data structure, for instance, may maintain a hierarchy of words (e.g., hypernyms and hyponyms). The techniques may utilize this hierarchy to identify one or more concepts within a string of text. For instance, if a string contains the word "cookbook", the techniques may analyze the graph data structure to determine that "cookbook" is a type of a "reference book" which is a type of "book". The techniques may then identify "cookbook", "reference book", and/or "book" as a concept within the query.
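The tokenize/vocab-item/pattern-match pipeline and the hypernym graph just described can be sketched briefly in Python. The vocabulary, patterns, and graph below are toy data standing in for the datastores; only the overall flow tracks the description.

```python
# Toy sketch of the richer concept-identification process described above.
import re

VOCAB = {"happy": "HAPPINESS", "elated": "HAPPINESS", ":-)": "HAPPINESS",
         "upgrade": "UPGRADE", "seat": "SEATING"}

# Concepts are associated with vocab-item patterns; a pattern may also carry a
# context condition (e.g., the user is a Gold Member), per the text above.
CONCEPT_PATTERNS = [
    ({"UPGRADE", "SEATING"}, None, "seat_upgrade"),
    ({"UPGRADE"}, ("Gold_Customer", True), "member_upgrade"),
]

HYPERNYMS = {"cookbook": "reference book", "reference book": "book"}  # toy graph

def identify_concepts(text, context):
    tokens = re.findall(r":-\)|\w+", text.lower())           # tokenize the string
    vocab_items = {VOCAB[t] for t in tokens if t in VOCAB}   # tokens -> vocab items
    concepts = set()
    for pattern, condition, concept in CONCEPT_PATTERNS:     # pattern matching
        if pattern <= vocab_items and (
                condition is None or context.get(condition[0]) == condition[1]):
            concepts.add(concept)
    for word in tokens:                                      # hypernym walk: "cookbook"
        while word in HYPERNYMS:                             # also yields "reference
            word = HYPERNYMS[word]                           # book" and "book"
            concepts.add(word)
    return concepts

print(identify_concepts("Can I upgrade my seat?", {"Gold_Customer": True}))
# {'seat_upgrade', 'member_upgrade'}
```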
Of course, in this and other processes used to determine concepts within queries, the techniques may reference other factors associated with the queries, such as the ordering of words, parts of speech of words, and the like. Furthermore, while a few different example techniques for identifying concepts have been described, it is to be appreciated that other new and/or known techniques may be used to identify concepts within a query. At714, the provider106and/or the service116may also identify a context associated with the user102or with a session of the user102on the site of the service provider106. This may include whether the user is logged in on the site, a page from which the user submitted the query, a status of the user at the service provider106, or the like. At716, the provider106and/or the service116then determines an intent of the query based on the identified concept(s) and the identified context. FIG.7Bcontinues the illustration of the process700and includes, at718, the provider106and/or the service116determining a response to provide to the query based on the intent and the identified context. In some instances, the portion of the context referenced in mapping the query to the intent represents the same portion of context referenced in mapping the intent to the response. In other instances, meanwhile, the provider106and/or the service116map the query to an intent using a first portion of context, while using a second, different portion of the context when mapping the intent to the response. Of course, in still other instances, these portions of context may include at least one common piece of context and at least one piece of context that is not commonly used. At720, the provider106and/or the service116provides the response to the electronic device104of the user or to another electronic device associated with the user. In this example, the device104receives the response at722and, at724, outputs the response to the user102, at least a portion of which may be outputted via the virtual assistant. For instance, the device104may render text, one or more links, audible content, and the like, and may perform one or more actions specified in the response. FIG.8illustrates another process800for providing variable responses to user queries via virtual assistants. This process may be performed by the virtual-assistant service116, the service provider106, the electronic device104of the user, and/or some combination thereof. At802, the process800receives a query via an interface provided by a virtual assistant. At804, the process800then maps the query to an intent based on both contents of the query and a context of the query. In some instances, the operation804may comprise a series of sub-operations. At804(1), the process800identifies one or more concepts expressed in the query. At804(2), the process identifies respective values of multiple different variables that may collectively define the context of the query. Finally, at804(3), the process800maps the query to the intent with reference to the concepts and the values of the multiple variables. At806, the process800then maps the intent to a response based on the context of the query. The operation806may include a series of sub-operations that include, at806(1), the process800identifying values of multiple variables, which may be the same and/or different from the variables used in mapping the query to the intent at804(3).
At806(2), the process800then maps the intent to the response based at least in part on the values of the variables identified at806(1). Finally, at808, the process800may provide at least a portion of the response to the user via the virtual assistant. FIG.9illustrates yet another example process900for providing variable responses (or "replies") in response to received user queries. Again, this process may be performed by the virtual-assistant service116, the service provider106, the electronic device104of the user, and/or some combination thereof. At902, the process900receives a request for information from a user via a virtual assistant. At904, and in response, the process900identifies one or more concepts within the request for information. In addition, at906, the process900determines a value of a first variable associated with the user that provided the query. At908, the process900maps the request for information to an intent based on the identified concepts and the value of the first variable. At910, the process900then determines a value of a second variable associated with the user, which may or may not be different from the first variable. Finally, at912, the process900identifies a reply to the request for information based on the intent and the value of the second variable. The process900may then provide this identified reply to a user, as described in detail above. The identification of relational strategies in a single conversational turn can be structured as a multi-intent detection problem. The user not only wants the task completed (the primary intent), but they may also attempt to build credibility or some common ground with the IVA (the secondary intent). Segments of text such as justification or backstory can be annotated as secondary intent and ignored while determining the primary intent. Once relational language is isolated, a separate classification can determine what relational strategies are in use and how to properly respond. Multi-intent detection within dialog systems is still an emerging field; in recent work, only one intent is assumed to be present per turn [9]. A few methods exist, such as [10], which uses multi-label learning, and [11], which employs a two-stage intent detection strategy. However, in [10, 11], multi-intent detection is assumed to be multiple task-oriented intents within a single turn. This disclosure is significantly different, at least in one way, in that secondary intents are relational in nature and therefore must be detected and handled differently. In one non-limiting embodiment, a partitioning strategy can be implemented for multi-intent detection that is extended to detect relational language and further process it. Although English is used in the following examples, this method can be applied to any language with common conjunctions and punctuation. As visualized inFIG.10, this disclosure implements a set of segment identifiers in the form of language-specific punctuation symbols combined with a dictionary of common, language-specific conjunctions such as "and", "but", "because", "so that", and the like, to split each input turn on every occurrence of punctuation or conjunction and form the set of all possible hypothesis pairs (H), demonstrated in Example 1 below. Example 1
Original turn h_orig: My mother and I just returned from Florida and they lost our bags. Who do we contact?
Hypothesis pair 1: <My mother>, <I just returned from Florida and they lost our bags. Who do we contact>
Hypothesis pair 2: <My mother and I just returned from Florida>, <they lost our bags. Who do we contact>
Hypothesis pair 3: <My mother and I just returned from Florida and they lost our bags>, <Who do we contact>.
The left and right segments, h_l and h_r, from every pair h∈H are then fed into the intent classifier independently, and the confidence score of classification on each is recorded. There are many approaches to determining a confidence score that are generally described as probabilities that a result is accurate. U.S. Pat. No. 9,715,875 (Piernot 2017), which is incorporated herein by reference as if set forth fully below, describes at col. 8, lines 10-61 one non-limiting way to envision this probability problem and determine a confidence score. "In some examples, a probabilistic system can be used to determine whether or not the virtual assistant should respond to the spoken user input by determining a likelihood or confidence score that the user intended for the spoken user input to be directed at the virtual assistant. The probabilistic system can include a machine learning system or classifiers, such as neural networks. Additionally, the probabilistic system can learn and adapt to the user using a feedback loop. In these probabilistic system examples, the likelihood or confidence score can include a numerical or other representation of a calculated probability that the user intended for the spoken user input to be directed at the virtual assistant. The calculated likelihood or confidence score can then be compared to a threshold value to determine whether or not the virtual assistant should respond to the spoken user input. For example, if the calculated likelihood or confidence score is greater than the threshold value, it can be determined that the spoken user input was intended for the virtual assistant. If, however, the calculated likelihood or confidence score is not greater than the threshold value, it can be determined that the spoken user input was not intended for the virtual assistant. The likelihood or confidence score can be determined in any number of ways. For example, the determination can generally include summing positive, negative, and/or neutral contributions from any number of different types of contextual information. For example, the likelihood or confidence score can be calculated using the general formula of P=C1+C2+C3+ . . . +CN, where P represents the likelihood or confidence score that the spoken user input was intended for the user device and C1 . . . CN can be positive, negative, or zero values representing the positive, negative, or neutral contributions to the likelihood or confidence score from the N different types of contextual information. A positive contribution can represent a type of contextual information that suggests that the spoken user input was intended for the virtual assistant, a negative contribution can represent a type of contextual information that suggests that the spoken user input was not intended for the virtual assistant, and a neutral contribution can represent a type of contextual information that is neutral regarding the likelihood that the spoken user input was intended for the virtual assistant. Thus, a large P value can indicate that the spoken user input was likely intended for the virtual assistant, while small or negative P values can indicate that the spoken user input was likely not intended for the virtual assistant.
The weight or value that each contextual information contribution adds to the likelihood or confidence score determination can be uniform or non-uniform. Additionally, the weight or value that each contribution adds to the likelihood or confidence score determination can depend on the value of the particular type of contextual information. For example, if contribution C1 depends on the volume of the user's voice, the sign (e.g., +/−) and/or magnitude of C1 can depend on a numerical representation of the volume of the user's voice." In another document, such as U.S. Pat. No. 10,170,116 (Kelly et al., 2019), incorporated by reference herein, the confidence score is discussed at column 11, lines 39-57. "The different ways a spoken utterance may be interpreted (i.e., the different hypotheses) may each be assigned a probability or other type of a confidence score representing the likelihood that a particular set of words matches those spoken in the utterance. The confidence score may be based on a number of factors including, for example, the similarity of the sound in the utterance to models for language sounds (e.g., an acoustic model253stored in an ASR Models Storage252), and the likelihood that a particular word which matches the sounds would be included in the sentence at the specific location (e.g., using a language or grammar model). Thus each potential textual interpretation of the spoken utterance (hypothesis) is associated with a confidence score. Based on the considered factors and the assigned confidence score, the ASR process250outputs the most likely text data recognized in the audio data211. The ASR process may also output multiple hypotheses in the form of a lattice or an N-best list with each hypothesis corresponding to a confidence score or other score (such as probability scores, etc.)." For the purpose of multi-intent detection, the procedure determines if two separate intents are present by comparing the confidence scores for h_l and h_r to the score for h_orig as shown in Equation 1. As noted, a previously calculated threshold for Equation 1 is stored in computerized memory to determine if an input has more than a single intent joined by partition identifiers such as punctuation marks and language-based conjunction words.

min{score(h_l), score(h_r)} / score(h_orig) > threshold_multi-intent    (1)

If Equation 1 holds, then there are two different intents present in the original input from a user's turn in the human-machine conversation. For this disclosure, this idea extends to partitioning the original turn into segments and using the intent classifier's confidence on each segment for detecting the presence of unnecessary language. If the model observes that either of the following equations holds, using the arbitrary scaling factor s ≤ 0.75 (which is not limiting of the disclosure), the method concludes that h_l (in Eq. 2) or h_r (in Eq. 3) contains language that is unknown to the intent classifier and is therefore out of the expected scope for intent recognition. The upward pointed arrow symbol (∧) below is shorthand for "the minimum compared to," whereas a downward pointed arrow (open at the top, ∨) would be interpreted as "the maximum compared to."

[score(h_l) < score(h_orig) × s] ∧ [score(h_r) > score(h_orig)]    (2)

[score(h_l) > score(h_orig)] ∧ [score(h_r) < score(h_orig) × s]    (3)
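A short Python sketch ties Example 1 and Equations 1-3 together. Here `classify(text)` stands in for whatever intent classifier is in use and is assumed to return the confidence of its top intent; the split pattern, threshold handling, and the tie-breaking on the in-scope segment's score are illustrative assumptions. The plain-language walk-through of Equations 2 and 3 follows the sketch.

```python
# Hedged sketch of the partitioning-and-scoring procedure in Equations 1-3.
# `classify(text)` is assumed to return the classifier's top-intent confidence.
import re

SPLITTER = re.compile(r"[.,;?!]|\b(?:and|but|because|so that)\b")

def hypothesis_pairs(turn):
    """All (h_l, h_r) pairs from splitting on punctuation or a conjunction."""
    pairs = []
    for m in SPLITTER.finditer(turn):
        h_l, h_r = turn[:m.start()].strip(), turn[m.end():].strip()
        if h_l and h_r:
            pairs.append((h_l, h_r))
    return pairs

def analyze_turn(turn, classify, threshold, s=0.75):
    orig = classify(turn)
    best = None
    for h_l, h_r in hypothesis_pairs(turn):
        sc_l, sc_r = classify(h_l), classify(h_r)
        if min(sc_l, sc_r) / orig > threshold:      # Eq. 1: two task intents
            return "multi-intent", (h_l, h_r)
        if sc_l < orig * s and sc_r > orig:         # Eq. 2: left segment out of scope
            candidate = ("relational-left", (h_l, h_r), sc_r)
        elif sc_l > orig and sc_r < orig * s:       # Eq. 3: right segment out of scope
            candidate = ("relational-right", (h_l, h_r), sc_l)
        else:
            continue
        # Keep the pair whose in-scope segment scores highest (cf. Example 3 below).
        if best is None or candidate[2] > best[2]:
            best = candidate
    return (best[0], best[1]) if best else ("single-intent", None)
```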
Once separated, relational sections are classified to determine the classes of relational language present. Any multi-class classification method can be used for this, such as a Support Vector Machine, Decision Tree, or Neural Network. Each relational section is evaluated and given one or more of the following tags: Greeting, Backstory, Justification, Gratitude, Rant, Express Emotion, Other. Greetings are a common relational strategy humans use to build rapport with other humans and machines [12]. Backstory is a method of self-exposure that may be employed by the customer. In Example 1, the customer included the fact that he or she is attending a graduation as a means of self-exposure. This may be an attempt to build common ground with the agent, or it may indicate the importance of the trip and motivate the agent to help the customer succeed. Justification is used by the customer to argue why the agent should take some action on behalf of the customer. For instance, when trying to replace a defective product, a customer may explain how the product failed, to establish credibility that the product was at fault. Gratitude, like greetings, is used by humans to build rapport with humans and machines [12].
Ranting is a means of expressing dissatisfaction when a customer feels frustrated, ignored, or misunderstood. In computer-mediated conversations, the non-verbal emotional cues present in face-to-face conversations are missing; thus, humans resort to such negative strategies to convey their emotions [13]. For tagging purposes, we define a Rant to encompass any excessive complaining or negative narrative. Expressing emotions can be a means of showing displeasure when a customer feels a conversation is not making adequate progress, or in reaction to an unexpected or disagreeable agent response. This can also indicate joking or other positive emotional expression. The tag Express Emotion is used as a catch-all for any emotional statement that is not covered by Rant. Examples would be: “i love that!”, “UGH!”, “WHY???”. The Other tag indicates that some or all of the section does not contain any relational language. This is commonly a restatement of the primary intent or facts that can be marked as unnecessary or out of application scope. Once the relational section(s) have been isolated and classified, the IVA can then determine the appropriate action to take based on the task-oriented intent. Given the task, a second component can determine how to respond to the relational classes present. This process is visualized inFIG.11. For example, if a user is complaining as evidenced by the Ranting class, the IVA can include an apology in its response along with the appropriate action to complete the task. If Justification is present, the IVA can reciprocate by indicating understanding of the importance of the task, while also performing or responding to the primary task-oriented intent. If the relational segments do not in fact include relational language, as evidenced by the Other class, they can be ignored as out of application scope. The separation of such language will still increase accuracy in determining the correct task-oriented intent for a given human conversational turn.

REFERENCES

[1] S. Levy, Alexa, tell me where you're going next. Backchannel.com, 2016. Available online at https://backchannel.com/alexa-tell-me-where-youre-going-next-739c53ff10b3
[2] C. B. Gibson and S. G. Cohen, Virtual teams that work, Jossey-Bass, San Francisco, 2003.
[3] D. Ballantyne, Dialogue and its role in the development of relationship specific knowledge, Journal of Business & Industrial Marketing, vol. 19, no. 2, pp. 114-123, 2004.
[4] J. A. Holton, Building trust and collaboration in a virtual team, Team Performance Management: An International Journal, vol. 7, no. 3/4, pp. 36-47, 2001.
[5] N. W. Coppola, S. R. Hiltz, and N. G. Rotter, Building trust in virtual teams, IEEE Transactions on Professional Communication, vol. 47, no. 2, pp. 95-104, 2004.
[6] E. J. de Visser, S. S. Monfort, R. McKendrick, M. A. Smith, P. E. McKnight, F. Krueger, and R. Parasuraman, Almost human: Anthropomorphism increases trust resilience in cognitive agents, Journal of Experimental Psychology: Applied, vol. 22, no. 3, p. 331, 2016.
[7] T. Bickmore and J. Cassell, Relational agents: a model and implementation of building user trust, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 396-403, ACM, 2001.
[8] J. Y. Chai, C. Zhang, and T. Baldwin, Towards conversational qa: automatic identification of problematic situations and user intent, in Proceedings of the COLING/ACL on Main Conference Poster Sessions, pp. 57-64, Association for Computational Linguistics, 2006.
[9] R. Sarikaya, P. Crook, A. Marin, M. Jeong, J. P. Robichaud, A. Celikyilmaz, Y. B. Kim, A. Rochette, O. Z. Khan, X. Liu, et al., An overview of end-to-end language understanding and dialog management for personal digital assistants, in IEEE Workshop on Spoken Language Technology, 2016.
[10] P. Xu and R. Sarikaya, Exploiting shared information for multi-intent natural language sentence classification, in INTERSPEECH, pp. 3785-3789, 2013.
[11] B. Kim, S. Ryu, and G. G. Lee, Two-stage multi-intent detection for spoken language understanding, Multimedia Tools and Applications, pp. 1-14, 2016.
[12] M. K. Lee, S. Kiesler, and J. Forlizzi, Receptionist or information kiosk: How do people talk with a robot?, in Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, pp. 31-40, ACM, 2010.
[13] A. Laflen and B. Fiorenza, Okay, my rant is over: The language of emotion in computer-mediated communication, Computers and Composition, vol. 29, no. 4, pp. 296-308, 2012. | 59,840 |
11861317 | DETAILED DESCRIPTION An important step towards enabling language learners to improve their conversational speaking proficiency involves automated scoring of multiple aspects of interactional competence and subsequent targeted feedback. The current subject matter provides enhanced techniques that utilize multiple neural architectures—recurrent, attention and memory based—along with feature-engineered models for the automated scoring of interactional and topic development aspects of text dialog data. Experiments were conducted on a conversational database of text dialogs from human learners interacting with a cloud-based dialog system, which were triple-scored along multiple dimensions of conversational proficiency. It was found that fusion of multiple architectures performs competently on our automated scoring task relative to expert inter-rater agreements, with (i) hand-engineered features passed to a support vector learner and (ii) transformer-based architectures contributing most prominently to the fusion. The current subject matter provides dialog-based learning and assessment systems which provide useful and actionable feedback to users regarding their conversational proficiency (which in turn can encourage widespread adoption of such systems). The current subject matter provides technical advantages over conventional techniques, as described comprehensively below along two directions. First, constructs of a text dialog scoring rubric pertaining to topic development were explored along with constructs pertaining to interaction, aiming to understand how various feature-engineering and model-engineering methods perform on a broader range of scoring dimensions. Second, a more comprehensive experimental setup is provided that explores multiple feature-engineered models and deep learning network architectures—recurrent, attention and memory based—for automated scoring. The current advances were informed by analyzing a corpus of 2288 conversations of nonnative speakers. With this corpus, speakers interact with a dialog application designed to test general English speaking competence in workplace scenarios, particularly focusing on pragmatic skills. The application requires participants to interact with their boss and request a meeting with her to review presentation slides, using pragmatically appropriate language. Each of the 2288 dialog responses was triple-scored by human expert raters on a custom-designed rubric. The rubric defined 12 sub-constructs under the three broad constructs of linguistic control, topic development and interaction, apart from an overall holistic score. This study investigates the topic development construct for the first time in addition to interaction. See Table 1 for specific details of the constructs examined.
Construct | Sub-construct | Description
Topic Development | Topic | Examines to what extent the responses are uniformly on topic and relevant.
Topic Development | Elaboration | Examines the extent to which arguments are developed, taking into account dialog history and with minimal or no repetition.
Topic Development | Structure | Evaluates the structure of the discourse and chain of reasoning, along with the appropriate use of discourse markers.
Topic Development | Task | Evaluates how well the user accomplished the task over the course of the interaction.
Interaction | Engagement | Examines the extent to which the user engages with the dialog agent and responds in a thoughtful manner.
Interaction | Turn Taking | Examines the extent to which the user takes the floor at appropriate points in the conversation without noticeable interruptions or gaps.
Interaction | Repair | Examines the extent to which the user successfully initiates and completes a repair in case of a misunderstanding or error by the dialog agent.
Interaction | Appropriateness | Examines the extent to which the user reacts to the dialog agent in a pragmatically appropriate manner.
Overall Holistic Performance | | Measures the overall performance.

Table 1: Human scoring rubric for interaction aspects of conversational proficiency. Scores were assigned on a Likert scale from 1-4 ranging from low to high proficiency. A score of 0 was assigned when there were issues with audio quality or system malfunction, or off-topic or empty responses.

Automated Scoring Methods. First described is a hand-engineered feature set used in conjunction with a linear support vector machine (SVM) classifier. Next, recurrent, memory and attention based architectures are described. The automated scoring models provided herein were trained to predict valid dialog-level scores from 1-4 (only dialogs with a non-zero score were considered as part of the scoring model training). An exception to this is in the case of the memory network, where scores are predicted at the turn level, and the dialog-level score is reported as the median score across all turns of that dialog. The mean performance of scoring systems was reported on a 10-fold cross-validation (CV) experimental setup. In addition, accuracy and quadratic weighted kappa (which takes into account the ordered nature of the categorical labels) are reported herein as metrics.

Feature Engineering Approaches. Two sets of exemplary features were examined. First, features were examined that explicitly capture content (e.g., word n-grams, character n-grams) and grammatical structures (e.g., dependency trees). These features are summarized in Table 2. These features were found to be effective in predicting sub-constructs such as engagement and turn taking in earlier work. Second, nuanced features are utilized that are related to the power dynamics of social interactions and are often indicators of whether an interaction went well or not. It is hypothesized that features that capture interaction strategies such as gratitude expression or greetings will be particularly useful, given that the corpus involves conversations between a participant and their boss. Special focus is provided below on features that capture politeness and acknowledgment. The current features capture strategies such as counterfactual modals (“could/would you . . . ”), the indicative modal (“can/will you . . . ”), deferential back-shift (“I was wondering . . . ”), gratitude (“Thank you . . . ”), apologies (“I apologize”, “forgive me”), appreciation, especially at the end of the conversation (“sounds good”, “works great”), requests (“please review . . .
”), greetings (“Hi, hello miss”), mainly in the beginning of the conversation to build a positive relationship, and hedging (“I suggest . . . ”). These features can be binary, indicating whether a dialog consists of a specific politeness strategy. Table 3 presents exemplars of politeness strategies observed in our training corpus.

Feature | Description
Word n-grams | Word n-grams are collected for n = 1 to 2. This feature captures patterns about vocabulary usage (key words) in responses.
Character n-grams | Character n-grams (including whitespace) are collected for n = 2 to 5. This feature captures patterns that abstract away from grammatical and other language use errors.
Response length | Defined as log2(chars), where chars represents the total number of characters in a response.
Syntactic dependencies | A feature that captures grammatical relationships between individual words in a sentence. This feature captures linguistic information about “who did what to whom” and abstracts away from a simple unordered set of key words.
Discourse strategy | Features based on presence or absence of specific words in the response that represent different discourse strategies (see Table 3 for examples of politeness strategies).

Table 2: Content and grammatical structure features used for machine scoring.

SKLL, an open-source Python package that wraps around the scikit-learn package, was used to perform machine learning experiments. Reported is the mean performance of linear support vector machines (SVM) in which a cross entropy (log-loss) objective function was used to optimize learner performance, and hyperparameters such as the regularization coefficient were fine-tuned using a grid search method.

Strategy | Example
Counterfactual | Could you also review my slides?
Indicative | . . . if we can meet . . .
Deferential | I was wondering do you have time
Gratitude | I greatly appreciate your time.
Apology | Sorry to bother you . . .
Appreciation | Sounds good. I will see you . . .
Request | Please review the presentation . . .
Greetings | Hi Hello Miss Lisa it is good . . .
Hedges | . . . and suggest me anything . . .

Table 3: Politeness strategy exemplars reproduced from the training corpus.

Recurrent Architectures with and without Attention. Recurrent architectures, such as Long Short-Term Memory (LSTM) networks including bi-directional LSTM (BiLSTM) networks, are able to learn long-term dependencies and are effective in many NLP tasks related to dialog and turn-taking scenarios. As an example and with reference to diagram100ofFIG.1, a stacked BiLSTM network architecture can be implemented with context attention. Here the output of the first BiLSTM hidden layer can be fed as input into the subsequent BiLSTM hidden layer. Varying depths of the stack can be utilized, and in some cases, depth = 2. The attention mechanism utilized can be as follows. Let the number of words in the dialog d be w and the hidden representation for word wdi be hdi. A word-level attention mechanism can be provided where the word representation hdi is weighted by measuring similarity with a word-level context vector udw, which is randomly initialized and jointly learned during training. Finally, a dialog vector vd can be computed that summarizes the weighted sum of the word annotations based on the weights:

udi = tanh(Wd hdi + bw)   (1)

vd = Σi∈[1,w] αdi hdi   (2)

where the attention weight αdi is calculated as:

αdi = exp(udi^T udw) / Σi∈[1,w] exp(udi^T udw)   (3)
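The attention computation in Equations (1)-(3) can be illustrated with a short NumPy sketch (a simplified illustration; the array shapes and parameter names are assumptions, and in the actual architecture Wd, bw, and udw are learned jointly with the BiLSTM):

```python
import numpy as np

def dialog_attention(H, W_d, b_w, u_dw):
    """H: (w, d) matrix of BiLSTM hidden states hdi for one dialog.
    W_d, b_w: learned projection and bias; u_dw: word-level context vector."""
    U = np.tanh(H @ W_d.T + b_w)   # Eq. (1): udi = tanh(Wd hdi + bw)
    logits = U @ u_dw              # similarity of each word to the context vector
    alpha = np.exp(logits - logits.max())
    alpha = alpha / alpha.sum()    # Eq. (3): softmax attention weights
    return alpha @ H               # Eq. (2): dialog vector vd
```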
Referring again to diagram100ofFIG.1, a high-level structure of the BiLSTM Attention architecture is provided. Words are represented as embeddings and fed to the BiLSTM network. For illustration purposes, only one BiLSTM layer, composed of the forward and backward layers that make up the hidden layer hdi, is illustrated. Next, the context vector udw is utilized to generate the word-level attention αdi. Finally, the dialog vector vd passes through a dense + Softmax layer to predict the score of the construct in the given experiment. To tune the hyperparameters for the BiLSTM-based experiments, the training data for each CV fold was split into 80% train and 20% dev, and the dev partition was used for parameter tuning. The following hyperparameters for the BiLSTM architectures can be used: GloVe embeddings (100D), mini-batch size of 16, recurrent dropout value of 0.3, 10 epochs (with an early-stopping patience of 5), and the Adam optimizer with its default parameters.

End to End Memory Networks (MemN2Ns). Also provided herein is the End to End Memory Network (MemN2N) architecture, which is adapted to the dialog scoring task. With reference to diagram200ofFIG.2, the end to end MemN2N architecture models dependencies in text sequences using a recurrent attention model coupled with a memory component, and is therefore suited to modeling how response and prompt histories contribute to a dialog score. The original MemN2N architecture can be modified in the following ways: (i) instead of the original (query, fact history, answer) tuple that is used to train the network, there can be a (current response, response history, prompt history, answer) tuple; in other words, not only are memory representations between the current response and the history of previous responses embedded and learned, but also representations involving the history of prior system prompts encountered thus far; (ii) an LSTM can be used instead of a matrix multiplication at the final step of the network before prediction; (iii) the network can be trained at the turn level such that the dialog-level score is assigned as the median of all scores predicted by the network at the turn level. Hyperparameters of the network can be tuned in a variety of manners, including using the hyperas toolkit. This tuning can include the number of neurons in the Dense and LSTM layers as well as the addition of Dropout layers after each memory component. The example network was trained for 40 epochs (but with an early-stopping patience of 5). Experiments were run with 1, 2 and 3 memory hops, and 2 hops were found to be optimal. It was found that initializing the memory embedding matrices with pretrained word2vec or GloVe embeddings worked better than randomly-initialized ones, for prompt history encoding in particular.

Transformer Models. Another class of explored models comprises the purely attention-based family of transformer models. Attention is a mechanism by which a neural network model can learn to make predictions by selectively attending to a given set of data (and if predictions are being made for one part of a data sample using other parts of the observation about the same sample, this is self-attention). The amount of attention is quantified by learned weights, and thus the output is usually formed as a weighted average. The transformer family of models allows one to model sequence data without using recurrent network units by leveraging a special scaled dot product attention mechanism in an encoder-decoder framework, and thus can be particularly suited to modeling dialog time-series data.
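For reference, the scaled dot product attention at the heart of this family can be sketched as follows (a single-head NumPy illustration; the shapes and names are assumptions, not the specific configuration used in the experiments):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (n, d_k) query and key matrices; V: (n, d_v) value matrix."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # query-key similarity, scaled by sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                        # weighted average of the values
```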
Various types of transformer models can be used, including BERT (Bidirectional Encoder Representations from Transformers) pre-trained transformer-based language models, RoBERTa, DistilBERT, and the like. The Hugging Face transformers library was used to fine-tune a pre-trained model (bert-base-uncased) on the training data for each fold of the 10-fold cross-validation setup, and performance averaged across all folds is reported. The following hyperparameters were used: number of epochs=5, learning rate=5e-5, and Adam epsilon=1e-8.

Observations and Results. FIG.3is a diagram300including a table that shows quadratic weighted kappa (QWκ) values produced by the different automated scoring methods explored in this study. In particular,FIG.3shows automated scoring performance (as measured by the quadratic weighted kappa or QWκ) of the 6 systems explored above. Reported are results for the fusion system with the best QWκ (optimized across all combinations of individual systems). The last two columns present Human Inter Rater Agreements for the same data expressed in Krippendorff α and Conger κ (note that this is not directly comparable to the reported QWκs). Referring still toFIG.3, notice that all systems generally produce accuracy numbers in the 0.6-0.7 range, with the BERT and SVM systems (with hand-engineered content features) performing best individually. The final two columns of the table inFIG.3display two inter-rater agreement statistics—Conger κ and Krippendorff α—for the human expert scores assigned to the data. Recall that each dialog was scored by 3 out of 8 possible raters. A moderate to high agreement was observed between raters for all dimensions of the scoring rubric. Additionally, it is interesting to note that the QWκ of the fusion system is in a similar ballpark to the κ and α metrics for human inter-rater agreement across all constructs examined, even slightly higher in some cases such as the task, engagement, and turn-taking constructs. Note however that the QWκ values are not directly comparable to the Conger κ values, and the human inter-rater agreement values are more of a reference point than a benchmark value. It was observed that the best fusion systems across constructs all involve the SVM (either with or without politeness features) and BERT systems, suggesting that a combination of feature engineering of content and grammar features along with a neural model leveraging principled attention mechanisms performs best at this automated scoring task. Additionally, it is shown that MemN2N memory networks make a useful contribution in predicting the constructs of turn taking, repair, and topic development, all constructs that require one to take prior conversational history of the dialog into explicit account in a principled manner. LSTM models (either with or without attention) were part of the best fusion systems for topic, elaboration, engagement and overall holistic performance, which require evaluation at the level of the entire dialog. In addition to the performance of an SVM system, an SVM++ system was utilized that includes features capturing politeness in the discourse. Also note that the SVM and SVM++ experiments are denoted as systems 1 and 2, respectively, for clarity and brevity.
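As a point of reference for the metric, QWκ can be computed with scikit-learn's cohen_kappa_score (a sketch; the score arrays below are hypothetical, not values from the study):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical machine-predicted and human-assigned dialog-level scores (1-4 scale).
machine = [3, 2, 4, 1, 3, 2, 4, 3]
human   = [3, 3, 4, 1, 2, 2, 4, 4]

# weights="quadratic" penalizes disagreements by the squared distance between
# the ordered score categories, which is the quadratic weighted kappa reported here.
print(cohen_kappa_score(machine, human, weights="quadratic"))
```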
It was observed that lexicon features capturing politeness help the SVM++ system achieve better accuracy, particularly for the structure, turn taking, and appropriateness constructs, which is in line with expectations, given that our dialog task requires speakers to use appropriate strategies such as greeting, gratitude, and appreciation, among others, in order to accomplish the task successfully. The BiLSTMs with attention (marked as LSTMattn, or system number 4, in the table inFIG.3) perform better compared to the vanilla BiLSTM networks (system number 3) for all the constructs. An attention layer was positioned on top of the stacked networks, which means the attention mechanism is able to identify the key characteristics of the constructs. The heat maps of the attention weights were analyzed to obtain a better understanding of the model performance. Each example depicted in diagram400ofFIG.4shows a heat map of the words from a portion of the dialog data corresponding to a request. Dialogs were chosen which obtained a median human score of 4 (i.e., high proficiency) and were correctly classified by the BiLSTMs with attention model. It was observed that words such as “meeting” and “discussion” receive high weights for the topic construct (FIG.4(a)). Likewise,FIG.4(b)also shows that words representing actions, such as “reviewing slides” or “discussion”, received the highest weights for the task construct. For appropriateness, it was observed that words representing a positive and respectful tone (e.g., “if you would look”; “great yeah”) received higher attention weights (FIG.4(c)). Finally,FIG.4(d)shows the heat map for overall holistic performance. Besides key terms such as “Friday” (part of the task as well as the automated agent's responses), it was observed that positive sentiment words such as “wonderful” receive higher attention weights, suggesting that maintaining a positive intonation is weighted more by the BiLSTM with attention model. Finally, the results from BERT are reported as System6in the table inFIG.3. It was observed that BERT consistently performs best or comparably to the best model(s) across all the constructs. This verifies the superiority of the transformer architecture in this regard. Conversational proficiency can be characterized by using an ensemble of models (e.g., two or more of the models inFIG.3, etc.) which can be used to score various dialog constructs. The outputs of these models can be combined or otherwise consumed by other processes/models to characterize conversational proficiency.

FIG.5is a process flow diagram500illustrating the characterization of a human-machine dialog in which, at510, data is received that comprises a recording of an individual interacting with a dialog application (i.e., a computer application) simulating a conversation. Thereafter, at520, the received data is parsed using automated speech recognition to result in text comprising a plurality of words. Features are then extracted, at530, from the parsed data. The extracted features are then inputted, at540, into an ensemble of different machine learning models each trained to generate a score characterizing a plurality of different dialog constructs. Scores generated by the machine learning models are then fused, at550, for each of the dialog constructs. A performance score can then be generated, at560, that characterizes a conversational proficiency of the individual interacting with the dialog application.
Data which includes the generated score can then be provided, at570(e.g., displayed in an application in a GUI, loaded into memory, stored in physical persistence, transmitted to a remote computing system, etc.). FIG.6is a diagram600illustrating a sample computing device architecture for implementing various aspects described herein. A bus604can serve as the information highway interconnecting the other illustrated components of the hardware. A processing system608labeled CPU (central processing unit) or labeled GPU (graphical processing unit)609(e.g., one or more computer processors/data processors at a given computer or at multiple computers) can perform calculations and logic operations required to execute a program. A non-transitory processor-readable storage medium, such as read only memory (ROM)612and random access memory (RAM)616, can be in communication with the processing system608and can include one or more programming instructions for the operations specified here. Optionally, program instructions can be stored on a non-transitory computer-readable storage medium such as a magnetic disk, optical disk, recordable memory device, flash memory, or other physical storage medium. In one example, a disk controller648can interface one or more optional disk drives with the system bus604. These disk drives can be external or internal floppy disk drives such as660, external or internal CD-ROM, CD-R, CD-RW or DVD drives, or solid state drives such as652, or external or internal hard drives656. As indicated previously, these various disk drives652,656,660and disk controllers are optional devices. The system bus604can also include at least one communication port620to allow for communication with external devices either physically connected to the computing system or available externally through a wired or wireless network. In some cases, the at least one communication port620includes or otherwise comprises a network interface. To provide for interaction with a user, the subject matter described herein can be implemented on a computing device having a display device640(e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information obtained from the bus604via a display interface614to the user and an input device632such as a keyboard and/or a pointing device (e.g., a mouse or a trackball) and/or a touchscreen by which the user can provide input to the computer. Other kinds of input devices632can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback by way of a microphone636, or tactile feedback), and input from the user can be received in any form, including acoustic, speech, or tactile input. The input device632and the microphone636can be coupled to and convey information via the bus604by way of an input device interface628. Other computing devices, such as dedicated servers, can omit one or more of the display640and display interface614, the input device632, the microphone636, and input device interface628. One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof.
These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores. In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” In addition, use of the term “based on,” above and in the claims is intended to mean, “based at least in part on,” such that an unrecited feature or element is also permissible. The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. 
The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims. | 27,883 |
11861318 | BEST MODE FOR CARRYING OUT THE INVENTION FIG.1is a block diagram illustrating an electronic device101in a network environment100according to various embodiments. Referring toFIG.1, the electronic device101in the network environment100may communicate with an electronic device102via a first network198(e.g., a short-range wireless communication network), or an electronic device104or a server108via a second network199(e.g., a long-range wireless communication network). According to an embodiment, the electronic device101may communicate with the electronic device104via the server108. According to an embodiment, the electronic device101may include a processor120, memory130, an input device150, a sound output device155, a display device160, an audio module170, a sensor module176, an interface177, a haptic module179, a camera module180, a power management module188, a battery189, a communication module190, a subscriber identification module (SIM)196, or an antenna module197. In some embodiments, at least one (e.g., the display device160or the camera module180) of the components may be omitted from the electronic device101, or one or more other components may be added in the electronic device101. In some embodiments, some of the components may be implemented as single integrated circuitry. For example, the sensor module176(e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be implemented as embedded in the display device160(e.g., a display). The processor120may execute, for example, software (e.g., a program140) to control at least one other component (e.g., a hardware or software component) of the electronic device101coupled with the processor120, and may perform various data processing or computation. According to one embodiment, as at least part of the data processing or computation, the processor120may load a command or data received from another component (e.g., the sensor module176or the communication module190) in volatile memory132, process the command or the data stored in the volatile memory132, and store resulting data in non-volatile memory134. According to an embodiment, the processor120may include a main processor121(e.g., a central processing unit (CPU) or an application processor (AP)), and an auxiliary processor123(e.g., a graphics processing unit (GPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor121. Additionally or alternatively, the auxiliary processor123may be adapted to consume less power than the main processor121, or to be specific to a specified function. The auxiliary processor123may be implemented as separate from, or as part of the main processor121. The auxiliary processor123may control at least some of functions or states related to at least one component (e.g., the display device160, the sensor module176, or the communication module190) among the components of the electronic device101, instead of the main processor121while the main processor121is in an inactive (e.g., sleep) state, or together with the main processor121while the main processor121is in an active state (e.g., executing an application). According to an embodiment, the auxiliary processor123(e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module180or the communication module190) functionally related to the auxiliary processor123. 
The memory130may store various data used by at least one component (e.g., the processor120or the sensor module176) of the electronic device101. The various data may include, for example, software (e.g., the program140) and input data or output data for a command related thereto. The memory130may include the volatile memory132or the non-volatile memory134. The program140may be stored in the memory130as software, and may include, for example, an operating system (OS)142, middleware144, or an application146. The input device150may receive a command or data to be used by another component (e.g., the processor120) of the electronic device101, from the outside (e.g., a user) of the electronic device101. The input device150may include, for example, a microphone, a mouse, a keyboard, or a digital pen (e.g., a stylus pen). The sound output device155may output sound signals to the outside of the electronic device101. The sound output device155may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing records, and the receiver may be used for incoming calls. According to an embodiment, the receiver may be implemented as separate from, or as part of, the speaker. The display device160may visually provide information to the outside (e.g., a user) of the electronic device101. The display device160may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to an embodiment, the display device160may include touch circuitry adapted to detect a touch, or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of force incurred by the touch. The audio module170may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module170may obtain the sound via the input device150, or output the sound via the sound output device155or a headphone of an external electronic device (e.g., an electronic device102) directly (e.g., wiredly) or wirelessly coupled with the electronic device101. The sensor module176may detect an operational state (e.g., power or temperature) of the electronic device101or an environmental state (e.g., a state of a user) external to the electronic device101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module176may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface177may support one or more specified protocols to be used for the electronic device101to be coupled with the external electronic device (e.g., the electronic device102) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface177may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. A connecting terminal178may include a connector via which the electronic device101may be physically connected with the external electronic device (e.g., the electronic device102).
According to an embodiment, the connecting terminal178may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector). The haptic module179may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or electrical stimulus which may be recognized by a user via his tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module179may include, for example, a motor, a piezoelectric element, or an electric stimulator. The camera module180may capture a still image or moving images. According to an embodiment, the camera module180may include one or more lenses, image sensors, image signal processors, or flashes. The power management module188may manage power supplied to the electronic device101. According to one embodiment, the power management module188may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery189may supply power to at least one component of the electronic device101. According to an embodiment, the battery189may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module190may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device101and the external electronic device (e.g., the electronic device102, the electronic device104, or the server108) and performing communication via the established communication channel. The communication module190may include one or more communication processors that are operable independently from the processor120(e.g., the application processor (AP)) and support a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module190may include a wireless communication module192(e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module194(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network198(e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network199(e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multi components (e.g., multi chips) separate from each other. The wireless communication module192may identify and authenticate the electronic device101in a communication network, such as the first network198or the second network199, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module196. The antenna module197may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device101. According to an embodiment, the antenna module197may include an antenna including a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate (e.g., PCB).
According to an embodiment, the antenna module197may include a plurality of antennas. In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network198or the second network199, may be selected, for example, by the communication module190(e.g., the wireless communication module192) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module190and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module197. At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)). According to an embodiment, commands or data may be transmitted or received between the electronic device101and the external electronic device104via the server108coupled with the second network199. Each of the electronic devices102and104may be a device of the same type as, or a different type from, the electronic device101. According to an embodiment, all or some of the operations to be executed at the electronic device101may be executed at one or more of the external electronic devices102,104, or108. For example, if the electronic device101should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device101. The electronic device101may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, or client-server computing technology may be used, for example. The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above. It should be appreciated that various embodiments of the present disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise.
As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C,” may include any one of, or all possible combinations of, the items enumerated together in a corresponding one of the phrases. As used herein, such terms as “1st” and “2nd,” or “first” and “second” may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with,” “coupled to,” “connected with,” or “connected to” another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element. As used herein, the term “module” may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry”. A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC). Various embodiments as set forth herein may be implemented as software (e.g., the program140) including one or more instructions that are stored in a storage medium (e.g., internal memory136or external memory138) that is readable by a machine (e.g., the electronic device101). For example, a processor (e.g., the processor120) of the machine (e.g., the electronic device101) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Herein, the term “non-transitory” simply means that the storage medium is a tangible device, and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server.
According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities. According to various embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added. FIG.2is a block diagram of an electronic device101for providing a sentence based on a persona, according to various embodiments. Hereinafter, a ‘persona’ may be a character having unique linguistic features (or characteristics). In an embodiment, a ‘persona’ may be a character of an individual. For example, a ‘persona’ may be a character which reflects unique linguistic features of a user of a client device (e.g., a user of the electronic devices102and104), a user of the electronic device101, a character of a work (e.g., cartoon, movie), or a specific person (e.g., a celebrity). In an embodiment, a ‘persona’ may be a character for a plurality of persons. For example, a ‘persona’ may be a character which reflects common (or representative) linguistic features of people of a specific age group (e.g., people in their 30s). However, it is not limited thereto, and the ‘persona’ may be a character which reflects people's common linguistic features in relation to at least one of an age (e.g., infant, 10s, 20s, 30s, 40s through 50s, or 60s and above), a gender (e.g., male or female), an occupation (e.g., a bartender), a region (e.g., a region using a standard language or a region using a dialect), a linguistic personality (e.g., polite, friendly, humorous, or assertive), or a product (or a service) (e.g., infant content). In an embodiment, the electronic device101may be a developer's device (or server) for creating (or building) an interactive system (e.g., a chatbot). In an embodiment, the electronic device101may be a service provider's device for providing a service which responds to a request (e.g., a voice input for executing a command) from a user of a client device. However, it is not limited thereto, and in an embodiment, the electronic device101may be a client device including at least part of a configuration to be described. Referring toFIG.2, in an embodiment, the electronic device101may include a communication module210, an input device220, a display device230, a sound output device240, a memory250, and a processor260. In an embodiment, the communication module210may communicate with another electronic device (e.g., a client device). For example, the communication module210may be a configuration for receiving a voice input from another electronic device and transmitting a response to the voice input. In an embodiment, the communication module210may be at least in part identical or similar to the communication module190ofFIG.1.
In an embodiment, the input device220may receive a user input from the user (or the developer). For example, the input device220may include a microphone for receiving a user's utterance as a voice signal. As another example, the input device220may include at least one of an input module for receiving a user input from an external device (e.g., a keyboard, a headset), a touch screen for receiving a text input from the user, or a hardware key. In an embodiment, the input device220may be at least in part identical or similar to the input device150ofFIG.1. In an embodiment, the display device230may output various information while the electronic device101performs an operation for providing a sentence based on a persona. For example, the display device230may display a graphical user interface (GUI) tool for creating (or constructing) the interactive system (or the chatbot). However, information displayed on the display device230is not limited thereto. In an embodiment, since the display device230is at least in part identical or similar to the display device160ofFIG.1, its detailed description will be omitted. In an embodiment, the sound output device240may include a speaker for outputting a voice signal. In an embodiment, the sound output device240may output a voice to be provided in response to the voice input. In an embodiment, since the sound output device240is at least partially identical or similar to the sound output device155ofFIG.1, detailed descriptions thereof will be omitted. In an embodiment, the memory250may store information required to provide a sentence based on a persona. In an embodiment, the memory250may store a style transfer model251. In an embodiment, the memory250may store the style transfer model251corresponding to the persona. In an embodiment, the memory250may store different style transfer models251according to personas. In an embodiment, the style transfer model251corresponding to the persona may be a model for converting a response sentence inputted by the user to a response sentence having a style corresponding to the persona, in response to a text input (e.g., an input sentence inputted for intent matching) to be used for the intent matching (or to be intent matched with the voice input received from the client device) or in response to a user input received from the client device. For example, while creating the interactive system (e.g., a chatbot), an actually inputted text input (e.g., the voice input from the user (or the voice input to perform a voice command)) and the text input for the intent matching may be inputted by the user. As a sentence in response to the text input for the intent matching, a response sentence (hereafter, referred to as a ‘first response sentence’) having a certain (or arbitrary) style may be inputted by the user. The style transfer model corresponding to the persona may convert the response sentence having the certain style into a sentence having a style corresponding to the persona (hereafter, referred to as a ‘second response sentence’). In an embodiment, the style transfer model corresponding to the persona may be trained using a neural network, based on a sentence (or set of sentences) having a first style and a sentence having a second style. Training the style transfer model corresponding to the persona will be described later in detail with reference toFIG.3. In an embodiment, the memory250may store sentences having styles corresponding to personas for each persona.
In an embodiment, if a sentence having the style corresponding to the persona is additionally inputted based on a user input of the electronic device101(or a user input of the client device), the style transfer model corresponding to the persona may be trained (or updated, or retrained) based on the additionally inputted sentence. In an embodiment, the memory250may store information of the user. For example, the memory250may store profile information of a user (or the user of the client device) registered (or subscribed) to a service provided by the electronic device101. As another example, the memory250may store profile information of a provider (or a developer) of the service provided by the electronic device101. In an embodiment, the memory250may include a database including a tagged corpus, a database related to context, and a database related to statistics and/or usage. In an embodiment, the memory250may be at least in part identical or similar to the memory130ofFIG.1. In an embodiment, the processor260may control the overall operation performed by the electronic device101. In an embodiment, the processor260may include an automatic speech recognition (ASR) module261, a natural language understanding (NLU) module262, a dialogue manager module263, a persona module264, a user information management module265, and a text to speech (TTS) module266. In an embodiment, the ASR module261may convert a voice input inputted from the user of the electronic device101or the user of the client device to text data. In an embodiment, the ASR module261may include an acoustic model, a language model, and a speech recognition module. For example, the acoustic model may include information related to speech, and the language model may include unit phoneme information or information of a combination of the unit phoneme information. The speech recognition module may convert the user speech to the text data by using the information related to speech and the unit phoneme information. In an embodiment, the NLU module262may acquire a user's intent by performing syntactic analysis or semantic analysis. For example, the NLU module262may acquire the meaning of a word extracted from the user input by using linguistic features (e.g., grammatical elements) such as morphemes or phrases, and determine the user's intent by matching the acquired word meaning to a domain and an intent. In an embodiment, the NLU module262may include a module for performing a part-of-speech (POS) tagging operation for tagging POS information (e.g., a verb, a noun, an adjective) of each word with respect to the corpus. In an embodiment, the dialogue manager module263may determine whether the user's intent acquired by the NLU module262is clear. For example, the dialogue manager module263may determine whether the user's intent is clear, based on whether parameter information is sufficient. In an embodiment, the dialogue manager module263may provide feedback requesting necessary information from the user if the user's intent is not clear. In an embodiment, the persona module264may generally control the operation of providing the sentence based on the persona. In an embodiment, the persona module264may control an operation of training the style transfer model corresponding to the persona using the neural network, based on the sentence having the first style and the sentence having the second style.
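As a non-authoritative sketch of how the modules introduced above could cooperate at runtime, the following Python fragment chains them for a single request; every interface name below is our assumption, since the disclosure does not define these signatures.

```python
def handle_voice_input(voice, asr, nlu, dialogue_manager, persona_module, tts):
    # ASR module 261: voice signal -> text data
    text = asr.transcribe(voice)
    # NLU module 262: syntactic/semantic analysis -> matched intent
    intent = nlu.match_intent(text)
    # Dialogue manager module 263: request more information if intent is unclear
    if not dialogue_manager.is_clear(intent):
        return tts.synthesize(dialogue_manager.feedback_request(intent))
    # Produce a first-style response, then restyle it via the persona module 264
    plain_reply = dialogue_manager.reply(intent)
    persona = persona_module.determine_persona(intent)
    styled_reply = persona_module.transfer(plain_reply, persona)
    # TTS module 266: text -> voice, optionally reflecting the persona's tone
    return tts.synthesize(styled_reply, voice_style=persona)
```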
In an embodiment, the persona module264may control an operation for converting a first response sentence inputted by the user to a second response sentence having the style corresponding to the persona in response to the text input to be used for the intent matching. In an embodiment, the persona module264may obtain the first response sentence inputted by the user in response to the text input to be used for the intent matching. In an embodiment, the persona module264may determine at least one persona for generating at least one second response sentence (or corresponding to the style of the second response sentence), based on obtaining the first response sentence. In an embodiment, based on obtaining the first response sentence, the persona module264may determine one or more personas corresponding to one or more style transfer models among all the style transfer models stored in the memory250as personas for generating second response sentences. In an embodiment, the persona module264may determine at least one persona, based on a service (or product, or content) provided by the user of the electronic device101. For example, if the user of the electronic device101is a service provider who provides a service for providing information of women's clothing, the persona module264may determine a persona (e.g., a female character) related to the women's clothing information. In an embodiment, the persona module264may determine at least one persona, based on user (or developer) information (or profile information) of the electronic device101. In an embodiment, based on obtaining the first response sentence, the persona module264may determine at least one persona selected by the user of the electronic device101among the one or more personas corresponding to the one or more style transfer models among all the style transfer models stored in the memory250as at least one persona for generating at least one second response sentence. For example, the persona module264may display on the display device230information indicating the one or more personas corresponding to the one or more style transfer models among all the style transfer models stored in the memory250, based on obtaining the response sentence inputted by the user. Based on a user input for selecting at least one persona in the information indicating the one or more personas, the persona module264may determine the at least one selected persona as at least one persona for generating the second response sentence. In an embodiment, the persona module264may convert the first response sentence to at least one second response sentence having the style corresponding to the at least one determined persona, using the neural network. In an embodiment, using the at least one style transfer model corresponding to the at least one determined persona trained through the neural network, the persona module264may generate (or acquire) at least one second response sentence, by converting the first response sentence to at least one second response sentence having the style corresponding to the at least one determined persona. In an embodiment, the persona module264may receive a user input from the client device. For example, the persona module264may receive the user input for executing the command from the client device, through the communication module210. In an embodiment, the persona module264may determine at least one persona for providing the response.
In an embodiment, the persona module264may determine a persona for providing the response, based on the user information of the client device. For example, if the user of the client device is a user residing in an area using a specific dialect, the persona module264may determine a persona related to the specific dialect as the persona for providing the response. In an embodiment, the persona module264may determine the persona for providing the response, based on the user information of the client device and the persona selected by the user of the electronic device101(e.g., at least one response sentence corresponding to at least one persona selected by the user of the electronic device101) while generating the interactive system (e.g., the chatbot). For example, the persona module264may determine, as the persona for providing the response, a persona (e.g., a persona related to people in their 30s) corresponding to the user information (e.g., information that the user's age is in the 30s) of the client device among the at least one persona selected by the user of the electronic device101while creating the interactive system (e.g., the chatbot). In an embodiment, the persona module264may determine the persona for providing the response, based on the user information of the electronic device101. In an embodiment, the persona module264may determine the persona for providing the response, based on a service (or content) provided by the user of the electronic device101. For example, if the service provided by the user of the electronic device101is a service for providing cocktail information, the persona module264may determine a persona related to a professional bartender as the persona for providing the response. In an embodiment, the persona module264may determine the persona for providing the response, based on content requested by the user of the client device. For example, if the user of the client device requests information related to infant content, the persona module264may determine a persona related to an infant as the persona for providing the response. In an embodiment, the persona module264may determine the persona for providing the response, based on a user setting of the electronic device101. For example, if the user of the electronic device101sets a first persona as the persona for providing the response, the persona module264may determine the first persona as the persona for providing the response. In an embodiment, the persona module264may generate a sentence having a style corresponding to the determined persona. In an embodiment, the persona module264may generate a sentence having a style corresponding to the determined persona using the style transfer model corresponding to the determined persona. For example, the persona module264may generate a sentence having a first style in response to a received user input of the client device. The persona module264may generate a sentence as the response, by converting the generated sentence having the first style to a sentence having the second style corresponding to the persona using the style transfer model corresponding to the determined persona. In an embodiment, the user information management module265may manage information of the user of the electronic device101or the user of the client device.
For example, if profile information of the user of the electronic device101or the user of the client device is added or changed, the user information management module265may update the user information stored in the memory250, by considering the added or changed profile information. In an embodiment, the TTS module266may change text-type information to voice-type information. For example, the TTS module266may change the text-type sentence generated as the response through the persona module264to voice-type information. In an embodiment, the TTS module266may change a text-type sentence generated as the response through the persona module264to voice-type information reflecting the persona's tone. For example, the TTS module266may change the text-type sentence generated as the response through the persona module264to the voice-type information which reflects at least one of the persona's speech rate, accent, pitch, or habit. InFIG.2, the processor260includes all of the ASR module261, the NLU module262, the dialogue manager module263, the persona module264, the user information management module265, and the TTS module266, but is not limited thereto. For example, some of the ASR module261, the NLU module262, the dialogue manager module263, the persona module264, the user information management module265, and the TTS module266may be omitted. As another example, at least one module of the ASR module261, the NLU module262, the dialogue manager module263, the persona module264, the user information management module265, or the TTS module266may be a configuration independent of the processor260. In an embodiment, the ASR module261, the NLU module262, the dialogue manager module263, the persona module264, the user information management module265, and the TTS module266may be stored in the memory250, and may be executed by the processor260(e.g., the main processor121). In an embodiment, the processor260may be at least in part identical or similar to the processor120ofFIG.1. FIG.3is a diagram for illustrating a method of training a style transfer model corresponding to a persona using a neural network, according to various embodiments. Referring toFIG.3, in an embodiment,FIG.3may depict a style transfer model for converting a sentence 'I'm glad to meet you' having a first style to a sentence 'Lovely to meet ya' having a second style. In an embodiment, the input (or the input sentence) and the output (or the output sentence) of the style transfer model may be expressed as the following Equation 1:

$X = [X_1, X_2, \ldots, X_N]$, $Y = [Y_1, Y_2, \ldots, Y_M]$ (Equation 1)

In Equation 1, $X_i$ may denote an input word, $X$ may denote the group of the $X_i$ (or the sentence to transfer), and $N$ may denote the length of the input sentence. In addition, $Y_j$ may denote an output word, $Y$ may denote the group of the $Y_j$ (or the sentence after transfer), and $M$ may denote the length of the output sentence. In an embodiment, inFIG.3, X=[I'm, glad, to, meet, you] and Y=[Lovely, to, meet, . . . ]. SinceFIG.3presents the operation for predicting the word 'meet', the ellipsis '. . .' in [Lovely, to, meet, . . . ] may indicate a state where the operation for predicting 'ya', the word following 'meet', has not yet been conducted. In an embodiment, a j-th output word (e.g., 'meet') may be calculated based on an attentional distribution, a pointer distribution, and a balance factor. In an embodiment, each of these distributions may mean a probability distribution. In an embodiment, inFIG.3, the attentional distribution may be expressed as $\mathrm{Attention}_j(w)$.
$\mathrm{Attention}_j(w)$ may denote the probability of generating the word $w$ as the j-th word of the output sentence. In an embodiment, inFIG.3, the pointer distribution may be expressed as $\mathrm{Pointer}_j(w)$. $\mathrm{Pointer}_j(w)$ may denote the probability of outputting the word $w$ included in the input sentence as the j-th word of the output sentence. In an embodiment, the attentional distribution may be related to the role of converting the style by considering context, and the pointer distribution may be related to the role of maintaining the content of the sentence. In an embodiment, through the calculation operations based on the attentional distribution and the pointer distribution, the content of the input sentence may be maintained while a sentence of the converted style is outputted. In an embodiment, using the following Equation 2, the j-th word may be calculated based on the attentional distribution, the pointer distribution, and the balance factor:

$\mathrm{Output}_j(w) = b \times \mathrm{Attention}_j(w) + (1 - b) \times \mathrm{Pointer}_j(w)$, $w_j = \operatorname{argmax}_w(\mathrm{Output}_j(w))$ (Equation 2)

In Equation 2, $b$ may denote the balance factor, $\mathrm{Output}_j(w)$ may denote the result of summing the attentional distribution and the pointer distribution using the balance factor, and $w_j$ may denote the j-th output word. In an embodiment, $\operatorname{argmax}_w(\mathrm{Output}_j(w))$ may denote a function for selecting the word $w$ that maximizes $\mathrm{Output}_j(w)$. Hereafter, a method for calculating each of the attentional distribution, the pointer distribution, and the balance factor, and for calculating the j-th word based on them, will be described in detail. In an embodiment, an embedding operation301may be performed on the input words. In an embodiment, the embedding operation301may be an operation for tokenizing the input words. In an embodiment, the embedding operation301may generate a one-hot vector corresponding to a word token, and generate a word embedding vector by multiplying the one-hot vector by a word embedding matrix. In an embodiment, after conducting the embedding operation301, an encoding operation303may be performed. In an embodiment, a bidirectional long short-term memory (BiLSTM) may be used as an encoder to perform the encoding operation. In an embodiment, by using Equation 3 below, a hidden state of the encoder may be generated by inputting the embedding vector into the BiLSTM:

$h_i^{enc} = \mathrm{BiLSTM}(E_{enc}(X_i))$ (Equation 3)

In Equation 3, $h_i^{enc}$ may denote the hidden state of the encoder, and $E_{enc}(X_i)$ may denote the embedded input word. In an embodiment, an attention weight may be calculated using the following Equation 4:

$e_{ij} = \tanh(h_i \times s_{j-1})$, $a_{ij} = \dfrac{e_{ij}}{\sum_{k=1}^{N} e_{kj}}$ (Equation 4)

In Equation 4, $s_{j-1}$ may denote the (j-1)-th hidden state of the decoder, $h_i$ may denote the i-th hidden state of the encoder, and $a_{ij}$ may denote the attention weight. In an embodiment, for ease of explanation, the sum in Equation 4 is written over $k = 1$ through $N$, but when the sentinel vector to be described later is considered, $\sum_{k=1}^{N} e_{kj}$ may be replaced by $\sum_{k=1}^{N+1} e_{kj}$. In an embodiment, using the following Equation 5, a context vector may be calculated:

$C_j = \sum_{i=1}^{N} a_{ij} h_i$ (Equation 5)

In Equation 5, $C_j$ may denote the j-th context vector.
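A minimal numerical sketch of Equations 4 and 5 follows, assuming encoder hidden states are already available per Equation 3; the function and variable names are ours, not the disclosure's.

```python
import numpy as np

def attention_context(enc_states, dec_prev_state):
    # enc_states: (N, d) array of encoder hidden states h_1..h_N (Equation 3).
    # dec_prev_state: (d,) array holding the decoder hidden state s_{j-1}.
    e = np.tanh(enc_states @ dec_prev_state)  # Equation 4: e_ij = tanh(h_i x s_{j-1})
    a = e / e.sum()                           # Equation 4: a_ij = e_ij / sum_k e_kj
    context = a @ enc_states                  # Equation 5: C_j = sum_i a_ij h_i
    return a, context

# Example with random states. Note that Equation 4 normalizes the raw tanh
# scores directly (not via a softmax), so e.sum() is assumed nonzero here.
rng = np.random.default_rng(0)
weights, C_j = attention_context(rng.normal(size=(5, 8)), rng.normal(size=8))
```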
In an embodiment, the j-th hidden state of the decoder may be calculated using the following Equation 6:

$S_j = \mathrm{LSTM}(S_{j-1}, [\mathrm{Embed}_{dec}(y_{j-1}); C_j])$ (Equation 6)

In Equation 6, $S_j$ may denote the j-th hidden state of the decoder, $S_{j-1}$ may denote the (j-1)-th hidden state of the decoder, $y_{j-1}$ may denote the (j-1)-th real word (or ground truth), $\mathrm{Embed}_{dec}(y_{j-1})$ may denote the embedding vector of $y_{j-1}$, $C_j$ may denote the j-th context vector, and $[\mathrm{Embed}_{dec}(y_{j-1}); C_j]$ may denote a vector which concatenates (or associates) $\mathrm{Embed}_{dec}(y_{j-1})$ and $C_j$. In an embodiment, a long short-term memory (LSTM) may be used as the decoder for performing the decoding operation, and $S_j$ may be calculated by inputting $[\mathrm{Embed}_{dec}(y_{j-1}); C_j]$ into the LSTM. In an embodiment, the attentional distribution may be calculated using the following Equation 7:

$\mathrm{Attention}_j(w) = \mathrm{softmax}(W_{weight\ matrix} \times [S_j; C_j])$ (Equation 7)

In Equation 7, $W_{weight\ matrix}$ may denote a weight matrix, and $[S_j; C_j]$ may denote a vector which concatenates $S_j$ and $C_j$. $\mathrm{softmax}(\cdot)$ may denote a softmax function used as an activation function. In an embodiment, the pointer distribution may be calculated using the following Equation 8:

$\mathrm{Pointer}_j(w) = \sum_{i \in I(w,X)} a_i$ (Equation 8)

In Equation 8, $\mathrm{Pointer}_j(w)$ may denote the probability that the word $w$ is outputted as the j-th word of the output sentence, $I(w,X)$ may denote every position including the word $w$ in the input sentence $X$, and $a_i$ may denote the probability that the word $w$ at the i-th position of the input sentence is outputted as the j-th word of the output sentence. In an embodiment, the balance factor $b$ may be calculated based on the sentinel vector. In an embodiment, the sentinel vector may be considered an (N+1)-th hidden state of the encoder. In an embodiment, the balance factor may be calculated using the following Equation 9:

$e_{(N+1)j} = \tanh(V \times s_{j-1})$, $b = a_{(N+1)j} = \dfrac{e_{(N+1)j}}{\sum_{k=1}^{N+1} e_{kj}}$ (Equation 9)

In Equation 9, $V$ may denote the sentinel vector, and $b$ may denote the balance factor. In an embodiment, to compute the training error, the following Equation 10 may be used:

$\mathrm{error} = -\sum_{t=1}^{M} \log \mathrm{Output}_t(y_t)$ (Equation 10)

In Equation 10, $\mathrm{Output}_t(y_t)$ may denote the predicted probability (or the predicted probability distribution) of the t-th word $y_t$. In an embodiment, training of the style transfer model may be conducted so as to minimize the error. In an embodiment,FIG.3illustrates the operation of predicting the word 'meet', but the same or a similar operation may be applied to the other output words. In an embodiment, the input sentence ofFIG.3may correspond to the sentence having the first style, and the output sentence may correspond to the sentence having the second style. In an embodiment, althoughFIG.3illustrates one style transfer model, a style transfer model for each persona (or for each style corresponding to a persona) may be trained in the same or a similar manner to the method described with reference toFIG.3. In an embodiment, the trained style transfer model may be stored in the memory250, together with the sentences used for the training. In an embodiment,FIG.3has been described by exemplifying a recurrent neural network (RNN) as the neural network, but it is not limited thereto. For example, the style transfer model may be trained using convolutional neural networks (CNNs), deep belief networks, or restricted Boltzmann machines.
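Putting Equations 2, 7, 8, and 9 together, one possible prediction step for the j-th output word is sketched below; it is illustrative only, assumes the listed quantities were computed as described above, and uses names of our own choosing.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerically stable softmax
    ez = np.exp(z)
    return ez / ez.sum()

def predict_word(vocab_scores, attn_weights, input_word_ids, vocab_size, b):
    # vocab_scores: (V,) scores W x [S_j; C_j] before the softmax (Equation 7).
    # attn_weights: (N,) attention weights a_i over input positions (Equation 4).
    # input_word_ids: length-N list mapping each input position to a vocab id.
    # b: balance factor obtained from the sentinel vector (Equation 9).
    attention = softmax(vocab_scores)              # Attention_j(w), Equation 7
    pointer = np.zeros(vocab_size)                 # Pointer_j(w), Equation 8
    for pos, word_id in enumerate(input_word_ids):
        pointer[word_id] += attn_weights[pos]      # sum of a_i over positions of w
    output = b * attention + (1.0 - b) * pointer   # Output_j(w), Equation 2
    return int(np.argmax(output))                  # w_j = argmax_w Output_j(w)
```

The balance factor thus interpolates between generating a new word (style change) and copying a word from the input (content preservation), matching the roles assigned to the two distributions above.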
FIG.4is a diagram for illustrating a method of converting a sentence to a sentence having a style corresponding to a persona, according to various embodiments. Referring toFIG.4, in an embodiment,FIG.4may show a table which converts a sentence using a standard language as the first style corresponding to the first persona to a sentence using a dialect as the second style corresponding to the second persona. For example, a sentence 'Eoseo osibsio' using the standard language may be converted to a sentence 'Peotteug oiso' using Gyeongsang-do dialect corresponding to a 2-1 persona, a sentence 'Heobeollage wabeolilangkke' using Jeolla-do dialect corresponding to a 2-2 persona, a sentence 'Ppalli wayu' using Chungcheong-do dialect corresponding to a 2-3 persona, and a sentence 'Honje Obseoye' using Jeju-do dialect corresponding to a 2-4 persona. In an embodiment, if the sentence 'Eoseo osibsio' using the standard language is inputted from the user (or the developer) as a first response sentence, the processor260may provide (or recommend) 'Peotteug oiso', 'Heobeollage wabeolilangkke', 'Ppalli wayu', and 'Honje Obseoye' as second response sentences. In an embodiment, the style corresponding to the persona may include a style reflecting unique linguistic features of the user of the client device, of the user of the electronic device101, of a character of a work (e.g., a cartoon or a movie), or of a specific person (e.g., a celebrity). For example, the processor260may convert a sentence such as 'Schedule will be postponed' having the first style to a sentence such as 'It will be later than expected pika pika~' having a style corresponding to a cartoon character, a sentence such as 'I think the schedule will be a little late' having a style corresponding to a specific celebrity, or a sentence such as 'Master schedule is expected to be postponed' having a style corresponding to a movie character. In an embodiment, the processor260may convert the first response sentence to a sentence corresponding to a persona for the user of the client device (or a persona generated by characterizing the client user by reflecting the linguistic features of the client user), based on statistics (or a history, or a log) of the input (e.g., the voice input for executing the command) inputted from the user of the client device. In an embodiment, the processor260may convert the first response sentence to a sentence corresponding to a persona for the user of the electronic device101(or a persona generated by characterizing the user of the electronic device101by reflecting the linguistic features of that user), based on statistics of the input (e.g., the voice input for executing the command) inputted from the user of the electronic device101. An electronic device according to various embodiments of the present invention may include a memory, and at least one processor, wherein the at least one processor may be configured to obtain a sentence based on a user input, based on obtaining the sentence, determine at least one persona, convert the sentence to at least one sentence having a style corresponding to the at least one persona, using a neural network, and provide the converted at least one sentence. In various embodiments, the electronic device may further include a display device, wherein the at least one processor may be configured to, through the display device, display one or more personas for recommendation to a user, in response to an input for selecting at least one persona of the one or more displayed personas, select at least one persona, and determine the at least one selected persona as the at least one persona.
In various embodiments, the at least one processor may be configured to convert the sentence to at least one sentence having a style corresponding to the determined at least one persona, and display the at least one converted sentence through the display device. In various embodiments, the at least one processor may be configured to receive an input for selecting one or more sentences from the at least one displayed sentence, and in response to receiving the input, store the one or more selected sentences in the memory in response to an input requesting information, received from an external device. In various embodiments, the at least one processor may be configured to receive an input for modifying one or more sentences among the at least one displayed sentence, and in response to receiving the input, store the one or more modified sentences in the memory in response to an input for requesting information, received from an external device. In various embodiments, the at least one processor may be configured to determine the at least one persona, based on information of a service provided by a user of the electronic device or the user of the electronic device. In various embodiments, the at least one processor may be configured to convert the sentence to at least one sentence having a style corresponding to the at least one persona, using a style transfer model trained using the neural network and corresponding to each of the at least one persona. In various embodiments, the at least one processor may be configured to receive an input for requesting information from an external device, based on receiving the input, identify information of a user of the external device, determine a persona based on the user information of the external device, convert a response sentence for the request to a response sentence having a style corresponding to the determined persona, and provide the converted response sentence to the external device. In various embodiments, the at least one processor may be configured to receive an input for requesting information from an external device, based on receiving the input, identify content contained in the information, determine the persona based on the content information, convert the response sentence for the request to a response sentence having a style corresponding to the determined persona, and provide the converted response sentence to the external device. In various embodiments, the at least one processor may be configured to generate a persona related to a user of the external device or a persona related to a user of the electronic device, based on statistics of information requested from the external device or statistics of a response for a request inputted from the user of the electronic device. FIG.5is a flowchart500for illustrating a method for providing a sentence based on a persona, according to various embodiments. For example,FIG.5may be an overall flowchart500of the method for providing the sentence based on the persona. Referring toFIG.5, in operation501, in an embodiment, the processor260may obtain a sentence based on a user input. For example, the processor260may receive (or receive an input) a text input to be intent-matched to a voice input received from the client device, from the user. After receiving the text input, the processor260may receive, from the user, a first response sentence for the received text input (or in response to the received text input). 
In operation503, in an embodiment, the processor260may determine at least one persona, based on obtaining the sentence. For example, based on obtaining the first response sentence, the processor260may determine at least one persona for generating a second response sentence (or corresponding to a style of the second response sentence). In an embodiment, based on obtaining the first response sentence, the processor260may determine one or more personas corresponding to one or more style transfer models among all the style transfer models stored in the memory250as personas for generating second response sentences. In an embodiment, based on obtaining the first response sentence, the processor260may determine at least one persona selected by the user of the electronic device101, among the one or more personas corresponding to the one or more style transfer models among all the style transfer models stored in the memory250, as at least one persona for generating at least one second response sentence. For example, based on obtaining a response sentence inputted by the user, the processor260may display information indicating the one or more personas corresponding to the one or more style transfer models among all the style transfer models stored in the memory250through the display device230to recommend them to the user. Based on a user input for selecting at least one persona in the information indicating the one or more personas, the processor260may determine the at least one selected persona as at least one persona for generating the second response sentence. In an embodiment, based on a service (or a product, or content) provided by the user of the electronic device101, the processor260may determine at least one persona. For example, if the user of the electronic device101is a service provider who provides a service for providing women's clothing information, the processor260may determine a persona (e.g., a female character) related to the women's clothing information as the persona for generating at least one second response sentence. In an embodiment, the processor260may determine at least one persona, based on information (or profile information) of the user of the client device or the user (or the developer) of the electronic device101. For example, the processor260may determine the persona for the user of the client device (or the persona generated by characterizing the client user by reflecting the linguistic features of the client user) as the persona for generating at least one second response sentence, based on the statistics (or history, or log) of the input (e.g., the voice input for executing the command) inputted from the user of the client device. As another example, the processor260may determine the persona for the user of the electronic device101(or the persona generated by characterizing the user of the electronic device101by reflecting the linguistic features of that user) as the persona for generating at least one second response sentence, based on the statistics of the input (e.g., the voice input for executing the command) inputted from the user of the electronic device101. In operation505, in an embodiment, the processor260may convert the obtained sentence to at least one sentence having a style corresponding to the at least one determined persona, using the neural network.
For example, the processor260may convert the first response sentence to at least one second response sentence having the style corresponding to the at least one determined persona, using the neural network. In an embodiment, the processor260may generate (or acquire) at least one second response sentence, by converting the first response sentence to at least one second response sentence having the style corresponding to the at least one determined persona, using at least one style transfer model corresponding to the at least one determined persona, trained through the neural network. In an embodiment, the processor260may convert the first response sentence to at least one sentence having the style corresponding to the persona selected by the user of the electronic device101among the one or more personas corresponding to the one or more style transfer models among all the style transfer models stored in the memory250. In operation507, in an embodiment, the processor260may provide the at least one converted sentence. For example, the processor260may output the at least one sentence converted through operation505, through the display device230. In an embodiment, the processor260may output the at least one sentence having the style corresponding to the selected persona (e.g., the sentence converted through operation505) through the display device230, together with the persona selected by the user (e.g., the persona selected through operation503). In an embodiment, the processor260may determine one or more sentences selected by the user of the electronic device101, among the at least one sentence outputted, as one or more sentences to be provided as a response to the text input (e.g., the input sentence inputted for the intent matching) to be used for the intent matching (or to be intent matched with the voice input received from the client device) or as a response to the user input received from the client device. In an embodiment, the processor260may store the one or more determined sentences in the memory250.
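The flow of operations 501 through 507 can be condensed into the following sketch, reusing the hypothetical StyleTransferStore introduced earlier; the transfer method and overall structure are our assumptions, not mandated by the disclosure.

```python
def provide_persona_sentences(first_response, store, selected_personas=None):
    # Operation 503: determine personas (all stored ones, or a user selection).
    personas = selected_personas or store.personas()
    # Operation 505: convert the first response sentence once per persona.
    second_responses = {
        persona: store.model_for(persona).transfer(first_response)
        for persona in personas
    }
    # Operation 507: return the candidates for display and user selection.
    return second_responses

# e.g., provide_persona_sentences("Have good news", store)
```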
FIG.6is a flowchart600for illustrating a method for generating an interactive system, according to various embodiments. For example,FIG.6may be the flowchart600of the method for generating the interactive system using an API service (e.g., a chatbot builder service). Referring toFIG.6, in operation601, in an embodiment, the processor260may register a user's account of the electronic device101to generate the interactive system (e.g., a chatbot). For example, the processor260may register the account of the user of the electronic device101to an API service (e.g., a chatbot builder service) to create a chatbot for the user (or the developer), based on a user input. In operation603, in an embodiment, the processor260may obtain a sentence based on a user input. For example, the processor260may receive, from the user, a text input to be intent-matched with a voice input received from the client device. After receiving the text input, the processor260may receive, from the user, a first response sentence for the received text input (or in response to the received text input). In operation605, in an embodiment, the processor260may determine at least one persona, based on obtaining the sentence. For example, the processor260may determine, based on obtaining the first response sentence, at least one persona for generating a second response sentence (or corresponding to a style of the second response sentence). In an embodiment, based on obtaining the first response sentence, the processor260may determine one or more personas corresponding to one or more style transfer models among all the style transfer models stored in the memory250as personas for generating the second response sentences. In an embodiment, based on obtaining the first response sentence, the processor260may determine at least one persona selected by the user of the electronic device101, among the one or more personas corresponding to the one or more style transfer models among all the style transfer models stored in the memory250, as at least one persona for generating at least one second response sentence. For example, based on obtaining a response sentence inputted by the user, the processor260may display information indicating one or more personas corresponding to the one or more style transfer models among all the style transfer models stored in the memory250through the display device230to recommend them to the user. Based on a user input for selecting at least one persona in the information indicating the one or more personas, the processor260may determine the at least one selected persona as at least one persona for generating the second response sentence. In operation607, in an embodiment, the processor260may generate and output a sentence having a style corresponding to the at least one selected persona. In an embodiment, the processor260may convert the obtained sentence to at least one sentence having the style corresponding to the at least one selected persona, using the neural network. For example, the processor260may convert the first response sentence to at least one second response sentence having the style corresponding to the at least one selected persona, using the neural network. In an embodiment, the processor260may generate (or acquire) at least one second response sentence, by converting the first response sentence to at least one second response sentence having the style corresponding to the at least one selected persona, using at least one style transfer model corresponding to the at least one selected persona, trained through the neural network. In an embodiment, the processor260may display the at least one generated sentence having the style corresponding to the at least one selected persona through the display device230, together with the persona selected by the user (e.g., the persona selected in operation605). In operation609, in an embodiment, the processor260may select one or more sentences from the at least one sentence outputted, based on a user input. In an embodiment, the processor260may modify at least part of the one or more sentences selected based on the user input, among the at least one sentence outputted. In an embodiment, the processor260may input an additional (or new) sentence based on a user input, in addition to the at least one sentence outputted. In an embodiment, if at least part of the one or more selected sentences is modified or an additional sentence is inputted, the processor260may store the modified or inputted sentence, and use it to train the style transfer model. In operation611, in an embodiment, the processor260may generate a response based on the one or more selected sentences.
In an embodiment, the processor260may determine the one or more selected sentences (or the one or more selected sentences and the sentence modified or additionally inputted) as one or more sentences to be provided in response to the text input to be used for the intent matching or in response to the user input received from the client device. In an embodiment, the processor260may store the one or more determined sentences in the memory250. FIG.7is a flowchart700for illustrating a method for providing a sentence based on a persona based on an input from a client device, according to various embodiments. For example,FIG.7is the flowchart700of the method for providing a response to a user input received from the client device. Referring toFIG.7, in operation701, in an embodiment, the processor260may receive a user input from the client device. For example, the processor260may receive the user input for executing a command from the client device, through the communication module210. In operation703, in an embodiment, the processor260may determine at least one persona for providing a response. In an embodiment, the processor260may determine a persona for providing the response, based on the user information of the client device. For example, if the user of the client device is a user residing in an area using a specific dialect, the processor260may determine a persona related to the specific dialect as the persona for providing the response. In an embodiment, the processor260may determine a persona for providing a response, based on the user information of the client device and a persona selected by the user of the electronic device101while generating the interactive system (e.g., the chatbot), for example, the persona selected by the user in operation605ofFIG.6, or a persona corresponding to the one or more sentences selected by the user in operation609ofFIG.6. For example, the processor260may determine, as the persona for providing the response, a persona (e.g., a persona related to people in their 30s) corresponding to the user information (e.g., information that the user's age is in the 30s) of the client device among the at least one persona selected by the user of the electronic device101while generating the interactive system (e.g., the chatbot). In an embodiment, the processor260may determine a persona for providing a response, based on the user information of the electronic device101. In an embodiment, the processor260may determine a persona for providing a response, based on a service (or content) provided by the user of the electronic device101. For example, if the service provided by the user of the electronic device101is a service for providing cocktail information, the processor260may determine a persona related to a professional bartender as the persona for providing the response. In an embodiment, the processor260may determine a persona for providing a response, based on content requested by the user of the client device. For example, if the user of the client device requests infant content information, the processor260may determine a persona related to the infant as the persona for providing the response. In an embodiment, the processor260may determine a persona for providing a response, based on the user setting of the electronic device101.
For example, if the user of the electronic device101sets the first persona as the persona for providing the response, the processor260may determine the first persona as the persona for providing the response. In operation705, in an embodiment, the processor260may generate a sentence having a style corresponding to the determined persona. In an embodiment, the processor260may generate a sentence having the style corresponding to the determined persona using the style transfer model corresponding to the determined persona. For example, the processor260may generate a sentence having the first style in response to the received user input of the client device. The processor260may generate the sentence as the response, by converting the generated sentence having the first style to a sentence having the second style corresponding to the persona using the style transfer model corresponding to the determined persona. In operation707, in an embodiment, the processor260may provide the generated sentence to the client device. For example, the processor260may provide the generated sentence to the client device, through the communication module210.
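The runtime path of operations 701 through 707 may be summarized by a sketch like the following; all interfaces are again hypothetical, with the persona chosen from client-user information and a fall-back to the persona set by the user of the electronic device.

```python
def respond_to_client(client_input, client_profile, store, default_persona,
                      dialogue_manager):
    # Operation 703: determine the persona, here from client user information
    # (e.g., a dialect region) with a fall-back to the developer's setting.
    persona = client_profile.get("preferred_persona") or default_persona
    # Operation 705: generate a first-style sentence, then restyle it with the
    # style transfer model corresponding to the determined persona.
    plain = dialogue_manager.reply(client_input)
    styled = store.model_for(persona).transfer(plain)
    # Operation 707: the styled sentence is returned for delivery to the client.
    return styled
```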
FIG.8is a diagram for illustrating a method for selecting a persona for providing a response by a user of an electronic device101, according to various embodiments. Referring toFIG.8, in an embodiment,FIG.8may show a screen800for selecting a persona to provide a response, based on an input from the user of the electronic device101. In an embodiment, the processor260may receive a user input for inputting an intent name such as 'product advertisement'. In an embodiment, the processor260may receive a user input for inputting an event name such as 'guide promotion'. In an embodiment, the processor260may display information indicating one or more personas through the display device230to recommend them to the user. In an embodiment, the processor260may select a persona, by selecting, based on a user input, at least one of the one or more personas indicated by the information displayed on the display device230. For example, as shown inFIG.8, the processor260may select, based on the user input, the persona related to an age group (e.g., 10s, 20s, 30s, 40s through 50s, or 60s and above), a gender (e.g., male or female), or a linguistic personality (e.g., friendly, assertive), or a combination thereof. FIG.9is a diagram for illustrating a method for determining a sentence having a style corresponding to a persona by a user of an electronic device101, according to various embodiments. Referring toFIG.9, in an embodiment,FIG.9may show a screen900for determining at least one sentence to be provided as a response, based on an input from the user of the electronic device101. In an embodiment, the processor260may receive a user input for inputting a first response sentence such as 'Have good news'. In an embodiment, the processor260may output at least one sentence having a style corresponding to the selected persona (e.g., the persona selected inFIG.8). For example, the processor260may output a sentence such as 'Hey, it's good news' with a style corresponding to a persona related to the age of 20s, the gender of male, and the linguistic personality of friendly. In an embodiment, the processor260may select one or more sentences, based on a user input, among the at least one sentence outputted. In an embodiment, the processor260may modify at least part of the one or more sentences selected based on the user input, among the at least one sentence outputted. In an embodiment, the processor260may input an additional (or new) sentence based on a user input, in addition to the at least one sentence outputted. In an embodiment, if at least part of the one or more selected sentences is modified or an additional sentence is inputted, the processor260may store the modified or inputted sentence, and use it to train the style transfer model. In an embodiment, the processor260may determine the one or more selected sentences (or the one or more selected sentences and the sentence modified or additionally inputted) as one or more sentences to be provided in response to the text input to be used for the intent matching or in response to the user input from the client device. In an embodiment, the processor260may store the one or more determined sentences in the memory250. A method according to various embodiments of the present invention may include obtaining a sentence based on a user input, based on obtaining the sentence, determining at least one persona, converting the sentence to at least one sentence having a style corresponding to the at least one persona, using a neural network, and providing the converted at least one sentence. In various embodiments, determining at least one persona may include displaying one or more personas through a display device for recommendation to a user, in response to an input for selecting at least one persona of the one or more displayed personas, selecting at least one persona, and determining the at least one selected persona as the at least one persona. In various embodiments, converting the sentence to the at least one sentence having the style corresponding to the at least one persona may include converting the sentence to at least one sentence having the style corresponding to the determined at least one persona, and providing the converted at least one sentence may include displaying the at least one converted sentence through the display device. In various embodiments, the method may further include receiving an input for selecting one or more sentences from the at least one displayed sentence, and in response to receiving the input, storing the one or more selected sentences in the memory in response to an input requesting information, received from an external device. In various embodiments, the method may further include receiving an input for modifying one or more sentences among the at least one displayed sentence, and in response to receiving the input, storing the one or more modified sentences in the memory in response to an input for requesting information, received from an external device. In various embodiments, determining the at least one persona may include determining the at least one persona, based on information of a service provided by a user of the electronic device or the user of the electronic device. In various embodiments, converting the sentence to the at least one sentence having the style corresponding to the at least one persona may include converting the sentence to at least one sentence having a style corresponding to the at least one persona, using a style transfer model trained using the neural network and corresponding to each of the at least one persona.
In various embodiments, the method may further include receiving an input for requesting information from an external device, based on receiving the input, identifying information of a user of the external device, determining a persona based on the user information of the external device, converting a response sentence for the request to a response sentence having a style corresponding to the determined persona, and providing the converted response sentence to the external device. In various embodiments, the method may include receiving an input for requesting information from an external device, based on receiving the input, identifying content contained in the information, determining the persona based on the content information, converting the response sentence for the request to a response sentence having a style corresponding to the determined persona, and providing the converted response sentence to the external device. In addition, a data structure used in the above-described embodiment of the present invention may be recorded on a computer-readable recording medium through various means. The computer-readable recording medium includes a storage medium such as a magnetic storage medium (e.g., a read only memory (ROM), a floppy disk, a hard disk, etc.) and an optical reading medium (e.g., a compact disk (CD)-ROM, a digital versatile disk (DVD), etc.). In an embodiment, a computer-readable recording medium may record a program for obtaining a sentence based on a user input, based on obtaining the sentence, determining at least one persona, converting the sentence to at least one sentence having a style corresponding to the at least one persona, using a neural network, and providing the converted at least one sentence, in an electronic device. So far, preferred embodiments of the present invention have been described. Those skilled in the technical field to which the present invention belongs will appreciate that the present invention may be implemented in a modified form without departing from the essential characteristics of the present invention. Therefore, the disclosed embodiments should be considered in a descriptive sense and not in a restrictive sense. The scope of the present invention is disclosed in the claims, not in the above-stated descriptions, and all differences within the equivalent scope should be construed as being included in the present invention. | 74,460 |
11861319 | DETAILED DESCRIPTION Aspects of the present disclosure relate to autonomous agents ("chatbots") that deliver content in the form of virtual dialogues that are automatically produced from a corpus of text. Examples of virtual dialogues are a virtual social dialogue and a virtual persuasive dialogue. A virtual social dialogue is a multi-step dialogue between imaginary agents and/or user devices and can be presented within an interactive session between a user device and an autonomous agent. A virtual persuasive dialogue is a multi-step adversarial argumentation dialogue between imaginary agents obtained as a result of content transformation. Presentation of knowledge in dialogue format can be more effective than traditional search-based techniques. For example, usability studies have shown that, for those acquiring information, dialogues often communicate information more effectively than monologues most of the time. Chatbots can provide users with deep domain knowledge, personalization, interactivity, and a level of understanding that can be lacking in modern search engines. Chatbots can also implement social search, providing opinionated data from peers on request, performing personalization, and allowing easy navigation through content. In an example, an autonomous agent executing on a computing device accesses an initial utterance from a user device. The utterance includes a search query, for example "mobile technology." The agent locates multiple documents and determines topics from the documents based on the search query. Clustering can be used to group the determined topics into related clusters. Clustering can include greedy search and/or agglomerative clustering; an illustrative sketch of this step follows the example below. Determined topics might include "what are the benefits of this technology?" or "when will the technology be ready?" Continuing the example, the agent presents the determined topics to the user device. The user device can then make a selection of a desired topic, for example, "what are the benefits of this technology?" Upon receiving a selection of a topic, the agent obtains a corpus of texts and creates a virtual social dialogue from the corpus of text. The virtual social dialogue includes questions and answers organized to appear as a dialogue between user devices and autonomous agents. The agent presents the virtual social dialogue to the user device. For example, the dialogue might include "this technology can be leveraged by mobile devices," "how can the technology be leveraged?" and "by providing faster data downloads, thereby enabling new applications." The user can continue to interact with the agent, for example, by requesting additional information, asking the agent questions, or invoking another virtual social dialogue on a different topic.
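As a brief illustration of the clustering step mentioned in the example above, the following is one plausible realization using scikit-learn; the disclosure names greedy search and agglomerative clustering but does not fix an implementation, so the vectorization, parameters, and sample topics here are our assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

topics = [
    "what are the benefits of this technology?",
    "when will the technology be ready?",
    "how can the technology be leveraged?",
]
# Vectorize the determined topics, then group them into related clusters.
vectors = TfidfVectorizer().fit_transform(topics).toarray()
labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(vectors)
clusters = {}
for topic, label in zip(topics, labels):
    clusters.setdefault(int(label), []).append(topic)
```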
Certain Definitions
As used herein, "rhetorical structure theory" is an area of research and study that provided a theoretical basis upon which the coherence of a discourse could be analyzed. As used herein, "discourse tree" or "DT" refers to a structure that represents the rhetorical relations for a sentence or part of a sentence. As used herein, a "rhetorical relation," "rhetorical relationship," or "coherence relation" or "discourse relation" refers to how two segments of discourse are logically connected to one another. Examples of rhetorical relations include elaboration, contrast, and attribution. As used herein, a "sentence fragment," or "fragment" is a part of a sentence that can be divided from the rest of the sentence. A fragment is an elementary discourse unit. For example, for the sentence "Dutch accident investigators say that evidence points to pro-Russian rebels as being responsible for shooting down the plane," two fragments are "Dutch accident investigators say that evidence points to pro-Russian rebels" and "as being responsible for shooting down the plane." A fragment can, but need not, include a verb. As used herein, "signature" or "frame" refers to a property of a verb in a fragment. Each signature can include one or more thematic roles. For example, for the fragment "Dutch accident investigators say that evidence points to pro-Russian rebels," the verb is "say" and the signature of this particular use of the verb "say" could be "agent verb topic" where "investigators" is the agent and "evidence" is the topic. As used herein, "thematic role" refers to components of a signature used to describe a role of one or more words. Continuing the previous example, "agent" and "topic" are thematic roles. As used herein, "nuclearity" refers to which text segment, fragment, or span, is more central to a writer's purpose. The nucleus is the more central span, and the satellite is the less central one. As used herein, "coherency" refers to the linking together of two rhetorical relations. As used herein, "communicative verb" is a verb that indicates communication. For example, the verb "deny" is a communicative verb. As used herein, "communicative action" describes an action performed by one or more agents and the subjects of the agents. Turning now to the Figures,FIG.1depicts an example of a computing environment in accordance with an aspect of the present disclosure.FIG.1depicts one or more of computing device101, display130, network150, user device160, and external text corpus170. In the example depicted inFIG.1, computing device101communicates over network150with user device160. Computing device101answers questions transmitted by user device160and, as appropriate, generates and inserts a virtual social dialogue into interactions between user device160and computing device101. User device160can be any mobile device such as a mobile phone, smart phone, tablet, laptop, smart watch, and the like. Computing device101includes one or more of dialogue application102, text corpus105, classification model120, and training data125. Dialogue application102can interact with user device160by receiving questions from user device160and answering those questions. In some cases, dialogue application102can facilitate a virtual social dialogue with user device160. An example of a process for facilitating virtual social dialogue is discussed further with respect toFIG.11. Examples of computing device101are distributed system1800and client computing devices1802,1804,1806, and1808. Examples of user device160include client computing devices1802,1804,1806, and1808. Computing device101can output interactions, e.g., questions and answers, on display130. User device160can also output interactions on a display. As depicted, display130includes various utterances. For example, dialogue application102asks a user a question via utterance131. In turn, the user responds with utterance132that he or she would "like to know more." Dialogue application102outputs utterance133, which states "Here is what people are saying about (2)." Dialogue application102then generates and outputs virtual social dialogue134. Virtual social dialogue134includes utterances135-137, which are shown as utterances between virtual users.
For example, utterance135appears to be from “User 1,” utterance136from “Agent 2,” and utterance137from “User 2.” Utterances within a virtual social dialogue can appear to be from an autonomous agent or a user.

To generate content for the virtual social dialogue134, dialogue application102generates questions and answers from one or more corpuses of text. For example, dialogue application102can use text corpus105, which can be local to computing device101, and/or external text corpus170, which is accessible via network150. In an aspect, the generation of content can involve creating one or more communicative discourse trees.

In an aspect, dialogue application102can use classification model120to determine rhetorical agreement between sentences (e.g., questions and answers). Classification model120can be trained with training data125. Classification model120can be trained to identify rhetorical similarity between text. Classification model120can be a predictive model, a classification model, or other model trained to detect a presence of particular features. An example of a model is a support vector machine. For example, classification model120can use one or more such models to analyze a communicative discourse tree. Examples of learning approaches include nearest neighbor models and tree kernel models. Examples of features that can be detected include a presence of argumentation, rhetoric agreement, a consecutive answer, or a feature present in text.

Rhetoric Structure Theory and Discourse Trees

Linguistics is the scientific study of language. For example, linguistics can include the structure of a sentence (syntax), e.g., subject-verb-object; the meaning of a sentence (semantics), e.g., dog bites man vs. man bites dog; and what speakers do in conversation, i.e., discourse analysis or the analysis of language beyond the sentence.

The theoretical underpinnings of discourse, Rhetoric Structure Theory (RST), can be attributed to Mann, William and Thompson, Sandra, “Rhetorical structure theory: A Theory of Text organization,” Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243-281, 1988. Similar to how the syntax and semantics of programming language theory helped enable modern software compilers, RST helped enable the analysis of discourse. More specifically, RST posits structural blocks on at least two levels: a first level such as nuclearity and rhetorical relations, and a second level of structures or schemas. Discourse parsers or other computer software can parse text into a discourse tree.

Rhetoric Structure Theory models the logical organization of text, a structure employed by a writer, relying on relations between parts of text. RST simulates text coherence by forming a hierarchical, connected structure of texts via discourse trees. Rhetoric relations are split into the classes of coordinate and subordinate; these relations hold across two or more text spans and therefore implement coherence. These text spans are called elementary discourse units (EDUs). Clauses in a sentence and sentences in a text are logically connected by the author. The meaning of a given sentence is related to that of the previous and the following sentences. This logical relation between clauses is called the coherence structure of the text. RST is one of the most popular theories of discourse, being based on a tree-like discourse structure, discourse trees (DTs). The leaves of a DT correspond to EDUs, the contiguous atomic text spans.
Adjacent EDUs are connected by coherence relations (e.g., Attribution, Sequence), forming higher-level discourse units. These units are then also subject to this relation linking. EDUs linked by a relation are then differentiated based on their relative importance: nuclei are the core parts of the relation, while satellites are peripheral ones. As discussed, in order to determine accurate request-response pairs, both topic and rhetorical agreement are analyzed. When a speaker answers a question, such as a phrase or a sentence, the speaker's answer should address the topic of this question. In the case of an implicit formulation of a question, via a seed text of a message, an appropriate answer is expected not only to maintain a topic, but also to match the generalized epistemic state of this seed.

Rhetoric Relations

Rhetorical relations can be described in different ways. For example, Mann and Thompson describe twenty-three possible relations. C. Mann, William & Thompson, Sandra. (1987) (“Mann and Thompson”). Rhetorical Structure Theory: A Theory of Text Organization. Other numbers of relations are possible.

TABLE 1
Relation Name | Nucleus | Satellite
Antithesis | ideas favored by the author | ideas disfavored by the author
Background | text whose understanding is being facilitated | text for facilitating understanding
Circumstance | text expressing the events or ideas occurring in the interpretive context | an interpretive context of situation or time
Concession | situation affirmed by author | situation which is apparently inconsistent but also affirmed by author
Condition | action or situation whose occurrence results from the occurrence of the conditioning situation | conditioning situation
Elaboration | basic information | additional information
Enablement | an action | information intended to aid the reader in performing an action
Evaluation | a situation | an evaluative comment about the situation
Evidence | a claim | information intended to increase the reader's belief in the claim
Interpretation | a situation | an interpretation of the situation
Justify | text | information supporting the writer's right to express the text
Motivation | an action | information intended to increase the reader's desire to perform the action
Non-volitional Cause | a situation | another situation which causes that one, but not by anyone's deliberate action
Non-volitional Result | a situation | another situation which is caused by that one, but not by anyone's deliberate action
Otherwise (anti conditional) | action or situation whose occurrence results from the lack of occurrence of the conditioning situation | conditioning situation
Purpose | an intended situation | the intent behind the situation
Restatement | a situation | a reexpression of the situation
Solutionhood | a situation or method supporting full or partial satisfaction of the need | a question, request, problem, or other expressed need
Summary | text | a short summary of that text
Volitional Cause | a situation | another situation which causes that one, by someone's deliberate action
Volitional Result | a situation | another situation which is caused by that one, by someone's deliberate action

Some empirical studies postulate that the majority of text is structured using nucleus-satellite relations. See Mann and Thompson. But other relations do not carry a definite selection of a nucleus. Examples of such relations are shown below.

TABLE 2
Relation Name | Span | Other Span
Contrast | One alternate | The other alternate
Joint | (unconstrained) | (unconstrained)
List | An item | A next item
Sequence | An item | A next item

FIG.2depicts an example of a discourse tree in accordance with an aspect of the present disclosure.FIG.2includes discourse tree200.
Discourse tree200includes text span201, text span202, text span203, relation210and relation238. The numbers inFIG.2correspond to the three text spans.FIG.2corresponds to the following example text with three text spans numbered 1, 2, 3:

1. Honolulu, Hawaii will be site of the 2017 Conference on Hawaiian History

2. It is expected that 200 historians from the U.S. and Asia will attend

3. The conference will be concerned with how the Polynesians sailed to Hawaii

For example, relation210, or elaboration, describes the relationship between text span201and text span202. Relation238depicts the relationship, elaboration, between text span203and text span201. As depicted, text spans202and203elaborate further on text span201. In the above example, given a goal of notifying readers of a conference, text span 1 is the nucleus. Text spans 2 and 3 provide more detail about the conference. InFIG.2, a horizontal number, e.g., 1-3, 1, 2, 3, covers a span of text (possibly made up of further spans); a vertical line signals the nucleus or nuclei; and a curve represents a rhetoric relation (elaboration), with the direction of the arrow pointing from the satellite to the nucleus. If a text span functions only as a satellite and not as a nucleus, then deleting the satellite would still leave a coherent text. If fromFIG.2one deletes the nucleus, then text spans 2 and 3 are difficult to understand.

FIG.3depicts a further example of a discourse tree in accordance with an aspect of the present disclosure.FIG.3includes components301and302, text spans305-307, relation310and relation328. Relation310depicts the relationship, enablement, between components306and305, and between307and305.FIG.3refers to the following text spans:

1. The new Tech Report abstracts are now in the journal area of the library near the abridged dictionary.

2. Please sign your name by any means that you would be interested in seeing.

3. Last day for sign-ups is 31 May.

As can be seen, relation328depicts the relationship between entity307and306, which is enablement.FIG.3illustrates that while nuclei can be nested, there exists only one most nuclear text span.

Constructing a Discourse Tree

Discourse trees can be generated using different methods. A simple example of a method to construct a DT bottom up is:

(1) Divide the discourse text into units by:

(a) Unit size may vary, depending on the goals of the analysis

(b) Typically, units are clauses

(2) Examine each unit, and its neighbors. Is there a relation holding between them?

(3) If yes, then mark that relation.

(4) If not, the unit might be at the boundary of a higher-level relation. Look at relations holding between larger units (spans).

(5) Continue until all the units in the text are accounted for.

A minimal code sketch of this bottom-up procedure appears after the discussion of schema applications below.

Mann and Thompson also describe the second level of building block structures, called schema applications. In RST, rhetoric relations are not mapped directly onto texts; they are fitted onto structures called schema applications, and these in turn are fitted to text. Schema applications are derived from simpler structures called schemas (as shown byFIG.4). Each schema indicates how a particular unit of text is decomposed into other smaller text units. A rhetorical structure tree or DT is a hierarchical system of schema applications. A schema application links a number of consecutive text spans, and creates a complex text span, which can in turn be linked by a higher-level schema application.
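The bottom-up procedure above can be sketched in code. The following Python fragment is a minimal illustration under stated assumptions, not the parser contemplated by the disclosure: the relation classifier find_relation is a hypothetical placeholder for whatever segmenter and relation inventory an implementation chooses, and nuclearity assignment is simplified.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A discourse-tree node: a leaf holds an EDU's text; an internal
    node holds a rhetorical relation over a nucleus and a satellite."""
    text: Optional[str] = None          # set for leaves (EDUs)
    relation: Optional[str] = None      # e.g., "Elaboration", for internal nodes
    nucleus: Optional["Node"] = None    # the more central span
    satellite: Optional["Node"] = None  # the less central span

def find_relation(left: Node, right: Node) -> Optional[str]:
    # Hypothetical placeholder: a real system would query a trained
    # discourse parser here; returns a relation name or None.
    return "Elaboration"

def build_tree_bottom_up(edus: list) -> Node:
    """Greedy bottom-up construction following steps (1)-(5): repeatedly
    merge adjacent units between which a relation holds."""
    units = [Node(text=e) for e in edus]             # step (1): clause-sized units
    while len(units) > 1:
        merged = False
        for i in range(len(units) - 1):              # step (2): examine neighbors
            rel = find_relation(units[i], units[i + 1])
            if rel is not None:                      # step (3): mark that relation
                units[i:i + 2] = [Node(relation=rel, nucleus=units[i],
                                       satellite=units[i + 1])]
                merged = True
                break
        if not merged:                               # step (4): widen to larger spans,
            units[0:2] = [Node(relation="Joint",     # simplified here as a Joint merge
                               nucleus=units[0], satellite=units[1])]
    return units[0]                                  # step (5): all units accounted for

tree = build_tree_bottom_up(["Honolulu, Hawaii will be site of the 2017 Conference",
                             "It is expected that 200 historians will attend"])
print(tree.relation)  # Elaboration
```

Under these assumptions the loop always terminates, since each pass either merges two adjacent units or falls back to a Joint merge, reducing the number of units by one.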
RST asserts that the structure of every coherent discourse can be described by a single rhetorical structure tree, whose top schema creates a span encompassing the whole discourse.

FIG.4depicts illustrative schemas in accordance with an aspect of the present disclosure.FIG.4shows that a joint schema is a list of items consisting of nuclei with no satellites.FIG.4depicts schemas401-406. Schema401depicts a circumstance relation between text spans410and428. Schema402depicts a sequence relation between text spans420and421and a sequence relation between text spans421and423. Schema403depicts a contrast relation between text spans430and431. Schema404depicts a joint relationship between text spans440and441. Schema405depicts a motivation relationship between450and451, and an enablement relationship between452and451. Schema406depicts a joint relationship between text spans460and462. An example of a joint schema is shown inFIG.4for the three text spans below:

1. Skies will be partly sunny in the New York metropolitan area today.

2. It will be more humid, with temperatures in the middle 80's.

3. Tonight will be mostly cloudy, with the low temperature between 65 and 70.

WhileFIGS.2-4depict some graphical representations of a discourse tree, other representations are possible.

FIG.5depicts a node-link representation of the hierarchical binary tree in accordance with an aspect of the present disclosure. As can be seen fromFIG.5, the leaves of a DT correspond to contiguous non-overlapping text spans called Elementary Discourse Units (EDUs). Adjacent EDUs are connected by relations (e.g., elaboration, attribution . . . ) and form larger discourse units, which are also connected by relations. “Discourse analysis in RST involves two sub-tasks: discourse segmentation is the task of identifying the EDUs, and discourse parsing is the task of linking the discourse units into a labeled tree.” See Joty, Shafiq R and Giuseppe Carenini, Raymond T Ng, and Yashar Mehdad. 2013. Combining intra- and multisentential rhetorical parsing for document-level discourse analysis. In ACL (1), pages 486-496.

FIG.5depicts text spans that are leaves, or terminal nodes, on the tree, each numbered in the order they appear in the full text, shown inFIG.6.FIG.5includes tree500. Tree500includes, for example, nodes501-507. The nodes indicate relationships. Nodes are non-terminal, such as node501, or terminal, such as nodes502-507. As can be seen, nodes503and504are related by a joint relationship. Nodes502,505,506, and508are nuclei. The dotted lines indicate that the branch or text span is a satellite. The relations are nodes in gray boxes.

FIG.6depicts an exemplary indented text encoding of the representation inFIG.5in accordance with an aspect of the present disclosure.FIG.6includes text600and text sequences602-604. Text600is presented in a manner more amenable to computer programming. Text sequence602corresponds to node502, sequence603corresponds to node503, and sequence604corresponds to node504. InFIG.6, “N” indicates a nucleus and “S” indicates a satellite.

Examples of Discourse Parsers

Automatic discourse segmentation can be performed with different methods. For example, given a sentence, a segmentation model identifies the boundaries of the composite elementary discourse units by predicting whether a boundary should be inserted before each particular token in the sentence. For example, one framework considers each token in the sentence sequentially and independently.
In this framework, the segmentation model scans the sentence token by token and uses a binary classifier, such as a support vector machine or logistic regression, to predict whether it is appropriate to insert a boundary before the token being examined. In another example, the task is treated as a sequential labeling problem. Once text is segmented into elementary discourse units, sentence-level discourse parsing can be performed to construct the discourse tree. Machine learning techniques can be used.

In one aspect of the present invention, two Rhetorical Structure Theory (RST) discourse parsers are used: CoreNLPProcessor, which relies on constituent syntax, and FastNLPProcessor, which uses dependency syntax. See Surdeanu, Mihai & Hicks, Thomas & Antonio Valenzuela-Escarcega, Marco. Two Practical Rhetorical Structure Theory Parsers. (2015).

In addition, the above two discourse parsers, i.e., CoreNLPProcessor and FastNLPProcessor, use Natural Language Processing (NLP) for syntactic parsing. For example, the Stanford CoreNLP gives the base forms of words and their parts of speech, identifies whether they are names of companies, people, etc., normalizes dates, times, and numeric quantities, marks up the structure of sentences in terms of phrases and syntactic dependencies, and indicates which noun phrases refer to the same entities.

Practically, RST is still a theory that may work in many cases of discourse, but in some cases it may not work. There are many variables including, but not limited to, what EDUs are in a coherent text, i.e., what discourse segmenters are used, what relations inventory is used and what relations are selected for the EDUs, the corpus of documents used for training and testing, and even what parsers are used. So, for example, in the Surdeanu, et al., “Two Practical Rhetorical Structure Theory Parsers,” paper cited above, tests must be run on a particular corpus using specialized metrics to determine which parser gives better performance. Thus, unlike computer language parsers, which give predictable results, discourse parsers (and segmenters) can give unpredictable results depending on the training and/or test text corpus. Thus, discourse trees are a mixture of the predictable arts (e.g., compilers) and the unpredictable arts (e.g., chemistry, where experimentation is needed to determine which combinations will give the desired results).

In order to objectively determine how good a discourse analysis is, a series of metrics is used, e.g., the Precision/Recall/F1 metrics from Daniel Marcu, “The Theory and Practice of Discourse Parsing and Summarization,” MIT Press, (2000). Precision, or positive predictive value, is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that have been retrieved over the total amount of relevant instances. Both precision and recall are therefore based on an understanding and measure of relevance. Suppose a computer program for recognizing dogs in photographs identifies eight dogs in a picture containing 12 dogs and some cats. Of the eight dogs identified, five actually are dogs (true positives), while the rest are cats (false positives). The program's precision is ⅝ while its recall is 5/12. When a search engine returns 30 pages, only 20 of which were relevant, while failing to return 40 additional relevant pages, its precision is 20/30=⅔ while its recall is 20/60=⅓.
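For concreteness, both worked examples reduce to two ratios; the following Python sketch (illustrative only, not part of the disclosure) reproduces the numbers above:

```python
def precision_recall(true_positives: int, false_positives: int,
                     false_negatives: int) -> tuple:
    """Precision: fraction of retrieved items that are relevant.
    Recall: fraction of relevant items that were retrieved."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Dog-recognition example: 8 identified, 5 of them dogs, 12 dogs in total.
p, r = precision_recall(true_positives=5, false_positives=3, false_negatives=7)
print(p, r)  # 0.625 (= 5/8), 0.4166... (= 5/12)

# Search-engine example: 30 pages returned, 20 relevant, 40 relevant pages missed.
p, r = precision_recall(true_positives=20, false_positives=10, false_negatives=40)
print(p, r)  # 0.666... (= 20/30), 0.333... (= 20/60)
```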
Therefore, in this case, precision is “how useful the search results are,” and recall is “how complete the results are.” The F1 score (also F-score or F-measure) is a measure of a test's accuracy. It considers both the precision and the recall of the test to compute the score: F1=2×((precision×recall)/(precision+recall)), which is the harmonic mean of precision and recall. The F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0.

Autonomous Agents or Chatbots

A conversation between Human A and Human B is a form of discourse. For example, with applications such as FaceBook® Messenger, WhatsApp®, Slack®, SMS, etc., a conversation between A and B may typically be via messages, in addition to more traditional email and voice conversations. A chatbot (which may also be called an intelligent bot or virtual assistant, etc.) is an “intelligent” machine that, for example, replaces human B and to various degrees mimics the conversation between two humans. An example ultimate goal is that human A cannot tell whether B is a human or a machine (the Turing test, developed by Alan Turing in 1950). Discourse analysis, artificial intelligence, including machine learning, and natural language processing have made great strides toward the long-term goal of passing the Turing test. Of course, with computers being more and more capable of searching and processing vast repositories of data and performing complex analysis on the data, including predictive analysis, the long-term goal is a chatbot that is human-like and a computer combined.

For example, users can interact with the Intelligent Bots Platform through a conversational interaction. This interaction, also called the conversational user interface (UI), is a dialog between the end user and the chatbot, just as between two human beings. It could be as simple as the end user saying “Hello” to the chatbot and the chatbot responding with a “Hi” and asking the user how it can help, or it could be a transactional interaction in a banking chatbot, such as transferring money from one account to the other, or an informational interaction in an HR chatbot, such as checking for vacation balance, or asking an FAQ in a retail chatbot, such as how to handle returns.

Natural language processing (NLP) and machine learning (ML) algorithms combined with other approaches can be used to classify end user intent. An intent at a high level is what the end user would like to accomplish (e.g., get account balance, make a purchase). An intent is essentially a mapping of customer input to a unit of work that the backend should perform. Therefore, the phrases uttered by the user to the chatbot are mapped to a specific and discrete use case or unit of work. For example, check balance, transfer money, and track spending are all “use cases” that the chatbot should support; the chatbot should be able to work out which unit of work should be triggered from the free-text entry that the end user types in a natural language. (A minimal illustration of such intent mapping appears at the end of this section.)

The underlying rationale for having an AI chatbot respond like a human is that the human brain can formulate and understand the request and then give a good response to the human request much better than a machine. Thus, there should be significant improvement in the request/response of a chatbot if human B is mimicked. So an initial part of the problem is: how does the human brain formulate and understand the request? To mimic this, a model is used. RST and DT allow a formal and repeatable way of doing this.
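As a minimal sketch of the intent-mapping idea noted above (the keyword-matching router, intent names, and keyword lists here are hypothetical illustrations, not the NLP/ML classifier an actual platform would use):

```python
# Hypothetical keyword-based intent router; a production system would
# classify intent with NLP and ML models rather than keyword overlap.
INTENTS = {
    "check_balance":  {"balance", "account"},
    "transfer_money": {"transfer", "send", "move"},
    "track_spending": {"spending", "spent", "expenses"},
}

def map_to_intent(utterance: str) -> str:
    """Map a free-text entry to the unit of work whose keywords
    overlap the utterance most; "unknown" if nothing matches."""
    tokens = set(utterance.lower().split())
    best = max(INTENTS, key=lambda intent: len(INTENTS[intent] & tokens))
    return best if INTENTS[best] & tokens else "unknown"

print(map_to_intent("please transfer money to my savings"))  # transfer_money
print(map_to_intent("what is my account balance"))           # check_balance
```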
At a high level, there are typically two types of requests: (1) a request to perform some action; and (2) a request for information, e.g., a question. The first type has a response in which a unit of work is created. The second type has a response that is, e.g., a good answer to the question. In some aspects, the answer could take the form of, for example, the AI constructing an answer from its extensive knowledge base(s), or matching the best existing answer by searching the internet or an intranet or other publicly/privately available data sources.

Discourse Trees

More specifically, to represent linguistic features of text, certain aspects described herein use rhetoric relations and speech acts (or communicative actions). Rhetoric relations are relationships between the parts of the sentences, typically obtained from a discourse tree. Speech acts are obtained as verbs from a verb resource such as VerbNet. By using both rhetoric relations and communicative actions, aspects described herein can correctly recognize valid request-response pairs. To do so, aspects correlate the syntactic structure of a question with that of an answer. By using the structure, a better answer can be determined.

For example, when an autonomous agent receives an indication from a person that the person desires to sell an item with certain features, the autonomous agent should provide a search result that not only contains the features but also indicates an intent to buy. In this manner, the autonomous agent has determined the user's intent. Similarly, when an autonomous agent receives a request from a person to share knowledge about a particular item, the search result should contain an intent to receive a recommendation. When a person asks an autonomous agent for an opinion about a subject, the autonomous agent shares an opinion about the subject, rather than soliciting another opinion.

Analyzing Request and Response Pairs

FIG.7depicts an exemplary discourse tree for an example request about property tax in accordance with an aspect of the present disclosure. The node labels are the relations and the arrowed line points to the satellite. The nucleus is a solid line.FIG.7depicts the following text.

Request: “My husbands' grandmother gave him his grandfather's truck. She signed the title over but due to my husband having unpaid fines on his license, he was not able to get the truck put in his name. I wanted to put in my name and paid the property tax and got insurance for the truck. By the time it came to sending off the title and getting the tag, I didn't have the money to do so. Now, due to circumstances, I am not going to be able to afford the truck. I went to the insurance place and was refused a refund. I am just wondering that since I am not going to have a tag on this truck, is it possible to get the property tax refunded?”

Response: “The property tax is assessed on property that you own. Just because you chose to not register it does not mean that you don't own it, so the tax is not refundable. Even if you have not titled the vehicle yet, you still own it within the boundaries of the tax district, so the tax is payable. Note that all states give you a limited amount of time to transfer title and pay the use tax. If you apply late, there will be penalties on top of the normal taxes and fees. You don't need to register it at the same time, but you absolutely need to title it within the period of time stipulated in state law.”

As can be seen inFIG.7, analyzing the above text results in the following.
“My husbands' grandmother gave him his grandfather's truck” is elaborated by “She signed the title over but due to my husband” elaborated by “having unpaid fines on his license, he was not able to get the truck put in his name.” which is elaborated by “I wanted to put in my name,” “and paid the property tax”, and “and got insurance for the truck.”

“My husbands' grandmother gave him his grandfather's truck. She signed the title over but due to my husband having unpaid fines on his license, he was not able to get the truck put in his name. I wanted to put in my name and paid the property tax and got insurance for the truck.” is elaborated by:

“I didn't have the money” elaborated by “to do so” contrasted with

“By the time” elaborated by “it came to sending off the title” “and getting the tag”

“My husbands' grandmother gave him his grandfather's truck. She signed the title over but due to my husband having unpaid fines on his license, he was not able to get the truck put in his name. I wanted to put in my name and paid the property tax and got insurance for the truck. By the time it came to sending off the title and getting the tag, I didn't have the money to do so” is contrasted with

“Now, due to circumstances,” elaborated with “I am not going to be able to afford the truck.” which is elaborated with

“I went to the insurance place” “and was refused a refund”

“My husbands' grandmother gave him his grandfather's truck. She signed the title over but due to my husband having unpaid fines on his license, he was not able to get the truck put in his name. I wanted to put in my name and paid the property tax and got insurance for the truck. By the time it came to sending off the title and getting the tag, I didn't have the money to do so. Now, due to circumstances, I am not going to be able to afford the truck. I went to the insurance place and was refused a refund.” is elaborated with

“I am just wondering that since I am not going to have a tag on this truck, is it possible to get the property tax refunded?”

“I am just wondering” has attribution to

“that” is the same unit as “is it possible to get the property tax refunded?” which has condition “since I am not going to have a tag on this truck”

As can be seen, the main subject of the topic is “Property tax on a car.” The question includes the contradiction: on one hand, all properties are taxable, and on the other hand, the ownership is somewhat incomplete. A good response has to address both the topic of the question and clarify the inconsistency. To do that, the responder makes an even stronger claim concerning the necessity to pay tax on whatever is owned, irrespective of the registration status. This example is a member of the positive training set from our Yahoo! Answers evaluation domain. The reader can observe that since the question includes the rhetoric relation of contrast, the answer has to match it with a similar relation to be convincing. Otherwise, this answer would look incomplete even to those who are not domain experts.

FIG.8depicts an exemplary response for the question represented inFIG.7, according to certain aspects of the present invention. The central nucleus is “the property tax is assessed on property” elaborated by “that you own”.
“The property tax is assessed on property that you own” is also a nucleus elaborated by “Just because you chose to not register it does not mean that you don't own it, so the tax is not refundable. Even if you have not titled the vehicle yet, you still own it within the boundaries of the tax district, so the tax is payable. Note that all states give you a limited amount of time to transfer title and pay the use tax.”

The nucleus “The property tax is assessed on property that you own. Just because you chose to not register it does not mean that you don't own it, so the tax is not refundable. Even if you have not titled the vehicle yet, you still own it within the boundaries of the tax district, so the tax is payable. Note that all states give you a limited amount of time to transfer title and pay the use tax.” is elaborated by “there will be penalties on top of the normal taxes and fees” with condition “If you apply late,” which in turn is elaborated by the contrast of “but you absolutely need to title it within the period of time stipulated in state law.” and “You don't need to register it at the same time.”

Comparing the DT ofFIG.7and the DT ofFIG.8enables a determination of how well matched the response (FIG.8) is to the request (FIG.7). In some aspects of the present invention, the above framework is used, at least in part, to determine the DTs for the request/response and the rhetoric agreement between the DTs.

In another example, the question “What does The Investigative Committee of the Russian Federation do” has at least two answers, for example, an official answer or an actual answer.

FIG.9illustrates a discourse tree for an official answer in accordance with an aspect of the present disclosure. As depicted inFIG.9, an official answer, or mission statement, states that “The Investigative Committee of the Russian Federation is the main federal investigating authority which operates as Russia's Anti-corruption agency and has statutory responsibility for inspecting the police forces, combating police corruption and police misconduct, is responsible for conducting investigations into local authorities and federal governmental bodies.”

FIG.10illustrates a discourse tree for a raw answer in accordance with an aspect of the present disclosure. As depicted inFIG.10, another, perhaps more honest, answer states that “Investigative Committee of the Russian Federation is supposed to fight corruption. However, top-rank officers of the Investigative Committee of the Russian Federation are charged with creation of a criminal community. Not only that, but their involvement in large bribes, money laundering, obstruction of justice, abuse of power, extortion, and racketeering has been reported. Due to the activities of these officers, dozens of high-profile cases including the ones against criminal lords had been ultimately ruined.”

The choice of answers depends on context. Rhetoric structure allows differentiating between “official,” “politically correct,” template-based answers and “actual,” “raw,” “reports from the field,” or “controversial” answers (seeFIGS.9and10). Sometimes, the question itself can give a hint about which category of answers is expected. If a question is formulated as a factoid or definitional one, without a second meaning, then the first category of answers is suitable. Otherwise, if a question has the meaning “tell me what it really is,” then the second category is appropriate.
In general, after extracting a rhetoric structure from a question, selecting a suitable answer that would have a similar, matching, or complementary rhetoric structure is easier. The official answer is based on elaboration and joints, which are neutral in terms of the controversy a text might contain (seeFIG.9). At the same time, the raw answer includes the contrast relation. This relation is extracted between the phrase for what an agent is expected to do and what this agent was discovered to have done.

Communicative Discourse Trees (CDTs)

Dialogue application102can create, analyze, and compare communicative discourse trees. Communicative discourse trees are designed to combine rhetoric information with speech act structures. CDTs include arcs labeled with expressions for communicative actions. By combining communicative actions, CDTs enable the modeling of RST relations and communicative actions. A CDT is a reduction of a parse thicket. See Galitsky, B, Ilvovsky, D. and Kuznetsov SO. Rhetoric Map of an Answer to Compound Queries Knowledge Trail Inc. ACL 2015, 681-686. (“Galitsky 2015”). A parse thicket is a combination of parse trees for sentences with discourse-level relationships between words and parts of the sentence in one graph. By incorporating labels that identify speech actions, learning of communicative discourse trees can occur over a richer features set than just rhetoric relations and syntax of elementary discourse units (EDUs). An exemplary process for building a communicative discourse tree is described below.

Dialogue application102accesses a sentence comprising fragments. At least one fragment includes a verb and words, each word having a role within the fragment, and each fragment is an elementary discourse unit. For example, dialogue application102accesses a sentence such as “Rebels, the self-proclaimed Donetsk People's Republic, deny that they controlled the territory from which the missile was allegedly fired.”

Continuing the example, dialogue application102determines that the sentence includes several fragments. For example, a first fragment is “rebels . . . deny.” A second fragment is “that they controlled the territory.” A third fragment is “from which the missile was allegedly fired.” Each fragment includes a verb, for example, “deny” for the first fragment and “controlled” for the second fragment. A fragment need not include a verb, however.

Dialogue application102generates a discourse tree that represents rhetorical relationships between the sentence fragments. The discourse tree includes nodes. Each nonterminal node represents a rhetorical relationship between two of the sentence fragments, and each terminal node of the nodes of the discourse tree is associated with one of the sentence fragments.

Continuing the example, dialogue application102generates a discourse tree. For example, referring back to the text above, the third fragment, “from which the missile was allegedly fired,” elaborates on “that they controlled the territory.” The second and third fragments together relate to attribution of what happened, i.e., the attack cannot have been by the rebels because they do not control the territory.

Dialogue application102accesses multiple verb signatures. Continuing the example, dialogue application102accesses a list of verbs, e.g., from VerbNet. Each verb matches or is related to the verb of the fragment.
For example, for the first fragment, the verb is “deny.” Accordingly, dialogue application102accesses a list of verb signatures that relate to the verb deny. Each verb signature includes the verb of the fragment and one or more thematic roles. For example, a signature includes one or more of noun phrase (NP), noun (N), communicative action (V), verb phrase (VP), or adverb (ADV). The thematic roles describe the relationship between the verb and related words. For example, “the teacher amused the children” has a different signature from “small children amuse quickly.” For the first fragment, the verb “deny,” dialogue application102accesses a list of frames, or verb signatures, for verbs that match “deny.” The list is “NP V NP to be NP,” “NP V that S” and “NP V NP.”

Each verb signature includes thematic roles. A thematic role refers to the role of the verb in the sentence fragment. Dialogue application102determines the thematic roles in each verb signature. Example thematic roles include actor, agent, asset, attribute, beneficiary, cause, location destination source, destination, source, location, experiencer, extent, instrument, material and product, material, product, patient, predicate, recipient, stimulus, theme, time, or topic.

Dialogue application102determines, for each verb signature of the verb signatures, a number of thematic roles of the respective signature that match a role of a word in the fragment. For the first fragment, dialogue application102determines that the verb “deny” has only three roles, “agent”, “verb” and “theme.” Dialogue application102selects a particular verb signature from the verb signatures based on the particular verb signature having a highest number of matches. For example, referring again to the text above, “deny” in the first fragment “the rebels deny . . . that they control the territory” is matched to verb signature deny “NP V NP”, and “control” is matched to control (rebel, territory). Verb signatures are nested, resulting in a nested signature of “deny(rebel, control(rebel, territory)).”

Dialogue Management Using a Virtual Social Dialogue

Aspects of the present disclosure relate to autonomous agents (chatbots) that deliver content in the form of virtual social dialogues that are automatically produced from textual documents. A virtual social dialogue can be presented as part of an interaction between user and agent. Dialogue management, which can be performed by dialogue application102, includes processing clarification requests and hints received from the user device (e.g., an indication that a user is further interested in a specific topic or item of content). Once a current answer is delivered to the user device, the agent can ask whether the user is happy with the answer provided. The agent can suggest options for further interactions, for example, a more traditional question and answer approach or a virtual social dialogue.

FIG.11depicts a process1100for dialogue management using a virtual social dialogue, in accordance with an aspect of the present disclosure.FIG.11can be implemented by dialogue application102. For illustrative purposes,FIG.11is discussed in conjunction withFIG.12.

FIG.12depicts an exemplary user interface depicting a session using an autonomous agent, depicting conventional and virtual social dialogues, in accordance with an aspect of the present disclosure.FIG.12depicts dialogue session1200. Dialogue session1200includes utterances1201-1210and virtual social dialogue sessions1220and1230.
In the example depicted inFIG.12, dialogue application102, implementing an autonomous agent, interacts with user device160. Dialogue session1200includes two virtual social dialogues1220and1230, but fewer or more virtual social dialogue sessions are possible. Dialogue session1200is merely an example; other dialogue sessions can differ. Dialogue application102initiates the session by outputting utterance1201, which states “ask a new question.”

At block1101, process1100involves receiving, from a user device, a search query including text fragments. Continuing the example, dialogue application102receives the user utterance1202, which states “advantages and new features of 5G.” A user utterance can be in the form of a sentence, a question, or simply a few words.

At block1102, process1100involves obtaining search results by performing a search of electronic documents using the search query. Continuing the example, dialogue application102searches electronic documents, for example, text corpus105or external text corpus170. Dialogue application102can use any standard search techniques to locate relevant electronic documents. For example, keyword-based searching can be employed. In this case, the search results can include results that include a threshold level of keyword matches with the search query. In some cases, dialogue application102outputs a status, such as utterance1203. As can be seen, utterance1203lists some uniform resource locators (URLs) which dialogue application102analyzes in response to user utterance1202. Continuing the example, dialogue application102retrieves search results based on the search query.

At block1103, process1100involves forming a set of topics by clustering the search results. Clustering involves determining a number of topics from the search results by grouping semantically similar and/or relevant search results together into a topic. Continuing the example, at block1103, dialogue application102forms a set of topics from the search results by using clustering. Clustering is described further with respect toFIG.13. From the clustering, dialogue application102obtains a set of topics.

At block1104, process1100involves outputting, to the user device, the set of topics. Continuing the example, dialogue application102outputs utterance1204, which includes the set of determined topics from block1103. Utterance1204lists options as “demonstrating the benefits of the technology[1],” “wide range of people from student [2],” “next wireless network[2]. are already being built,” “5g-ready[3],” “5g new radio nr specification [3]” and “next generation mobile networks alliance[9].” In some cases, as can be seen, dialogue application102asks for clarification, for example “I believe these are the main topics of your query: is this what you meant?”

At block1105, process1100involves receiving, from the user device, a selection of a topic from the set of topics. Continuing the example, a user device can transmit a selection to dialogue application102. The user device selects “next state in technology (or [9]),” as depicted in utterance1205. Continuing the example, dialogue application102outputs utterance1206, which states “Put simply, it's the next stage in mobile technology. It follows 4G and 4G LTE and builds on what they offer, delivering everything at a faster speed . . . .” In the example shown, dialogue application102also asks the user device for further clarification, for example “Are you OK with this answer?
yes/more/no/specify [different topic]/reduce search to web domain/virtual social dialogue.” Dialogue application102can present different options to the user. Examples of options include accepting the answer and concluding the session, navigating to another answer, rejecting the answer and reformulating the query, narrowing search results to a particular domain (e.g., quora.com), and proceeding to obtain more search results in the form of a virtual social dialogue. As can be seen by utterance1207, the user device requests a “virtual social dialogue.”

At block1106, process1100involves constructing a virtual social dialogue. An example of a process of constructing a virtual social dialogue is discussed further with respect toFIG.16. For example, dialogue application102identifies, from the electronic documents, one or more pairs of questions and answers that are relevant to the selected topic. Each question and its answer form a virtual conversation. In an aspect, dialogue application102can screen questions and answers to ensure that the questions and answers are in rhetorical agreement with each other and/or with the other questions and answers in the virtual social dialogue. Classification model120can be trained and used for this purpose.

At block1107, process1100involves providing the virtual conversation to the user device. Continuing the example, dialogue application102outputs utterance1208, which indicates to the user that a virtual social dialogue follows. Dialogue application102outputs virtual social dialogue1220. As can be seen, the virtual social dialogue1220appears as a conversation between imaginary users (“User1” and “User2”) and a chatbot (“Agent1”), but any number of users or chatbots can be depicted. The topic of the utterances by the users and chatbots remains consistent with the original query. As long as the imaginary chatbot responds to the same person, the dialog is intended to stay cohesive; coreferences in the follow-up questions are maintained. Further, as depicted, the virtual social dialogue1220is shown in frames to draw a visual distinction. The primary dialogue can be viewed as one at the meta-level, and the object-level dialogue is naturally embedded into the meta-level one.

Continuing the example, a virtual social dialogue can be used one or more times during a dialogue session. For example, as depicted, in response to virtual social dialogue1220, as indicated by utterance1209, the user asks “Are these features right for me?” In response, dialogue application102outputs utterance1210, which states “This is what has been answered to people with similar questions.” Dialogue application102then outputs virtual social dialogue1230. Dialogue application102can continue to interact with user device160as necessary. For example, the user device160can navigate through different topics, optionally using virtual social dialogues for the topics.

Clustering

When search queries are formed that express a broad user intent, fairly large result sets are frequently returned, which can pose a problem for navigation. Clustering can address this problem. Clustering involves grouping search results into semantically similar results (possibly in real time), and presenting descriptive summaries of these groups to a user. In some cases, clustering allows a user to identify a useful subset of the results, which can be provided as a refinement input to a clustering algorithm, thereby identifying narrower subsets. Narrower subsets can be easier to navigate.
These narrower subsets can be narrowed further. To be useful, clusters of search results should meet some basic criteria. Firstly, each cluster should be associated with a meaning communicated to the user (by labels, snippets, or individual search results indicative of this cluster). Secondly, search results of the same cluster should have a similarity with each other. Each cluster needs to be a coherent subset of possible search intents. Thirdly, search results assigned to different clusters should be substantially different from one another. Each cluster needs to contain a distinct subset of search intents.

For example, a clustering algorithm should implement clustering as a classification of a document into a cluster. Documents can be treated as vectors of weight-term pairs. The system designer needs to decide on which terms to choose and whether to use the whole document or only a part of it as the source of terms. The classification algorithm should be selected. The existing clustering techniques vary in accuracy, robustness, speed, and storage requirements. The output of the classifier, or cluster representations, should be determined. The classification process results in a set of clusters, where every cluster contains documents about a unique topic. Clusters can be represented using a selected document or term list, and more creativity with cluster representation is needed. A set of evaluation criteria should be developed. After the classification tool is created, the results need to be analyzed and performance evaluated from the effectiveness and efficiency viewpoint. Evaluation can be difficult in some cases.

Different clustering methods can be used. Primary differences between clustering approaches involve defining the similarity function, adjusting the clustering algorithm, and producing informative snippets for the obtained clusters. Traditional clustering approaches involve embedding documents into vectors and then computing a geometric function on them, such as cosine, to measure their similarity. While such approaches have a solid theoretical foundation, the results are frequently random and illogical, highly subject to the peculiarities of the documents being clustered.

In an aspect, hierarchical clustering algorithms can also be used. Hierarchical clustering algorithms are either top-down or bottom-up. The latter class of algorithms treats each document as a singleton cluster at the outset and then successively merges (or agglomerates) pairs of clusters until all clusters have been merged into a single cluster that contains all documents. Bottom-up hierarchical clustering is therefore called hierarchical agglomerative clustering. Top-down clustering requires a method for splitting a cluster, doing it recursively until individual documents are reached. Examples of clustering approaches are discussed with respect toFIGS.13-15.

FIG.13depicts an exemplary process1300for clustering, in accordance with an aspect of the present disclosure. As discussed with respect toFIG.11, clustering can be used to determine topics from search results obtained from queries of electronic documents. Process1300can be implemented by dialogue application102. Generally, clustering involves grouping two objects together that have a difference that is less than a threshold amount, or within a tolerance. For example, each object can be represented by a vector. In the case of objects that are text (e.g., a sentence), the vector can represent a distribution of words (e.g., a histogram).
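As a minimal sketch of this vector representation (assuming a simple word-count histogram and cosine distance; a deployed system would use richer features):

```python
from collections import Counter
import math

def histogram(text: str) -> Counter:
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine_distance(a: Counter, b: Counter) -> float:
    """0.0 for identical word distributions; 1.0 when no words are shared."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / norm if norm else 0.0)

d1, d2, d3 = histogram("cellular phone"), histogram("mobile phone"), histogram("Windows 10")
print(cosine_distance(d1, d2))  # 0.5: the shared word "phone" lowers the distance
print(cosine_distance(d1, d3))  # 1.0: no shared words
```

Pairs of objects whose pairwise distances fall below chosen thresholds can then be grouped, as in the two-matrix procedure described next.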
Given a numerical representation, a difference between two fragments of text (e.g., two sentences or utterances) can be quantified. Dialogue application102can cluster text based on syntactic similarity and/or relevance. In some cases, clustering of text can involve comparing both syntactic similarity and relevance, e.g., a similarity of the meaning of the objects. For example, consider the example phrases “cellular phone,” “mobile phone,” “5G [fifth generation cellular] technology,” “base station,” and “Windows 10.”

At block1301, process1300involves generating a syntactic similarity matrix that numerically represents a syntactic similarity between each of the search results. Table 3, below, depicts an example of a syntactic similarity matrix. In table 3, the numbers indicate distance. For example, a “1” indicates high distance (and therefore lower syntactic similarity), whereas a “0” indicates lower distance (and therefore high syntactic similarity). As can be seen, the object “cellular phone” has a high syntactic similarity with “mobile phone” as these objects refer to the same thing.

TABLE 3
 | Object 1 “cellular phone” | Object 2 “mobile phone” | Object 3 “5G technology” | Object 4 “base station” | Object 5 “Windows 10”
Object 1 “cellular phone” | X | 0 | 1 | 1 | 1
Object 2 “mobile phone” | 0 | X | 1 | 1 | 1
Object 3 “5G technology” | 1 | 1 | X | 1 | 1
Object 4 “base station” | 1 | 1 | 1 | X | 1
Object 5 “Windows 10” | 1 | 1 | 1 | 1 | X

At block1302, process1300involves generating a relevance similarity matrix that numerically represents a relevancy between each of the search results. Clustering can also involve a relevance similarity, e.g., how relevant a first object is to a second object. In the table below, the numbers indicate relevance distance. For example, a “1” indicates high distance (and therefore lower relevancy), whereas a “0” indicates lower distance (and therefore higher relevancy). Continuing the example, table 4, below, depicts an example of a relevance similarity matrix. Table 4 lists objects “cellular phone,” “mobile phone,” “5G technology,” and “base station,” which are relevant to each other, as reflected in a relevance distance of zero. Table 4 also lists “5G technology,” which has a relevance distance of 0.1 from “cellular phone” and “mobile phone.” But as can be seen, “Windows 10” is not relevant to any other object, thereby having a relevance distance of 1.

TABLE 4
 | Object 1 “cellular phone” | Object 2 “mobile phone” | Object 3 “5G technology” | Object 4 “base station” | Object 5 “Windows 10”
Object 1 “cellular phone” | X | 0 | 0 | 0 | 1
Object 2 “mobile phone” | 0 | X | 0 | 0 | 1
Object 3 “5G technology” | 0.1 | 0.1 | X | 0.1 | 1
Object 4 “base station” | 0 | 0 | 0 | X | 1
Object 5 “Windows 10” | 1 | 1 | 1 | 1 | X

At block1303, process1300involves clustering the search results into clusters by identifying pairs of the search results that (i) are separated in the syntactic similarity matrix by less than a first minimum distance and (ii) are separated in the relevance similarity matrix by less than a second minimum distance. Continuing the example, if a second distance (relevance) is less than 0.2, then the objects “cellular phone,” “mobile phone,” “5G technology,” and “base station” are grouped together but “Windows 10” is not. Therefore, at block1303, two clusters are formed. The first cluster includes “cellular phone,” “mobile phone,” “5G technology,” and “base station.” The second cluster includes “Windows 10.”

At block1304, process1300involves forming a set of topics by identifying, for each cluster of the clusters, a noun phrase from one or more search results in the cluster.
Continuing the example, the first cluster might be named “cellular,” from a word extracted from “cellular phone.” The second cluster might be named “Windows.” In some cases, the noun phrase occurs in all search results associated with the cluster and/or occupies a position in a title, top-level nucleus (of a discourse tree), abstract, or keyword of the respective search result. In this manner, an importance of the noun phrase to the rest of the text associated with the search result is indicated.

A Greedy Search Algorithm

In an example, a greedy search algorithm is used as part of a clustering approach. One example is depicted inFIG.14.

FIG.14illustrates an example of a greedy search algorithm, in accordance with an aspect of the present disclosure.FIG.14depicts a greedy search algorithm, which includes operations1401-1433. The input of the algorithm is a user query q in NL and a subset of snippets A*_last ranked by their relevance for the last successful refined query; each snippet a∈A*_last has a particular real-valued weight w∈R. These weights are assigned to snippets by a search engine and reflect not only relevance to the query but might also take into account the user's profile, item popularity, geo-location, search history, etc. The input at the initial call is a user query q and the empty set of snippets A*_last.

At the first step (line 1) the request is sent to a search engine. Then, a function δ is applied to the set of returned snippets A and the request q in order to obtain their unique formal representations δ(q) and Aδ={δ(a)|a∈A}, respectively. This representation makes texts comparable to each other. To compute clusters (operation1404) of similar snippets we use two matrices: the matrix of syntactic similarity S and the search relevance similarity matrix W, with the entries s_ij=sim(δ(a_i),δ(a_j)), i,j=1, . . . , |A| and w_ij=rel_sim(w_i,w_j), i,j=1, . . . , |A|, respectively. Values of both similarity matrices can be scaled to [0,1]. Centroids of the computed clusters C are the candidates for a new refined request. Specific information about the clusters is presented to the user until a cluster with a relevant specification is found (operations1407-1422). In some cases, a user can further refine the approach.

In an example, the interaction with the user is carried out in 4 steps:

1) The biggest cluster C is chosen, i.e., the cluster maximizing |{δ(a)|δ(a)∈C}| (line 8);

2) The added information in C w.r.t. q is computed. It can be done formally by computing the difference between a centroid of cluster C and δ(q) (see ComputeDifference function, line 9);

3) The computed difference is translated into a set of phrases T;

4) T is shown to the user and feedback r∈{ShowDetails, Relevant, Irrelevant} is received.

The feedback defines the further strategy of the chatbot. ShowDetails means that the user has found the information he or she searched for, and all the snippets/documents corresponding to the cluster will be returned to the user ranked by their relevance weights (operation1425) assigned by the search engine. A Relevant answer is the case where the user has found a proposed query specification quite useful, but not enough (i.e., further query specification is required); in this case a new augmented query q_aug is sent to the search engine (operation1427) via the recursive call of GreedySearch(q_aug, A*). An Irrelevant answer describes the case where specifications do not contain relevant information.
When all proposed specifications in C are irrelevant, the algorithm returns a subset of snippets from a cluster with the last relevant specification (operation1431).

Agglomerative Clustering Algorithm

An example of a clustering algorithm is agglomerative clustering. Agglomerative clustering can be applied to search queries such as those received at block1101of process1100. Termination criteria ensure that each centroid of clusters (i.e., the shared information of snippets in a cluster) will be the shortest specification of the request.

FIG.15illustrates an approach to Agglomerative Clustering, in accordance with an aspect of the present disclosure.FIG.15depicts agglomerative clustering algorithm1500, which includes operations1501-1514. In agglomerative clustering algorithm1500, a cluster is denoted by a capital letter C and the corresponding centroid by a lower case letter c. For the sake of convenience some functions are defined:

Input: query δ(q), snippet set Aδ

Output: set of subsets of snippets {A*|A*⊆A}=AgglomerativeClustering(δ(q),Aδ)

As mentioned above, requests and snippets are given in NL. We define a mapping δ: L→V that maps a text in natural language to a unique formal representation, where L is a space of all possible texts in natural language and V is a space of their formal representations. Further we consider examples of spaces V and discuss how the functions defined in this section can be rewritten for the considered spaces.

sim: V×V→[0,1]⊂R is a function that evaluates similarity between two objects; the similarity between an object and its copy is equal to 1.

merge: V×V→V is a function that returns a shared description of its two arguments; the shared description is in the same space as the merged arguments.

is_included: V×V→{True, False} is a function that returns True if the description of the first argument is included in the description of the second one, False otherwise.

rel_sim: R×R→[0,1]⊂R is a function that evaluates relevance similarity between two objects by their relevance weights; the similarity between an object and its copy is equal to 1.

Agglomerative clustering receives a query δ(q) and a snippet set Aδ as input, represented in the space where the sim, merge, and is_included functions are defined. Initially, each snippet a∈Aδ is an individual cluster centroid. Pairwise syntactic similarity between cluster centroids is stored in a matrix S of size |Aδ|×|Aδ|; the relevance similarity is stored in a matrix W of the same size. On each iteration the most similar cluster centroids are chosen (line1511) to compute a new centroid c, which is their shared description (line1512). The weight of a new cluster C is the maximal relevance weight of its members, i.e., w_C=max{w_a|δ(a)∈C}. Here we use capital letters for clusters and lowercase letters for their centroids, i.e., C⊆Aδ for a cluster and c for its centroid.

To compute similarity between centroids, both syntactic and relevance similarities are taken into account. We use a weighted average of the similarities, i.e., similarity between centroids c_i and c_j is defined as k1·s_ij+k2·w_ij, where k1,k2∈R are coefficients of importance of syntactic and relevance similarities, respectively. If a newly created centroid contains the description of the original query (i.e., it retains complete information about the query), the two merged centroids are replaced by their shared description; the weight of the cluster is the maximal weight of the members of the merged clusters, i.e., w_C=max{w_a|δ(a)∈C_i∪C_j}.
Computing Similarity
Representing text as a vector. Once the snippets are received, a new set of terms from A ∪ {q} is computed. The N found terms correspond to the vector entries. Each text is represented by a vector of size N filled with 0s and 1s. A "1" at position i means that the ith term is contained in the text.
1. merge(d_1, d_2) = d_1 · d_2 (the element-wise product)
2. sim(d_1, d_2):
(a) sim(d_1, d_2) = JaccardSimilarity(d_1, d_2)
(b) sim(d_1, d_2) = CosineSimilarity(d_1, d_2)
(c) sim(d_1, d_2) = SimpleMatchingCoefficient(d_1, d_2)
3. is_included(d_1, d_2) = d_1 ⊆ d_2, i.e., merge(d_1, d_2) = d_1
The following similarity measure is based on Parse Thickets (Chapter 7):
1. merge(d_1, d_2) = d_1 ∧ d_2 (the generalization of d_1 and d_2)
2. sim(d_1, d_2):
(a) sim_max(d_1, d_2) := max_{chunk ∈ (d_1 ∩ d_2)} Score(chunk)
(b) sim_avg(d_1, d_2) := (1/|d_1 ∩ d_2|) Σ_{chunk ∈ (d_1 ∩ d_2)} Score(chunk)
3. is_included(d_1, d_2) = (d_1 ∧ d_2 = d_1)
Relevance Similarity
rel_sim(w_i, w_j) = 1 − |w_i − w_j| / max_{i,j ∈ 1, . . . , |A|} |w_i − w_j|
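The binary-vector functions listed above can be implemented directly. The text names the measures without spelling out their formulas, so the standard definitions are assumed below.

```python
# Illustrative implementations of the binary-vector similarity functions
# named above. Vectors are equal-length lists of 0s and 1s.
from math import sqrt

def merge(d1, d2):
    return [a * b for a, b in zip(d1, d2)]          # element-wise product

def jaccard_similarity(d1, d2):
    both = sum(a & b for a, b in zip(d1, d2))       # terms in both texts
    either = sum(a | b for a, b in zip(d1, d2))     # terms in either text
    return both / either if either else 1.0

def cosine_similarity(d1, d2):
    dot = sum(a * b for a, b in zip(d1, d2))
    norms = sqrt(sum(d1)) * sqrt(sum(d2))           # |d| = sqrt(#ones) for 0/1 vectors
    return dot / norms if norms else 0.0

def simple_matching_coefficient(d1, d2):
    return sum(a == b for a, b in zip(d1, d2)) / len(d1)  # agreeing positions

def is_included(d1, d2):
    return merge(d1, d2) == d1                      # all of d1's terms occur in d2

def rel_sim(w_i, w_j, max_diff):
    """max_diff: the largest pairwise weight difference over the snippet set."""
    return 1.0 - abs(w_i - w_j) / max_diff if max_diff else 1.0
```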
Virtual Social Dialogue Construction
To develop the virtual social dialogue, dialogue application 102 forms questions and answers. Dialogue application 102 identifies, from the electronic documents, a question and an answer that are relevant to the selected topic. The answer is in rhetorical agreement with the question. Together, the question and the answer form a virtual conversation that can be depicted as between one or more agents or users.
For example, to build a question from a paragraph of text, the text is divided into elementary discourse units (EDUs). A discourse tree is formed, in which the EDUs are at the bottom level. From the EDUs, satellite EDUs are then selected as answers to questions, which are derived from these EDUs by means of generalization. The questions are inserted into the corresponding text as if someone were interrupting the speaker in the moments of transition from nucleus to satellite EDUs.
FIG. 16 depicts an exemplary process 1600 for the construction of a virtual social dialogue, in accordance with an aspect of the present disclosure. Process 1600 can be implemented by dialogue application 102. For illustrative purposes, process 1600 is discussed with respect to FIG. 17.
FIG. 17 illustrates an approach to virtual social dialogue construction, in accordance with an aspect. FIG. 17 depicts part of a discourse tree 1700. Discourse tree 1700 includes various rhetorical relations and elementary discourse units. In some cases, discourse tree 1700 can be a communicative discourse tree. Discourse tree 1700 includes satellite EDUs 1701, 1702, and 1703 and corresponding questions 1711, 1712, and 1713.
At block 1601, process 1600 involves constructing a discourse tree from the electronic documents. Dialogue application 102 creates discourse tree 1700. In some cases, dialogue application 102 creates a sequence of discourse trees for the electronic document, a single communicative discourse tree for every paragraph (e.g., on average 3-5 sentences).
At block 1602, process 1600 involves identifying, from the discourse tree, satellite elementary discourse units. Dialogue application 102 identifies satellite EDUs 1701, 1702, and 1703. Each satellite elementary discourse unit can represent an answer.
At block 1603, process 1600 involves identifying a sentence corresponding to a satellite elementary discourse unit. For example, the sentence corresponding to satellite EDU 1703 is "However, the Investigative Committee of the Russian Federation believes that the plane was hit by a missile from the air which was not produced in Russia."
At block 1604, process 1600 involves identifying a question from the satellite elementary discourse unit. Disclosed solutions employ one or more techniques such as rhetorical structure theory, communicative discourse trees, template matching, syntactic generalization, and web mining. For example, in an aspect, disclosed solutions use rhetorical structure theory to form questions that correspond to the answers. In a further aspect, disclosed solutions use syntactic generalization and other discourse techniques to generate a set of question templates. The question templates can be used to verify that a generated question is of sufficient specificity. For example, a question should not be so specific as to give away the answer (e.g., "What is the name of a rock band from Liverpool, England with four members"). A question can be derived from a satellite EDU by generalizing a word that represents either (i) a noun, (ii) a verb, or (iii) an adjective. Continuing the example, the satellite elementary discourse unit (EDU) 1703 is "which was not produced in Russia."
At block 1605, process 1600 involves inserting the question into the electronic document. Discourse approaches can be used to guide placement of the questions in the electronic documents.
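A toy sketch of this construction follows. The discourse parsing of block 1601 is out of scope here, so the input is assumed to be (nucleus, satellite) text pairs already extracted from a discourse tree; generalize_word() is a hypothetical stand-in for the generalization step of block 1604, and the speaker labels are purely illustrative.

```python
# Toy sketch of process 1600: derive a question from each satellite EDU by
# generalizing one content word, then splice the question in as an
# interruption between the nucleus and the satellite (the answer).

QUESTION_WORDS = {"noun": "what", "verb": "did what", "adjective": "how"}

def generalize_word(edu_text):
    """Hypothetical stand-in: pick a content word and a coarse POS tag.
    A real implementation would use a POS tagger or syntactic generalization."""
    word = edu_text.rstrip(".?!").split()[-1]
    return word, "noun"

def question_from_satellite(edu_text):
    word, pos = generalize_word(edu_text)
    # Replace the generalized word with a question word, e.g.
    # "which was not produced in Russia" -> "which was not produced in what?"
    return edu_text.rstrip(".?!").replace(word, QUESTION_WORDS[pos]) + "?"

def build_virtual_dialogue(edu_pairs):
    """edu_pairs: list of (nucleus_text, satellite_text) from a discourse tree."""
    dialogue = []
    for nucleus, satellite in edu_pairs:
        dialogue.append(("Agent A", nucleus))
        dialogue.append(("Agent B", question_from_satellite(satellite)))
        dialogue.append(("Agent A", satellite))
    return dialogue
```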
Evaluation of Dialogue Effectiveness and Coverage
To evaluate the effectiveness of information delivery via virtual social dialogues, we compare traditional chatbot sessions, where users were given plain-text answers, with sessions where users were given virtual social dialogues. Results on the comparative usability of conventional dialogue and virtual social dialogue are presented. Dialogues are assessed with respect to the following usability properties:
1) The speed of arriving at the sought piece of information. It is measured as the number of iterations (the number of user utterances) preceding the final reply of the chatbot that gave the answer the user wanted. We measure the number of steps only if the user confirms that she accepts the answer.
2) The speed of arriving at a decision to commit a transaction such as a purchase, reservation, or selection. A user is expected to accumulate sufficient information, and this information (such as reviews) should be convincing enough for making such a decision.
3) The number of entities that were explored during a session with the chatbot. How thorough and comprehensive the chatbot session is, in particular how much the user actually learns from it, is of particular interest. This assessment sometimes runs counter to the above two measures but is nevertheless important for understanding the overall usability of the various conversational modes.
Precision and recall of search sessions with either dialogue mode are not compared, because the same information is delivered, but in distinct modes. The evaluation of usability is presented in Table 1.

TABLE 1
Evaluation of comparative effectiveness of conventional and virtual social dialogues

                              Conventional dialogues               Virtual social dialogues
                        # iter.     # iter.     Coverage      # iter.     # iter.     Coverage
                        till        till        (# of         till        till        (# of
                        found       decision    entities)     found       decision    entities)
Conventional only        4.6         6.3         10.8           —           —            —
Virtual only              —           —            —           4.1         6.0         13.7
Conventional followed    4.0         5.7          7.6          6.1        11.3         15.1
  by virtual
Virtual followed by      5.6         7.1         12.3          3.7         7.0         11.5
  conventional

In the second and third rows, we assess the stand-alone systems. One can observe that virtual social dialogues take fewer iterations on average for information access and about the same number of iterations for decisions as conventional dialogues do. Virtual social dialogues stimulate the user to explore a higher number of entities, though. Notice that in the bottom row, the chat scenario proceeds from right to left. In the bottom two rows, we observe the usability of the hybrid system. When a conventional dialogue is followed by a virtual one, a lower proportion of users is satisfied by the first step in comparison to the inverse architecture, where a virtual dialogue is followed by a conventional one.
Related Work and Conclusions
(Piwek et al 2007) were pioneers of the automated construction of dialogues, proposing the Text2Dialogue system. The authors provided a theoretical foundation of the mapping that the system performs from RST structures to dialogue representation structures. The authors introduced a number of requirements for a dialogue generation system (robustness, extensibility, and variation and control) and reported on the evaluation of the mapping rules.
An important body of work concerns tutorial dialogue systems. Some of the work in that area focuses on authoring tools for generating questions, hints, and prompts. Typically, these are, however, single utterances by a single interlocutor, rather than an entire conversation between two agents. Some researchers have concentrated on generating questions together with possible answers such as multiple-choice test items, but this work is restricted to a very specific type of question-answer pairs (Mitkov et al 2006).
Conversion of a text into a dialogue is different from the dialogue generation problem; the former is a training-set-based foundation for the latter. Response generation for dialogue can be viewed as a source-to-target transduction problem. (Sordoni et al. 2015) rescore the outputs of a phrasal machine-translation-based conversation system with a neural model incorporating prior context. Recent progress in sequence-to-sequence models has been leveraged (Luan et al., 2016) to build end-to-end dialogue systems that first map an utterance message to a distributed vector representation using an encoder and then generate a response from this representation. (Li et al. 2016) simulate dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity, coherence, and ease of answering. In the current study, we measured comparable dialogue effectiveness properties, such as the speed of arrival at a search result, the speed of arrival at a decision, and domain coverage.
Dialogue acts are an important source of information that differentiates a plain text from a dialogue. The proposed algorithm for virtual social dialogues can assist with building domain-specific chatbot training datasets. The recently released DailyDialog dataset (Li et al., 2017b) is the only dataset that has utterances annotated with dialogue acts and is large enough for learning conversation models. Unlike the virtual social dialogues produced in this study, the conversations in DailyDialog are not task-oriented, and each conversation focuses on one topic. Each utterance is annotated with one of four dialogue acts.
We proposed a novel mode of chatbot interaction via virtual social dialogue. It addresses the sparseness of dialogue data on the one hand, and the convincingness and perceived authenticity of information presented via dialogues on the other hand.
We quantitatively evaluated improvement of user satisfaction with virtual social dialogue in comparison to regular chatbot replies and confirmed the strong points of the former. We conclude that virtual social dialogue is an important feature related to social search to be leveraged by a chatbot. Example Computing Systems FIG.18depicts a simplified diagram of a distributed system1800for implementing one of the aspects. In the illustrated aspect, distributed system1800includes one or more client computing devices1802,1804,1806, and1808, which are configured to execute and operate a client application such as a web browser, proprietary client (e.g., Oracle Forms), or the like over one or more network(s)1810. Server1812may be communicatively coupled with remote client computing devices1802,1804,1806, and1808via network1810. In various aspects, server1812may be adapted to run one or more services or software applications provided by one or more of the components of the system. The services or software applications can include nonvirtual and virtual environments. Virtual environments can include those used for virtual events, tradeshows, simulators, classrooms, shopping exchanges, and enterprises, whether two- or three-dimensional (3D) representations, page-based logical environments, or otherwise. In some aspects, these services may be offered as web-based or cloud services or under a Software as a Service (SaaS) model to the users of client computing devices1802,1804,1806, and/or1808. Users operating client computing devices1802,1804,1806, and/or1808may in turn utilize one or more client applications to interact with server1812to utilize the services provided by these components. In the configuration depicted in the figure, the software components1818,1820and1822of system1800are shown as being implemented on server1812. In other aspects, one or more of the components of system1800and/or the services provided by these components may also be implemented by one or more of the client computing devices1802,1804,1806, and/or1808. Users operating the client computing devices may then utilize one or more client applications to use the services provided by these components. These components may be implemented in hardware, firmware, software, or combinations thereof. It should be appreciated that various different system configurations are possible, which may be different from distributed system1800. The aspect shown in the figure is thus one example of a distributed system for implementing an aspect system and is not intended to be limiting. Client computing devices1802,1804,1806, and/or1808may be portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 10, Palm OS, and the like, and being Internet, e-mail, short message service (SMS), Blackberry®, or other communication protocol enabled. The client computing devices can be general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems. 
The client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as for example, Google Chrome OS. Alternatively, or in addition, client computing devices 1802, 1804, 1806, and 1808 may be any other electronic device, such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over network(s) 1810. Although exemplary distributed system 1800 is shown with four client computing devices, any number of client computing devices may be supported. Other devices, such as devices with sensors, etc., may interact with server 1812. Network(s) 1810 in distributed system 1800 may be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially-available protocols, including without limitation TCP/IP (transmission control protocol/Internet protocol), SNA (systems network architecture), IPX (Internet packet exchange), AppleTalk, and the like. Merely by way of example, network(s) 1810 can be a local area network (LAN), such as one based on Ethernet, Token-Ring, and/or the like. Network(s) 1810 can be a wide-area network and the Internet. It can include a virtual network, including without limitation a virtual private network (VPN), an intranet, an extranet, a public switched telephone network (PSTN), an infra-red network, a wireless network (e.g., a network operating under any of the Institute of Electrical and Electronics Engineers (IEEE) 802.11 suite of protocols, Bluetooth®, and/or any other wireless protocol); and/or any combination of these and/or other networks. Server 1812 may be composed of one or more general purpose computers, specialized server computers (including, by way of example, PC (personal computer) servers, UNIX® servers, mid-range servers, mainframe computers, rack-mounted servers, etc.), server farms, server clusters, or any other appropriate arrangement and/or combination. Server 1812 can include one or more virtual machines running virtual operating systems, or other computing architectures involving virtualization. One or more flexible pools of logical storage devices can be virtualized to maintain virtual storage devices for the server. Virtual networks can be controlled by server 1812 using software-defined networking. In various aspects, server 1812 may be adapted to run one or more services or software applications described in the foregoing disclosure. For example, server 1812 may correspond to a server for performing processing described above according to an aspect of the present disclosure. Server 1812 may run an operating system including any of those discussed above, as well as any commercially available server operating system. Server 1812 may also run any of a variety of additional server applications and/or mid-tier applications, including HTTP (hypertext transport protocol) servers, FTP (file transfer protocol) servers, CGI (common gateway interface) servers, JAVA® servers, database servers, and the like. Exemplary database servers include without limitation those commercially available from Oracle, Microsoft, Sybase, IBM (International Business Machines), and the like.
In some implementations, server 1812 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client computing devices 1802, 1804, 1806, and 1808. As an example, data feeds and/or event updates may include, but are not limited to, Twitter® feeds, Facebook® updates or real-time updates received from one or more third party information sources and continuous data streams, which may include real-time events related to sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like. Server 1812 may also include one or more applications to display the data feeds and/or real-time events via one or more display devices of client computing devices 1802, 1804, 1806, and 1808. Distributed system 1800 may also include one or more databases 1814 and 1816. Databases 1814 and 1816 may reside in a variety of locations. By way of example, one or more of databases 1814 and 1816 may reside on a non-transitory storage medium local to (and/or resident in) server 1812. Alternatively, databases 1814 and 1816 may be remote from server 1812 and in communication with server 1812 via a network-based or dedicated connection. In one set of aspects, databases 1814 and 1816 may reside in a storage-area network (SAN). Similarly, any necessary files for performing the functions attributed to server 1812 may be stored locally on server 1812 and/or remotely, as appropriate. In one set of aspects, databases 1814 and 1816 may include relational databases, such as databases provided by Oracle, that are adapted to store, update, and retrieve data in response to SQL-formatted commands. FIG. 19 is a simplified block diagram of one or more components of a system environment 1900 by which services provided by one or more components of an aspect system may be offered as cloud services, in accordance with an aspect of the present disclosure. In the illustrated aspect, system environment 1900 includes one or more client computing devices 1904, 1906, and 1908 that may be used by users to interact with a cloud infrastructure system 1902 that provides cloud services. The client computing devices may be configured to operate a client application such as a web browser, a proprietary client application (e.g., Oracle Forms), or some other application, which may be used by a user of the client computing device to interact with cloud infrastructure system 1902 to use services provided by cloud infrastructure system 1902. It should be appreciated that cloud infrastructure system 1902 depicted in the figure may have other components than those depicted. Further, the aspect shown in the figure is only one example of a cloud infrastructure system that may incorporate an aspect of the invention. In some other aspects, cloud infrastructure system 1902 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different configuration or arrangement of components. Client computing devices 1904, 1906, and 1908 may be devices similar to those described above for 1802, 1804, 1806, and 1808. Although exemplary system environment 1900 is shown with three client computing devices, any number of client computing devices may be supported. Other devices, such as devices with sensors, etc., may interact with cloud infrastructure system 1902.
Network(s)1910may facilitate communications and exchange of data between client computing devices1904,1906, and1908and cloud infrastructure system1902. Each network may be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially-available protocols, including those described above for network(s)1810. Cloud infrastructure system1902may comprise one or more computers and/or servers that may include those described above for server1812. In certain aspects, services provided by the cloud infrastructure system may include a host of services that are made available to users of the cloud infrastructure system on demand, such as online data storage and backup solutions, Web-based e-mail services, hosted office suites and document collaboration services, database processing, managed technical support services, and the like. Services provided by the cloud infrastructure system can dynamically scale to meet the needs of its users. A specific instantiation of a service provided by cloud infrastructure system is referred to herein as a “service instance.” In general, any service made available to a user via a communication network, such as the Internet, from a cloud service provider's system is referred to as a “cloud service.” Typically, in a public cloud environment, servers and systems that make up the cloud service provider's system are different from the customer's own on-premises servers and systems. For example, a cloud service provider's system may host an application, and a user may, via a communication network such as the Internet, on demand, order and use the application. In some examples, a service in a computer network cloud infrastructure may include protected computer network access to storage, a hosted database, a hosted web server, a software application, or other service provided by a cloud vendor to a user, or as otherwise known in the art. For example, a service can include password-protected access to remote storage on the cloud through the Internet. As another example, a service can include a web service-based hosted relational database and a script-language middleware engine for private use by a networked developer. As another example, a service can include access to an email software application hosted on a cloud vendor's web site. In certain aspects, cloud infrastructure system1902may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. An example of such a cloud infrastructure system is the Oracle Public Cloud provided by the present assignee. Large volumes of data, sometimes referred to as big data, can be hosted and/or manipulated by the infrastructure system on many levels and at different scales. Such data can include data sets that are so large and complex that it can be difficult to process using typical database management tools or traditional data processing applications. For example, terabytes of data may be difficult to store, retrieve, and process using personal computers or their rack-based counterparts. Such sizes of data can be difficult to work with using most current relational database management systems and desktop statistics and visualization packages. 
They can require massively parallel processing software running thousands of server computers, beyond the structure of commonly used software tools, to capture, curate, manage, and process the data within a tolerable elapsed time. Extremely large data sets can be stored and manipulated by analysts and researchers to visualize large amounts of data, detect trends, and/or otherwise interact with the data. Tens, hundreds, or thousands of processors linked in parallel can act upon such data in order to present it or simulate external forces on the data or what it represents. These data sets can involve structured data, such as that organized in a database or otherwise according to a structured model, and/or unstructured data (e.g., emails, images, data blobs (binary large objects), web pages, complex event processing). By leveraging an ability of an aspect to relatively quickly focus more (or fewer) computing resources upon an objective, the cloud infrastructure system may be better available to carry out tasks on large data sets based on demand from a business, government agency, research organization, private individual, group of like-minded individuals or organizations, or other entity. In various aspects, cloud infrastructure system1902may be adapted to automatically provision, manage and track a customer's subscription to services offered by cloud infrastructure system1902. Cloud infrastructure system1902may provide the cloud services via different deployment models. For example, services may be provided under a public cloud model in which cloud infrastructure system1902is owned by an organization selling cloud services (e.g., owned by Oracle) and the services are made available to the general public or different industry enterprises. As another example, services may be provided under a private cloud model in which cloud infrastructure system1902is operated solely for a single organization and may provide services for one or more entities within the organization. The cloud services may also be provided under a community cloud model in which cloud infrastructure system1902and the services provided by cloud infrastructure system1902are shared by several organizations in a related community. The cloud services may also be provided under a hybrid cloud model, which is a combination of two or more different models. In some aspects, the services provided by cloud infrastructure system1902may include one or more services provided under Software as a Service (SaaS) category, Platform as a Service (PaaS) category, Infrastructure as a Service (IaaS) category, or other categories of services including hybrid services. A customer, via a subscription order, may order one or more services provided by cloud infrastructure system1902. Cloud infrastructure system1902then performs processing to provide the services in the customer's subscription order. In some aspects, the services provided by cloud infrastructure system1902may include, without limitation, application services, platform services and infrastructure services. In some examples, application services may be provided by the cloud infrastructure system via a SaaS platform. The SaaS platform may be configured to provide cloud services that fall under the SaaS category. For example, the SaaS platform may provide capabilities to build and deliver a suite of on-demand applications on an integrated development and deployment platform. The SaaS platform may manage and control the underlying software and infrastructure for providing the SaaS services. 
By utilizing the services provided by the SaaS platform, customers can utilize applications executing on the cloud infrastructure system. Customers can acquire the application services without the need for customers to purchase separate licenses and support. Various different SaaS services may be provided. Examples include, without limitation, services that provide solutions for sales performance management, enterprise integration, and business flexibility for large organizations. In some aspects, platform services may be provided by the cloud infrastructure system via a PaaS platform. The PaaS platform may be configured to provide cloud services that fall under the PaaS category. Examples of platform services may include without limitation services that enable organizations (such as Oracle) to consolidate existing applications on a shared, common architecture, as well as the ability to build new applications that leverage the shared services provided by the platform. The PaaS platform may manage and control the underlying software and infrastructure for providing the PaaS services. Customers can acquire the PaaS services provided by the cloud infrastructure system without the need for customers to purchase separate licenses and support. Examples of platform services include, without limitation, Oracle Java Cloud Service (JCS), Oracle Database Cloud Service (DBCS), and others. By utilizing the services provided by the PaaS platform, customers can employ programming languages and tools supported by the cloud infrastructure system and also control the deployed services. In some aspects, platform services provided by the cloud infrastructure system may include database cloud services, middleware cloud services (e.g., Oracle Fusion Middleware services), and Java cloud services. In one aspect, database cloud services may support shared service deployment models that enable organizations to pool database resources and offer customers a Database as a Service in the form of a database cloud. Middleware cloud services may provide a platform for customers to develop and deploy various business applications, and Java cloud services may provide a platform for customers to deploy Java applications, in the cloud infrastructure system. Various different infrastructure services may be provided by an IaaS platform in the cloud infrastructure system. The infrastructure services facilitate the management and control of the underlying computing resources, such as storage, networks, and other fundamental computing resources for customers utilizing services provided by the SaaS platform and the PaaS platform. In certain aspects, cloud infrastructure system1902may also include infrastructure resources1930for providing the resources used to provide various services to customers of the cloud infrastructure system. In one aspect, infrastructure resources1930may include pre-integrated and optimized combinations of hardware, such as servers, storage, and networking resources to execute the services provided by the PaaS platform and the SaaS platform. In some aspects, resources in cloud infrastructure system1902may be shared by multiple users and dynamically re-allocated per demand. Additionally, resources may be allocated to users in different time zones. 
For example, cloud infrastructure system1902may enable a first set of users in a first time zone to utilize resources of the cloud infrastructure system for a specified number of hours and then enable the re-allocation of the same resources to another set of users located in a different time zone, thereby maximizing the utilization of resources. In certain aspects, a number of internal shared services1932may be provided that are shared by different components or modules of cloud infrastructure system1902and by the services provided by cloud infrastructure system1902. These internal shared services may include, without limitation, a security and identity service, an integration service, an enterprise repository service, an enterprise manager service, a virus scanning and white list service, a high availability, backup and recovery service, service for enabling cloud support, an email service, a notification service, a file transfer service, and the like. In certain aspects, cloud infrastructure system1902may provide comprehensive management of cloud services (e.g., SaaS, PaaS, and IaaS services) in the cloud infrastructure system. In one aspect, cloud management functionality may include capabilities for provisioning, managing and tracking a customer's subscription received by cloud infrastructure system1902, and the like. In one aspect, as depicted in the figure, cloud management functionality may be provided by one or more modules, such as an order management module1920, an order orchestration module1922, an order provisioning module1924, an order management and monitoring module1926, and an identity management module1928. These modules may include or be provided using one or more computers and/or servers, which may be general purpose computers, specialized server computers, server farms, server clusters, or any other appropriate arrangement and/or combination. In exemplary operation1934, a customer using a client device, such as client computing device1904,1906or1908, may interact with cloud infrastructure system1902by requesting one or more services provided by cloud infrastructure system1902and placing an order for a subscription for one or more services offered by cloud infrastructure system1902. In certain aspects, the customer may access a cloud User Interface (UI), cloud UI1912, cloud UI1914and/or cloud UI1916and place a subscription order via these UIs. The order information received by cloud infrastructure system1902in response to the customer placing an order may include information identifying the customer and one or more services offered by the cloud infrastructure system1902that the customer intends to subscribe to. After an order has been placed by the customer, the order information is received via the cloud UIs,1912,1914and/or1916. At operation1936, the order is stored in order database1918. Order database1918can be one of several databases operated by cloud infrastructure system1902and operated in conjunction with other system elements. At operation1938, the order information is forwarded to an order management module1920. In some instances, order management module1920may be configured to perform billing and accounting functions related to the order, such as verifying the order, and upon verification, booking the order. At operation1940, information regarding the order is communicated to an order orchestration module1922. 
Order orchestration module 1922 may utilize the order information to orchestrate the provisioning of services and resources for the order placed by the customer. In some instances, order orchestration module 1922 may orchestrate the provisioning of resources to support the subscribed services using the services of order provisioning module 1924. In certain aspects, order orchestration module 1922 enables the management of business processes associated with each order and applies business logic to determine whether an order should proceed to provisioning. At operation 1942, upon receiving an order for a new subscription, order orchestration module 1922 sends a request to order provisioning module 1924 to allocate resources and configure those resources needed to fulfill the subscription order. Order provisioning module 1924 enables the allocation of resources for the services ordered by the customer. Order provisioning module 1924 provides a level of abstraction between the cloud services provided by cloud infrastructure system 1902 and the physical implementation layer that is used to provision the resources for providing the requested services. Order orchestration module 1922 may thus be isolated from implementation details, such as whether or not services and resources are actually provisioned on the fly or pre-provisioned and only allocated/assigned upon request. At operation 1944, once the services and resources are provisioned, a notification of the provided service may be sent to customers on client computing devices 1904, 1906, and/or 1908 by order provisioning module 1924 of cloud infrastructure system 1902. At operation 1946, the customer's subscription order may be managed and tracked by an order management and monitoring module 1926. In some instances, order management and monitoring module 1926 may be configured to collect usage statistics for the services in the subscription order, such as the amount of storage used, the amount of data transferred, the number of users, and the amount of system up time and system down time. In certain aspects, cloud infrastructure system 1902 may include an identity management module 1928. Identity management module 1928 may be configured to provide identity services, such as access management and authorization services in cloud infrastructure system 1902. In some aspects, identity management module 1928 may control information about customers who wish to utilize the services provided by cloud infrastructure system 1902. Such information can include information that authenticates the identities of such customers and information that describes which actions those customers are authorized to perform relative to various system resources (e.g., files, directories, applications, communication ports, memory segments, etc.). Identity management module 1928 may also include the management of descriptive information about each customer and about how and by whom that descriptive information can be accessed and modified. FIG. 20 illustrates an exemplary computer system 2000, in which various aspects of the present invention may be implemented. The computer system 2000 may be used to implement any of the computer systems described above. As shown in the figure, computer system 2000 includes a processing unit 2004 that communicates with a number of peripheral subsystems via a bus subsystem 2002. These peripheral subsystems may include a processing acceleration unit 2006, an I/O subsystem 2008, a storage subsystem 2018, and a communications subsystem 2024.
Storage subsystem 2018 includes tangible computer-readable storage media 2022 and a system memory 2010. Bus subsystem 2002 provides a mechanism for letting the various components and subsystems of computer system 2000 communicate with each other as intended. Although bus subsystem 2002 is shown schematically as a single bus, alternative aspects of the bus subsystem may utilize multiple buses. Bus subsystem 2002 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard. Processing unit 2004, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system 2000. One or more processors may be included in processing unit 2004. These processors may include single-core or multicore processors. In certain aspects, processing unit 2004 may be implemented as one or more independent processing units 2032 and/or 2034 with single or multicore processors included in each processing unit. In other aspects, processing unit 2004 may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip. In various aspects, processing unit 2004 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can be resident in processing unit 2004 and/or in storage subsystem 2018. Through suitable programming, processing unit 2004 can provide various functionalities described above. Computer system 2000 may additionally include a processing acceleration unit 2006, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like. I/O subsystem 2008 may include user interface input devices and user interface output devices. User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices. User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands. User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., 'blinking' while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®). Additionally, user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands.
User interface input devices may also include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode reader 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices. Additionally, user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments, and the like. User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, etc. The display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as that using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like. In general, use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from computer system 2000 to a user or other computer. For example, user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics, and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems. Computer system 2000 may comprise a storage subsystem 2018 that comprises software elements, shown as being currently located within a system memory 2010. System memory 2010 may store program instructions that are loadable and executable on processing unit 2004, as well as data generated during the execution of these programs. Depending on the configuration and type of computer system 2000, system memory 2010 may be volatile (such as random access memory (RAM)) and/or non-volatile (such as read-only memory (ROM), flash memory, etc.). The RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated and executed by processing unit 2004. In some implementations, system memory 2010 may include multiple different types of memory, such as static random access memory (SRAM) or dynamic random access memory (DRAM). In some implementations, a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer system 2000, such as during start-up, may typically be stored in the ROM. By way of example, and not limitation, system memory 2010 also illustrates application programs 2012, which may include client applications, Web browsers, mid-tier applications, relational database management systems (RDBMS), etc., program data 2014, and an operating system 2016. By way of example, operating system 2016 may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially-available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® 10 OS, and Palm® OS operating systems.
Storage subsystem 2018 may also provide a tangible computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some aspects. Software (programs, code modules, instructions) that when executed by a processor provide the functionality described above may be stored in storage subsystem 2018. These software modules or instructions may be executed by processing unit 2004. Storage subsystem 2018 may also provide a repository for storing data used in accordance with the present invention. Storage subsystem 2018 may also include a computer-readable storage media reader 2020 that can further be connected to computer-readable storage media 2022. Together and, optionally, in combination with system memory 2010, computer-readable storage media 2022 may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. Computer-readable storage media 2022 containing code, or portions of code, can also include any appropriate media known or used in the art, including storage media and communication media, such as, but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This can include tangible, non-transitory computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media. When specified, this can also include nontangible, transitory computer-readable media, such as data signals, data transmissions, or any other medium which can be used to transmit the desired information and which can be accessed by computer system 2000. By way of example, computer-readable storage media 2022 may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD ROM, DVD, and Blu-Ray® disk, or other optical media. Computer-readable storage media 2022 may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like. Computer-readable storage media 2022 may also include solid-state drives (SSD) based on non-volatile memory such as flash-memory-based SSDs, enterprise flash drives, solid state ROM, and the like, SSDs based on volatile memory such as solid state RAM, dynamic RAM, static RAM, DRAM-based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM and flash-memory-based SSDs. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for computer system 2000. Communications subsystem 2024 provides an interface to other computer systems and networks. Communications subsystem 2024 serves as an interface for receiving data from and transmitting data to other systems from computer system 2000.
For example, communications subsystem 2024 may enable computer system 2000 to connect to one or more devices via the Internet. In some aspects, communications subsystem 2024 can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology such as 3G, 4G or EDGE (enhanced data rates for global evolution), WiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global positioning system (GPS) receiver components, and/or other components. In some aspects, communications subsystem 2024 can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface. In some aspects, communications subsystem 2024 may also receive input communication in the form of structured and/or unstructured data feeds 2026, event streams 2028, event updates 2030, and the like on behalf of one or more users who may use computer system 2000. By way of example, communications subsystem 2024 may be configured to receive unstructured data feeds 2026 in real-time from users of social media networks and/or other communication services such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third party information sources. Additionally, communications subsystem 2024 may also be configured to receive data in the form of continuous data streams, which may include event streams 2028 of real-time events and/or event updates 2030, which may be continuous or unbounded in nature with no explicit end. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like. Communications subsystem 2024 may also be configured to output the structured and/or unstructured data feeds 2026, event streams 2028, event updates 2030, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system 2000. Computer system 2000 can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system. Due to the ever-changing nature of computers and networks, the description of computer system 2000 depicted in the figure is intended only as a specific example. Many other configurations having more or fewer components than the system depicted in the figure are possible. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, firmware, software (including applets), or a combination. Further, connection to other computing devices, such as network input/output devices, may be employed. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various aspects. In the foregoing specification, aspects of the invention are described with reference to specific aspects thereof, but those skilled in the art will recognize that the invention is not limited thereto.
Various features and aspects of the above-described invention may be used individually or jointly. Further, aspects can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive.